Solid Earth science in the 1990s. Volume 3: Measurement techniques and technology
NASA Technical Reports Server (NTRS)
1991-01-01
This volume contains reports from the NASA Workshop on Solid Earth Science in the 1990s. The techniques and technologies needed to address the program objectives are discussed. The Measurement Technique and Technology Panel identified (1) candidate measurement systems for each of the measurements required for the Solid Earth Science Program that would fall under NASA purview; (2) the capabilities and limitations of each technique; and (3) the developments necessary for each technique to meet the science panel requirements. In nearly all cases, current technology or a development path with existing technology was identified as capable of meeting the requirements of the science panels. These technologies and development paths are discussed.
Three dimensional scattering center imaging techniques
NASA Technical Reports Server (NTRS)
Younger, P. R.; Burnside, W. D.
1991-01-01
Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.
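The two-antenna approach rests on interferometry: the phase difference between vertically offset receive channels encodes a scattering center's height. A minimal numerical sketch of that relation follows; the geometry, wavelength, and baseline are hypothetical values for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical geometry (meters): transmit/receive antenna at the origin,
# a second receive-only antenna offset vertically by the baseline d.
lam = 0.03            # radar wavelength (10 GHz)
d = 0.5               # vertical antenna baseline
x, z = 1000.0, 2.0    # true down-range position and height of a point scatterer

r1 = np.hypot(x, z)        # path length to the lower (transmitting) antenna
r2 = np.hypot(x, z - d)    # path length to the upper antenna

# Measured quantity: phase difference between the two receive channels
dphi = np.angle(np.exp(2j * np.pi * (r2 - r1) / lam))

# Small-angle height estimate, referenced to the baseline midpoint
z_est = -dphi * lam * r1 / (2 * np.pi * d) + d / 2
```

With the midpoint correction the estimate matches the true height to well under a centimeter at this range; without it, the estimate is biased by half the baseline.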
Does the use of automated fetal biometry improve clinical work flow efficiency?
Espinoza, Jimmy; Good, Sara; Russell, Evie; Lee, Wesley
2013-05-01
This study was designed to compare the work flow efficiency of manual measurements of 5 fetal parameters with a novel technique that automatically measures these parameters from 2-dimensional sonograms. This prospective study included 200 singleton pregnancies between 15 and 40 weeks' gestation. Patients were randomly allocated to either manual (n = 100) or automatic (n = 100) fetal biometry. The automatic measurement was performed using a commercially available software application. A digital video recorder captured all on-screen activity associated with the sonographic examination. The examination time and number of steps required to obtain fetal measurements were compared between manual and automatic methods. The mean time required to obtain the biometric measurements was significantly shorter using the automated technique than the manual approach (P < .001 for all comparisons). Similarly, the mean number of steps required to perform these measurements was significantly fewer with automatic measurements compared to the manual technique (P < .001). In summary, automated biometry reduced the examination time required for standard fetal measurements. This approach may improve work flow efficiency in busy obstetric sonography practices.
Fracture toughness testing on ferritic alloys using the electropotential technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, F.H.; Wire, G.L.
1981-06-11
Fracture toughness measurements as done conventionally require large specimens (5 x 5 x 2.5 cm), which would be prohibitively expensive to irradiate over the fluence and temperature ranges required for first wall design. To overcome this difficulty, a single-specimen technique for J-integral fracture toughness measurements on miniature specimens (1.6 cm OD x 0.25 cm thick) was developed. Comparisons with specimens three times as thick show that the derived J(sub Ic) is constant, validating the specimen for first wall applications. The electropotential technique was used to obtain continuous crack extension measurements, allowing a ductile fracture resistance curve to be constructed from a single specimen. The irradiation test volume required for fracture toughness measurements using both miniature specimens and single-specimen J measurements was reduced by a factor of 320, making it possible to perform a systematic exploration of irradiation temperature and dose variables as required for qualification of HT-9 and 9Cr-1Mo base metal and welds for first wall application. Fracture toughness test results for HT-9 and 9Cr-1Mo from 25 to 539 C are presented to illustrate the single-specimen technique.
Technique for calibrating angular measurement devices when calibration standards are unavailable
NASA Technical Reports Server (NTRS)
Finley, Tom D.
1991-01-01
A calibration technique is proposed that allows the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirements must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
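The error-separation idea can be illustrated in the Fourier domain: comparing the devices before and after a known relative rotation yields two difference signals from which each device's harmonic error content can be solved, except for harmonics commensurate with the rotation. A minimal simulation under assumed bias-error shapes (all values hypothetical):

```python
import numpy as np

N = 360
theta = 2 * np.pi * np.arange(N) / N

# Deterministic bias errors of the two devices (unknown to the calibrator)
eA = 0.30 * np.sin(theta) + 0.10 * np.cos(2 * theta)
eB = 0.20 * np.cos(theta) + 0.05 * np.sin(3 * theta)

shift = 100  # device B is rotated by 100 grid steps between the two comparisons
f = eA - eB                    # first comparison
g = eA - np.roll(eB, shift)    # second comparison, after rotating B

F, G = np.fft.fft(f), np.fft.fft(g)
k = np.arange(N)
denom = 1.0 - np.exp(-2j * np.pi * k * shift / N)

# Solve for device B's error harmonics; harmonics with k*shift % N == 0
# (including the mean) are unrecoverable by this scheme and are set to zero.
Bhat = np.zeros(N, dtype=complex)
ok = np.abs(denom) > 1e-9
Bhat[ok] = (G - F)[ok] / denom[ok]
Ahat = F + Bhat

eA_rec = np.fft.ifft(Ahat).real   # recovered individual errors
eB_rec = np.fft.ifft(Bhat).real
```

Because both synthetic error curves contain only harmonics not commensurate with the 100-step rotation, the recovery here is exact to floating-point precision.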
The Sine Method: An Alternative Height Measurement Technique
Don C. Bragg; Lee E. Frelich; Robert T. Leverett; Will Blozan; Dale J. Luthringer
2011-01-01
Height is one of the most important dimensions of trees, but few observers are fully aware of the consequences of the misapplication of conventional height measurement techniques. A new approach, the sine method, can improve height measurement by being less sensitive to the requirements of conventional techniques (similar triangles and the tangent method). We studied...
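The geometric distinction between the methods is easy to demonstrate: the tangent method multiplies the horizontal distance to the trunk by the tangent of the elevation angle, so it errs whenever the treetop is not vertically above the trunk base, while the sine method uses the direct (slope) distance to the top itself. A small sketch with hypothetical geometry:

```python
import math

# Hypothetical geometry (meters), heights relative to eye level:
horiz_to_trunk = 30.0        # horizontal distance to the trunk base
top_x, top_z = 27.0, 18.0    # treetop offset; the top leans toward the observer

angle = math.atan2(top_z, top_x)         # measured elevation angle to the top
slope_dist = math.hypot(top_x, top_z)    # laser-rangefinder distance to the top

h_tangent = horiz_to_trunk * math.tan(angle)  # tangent method: assumes top over trunk
h_sine = slope_dist * math.sin(angle)         # sine method: uses direct distance
```

Here the sine method returns the true 18 m, while the tangent method overestimates by 2 m because of the 3 m horizontal offset of the top.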
Fantini, Sergio; Sassaroli, Angelo; Tgavalekos, Kristen T.; Kornbluth, Joshua
2016-01-01
Cerebral blood flow (CBF) and cerebral autoregulation (CA) are critically important to maintain proper brain perfusion and supply the brain with the necessary oxygen and energy substrates. Adequate brain perfusion is required to support normal brain function, to achieve successful aging, and to navigate acute and chronic medical conditions. We review the general principles of CBF measurements and the current techniques to measure CBF based on direct intravascular measurements, nuclear medicine, X-ray imaging, magnetic resonance imaging, ultrasound techniques, thermal diffusion, and optical methods. We also review techniques for arterial blood pressure measurements as well as theoretical and experimental methods for the assessment of CA, including recent approaches based on optical techniques. The assessment of cerebral perfusion in the clinical practice is also presented. The comprehensive description of principles, methods, and clinical requirements of CBF and CA measurements highlights the potentially important role that noninvasive optical methods can play in the assessment of neurovascular health. In fact, optical techniques have the ability to provide a noninvasive, quantitative, and continuous monitor of CBF and autoregulation. PMID:27403447
Recent flight-test results of optical airdata techniques
NASA Technical Reports Server (NTRS)
Bogue, Rodney K.
1993-01-01
Optical techniques for measuring airdata parameters were demonstrated with promising results on high-performance fighter aircraft. These systems can measure the airspeed vector, and some are not as dependent on special in-flight calibration processes as current systems. Optical concepts for measuring freestream static temperature and density are feasible for in-flight applications. The best feature of these concepts is that the airdata measurements are obtained nonintrusively, and for the most part well into the freestream region of the flow field about the aircraft. Current requirements for measuring airdata at high angle of attack, and the future need to measure the same information at hypersonic flight conditions, place strains on existing techniques. Optical technology advances show outstanding potential for application in future programs and promise to make common use of optical concepts a reality. Results from several flight-test programs are summarized, and the technology advances required to make optical airdata techniques practical are identified.
Metrology: Calibration and measurement processes guidelines
NASA Technical Reports Server (NTRS)
Castrup, Howard T.; Eicke, Woodward G.; Hayes, Jerry L.; Mark, Alexander; Martin, Robert E.; Taylor, James L.
1994-01-01
The guide is intended as a resource to aid engineers and systems contractors in the design, implementation, and operation of metrology, calibration, and measurement systems, and to assist NASA personnel in the uniform evaluation of such systems supplied or operated by contractors. Methodologies and techniques acceptable for fulfilling metrology quality requirements for NASA programs are outlined. The measurement process is covered from a high level through more detailed discussions of key elements within the process. Emphasis is given to the flowdown of project requirements to measurement system requirements, then through the activities that will provide measurements with defined quality. In addition, innovations and techniques for error analysis, development of statistical measurement process control, optimization of calibration recall systems, and evaluation of measurement uncertainty are presented.
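One of the listed elements, evaluation of measurement uncertainty, commonly reduces to root-sum-square combination of independent standard uncertainties. A minimal sketch of that combination; the budget values are hypothetical, not taken from the guide.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties,
    the usual first-order propagation for uncorrelated error sources."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical budget: reference standard, repeatability, resolution (same units)
u_c = combined_standard_uncertainty([0.020, 0.010, 0.005])
U = 2.0 * u_c   # expanded uncertainty at a k = 2 coverage factor
```

Correlated components would instead require covariance terms in the sum; the RSS form above applies only when the sources are independent.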
Molecular-Based Optical Measurement Techniques for Transition and Turbulence in High-Speed Flow
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Cutler, Andrew D.
2013-01-01
High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, and the efficiency of fuel-air mixing processes in high-speed combustion applications. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify transition location and its impact on parameters like surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. This may also lead to the discovery of new physical pathways. This document is intended to describe molecular-based measurement techniques that have been developed to address the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high-speed transition and turbulence or that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described in the previous paragraph, several other classes of measurement techniques that are, or could be, used to study high-speed transition and turbulence are excluded from this manuscript.
For example, surface measurement techniques such as pressure- and temperature-sensitive paint, phosphor thermography, skin friction measurements, and photogrammetry (for model attitude and deformation measurement) are excluded to limit the scope of this report. Other physical probes, such as heat flux gauges and total temperature probes, are also excluded. We further exclude measurement techniques that require particle seeding, though particle-based methods may still be useful in many high-speed flow applications. This manuscript details some of the more widely used molecular-based measurement techniques for studying transition and turbulence: laser-induced fluorescence (LIF), Rayleigh and Raman scattering, and coherent anti-Stokes Raman scattering (CARS). These techniques are emphasized, in part, because of the prior experience of the authors. Additional molecular-based techniques are described, albeit in less detail. Where possible, an effort is made to compare the relative advantages and disadvantages of the various measurement techniques, although these comparisons can be subjective views of the authors. Finally, the manuscript concludes by evaluating the different measurement techniques in view of the precision requirements described in this chapter. Additional requirements and considerations are discussed to assist with choosing an optical measurement technique for a given application.
Silt fences: An economical technique for measuring hillslope soil erosion
Peter R. Robichaud; Robert E. Brown
2002-01-01
Measuring hillslope erosion has historically been a costly, time-consuming practice. An easy to install low-cost technique using silt fences (geotextile fabric) and tipping bucket rain gauges to measure onsite hillslope erosion was developed and tested. Equipment requirements, installation procedures, statistical design, and analysis methods for measuring hillslope...
New technique for the direct measurement of core noise from aircraft engines
NASA Technical Reports Server (NTRS)
Krejsa, E. A.
1981-01-01
A new technique is presented for directly measuring the core noise levels from gas turbine aircraft engines. The technique requires that fluctuating pressures be measured in the far-field and at two locations within the engine core. The cross-spectra of these measurements are used to determine the levels of the far-field noise that propagated from the engine core. The technique makes it possible to measure core noise levels even when other noise sources dominate. The technique was applied to signals measured from an AVCO Lycoming YF102 turbofan engine. Core noise levels as a function of frequency and radiation angle were measured and are presented over a range of power settings.
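The core idea, extracting the far-field power coherent with two internal core sensors from their cross-spectra, can be sketched with a three-signal simulation: the magnitude of the product of the two core-to-far-field cross-spectra divided by the core-to-core cross-spectrum estimates the far-field spectral density attributable to the common core source. The signal model and levels below are hypothetical stand-ins, not the YF102 data.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
n = 1 << 18
s = rng.standard_normal(n)              # common "core noise" source
c1 = s + 0.5 * rng.standard_normal(n)   # internal core sensor 1 (+ local noise)
c2 = s + 0.5 * rng.standard_normal(n)   # internal core sensor 2 (+ local noise)
ff = s + 3.0 * rng.standard_normal(n)   # far-field mic, dominated by jet noise

fs = 1.0
f, S12 = csd(c1, c2, fs, nperseg=1024)
_, S1f = csd(c1, ff, fs, nperseg=1024)
_, S2f = csd(c2, ff, fs, nperseg=1024)
_, Sff = welch(ff, fs, nperseg=1024)

# Far-field PSD attributable to the core source (three-signal estimate)
S_core = np.abs(S1f * S2f / S12)
```

Even though the far-field autospectrum is dominated by the independent "jet" noise (roughly ten times the core power here), the cross-spectral estimate recovers the flat unit-variance core contribution.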
Use of an ultrasonic-acoustic technique for nondestructive evaluation of fiber composite strength
NASA Technical Reports Server (NTRS)
Vary, A.; Bowles, K. J.
1978-01-01
Details of the method used to measure the stress wave factor are described. Frequency spectra of the stress waves are analyzed in order to clarify the nature of the wave phenomena involved. The stress wave factor was measured with simple contact probes requiring only one-sided access to a part. This is beneficial in nondestructive evaluations because the waves can run parallel to fiber directions and thus measure material properties in the directions assumed by actual loads. The technique can be applied where conventional through-transmission techniques are impractical or where more quantitative data are required. The stress wave factor was measured for a series of graphite/polyimide composite panels, and the results obtained are compared with through-transmission immersion ultrasonic scans.
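The stress wave factor is commonly formed as the product of a pulse repetition rate, a counting interval, and the number of threshold-exceeding oscillations in the received burst. A minimal sketch of that counting on a simulated ringdown; the rate, gate, and threshold are hypothetical, not values from this study.

```python
import numpy as np

def count_upward_crossings(sig, threshold):
    """Count positive-going crossings of a voltage threshold, as a
    threshold-crossing counter in stress-wave-factor hardware would."""
    above = sig >= threshold
    return int(np.sum(~above[:-1] & above[1:]))

# Simulated ringdown: a decaying 10 Hz burst, as might follow one input pulse
t = np.arange(0.0, 2.0, 1e-4)
burst = np.exp(-t) * np.sin(2 * np.pi * 10 * t)

n = count_upward_crossings(burst, 0.5)  # oscillations exceeding the threshold
reps = 50        # hypothetical pulse repetition rate (1/s)
gate = 2.0       # hypothetical counting interval (s)
swf = reps * gate * n   # stress wave factor ~ (rate)(interval)(count)
```

For this burst, exactly the first seven cycles have peaks above the 0.5 threshold, so the counter registers seven crossings.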
USDA-ARS?s Scientific Manuscript database
Passive acoustic techniques for the measurement of Sediment-Generated Noise (SGN) in gravel-bed rivers present a promising alternative to traditional bedload measurement techniques. Where traditional methods are often prohibitively costly, particularly in labor requirements, and produce point-scale ...
Riek, A; Klinkert, A; Gerken, M; Hummel, J; Moors, E; Südekum, K-H
2013-03-01
Despite the fact that llamas have become increasingly popular as companion and farm animals in both Europe and North America, scientific knowledge on their nutrient requirements is scarce. Compared with other livestock species, relatively little is known especially about the nutrient and energy requirements for lactating llamas. Therefore, we aimed to measure milk output in llama dams using an isotope dilution technique and relate it to energy intakes at different stages of lactation. We also validated the dilution technique by measuring total water turnover (TWT) directly and comparing it with values estimated by the isotope dilution technique. Our study involved 5 lactating llama dams and their suckling young. Milk output and TWT were measured at 4 stages of lactation (wk 3, 10, 18, and 26 postpartum). The method involved the application of the stable hydrogen isotope deuterium (²H) to the lactating dam. Drinking water intake and TWT decreased significantly with lactation stage, whether estimated by the isotope dilution technique or calculated from drinking water and water ingested from feeds. In contrast, lactation stage had no effect on dry matter intake, metabolizable energy (ME) intake, or the milk water fraction (i.e., the ratio between milk water excreted and TWT). The ratios between TWT measured and TWT estimated (by isotope dilution) did not differ with lactation stage and were close to 100% in all measurement weeks, indicating that the D₂O dilution technique estimated TWT with high accuracy and only small variations. Calculating the required ME intakes for lactation from milk output data and gross energy content of milk revealed that, with increasing lactation stage, ME requirements per day for lactation decreased but remained constant per kilogram of milk output. Total measured ME intakes at different stages of lactation were similar to calculated ME intakes from published recommendation models for llamas.
Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
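The isotope dilution calculation itself is compact: the zero-time intercept of the log-linear decline in isotope enrichment gives the dilution space (approximately total body water), and the decline rate times that pool gives total water turnover. A sketch on synthetic data; the dose, enrichments, and rate constant are hypothetical, not values from this study.

```python
import numpy as np

# Hypothetical dilution data for one dam: dose of labeled water administered
# and the decline of excess enrichment in body water over days post-dose.
dose = 140.0                                   # g of labeled water given
days = np.array([1.0, 3.0, 6.0, 10.0, 14.0])
E0_true, k_true = 0.0020, 0.15                 # plateau enrichment, turnover rate (1/d)
E = E0_true * np.exp(-k_true * days)           # "measured" excess enrichments

# Log-linear regression recovers the turnover rate and zero-time intercept
slope, intercept = np.polyfit(days, np.log(E), 1)
k = -slope                        # fractional water turnover rate (1/day)
pool = dose / np.exp(intercept)   # dilution space ~ total body water (g)
twt = k * pool                    # total water turnover (g/day)
```

With these synthetic values the fit recovers a 70 kg water pool turning over about 10.5 kg of water per day.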
Measurement of surface microtopography
NASA Technical Reports Server (NTRS)
Wall, S. D.; Farr, T. G.; Muller, J.-P.; Lewis, P.; Leberl, F. W.
1991-01-01
Acquisition of ground truth data for use in microwave interaction modeling requires measurement of surface roughness sampled at intervals comparable to a fraction of the microwave wavelength and extensive enough to adequately represent the statistics of a surface unit. Sub-centimetric measurement accuracy is thus required over large areas, and existing techniques are usually inadequate. A technique is discussed for acquiring the necessary photogrammetric data using twin film cameras mounted on a helicopter. In an attempt to eliminate tedious data reduction, an automated technique was applied to the helicopter photographs, and results were compared to those produced by conventional stereogrammetry. Derived root-mean-square (RMS) roughness for the same stereo pair was 7.5 cm for the automated technique versus 6.5 cm for the manual method. The principal source of error is probably vegetation in the scene, which affects the automated technique but is ignored by a human operator.
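The derived quantity in this comparison, RMS roughness, is the root-mean-square of surface heights after detrending, so that regional slope does not inflate the roughness figure. A minimal sketch of that computation on a synthetic 1-D profile:

```python
import numpy as np

def rms_roughness(z, dx=1.0):
    """RMS roughness of a 1-D height profile after removing a best-fit line."""
    x = np.arange(len(z)) * dx
    residual = z - np.polyval(np.polyfit(x, z, 1), x)
    return np.sqrt(np.mean(residual ** 2))

# Synthetic profile: mean offset + regional slope + 2 cm amplitude undulation
x = np.arange(1000.0)
profile = 3.0 + 0.05 * x + 2.0 * np.sin(2 * np.pi * x / 10.0)
sigma = rms_roughness(profile)   # ~ 2/sqrt(2) cm for the sinusoidal residual
```

A purely planar surface returns essentially zero, confirming that the trend removal isolates the small-scale roughness the microwave model needs.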
Relative Utility of Selected Software Requirement Metrics
1991-12-01
testing. They can also help in deciding if and how to use complexity reduction techniques. In summary, requirement metrics can be useful because they ... answer items in a test instrument. In order to differentiate between misinterpretation and comprehension, the measurement technique must be able to ... effectively test a requirement, it is verifiable. Ramamoorthy and others have proposed requirements complexity metrics that can be used to infer the
NASA Technical Reports Server (NTRS)
Edwards, Lawrence G.
1994-01-01
Subcritical cryogens such as liquid hydrogen (LH2) and liquid oxygen (LO2) are required for space based transportation propellant, reactant, and life support systems. Future long-duration space missions will require on-orbit systems capable of long-term cryogen storage and efficient fluid transfer capabilities. COLD-SAT, which stands for cryogenic orbiting liquid depot-storage acquisition and transfer, is a free-flying liquid hydrogen management flight experiment. Experiments to determine optimum methods of fluid storage and transfer will be performed on the COLD-SAT mission. The success of the mission is directly related to the type and accuracy of measurements made. The instrumentation and measurement techniques used are therefore critical to the success of the mission. This paper presents the results of the COLD-SAT experiment subsystem instrumentation and wire harness design effort. Candidate transducers capable of fulfilling the COLD-SAT experiment measurement requirements are identified. Signal conditioning techniques, data acquisition requirements, and measurement uncertainty analysis are presented. Electrical harnessing materials and wiring techniques for the instrumentation designed to minimize heat conduction to the cryogenic tanks and provide optimum measurement accuracy are listed.
High-resolution hot-film measurement of surface heat flux to an impinging jet
NASA Astrophysics Data System (ADS)
O'Donovan, T. S.; Persoons, T.; Murray, D. B.
2011-10-01
To investigate the complex coupling between surface heat transfer and local fluid velocity in convective heat transfer, advanced techniques are required to measure the surface heat flux at high spatial and temporal resolution. Several established flow velocity techniques, such as laser Doppler anemometry, particle image velocimetry, and hot wire anemometry, can measure fluid velocities at high spatial resolution (µm) and have high-frequency response characteristics (up to 100 kHz). Equivalent advanced surface heat transfer measurement techniques, however, are not available; even the latest advances in high-speed thermal imaging do not offer equivalent data capture rates. The current research presents a method of measuring point surface heat flux with a hot film that is flush mounted on a heated flat surface. The film works in conjunction with a constant temperature anemometer which has a bandwidth of 100 kHz. The bandwidth of this technique is therefore likely to exceed that of more established surface heat flux measurement techniques. Although the frequency response of the sensor is not reported here, it is expected to be significantly less than 100 kHz due to its physical size and capacitance. To demonstrate the efficacy of the technique, a cooling impinging air jet is directed at the heated surface, and the power required to maintain the hot-film temperature is related to the local heat flux to the fluid air flow. The technique is validated experimentally against a more established surface heat flux measurement technique. The thermal performance of the sensor is also investigated numerically. It is shown that, with some limitations, the measurement technique accurately measures the surface heat transfer to an impinging air jet with improved spatial resolution for a wide range of experimental parameters.
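The stated principle, relating the power needed to hold the film at constant temperature to the local heat flux, amounts to a simple energy balance once the conduction loss into the substrate is calibrated out. A sketch with hypothetical operating values (not the authors' calibration):

```python
def surface_heat_flux(v_bridge, r_film, area, q_substrate):
    """Convective surface heat flux from a constant-temperature hot film.
    The electrical power dissipated in the film, per unit film area, minus a
    calibrated substrate-conduction loss, is taken as the flux to the fluid."""
    q_electrical = v_bridge ** 2 / r_film / area   # W/m^2 dissipated in the film
    return q_electrical - q_substrate

# Hypothetical operating point: 2 V across a 10 ohm film of 1 mm^2 area,
# with a calibrated 5e4 W/m^2 conduction loss into the substrate
q = surface_heat_flux(2.0, 10.0, 1.0e-6, 5.0e4)   # W/m^2 to the air jet
```

In practice the substrate-loss term is the hard part and is what the experimental validation against an established sensor effectively checks.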
Investigation of laser Doppler anemometry in developing a velocity-based measurement technique
NASA Astrophysics Data System (ADS)
Jung, Ki Won
2009-12-01
Acoustic properties, such as the characteristic impedance and the complex propagation constant, of porous materials have traditionally been characterized with pressure-based measurement techniques using microphones. Although microphone techniques have evolved since their introduction, the most general form employs two microphones to characterize the acoustic field in one continuous medium. The shortcomings of determining the acoustic field from only two microphones can be overcome by using numerous microphones; however, the use of many microphones requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique that resolves several issues with microphone techniques. First, it is based on a single sensor, so calibration is unnecessary when only an overall ratio of the acoustic field is required to characterize a system; this includes measurements of the characteristic impedance and the complex propagation constant. Second, it can handle multiple positional measurements without calibrating the signal at each position. Third, it can measure the three-dimensional components of velocity, even in a system with complex geometry. Fourth, it is adaptable and is not restricted to a particular type of apparatus, provided the apparatus is transparent. LDA is known to have several disadvantages, such as the need for a transparent apparatus, high cost, and the necessity of seeding particles. The technique, based on LDA combined with a curve-fitting algorithm, is validated through measurements on three systems. First, the complex propagation constant of the air is measured in a rigidly terminated cylindrical pipe, which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured.
These two parameters can be characterized by the ratio of acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the LDA technique proposed. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement with the 100 ppi RVC sample supports the LDA technique in that it can achieve a low uncertainty in the determined quantity. This dissertation concludes with using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode that are driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.
Solar Cell Calibration and Measurement Techniques
NASA Technical Reports Server (NTRS)
Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave
1997-01-01
The increasing complexity of space solar cells and the increasing international markets for both cells and arrays has resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AMO spectrum and constant, recommend laboratory measurement practices and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD15387, 'Requirements for Measurement and Calibration Procedures for Space Solar Cells' was discussed with a focus on the scope of the document, a definition of primary standard cell, and required error analysis for all measurement techniques. Working groups addressed the issues of Air Mass Zero (AMO) solar constant and spectrum, laboratory measurement techniques, and the international round robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.
Solar Cell Calibration and Measurement Techniques
NASA Technical Reports Server (NTRS)
Bailey, Sheila; Brinker, Dave; Curtis, Henry; Jenkins, Phillip; Scheiman, Dave
2004-01-01
The increasing complexity of space solar cells and the increasing international markets for both cells and arrays has resulted in workshops jointly sponsored by NASDA, ESA and NASA. These workshops are designed to obtain international agreement on standardized values for the AMO spectrum and constant, recommend laboratory measurement practices and establish a set of protocols for international comparison of laboratory measurements. A working draft of an ISO standard, WD15387, "Requirements for Measurement and Calibration Procedures for Space Solar Cells" was discussed with a focus on the scope of the document, a definition of primary standard cell, and required error analysis for all measurement techniques. Working groups addressed the issues of Air Mass Zero (AMO) solar constant and spectrum, laboratory measurement techniques, and the international round robin methodology. A summary is presented of the current state of each area and the formulation of the ISO document.
Veenstra, Richard D
2016-01-01
The development of the patch clamp technique has enabled investigators to directly measure gap junction conductance between isolated pairs of small cells with resolution to the single channel level. The dual patch clamp recording technique requires specialized equipment and the acquired skill to reliably establish gigaohm seals and the whole cell recording configuration with high efficiency. This chapter describes the equipment needed and methods required to achieve accurate measurement of macroscopic and single gap junction channel conductances. Inherent limitations with the dual whole cell recording technique and methods to correct for series access resistance errors are defined as well as basic procedures to determine the essential electrical parameters necessary to evaluate the accuracy of gap junction conductance measurements using this approach.
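The macroscopic measurement reduces to Ohm's law across the junction, with each cell's command potential corrected for the voltage dropped across its pipette's access (series) resistance. A minimal sketch of that correction; the sign conventions are simplified and all values are hypothetical, not taken from this chapter.

```python
def gap_junction_conductance(dV1, dI1, dI2, Rs1=0.0, Rs2=0.0):
    """Junctional conductance from a dual whole-cell voltage step (sketch).
    dV1: step applied to cell 1 (V); dI1, dI2: resulting changes in the two
    whole-cell currents (A); Rs1, Rs2: series (access) resistances (ohm).
    Each cell's actual membrane potential is its command potential corrected
    for the ohmic drop across the pipette access resistance."""
    v1 = dV1 - dI1 * Rs1   # potential actually reached in the stepped cell
    v2 = -dI2 * Rs2        # cell 2 command is fixed; only the access drop remains
    vj = v1 - v2           # true transjunctional voltage
    return -dI2 / vj       # junctional current appears in cell 2 with sign flipped

# Ideal case (negligible access resistance): a 10 mV step drives -0.1 nA
# of junctional current in cell 2, giving ~10 nS
gj = gap_junction_conductance(10e-3, 0.0, -0.1e-9)
```

With nonzero series resistances the same currents imply a smaller transjunctional voltage, so the corrected conductance is larger than the naive estimate, which is the direction of the error the chapter warns about.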
An Investigation of a Photographic Technique of Measuring High Surface Temperatures
NASA Technical Reports Server (NTRS)
Siviter, James H., Jr.; Strass, H. Kurt
1960-01-01
A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.
A high temperature testing system for ceramic composites
NASA Technical Reports Server (NTRS)
Hemann, John
1994-01-01
Ceramic composites are presently being developed for high-temperature use in heat engine and space power system applications. The operating temperature range is expected to be 1090 to 1650 C (2000 to 3000 F). Very little material data is available at these temperatures, and it is therefore desirable to thoroughly characterize the basic unidirectional fiber-reinforced ceramic composite. This includes testing mainly for mechanical material properties at high temperatures. The proper conduct of such characterization tests requires the development of a tensile testing system that includes unique gripping, heating, and strain-measuring devices, all of which require special considerations. The system also requires an optimized specimen shape. The purpose of this paper is to review various techniques for measuring displacements or strains, preferably at elevated temperatures. Due to current equipment limitations, it is assumed that the specimen is to be tested at a temperature of 1430 C (2600 F) in an oxidizing atmosphere. For the most part, previous high-temperature material characterization tests, such as flexure and tensile tests, have been performed in inert atmospheres. Due to the harsh environment in which the ceramic specimen is to be tested, many conventional strain-measuring techniques cannot be applied. Initially, a brief description of the more commonly used mechanical strain-measuring techniques is given. Major advantages and disadvantages of their application to high-temperature tensile testing of ceramic composites are discussed. Next, a general overview is given of various optical techniques. Advantages and disadvantages common to these techniques are noted. The optical methods for measuring strain or displacement are categorized into two sections, including real-time techniques. Finally, an optical technique that offers optimum performance for high-temperature tensile testing of ceramic composites is recommended.
An ultrasonic technique for measuring stress in fasteners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, K. J.; Day, P.; Byron, D.
1999-12-02
High temperature bolting alloys are extensively used in the thermal power generation industry, for example as reheat ESV and governor valve studs. Remnant life assessment methodologies and plant maintenance procedures require monitoring of the operational stress levels in these fasteners. Some conventional ultrasonic techniques require longitudinal wave measurements to be undertaken when the nut on the bolt is loosened and then re-tightened. Other techniques use a combination of shear waves and longitudinal waves. In this paper, the problems and pitfalls associated with various ultrasonic techniques for measuring stress in bolts are discussed. An ultrasonic technique developed for measuring the stress in Durehete 1055 bolts is presented. Material from a textured rolled bar has been used as a test bed in the development work. The technique uses shear wave birefringence and compression waves at several frequencies to measure texture, fastener length, and the average stress. The technique was developed by making ultrasonic measurements on bolts tensioned in universal testing machines and a hydraulic nut. The ultrasonic measurements of residual stress have been checked against strain gauge measurements. The Durehete bolts have a hollow cylinder geometry of restricted dimensions, which significantly alters compression and shear wave velocities from bulk values and introduces hoop stresses that can be measured by rotating the polarization of the shear wave probe. Modelling of the experimental results has been undertaken using theories for elastic wave propagation through waveguides. The dispersion equations allow the velocity and length of the fastener to be measured ultrasonically in some situations where the length of the fastener cannot be measured directly with a vernier caliper or micrometer and/or where it is undesirable to loosen nuts to take calibration readings of the shear and compression wave velocities.
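Time-of-flight approaches of the kind discussed here infer average axial stress from the stress-induced fractional change in ultrasonic round-trip time, scaled by an empirically calibrated coefficient. A minimal sketch; the coefficient and times below are hypothetical, not Durehete calibration data.

```python
def axial_stress_from_tof(t_loaded, t_unloaded, k_sigma):
    """Average axial stress from the change in ultrasonic round-trip time.
    t_loaded, t_unloaded: pulse-echo times of flight (s); k_sigma: empirically
    calibrated combined acoustoelastic/elongation coefficient (1/Pa)."""
    return (t_loaded - t_unloaded) / (t_unloaded * k_sigma)

# Hypothetical calibration: 1e-4 fractional delay change corresponds to 100 MPa
sigma = axial_stress_from_tof(40.004e-6, 40.000e-6, 1e-12)   # Pa
```

The fractional-delay form is what makes the method sensitive to texture and geometry: both alter the unloaded velocity, which is why the paper combines birefringence and multi-frequency compression-wave data rather than relying on a single coefficient.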
Neutron/Gamma-ray discrimination through measures of fit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, first the application of a sample statistical test is explained. Fit measurement-based methods require true pulse shapes to be used as references for discrimination. This requirement makes practical implementation of these methods difficult; typically, another discrimination approach must be employed to capture samples of neutrons and gamma-rays before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulses obtained through different digitizers using a stilbene scintillator in order to analyze them and measure their discrimination quality.
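As a concrete, hypothetical illustration of discrimination by a measure of fit, the sketch below scores a normalized pulse against neutron and gamma-ray reference shapes with a sum-of-squares residual; the exponential reference shapes and decay constants are invented for the example, not taken from the article:

```python
import numpy as np

def classify_pulse(pulse, neutron_ref, gamma_ref):
    """Assign a digitized pulse to whichever reference shape it fits best,
    i.e. the smaller sum-of-squares residual after amplitude normalization."""
    p = np.asarray(pulse, dtype=float)
    p = p / p.max()                      # amplitude-normalize to the references
    fit_n = np.sum((p - neutron_ref) ** 2)
    fit_g = np.sum((p - gamma_ref) ** 2)
    return "neutron" if fit_n < fit_g else "gamma"
```

In stilbene the neutron (proton recoil) pulse carries more delayed light than the gamma-ray pulse, which is why a shape-fit statistic of this kind can separate the two populations.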
Survey of Temperature Measurement Techniques For Studying Underwater Shock Waves
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Alderfer, David W.
2004-01-01
Several optical methods for measuring temperature near underwater shock waves are reviewed and compared. The relative merits of the different techniques are compared, considering accuracy, precision, ease of use, applicable temperature range, maturity, spatial resolution, and whether or not special additives are required.
Overlay metrology for double patterning processes
NASA Astrophysics Data System (ADS)
Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej
2009-03-01
The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile, process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOL™ [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for double patterned layers to the reference grid and 1 between the double patterned layers). The requirements of process compatibility, enhanced performance and a large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE); Litho-Freeze-Litho-Etch (LFLE); spacer defined) is tested. The process impact on different target types is discussed (CD bias for LELE, contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.
Updates on measurements and modeling techniques for expendable countermeasures
NASA Astrophysics Data System (ADS)
Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.
2016-10-01
The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.
Automated measurement of respiratory gas exchange by an inert gas dilution technique
NASA Technical Reports Server (NTRS)
Sawin, C. F.; Rummel, J. A.; Michel, E. L.
1974-01-01
A respiratory gas analyzer (RGA) has been developed wherein a mass spectrometer is the sole transducer required for measurement of respiratory gas exchange. The mass spectrometer maintains all signals in absolute phase relationships, precluding the need to synchronize flow and gas composition as required in other systems. The RGA system was evaluated by comparison with the Douglas bag technique. The RGA system established the feasibility of the inert gas dilution method for measuring breath-by-breath respiratory gas exchange. This breath-by-breath analytical capability permits detailed study of transient respiratory responses to exercise.
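The inert-gas dilution principle underlying the RGA can be sketched with the textbook open-circuit nitrogen (Haldane) balance. This is a generic illustration of the principle, not the RGA's actual algorithm, and the gas fractions below are invented example values:

```python
def gas_exchange(ve, fi_o2, fi_n2, fe_o2, fe_co2, fe_n2):
    """Open-circuit gas exchange using the inert-gas (N2) balance.
    Because N2 is neither consumed nor produced, VI = VE * FeN2 / FiN2,
    so flow need only be measured on one side of the breath.
    Volumes in L/min, fractions dimensionless."""
    vi = ve * fe_n2 / fi_n2              # inspired volume from N2 dilution
    vo2 = vi * fi_o2 - ve * fe_o2        # O2 uptake
    vco2 = ve * fe_co2                   # CO2 output (inspired CO2 ~ 0)
    return vo2, vco2
```

With a mass spectrometer as the sole transducer, all gas fractions are available on a common time base, which is what makes a breath-by-breath evaluation of this balance feasible.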
Nelson, Jr. Ralph M.
1982-01-01
Eighteen experimental fires were used to compare measured and calculated values of emission factors and fuel consumption to evaluate the carbon balance technique. The technique is based on a model for the emission factor of carbon dioxide, corrected for the production of other emissions, and requires measurements of effluent concentrations and air volume in the...
NASA Technical Reports Server (NTRS)
1976-01-01
Sensitivity requirements of the various measurements obtained by microwave sensors, and radiometry techniques are described. Analytical techniques applied to detailed sharing analyses are discussed. A bibliography of publications pertinent to the scientific justification of frequency requirements for passive microwave remote sensing is included.
NASA Astrophysics Data System (ADS)
Lin, Zhongmin S.; Avinash, Gopal; McMillan, Kathryn; Yan, Litao; Minoshima, Satoshi
2014-03-01
Cortical thinning and metabolic reduction are possible imaging biomarkers for Alzheimer's disease (AD) diagnosis and monitoring. Many techniques have been developed for cortical measurement and are widely used in clinical statistical studies. However, the measurement consistency for individuals, an essential requirement for a clinically useful technique, requires further investigation. Here we leverage our previously developed BSIM technique [1] to measure cortical thickness and thinning and use it with longitudinal MRI from ADNI to investigate measurement consistency and spatial resolution. 10 normal, 10 MCI, and 10 AD subjects in their 70s were selected for the study. Consistent cortical thinning patterns were observed in all baseline and follow-up images. Rapid cortical thinning was shown in some MCI and AD cases. To evaluate the correctness of the cortical measurement, we compared longitudinal cortical thinning with clinical diagnosis and with longitudinal PET metabolic reduction measured using the 3D-SSP technique [2] for the same person. Longitudinal MR cortical thinning and corresponding PET metabolic reduction showed a high level of pattern similarity, revealing correlations worthy of further study. Severe cortical thinning that might be linked to disease conversion from MCI to AD was observed in two cases. In summary, our results suggest that consistent cortical measurements using our technique may provide a means for clinical diagnosis and monitoring at the individual patient level, and MR cortical thinning measurement can complement PET metabolic reduction measurement.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is greater than for the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations.
Optical Measurement Technique for Space Column Characterization
NASA Technical Reports Server (NTRS)
Barrows, Danny A.; Watson, Judith J.; Burner, Alpheus W.; Phelps, James E.
2004-01-01
A simple optical technique for the structural characterization of lightweight space columns is presented. The technique is useful for determining the coefficient of thermal expansion during cool down as well as the induced strain during tension and compression testing. The technique is based upon object-to-image plane scaling and does not require any photogrammetric calibrations or computations. Examples of the measurement of the coefficient of thermal expansion are presented for several lightweight space columns. Examples of strain measured during tension and compression testing are presented along with comparisons to results obtained with Linear Variable Differential Transformer (LVDT) position transducers.
Herler, Jürgen; Dirnwöber, Markus
2011-10-31
Estimating the impacts of global and local threats on coral reefs requires monitoring reef health and measuring coral growth and calcification rates at different time scales. This has traditionally been performed mostly in short-term experimental studies in which coral fragments were grown in the laboratory or in the field but measured ex situ. Practical techniques in which corals both grow and are measured over the long term in situ are rare. Apart from photographic approaches, weight increment measurements have also been applied. Past buoyant weight measurements under water involved a complicated and little-used apparatus. We introduce a new method that combines previous field and laboratory techniques to measure the buoyant weight of entire, transplanted corals under water. This method uses an electronic balance fitted into an acrylic glass underwater housing and placed atop an acrylic glass cube. Within this cube, corals transplanted onto artificial bases can be attached to the balance and weighed at predetermined intervals while they continue to grow in the field. We also provide a set of simple equations for the volume and weight determinations required to calculate net growth rates. The new technique is highly accurate: low error of weight determination due to variation of coral density (< 0.08%) and low standard error (< 0.01%) for repeated measurements of the same corals. We outline a transplantation technique for properly preparing corals for such long-term in situ experiments and measurements.
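The buoyant-weight arithmetic follows the standard Archimedes relation used in coral growth studies. The sketch below is illustrative only; the default densities (seawater, aragonite skeleton) are typical literature numbers, not the calibration values used by the authors:

```python
def dry_weight_from_buoyant(w_buoyant, rho_water=1.025, rho_skeleton=2.93):
    """Convert an in-situ buoyant weight to dry (in-air) skeletal weight:
    W_dry = W_buoyant / (1 - rho_water / rho_skeleton).
    Densities in g/cm^3: seawater ~1.025, aragonite ~2.93."""
    return w_buoyant / (1.0 - rho_water / rho_skeleton)

def net_growth_rate(w_dry_start, w_dry_end, days):
    """Net calcification expressed as percent dry-weight gain per day."""
    return 100.0 * (w_dry_end - w_dry_start) / (w_dry_start * days)
```

The low sensitivity to skeletal density variation quoted in the abstract (< 0.08%) follows from the density appearing only inside the (1 - rho_water/rho_skeleton) correction factor.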
NASA Astrophysics Data System (ADS)
Bleck, W.; Larour, P.
2003-09-01
Crash behaviour and light weight have become the major design criteria for car bodies. Modern high strength steels offer appropriate solutions for these requirements. The prediction of crash behaviour in simulation programs requires information on material behaviour during dynamic testing. The reduction of signal waviness and inertia effects at strain rates above 50 s^{-1} are major issues in dynamic tensile testing. Damping techniques or load measurement on the sample itself are the common ways to reduce oscillations. Strain measurement from the piston displacement and from optical devices on the specimen itself are also compared. Advantages and drawbacks of these various measurement techniques are presented.
STRATEGIES FOR QUANTIFYING PET IMAGING DATA FROM TRACER STUDIES OF BRAIN RECEPTORS AND ENZYMES.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, J.
2001-04-02
A description of some of the methods used in neuroreceptor imaging to distinguish changes in receptor availability has been presented in this chapter. It is necessary to look beyond regional uptake of the tracer, since uptake generally is affected by factors other than the number of receptors for which the tracer has affinity. An exception is the infusion method producing an equilibrium state. The techniques vary in complexity, with some requiring arterial blood measurements of unmetabolized tracer and multiple-time uptake data. Others require only a few plasma and uptake measurements, and those based on a reference region require no plasma measurements. We have outlined some of the limitations of the different methods. Laruelle (1999) has pointed out that test/retest studies, to which various methods can be applied, are crucial in determining the optimal method for a particular study. The choice of method will also depend upon the application. In a clinical setting, methods not involving arterial blood sampling are generally preferred. In the future, techniques for externally measuring arterial plasma radioactivity with only a few blood samples for metabolite correction will extend the modeling options of clinical PET. Also, since parametric images can provide information beyond that of ROI analysis, improved techniques for generating such images will be important, particularly for ligands requiring more than a one-compartment model. Techniques such as the wavelet transform proposed by Turkheimer et al. (2000) may prove to be important in reducing noise and improving quantitation.
Photogrammetric techniques for aerospace applications
NASA Astrophysics Data System (ADS)
Liu, Tianshu; Burner, Alpheus W.; Jones, Thomas W.; Barrows, Danny A.
2012-10-01
Photogrammetric techniques have been used for measuring the important physical quantities in both ground and flight testing including aeroelastic deformation, attitude, position, shape and dynamics of objects such as wind tunnel models, flight vehicles, rotating blades and large space structures. The distinct advantage of photogrammetric measurement is that it is a non-contact, global measurement technique. Although the general principles of photogrammetry are well known particularly in topographic and aerial survey, photogrammetric techniques require special adaptation for aerospace applications. This review provides a comprehensive and systematic summary of photogrammetric techniques for aerospace applications based on diverse sources. It is useful mainly for aerospace engineers who want to use photogrammetric techniques, but it also gives a general introduction for photogrammetrists and computer vision scientists to new applications.
Laser Doppler velocimetry primer
NASA Technical Reports Server (NTRS)
Bachalo, William D.
1985-01-01
Advanced research in experimental fluid dynamics requires familiarity with sophisticated measurement techniques. In some cases, the development and application of new techniques is required for difficult measurements. Optical methods, and in particular the laser Doppler velocimeter (LDV), are now recognized as the most reliable means for performing measurements in complex turbulent flows. As such, the experimental fluid dynamicist should be familiar with the principles of operation of the method and the details associated with its application. Thus, the goals of this primer are to efficiently transmit the basic concepts of the LDV method to potential users and to provide references that describe the specific areas in greater detail.
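The core dual-beam LDV relations lend themselves to a short sketch: fringe spacing d_f = lambda / (2 sin(theta/2)) and velocity U = f_D * d_f. The wavelength and beam crossing angle below are arbitrary example values, not tied to any particular instrument:

```python
import math

def fringe_spacing(wavelength_m, beam_angle_rad):
    """Fringe spacing of a dual-beam LDV probe volume:
    d_f = lambda / (2 * sin(theta / 2)), theta = full crossing angle."""
    return wavelength_m / (2.0 * math.sin(beam_angle_rad / 2.0))

def velocity(doppler_freq_hz, wavelength_m, beam_angle_rad):
    """Velocity component normal to the fringes: U = f_D * d_f."""
    return doppler_freq_hz * fringe_spacing(wavelength_m, beam_angle_rad)
```

Note the calibration is purely geometric (wavelength and beam angle), which is one reason the LDV is regarded as a reliable absolute velocity measurement.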
The micrometeorological flux measurement technique known as relaxed eddy accumulation (REA) holds promise as a powerful new tool for ecologists. The more popular eddy covariance (eddy correlation) technique requires the use of sensors that can respond at fast rates (10 Hz), and t...
Pen Ink as an Ultraviolet Dosimeter
ERIC Educational Resources Information Center
Downs, Nathan; Turner, Joanna; Parisi, Alfio; Spence, Jenny
2008-01-01
A technique for using highlighter ink as an ultraviolet dosimeter has been developed for use by secondary school students. The technique requires the students to measure the percentage of colour fading in ink drawn onto strips of paper that have been exposed to sunlight, which can be calibrated to measurements of the ultraviolet irradiance using…
Calibration Experiments for a Computer Vision Oyster Volume Estimation System
ERIC Educational Resources Information Center
Chang, G. Andy; Kerns, G. Jay; Lee, D. J.; Stanek, Gary L.
2009-01-01
Calibration is a technique commonly used in science and engineering research to adjust measurement tools so that they yield more accurate measurements. It is an important technique in various industries. In many situations, calibration is an application of linear regression, and is a good topic to be included when explaining and…
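As a minimal illustration of calibration as linear regression, the sketch below fits a line to instrument readings of known standards and then inverts it to correct a new reading; the data values are invented for the example:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a + b * x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def calibrated_estimate(y_new, a, b):
    """Inverse prediction: recover the true quantity from a raw reading."""
    return (y_new - a) / b
```

Here x would be the known standard values (e.g. reference volumes) and y the instrument's raw readings; the inverse prediction step is what turns the fitted line into a calibration.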
Industrial metrology as applied to large physics experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veal, D.
1993-05-01
A physics experiment is a large complex 3-D object (typ. 1200 m^3, 35000 tonnes), with sub-millimetric alignment requirements. Two generic survey alignment tasks can be identified: first, an iterative positioning of the apparatus subsystems in space and, second, a quantification of as-built parameters. The most convenient measurement technique is industrial triangulation, but the complexity of the measured object and measurement environment constraints frequently require a more sophisticated approach. To enlarge the "survey alignment toolbox", measurement techniques commonly associated with other disciplines such as geodesy, applied geodesy for accelerator alignment, and mechanical engineering are also used. Disparate observables require a heavy reliance on least squares programs for campaign pre-analysis and calculation. This paper will offer an introduction to the alignment of physics experiments and will identify trends for the next generation of SSC experiments.
Prediction of quantitative intrathoracic fluid volume to diagnose pulmonary oedema using LabVIEW.
Urooj, Shabana; Khan, M; Ansari, A Q; Lay-Ekuakille, Aimé; Salhan, Ashok K
2012-01-01
Pulmonary oedema is a life-threatening disease that requires special attention in the area of research and clinical diagnosis. Computer-based techniques are rarely used to quantify the intrathoracic fluid volume (IFV) for diagnostic purposes. This paper discusses a software program developed to detect and diagnose pulmonary oedema using LabVIEW. The software runs on anthropometric dimensions and physiological parameters, mainly transthoracic electrical impedance (TEI). This technique is accurate and faster than existing manual techniques. The LabVIEW software was used to compute the parameters required to quantify IFV. An equation relating per cent control and IFV was obtained. The results of predicted TEI and measured TEI were compared with previously reported data to validate the developed program. It was found that the predicted values of TEI obtained from the computer-based technique were much closer to the measured values of TEI. Six new subjects were enrolled to measure and predict transthoracic impedance and hence to quantify IFV. A similar difference was also observed in the measured and predicted values of TEI for the new subjects.
Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements
NASA Astrophysics Data System (ADS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron
2015-10-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, where it is shown to be useful for making tree canopy measurements.
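The sidelobe-suppression property that motivates BPSK modulation can be seen in the cyclic autocorrelation of a short pseudo-noise code. The length-7 m-sequence below is a standard textbook example, far shorter than anything a real lidar would transmit, and is not the specific code used in the ASCENDS work:

```python
import numpy as np

# A length-7 maximal-length (PN) code, chips mapped to +/-1 for BPSK.
PN7 = np.array([1, 1, 1, -1, 1, -1, -1])

def cyclic_autocorrelation(code):
    """Cyclic autocorrelation at all lags. For an m-sequence the peak is N
    at zero lag and every sidelobe is exactly -1, which is why matched
    filtering against such a code yields a clean, low-sidelobe range profile."""
    n = len(code)
    return np.array([np.sum(code * np.roll(code, -k)) for k in range(n)])
```

It is this flat, low sidelobe floor that lets the surface return be separated from intermediate cloud and aerosol returns without sidelobe bias corrections.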
Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS O2 Column Measurements
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron
2015-01-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, where it is shown to be useful for making tree canopy measurements.
Hopkins, Susan R; Prisk, G Kim
2010-12-01
Since the lung receives the entire cardiac output, sophisticated imaging techniques are not required in order to measure total organ perfusion. However, for many years studying lung function has required physiologists to consider the lung as a single entity: in imaging terms, as a single voxel. Since imaging, and in particular functional imaging, allows the acquisition of spatial information important for studying lung function, these techniques provide considerable promise and are of great interest to pulmonary physiologists. In particular, despite the challenges of low proton density and short T2* in the lung, noncontrast MRI techniques to measure pulmonary perfusion have several advantages, including high reliability and the ability to make repeated measurements under a number of physiologic conditions. This brief review focuses on the application of a particular arterial spin labeling (ASL) technique, ASL-FAIRER (flow-sensitive alternating inversion recovery with an extra radiofrequency pulse), to answer physiologic questions related to pulmonary function in health and disease. The associated measurement of regional proton density to correct for gravitational-based lung deformation (the "Slinky" effect; Slinky is a registered trademark of Poof-Slinky, Inc.) and issues related to absolute quantification are also discussed.
Experimental Techniques for Thermodynamic Measurements of Ceramics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra
1999-01-01
Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.
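Of the partial molar techniques listed, the Knudsen cell method reduces to a simple effusion relation for the equilibrium vapor pressure. The sketch below assumes an ideal orifice (Clausing transmission factor of unity) and uses invented example inputs:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def knudsen_pressure(mass_loss_kg, orifice_area_m2, time_s, temp_k, molar_mass_kg):
    """Equilibrium vapor pressure (Pa) from Knudsen effusion:
    p = (dm / (A * t)) * sqrt(2 * pi * R * T / M),
    where dm is the mass effused through orifice area A in time t."""
    rate = mass_loss_kg / (orifice_area_m2 * time_s)   # effusion rate, kg/(m^2 s)
    return rate * math.sqrt(2.0 * math.pi * R * temp_k / molar_mass_kg)
```

A real measurement would also apply the orifice Clausing factor and, for ceramics, account for multiple vapor species, which is part of what makes these measurements challenging at high temperature.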
Parkison, Adam J.; Nelson, Andrew Thomas
2016-01-11
An analytical technique is presented with the goal of measuring reaction kinetics during steam oxidation reactions for three cases in which obtaining kinetics information often requires a prohibitive amount of time and cost. The technique presented relies on coupling thermogravimetric analysis (TGA) with a quantitative hydrogen measurement technique using quadrupole mass spectrometry (QMS). The first case considered is differentiating between the kinetics of steam oxidation reactions and those of simultaneously reacting gaseous impurities such as nitrogen or oxygen. The second case allows one to independently measure the kinetics of oxide and hydride formation for systems in which both of these reactions are known to take place during steam oxidation. The third case deals with measuring the kinetics of formation of competing volatile and non-volatile oxides during certain steam oxidation reactions. In order to meet the requirements of the coupled technique, a methodology is presented which attempts to provide quantitative measurement of hydrogen generation using QMS in the presence of an interfering fragmentation species, namely water vapor. This is achieved such that all calibrations and corrections are performed during the TGA baseline and steam oxidation programs, making system operation virtually identical to standard TGA. Benchmarking results showed a relative error in hydrogen measurement of 5.7–8.4% following the application of a correction factor. Lastly, suggestions are made for possible improvements to the presented technique so that it may be better applied to the three cases presented.
NASA Technical Reports Server (NTRS)
Faller, K. H.
1976-01-01
A technique for the detection and measurement of surface feature interfaces in remotely acquired data was developed and evaluated. A computer implementation of this technique was effected to automatically process classified data derived from various sources such as the LANDSAT multispectral scanner and other scanning sensors. The basic elements of the operational theory of the technique are described, followed by the details of the procedure. An example of an application of the technique to the analysis of tidal shoreline length is given with a breakdown of manpower requirements.
Advanced IMCW Lidar Techniques for ASCENDS CO2 Column Measurements
NASA Astrophysics Data System (ADS)
Campbell, Joel; lin, bing; nehrir, amin; harrison, fenton; obland, michael
2015-04-01
Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation.
Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements
NASA Astrophysics Data System (ADS)
Campbell, J. F.; Lin, B.; Nehrir, A. R.; Obland, M. D.; Liu, Z.; Browell, E. V.; Chen, S.; Kooi, S. A.; Fan, T. F.
2015-12-01
Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and Atmospheric Carbon and Transport (ACT) - America airborne investigation are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are being investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the mission science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of intervening optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the Earth's surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These techniques are used in a new data processing architecture to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.
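The sub-meter ranging claim rests on estimating the correlation peak to a small fraction of a range bin. The sketch below uses a three-point parabolic fit as a stand-in for the hyperfine interpolation described above, whose exact formulation is not given in the abstract; the correlation shape and bin size are illustrative assumptions.

```python
import numpy as np

def subsample_peak(corr):
    """Refine a discrete correlation peak to sub-bin precision with a
    three-point parabolic fit (a stand-in for the hyperfine interpolation
    described in the abstract, whose exact form is not reproduced here)."""
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset in bins
    return k + delta

# Simulated return: a Gaussian-shaped correlation peak centered between bins.
bins = np.arange(100)
true_delay = 42.3          # bins; the sub-bin part is what interpolation recovers
corr = np.exp(-0.5 * ((bins - true_delay) / 3.0) ** 2)
est = subsample_peak(corr)
print(f"true={true_delay}, estimated={est:.3f}")
```

With, say, a 1 m range bin, sub-bin estimation of this kind is what makes sub-meter ranging precision plausible.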
Statistical photocalibration of photodetectors for radiometry without calibrated light sources
NASA Astrophysics Data System (ADS)
Yielding, Nicholas J.; Cain, Stephen C.; Seal, Michael D.
2018-01-01
Calibration of CCD arrays for identifying bad pixels and achieving nonuniformity correction is commonly accomplished using dark frames. This kind of calibration technique does not achieve radiometric calibration of the array since only the relative response of the detectors is computed. For this, a second calibration is sometimes utilized by looking at sources with known radiances. This process can be used to calibrate photodetectors as long as a calibration source is available and is well-characterized. A previous attempt at creating a procedure for calibrating a photodetector using the underlying Poisson nature of photodetection required calculation of the skewness of the photodetector measurements. Reliance on the third moment of the measurements meant that thousands of samples would be required in some cases to compute that moment. A photocalibration procedure is defined that requires only the first and second moments of the measurements. The technique is applied to image data containing a known light source so that the accuracy of the technique can be assessed. It is shown that the algorithm can achieve accuracy of nearly 2.7% of the predicted number of photons using only 100 frames of image data.
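The moment relation behind such a procedure can be sketched as follows. This illustrates the general idea under an assumed signal model x = g·N + b with Poisson N, not the paper's exact algorithm; the gain, offset, and photon numbers are made up.

```python
import numpy as np

# Moment-based photocalibration sketch: each pixel reports x = g*N + b, where
# N is a Poisson photoelectron count, g the unknown gain, and b a dark offset
# measurable from dark frames. Then var(x) = g^2 * lam and mean(x) - b =
# g * lam, so g = var(x) / (mean(x) - b): only the first two moments are
# needed, unlike earlier skewness-based methods.
rng = np.random.default_rng(0)
g_true, b, lam = 2.5, 100.0, 400.0
# 100 frames of a 1000-pixel detector (the abstract's 100-frame regime)
frames = g_true * rng.poisson(lam, size=(100, 1000)) + b

mean_px = frames.mean(axis=0)            # temporal mean per pixel
var_px = frames.var(axis=0, ddof=1)      # temporal variance per pixel
g_est = np.mean(var_px / (mean_px - b))  # gain averaged over pixels
photons_est = np.mean(mean_px - b) / g_est
print(f"gain estimate: {g_est:.3f} (true {g_true})")
print(f"photon estimate: {photons_est:.1f} (true {lam})")
```

Averaging the per-pixel ratio over many pixels is what keeps the variance estimate usable with only 100 frames.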
Cavity mode-width spectroscopy with widely tunable ultra narrow laser.
Cygan, Agata; Lisak, Daniel; Morzyński, Piotr; Bober, Marcin; Zawada, Michał; Pazderski, Eugeniusz; Ciuryło, Roman
2013-12-02
We explore a cavity-enhanced spectroscopic technique based on determination of the absorption coefficient from direct measurement of the spectral width of a mode of an optical cavity filled with absorbing medium. This technique, called here cavity mode-width spectroscopy (CMWS), is complementary to cavity ring-down spectroscopy (CRDS). While both techniques use information on the interaction time of light with the cavity to determine the absorption coefficient, CMWS does not require the measurement of very fast signals under high-absorption conditions. Instead, the CMWS method requires a very narrow-linewidth laser with precise frequency control. As an example, the spectral line shape of the P7 Q6 O₂ line from the B band was measured using an ultra-narrow laser system based on two phase-locked external cavity diode lasers (ECDL) with a tunability of ±20 GHz in the wavelength range of 687 to 693 nm.
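The CMWS relation can be sketched directly from the CRDS analogy: with photon lifetime τ = 1/(2πΔν), the CRDS expression α = (1/c)(1/τ − 1/τ₀) becomes α = (2π/c)(Δν − Δν₀). The mode widths below are illustrative assumptions, not measured values.

```python
import math

c = 2.99792458e10            # speed of light, cm/s

def alpha_from_mode_width(dnu_hz, dnu0_hz):
    """Absorption coefficient (cm^-1) from cavity mode FWHMs measured
    with (dnu_hz) and without (dnu0_hz) the absorber:
    alpha = (2*pi/c) * (dnu - dnu0)."""
    return 2.0 * math.pi * (dnu_hz - dnu0_hz) / c

dnu0 = 10e3                  # empty-cavity mode width, Hz (illustrative)
dnu = 12e3                   # mode width with absorbing gas, Hz (illustrative)
print(f"alpha = {alpha_from_mode_width(dnu, dnu0):.3e} cm^-1")
```

A 2 kHz broadening thus maps to an absorption coefficient of a few 10⁻⁷ cm⁻¹, which is why a narrow-linewidth, frequency-controlled laser is the enabling requirement.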
Three axis vector atomic magnetometer utilizing polarimetric technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, Swarupananda, E-mail: spradhan@barc.gov.in, E-mail: pradhans75@gmail.com
2016-09-15
The three axis vector magnetic field measurement based on the interaction of a single elliptically polarized light beam with an atomic system is described. The magnetic field direction dependent atomic responses are extracted by polarimetric detection in combination with laser frequency modulation and magnetic field modulation techniques. The magnetometer geometry meets additional critical requirements, such as compact size and large dynamic range, for space applications. Further, the three axis magnetic field is measured using only the reflected signal (one polarization component) from the polarimeter and thus can be easily expanded to make a spatial array of detectors and/or high-sensitivity field gradient measurements as required for biomedical applications.
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Ismail, Syed
2014-01-01
Global atmospheric carbon dioxide (CO2) measurements through the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) Decadal Survey recommended space mission are critical for improving our understanding of CO2 sources and sinks. IM-CW (Intensity Modulated Continuous Wave) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS science requirements. In previous laboratory and flight experiments we have successfully used linear swept frequency modulation to discriminate surface lidar returns from intermediate aerosol and cloud contamination. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate clouds, which is a requirement for the inversion of the CO2 column-mixing ratio from the instrument optical depth measurements, has been demonstrated with the linear swept frequency modulation technique. We are concurrently investigating advanced techniques to help improve the auto-correlation properties of the transmitted waveform implemented through physical hardware to make cloud rejection more robust in special restricted scenarios. Several different carrier based modulation techniques are compared including orthogonal linear swept, orthogonal non-linear swept, and Binary Phase Shift Keying (BPSK). Techniques are investigated that reduce or eliminate sidelobes. These techniques have excellent auto-correlation properties while possessing a finite bandwidth (by way of a new cyclic digital filter), which will reduce bias error in the presence of multiple scatterers. Our analyses show that the studied modulation techniques can increase the accuracy of CO2 column measurements from space. A comparison of various properties such as signal to noise ratio (SNR) and time-bandwidth product are discussed.
NASA Astrophysics Data System (ADS)
Escalona, Luis; Díaz-Montiel, Paulina; Venkataraman, Satchi
2016-04-01
Laminated carbon fiber reinforced polymer (CFRP) composite materials are increasingly used in aerospace structures due to their superior mechanical properties and reduced weight. Assessing the health and integrity of these structures requires non-destructive evaluation (NDE) techniques to detect and measure interlaminar delamination and intralaminar matrix cracking damage. The electrical resistance change (ERC) based NDE technique uses the inherent changes in the conductive properties of the composite to characterize internal damage. Previous works exploring the ERC technique have been limited to thin cross-ply laminates with simple linear or circular electrode arrangements. This paper investigates a method for optimum selection of electrode configurations for delamination detection in thick cross-ply laminates using ERC. Inverse identification of damage requires numerical optimization of the measured response against a model-predicted response. Here, the electrical voltage field in the CFRP composite laminate is calculated using finite element analysis (FEA) models for different specified delamination sizes and locations, and locations of ground and current electrodes. Reducing the number of sensor locations and measurements is needed to lower hardware requirements and the computational effort of inverse identification. This paper explores the use of the effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used to select the minimum set of resistance measurements among all possible pairings of the n electrodes. To enable the use of EI for ERC, this research proposes a singular value decomposition (SVD) to obtain a spectral representation of the resistance measurements in the laminate.
The effectiveness of EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set of resistance measurements and the reduced set of measurements. The investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERC based damage detection.
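A minimal sketch of EI-based measurement selection, with a generic random sensitivity matrix standing in for the paper's FEA-derived resistance sensitivities: the leverage of each candidate row, computed from the left singular vectors, serves as the EI score, and the lowest-scoring candidate is deleted at each step.

```python
import numpy as np

def effective_independence_ranking(S):
    """Rank candidate measurements (rows of sensitivity matrix S) by the
    effective independence (EI) measure: iteratively delete the row that
    contributes least to the independence of the retained set."""
    rows = list(range(S.shape[0]))
    removal_order = []
    while len(rows) > S.shape[1]:        # keep at least rank-many rows
        A = S[rows]
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        ed = np.sum(U**2, axis=1)        # leverage (EI score) of each row
        worst = int(np.argmin(ed))
        removal_order.append(rows.pop(worst))
    return rows, removal_order

rng = np.random.default_rng(3)
S = rng.normal(size=(20, 4))             # e.g. 20 electrode pairs, 4 damage params
kept, dropped = effective_independence_ranking(S)
print(f"retained measurements: {kept}")
```

In the ERC setting each row would be the sensitivity of one electrode-pair resistance to the damage parameters; the retained rows are the non-redundant measurements.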
NASA Technical Reports Server (NTRS)
Burner, Alpheus W.; Lokos, William A.; Barrows, Danny A.
2005-01-01
The adaptation of a proven wind tunnel test technique, known as Videogrammetry, to flight testing of full-scale vehicles is presented. A description is presented of the technique used at NASA's Dryden Flight Research Center for the measurement of the change in wing twist and deflection of an F/A-18 research aircraft as a function of both time and aerodynamic load. Requirements for in-flight measurements are compared and contrasted with those for wind tunnel testing. The methodology for the flight-testing technique and differences compared to wind tunnel testing are given. Measurement and operational comparisons to an older in-flight system known as the Flight Deflection Measurement System (FDMS) are presented.
NASA Technical Reports Server (NTRS)
Santavicca, Dom A.; Coy, E.
1990-01-01
Droplet-turbulence interactions directly affect the vaporization and dispersion of droplets in liquid sprays and therefore play a major role in fuel-oxidizer mixing in liquid-fueled combustion systems. Proper characterization of droplet-turbulence interactions in vaporizing sprays requires measurement of droplet size-velocity and size-temperature correlations. A planar fluorescence imaging technique is described which is being developed for simultaneously measuring the size, velocity, and temperature of individual droplets in vaporizing sprays. Preliminary droplet size-velocity correlation measurements made with this technique are presented. These measurements are also compared to, and show very good agreement with, measurements made in the same spray using a phase Doppler particle analyzer.
Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources
NASA Astrophysics Data System (ADS)
Malin, Martha J.
Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been previously published, but the cause of the discrepancy was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP), and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built. 
The theoretical framework that the calorimeter relied upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument. The accuracy of the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.
NASA Technical Reports Server (NTRS)
Lesco, Daniel J.
1991-01-01
The applied research effort required to develop new nonintrusive measurement techniques capable of obtaining the data required by aerospace propulsion researchers and of operating in the harsh environments encountered in research and test facilities is discussed and illustrated through several ongoing projects at NASA's Lewis Research Center. Factors including length of development time, funding levels, and collaborative support from fluid-thermal researchers are cited. Progress in developing new instrumentation via a multi-path approach, including NASA research, grant, and government-sponsored research through mechanisms like the Small Business Innovative Research program, is also described.
A technique for fast and accurate measurement of hand volumes using Archimedes' principle.
Hughes, S; Lau, J
2008-03-01
A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques respectively. This study shows that the immersion method gives hand volumes more quickly than the overflow method. The technique could find an application in clinics where frequent hand volumes are required.
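The immersion calculation itself is one line: the balance registers the buoyant force, so volume is the weight change divided by the water density. A sketch with illustrative numbers (the density value is an assumption for room-temperature water):

```python
def hand_volume_ml(weight_change_g, water_density_g_per_ml=0.998):
    """Volume from Archimedes' principle: V = delta_m / rho_water.
    The default density is an assumed room-temperature value."""
    return weight_change_g / water_density_g_per_ml

# e.g. a 493 g increase in balance reading corresponds to ~494 ml
print(f"{hand_volume_ml(493.0):.0f} ml")
```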
Measurement Techniques and Instruments Suitable for Life-prediction Testing of Photovoltaic Arrays
NASA Technical Reports Server (NTRS)
Noel, G. T.; Wood, V. E.; Mcginniss, V. D.; Hassell, J. A.; Richard, N. A.; Gaines, G. B.; Carmichael, D. C.
1979-01-01
The validation of a 20-year service life for low-cost photovoltaic arrays is a critical requirement in the Low-Cost Solar Array (LSA) Project. The validation is accomplished through accelerated life-prediction tests. A two-phase study was conducted to address these needs before such tests are carried out. The results and recommended techniques from the Phase 1 investigation are summarized in the appendix. Phase 2 of the study is covered in this report and consisted of experimental evaluations of three techniques selected from those recommended as a result of the Phase 1 findings. The three techniques evaluated were specular and nonspecular optical reflectometry, chemiluminescence measurements, and electric current noise measurements.
Techniques for determining total body water using deuterium oxide
NASA Technical Reports Server (NTRS)
Bishop, Phillip A.
1990-01-01
The measurement of total body water (TBW) is fundamental to the study of body fluid changes consequent to microgravity exposure or treatment with microgravity countermeasures. Often, the use of radioactive isotopes is prohibited for safety or other reasons. These requirements resulted in the selection of deuterium oxide dilution as the method of choice for TBW measurement. It was selected and implemented for use by some Johnson Space Center (JSC) laboratories, permitting serial measurements over a 14-day period with accuracy sufficient to serve as a criterion method for validating new techniques. The development of this technique at JSC is reviewed. The recommended dosage, body fluid sampling techniques, and deuterium assay options are described.
Enhanced sensitivity for optical loss measurement in planar thin-films (Conference Presentation)
NASA Astrophysics Data System (ADS)
Yuan, Hua-Kang
2016-09-01
An organic-inorganic hybrid material benefits from the processing advantages of organics and the high refractive indices of inorganics. We focus on a titanium oxide hydrate system combined with common bulk polymers. In particular, we target thin-film structures of a few microns in thickness. Traditional Beer-Lambert approaches for measuring optical losses can only provide an upper limit estimate. This sensitivity is highly limited when considering the low losses required for mid-range optical applications, on the order of 0.1 cm⁻¹. For intensity based measurements, improving the sensitivity requires an increase in the optical path length. Instead, a new sensitive technique suitable for simple planar thin films is required. A number of systems were modelled to measure optical losses in films 1 micron thick. The presented techniques utilise evanescent waves and total internal reflection to increase the optical path length through the material. It was found that a new way of using prism coupling provides the greatest improvement in sensitivity. In keeping the requirements on the material simple, this method for measuring loss is well suited to any future developments of new materials in thin-film structures.
Liba, Amir; Wanagat, Jonathan
2014-11-01
Complex diseases such as heart disease, stroke, cancer, and aging are the primary causes of death in the US. These diseases cause heterogeneous conditions among cells, conditions that cannot be measured in tissue homogenates and require single cell approaches. Understanding protein levels within tissues is currently assayed using various molecular biology techniques (e.g., Western blots) that rely on milligram to gram quantities of tissue homogenates or immunofluorescent (IF) techniques that are limited by spectral overlap. Tissue homogenate studies lack references to tissue structure and mask signals from individual or rare cellular events. Novel techniques are required to bring protein measurement sensitivity to the single cell level and offer spatiotemporal resolution and scalability. We are developing a novel approach to protein quantification by exploiting the inherently low concentration of rare earth elements (REE) in biological systems. By coupling REE-antibody immunolabeling of cells with laser capture microdissection (LCM) and ICP-QQQ, we are achieving multiplexed protein measurement in histological sections of single cells. This approach will add to evolving single cell techniques and our ability to understand cellular heterogeneity in complex biological systems and diseases.
NACA Conference on Aerodynamic Problems of Transonic Airplane Design
NASA Technical Reports Server (NTRS)
1949-01-01
During the past several years it has been necessary for aeronautical research workers to exert a good portion of their effort in developing the means for conducting research in the high-speed range. The transonic range particularly has presented a very acute problem because of the choking phenomena in wind tunnels at speeds close to the speed of sound. At the same time, the multiplicity of design problems for aircraft introduced by the peculiar flow problems of the transonic speed range has given rise to an enormous demand for detail design data. Substantial progress has been made, however, in developing the required research techniques and in supplying the demand for aerodynamic data required for design purposes. In meeting this demand, it has been necessary to resort to new techniques possessing such novel features that the results obtained have had to be viewed with caution. Furthermore, the kinds of measurements possible with these various techniques are so varied that the correlation of results obtained by different techniques generally becomes an indirect process that can only be accomplished in conjunction with the application of estimates of the extent to which the results of measurements by any given technique are modified by differences that are inherent in the techniques. Thus, in the establishment of the validity and applicability of data obtained by any given technique, direct comparisons between data from different sources are a supplement to but not a substitute for the detailed knowledge required of the characteristics of each technique and fundamental aerodynamic flow phenomena.
Laser Doppler flowmetry for measurement of laminar capillary blood flow in the horse
NASA Astrophysics Data System (ADS)
Adair, Henry S., III
1998-07-01
Current methods for in vivo evaluation of digital hemodynamics in the horse include angiography, scintigraphy, Doppler ultrasound, electromagnetic flow and isolated extracorporeal pump-perfused digit preparations. These techniques are either non-quantifiable, do not allow for continuous measurement, require destruction of the horse, or are invasive, inducing non-physiologic variables. In vitro techniques have also been reported for the evaluation of the effects of vasoactive agents on the digital vessels. The in vitro techniques are non-physiologic and have evaluated the vasculature proximal to the coronary band. Lastly, many of these techniques require general anesthesia or euthanasia of the animal. Laser Doppler flowmetry is a non-invasive, continuous measure of capillary blood flow. Laser Doppler flowmetry has been used to measure capillary blood flow in many tissues. The principle of this method is to measure the Doppler shift, that is, the frequency change that light undergoes when reflected by moving objects, such as red blood cells. Laser Doppler flowmetry records a continuous measurement of the red cell motion in the outer layer of the tissue under study, with little or no influence on physiologic blood flow. This output value constitutes the flux of red cells and is reported in capillary perfusion units. No direct information concerning oxygen, nutrient or waste metabolite exchange in the surrounding tissue is obtained. The relationship between the flowmeter output signal and the flux of red blood cells is linear. The principles of laser Doppler flowmetry will be discussed and the technique for laminar capillary blood flow measurements will be presented.
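The magnitude of the underlying frequency shift can be sketched from the standard backscatter relation Δf ≈ 2nv·cos(θ)/λ. The wavelength, tissue refractive index, and red-cell speed below are illustrative assumptions, not parameters of the instrument described above.

```python
import math

def doppler_shift_hz(v_mm_s, lam_nm=780.0, n_tissue=1.4, theta_deg=0.0):
    """Approximate Doppler shift for light of wavelength lam_nm backscattered
    from a red cell moving at v_mm_s, with theta the angle between the cell
    velocity and the optical axis. All defaults are illustrative assumptions."""
    v = v_mm_s * 1e-3                      # m/s
    lam = lam_nm * 1e-9                    # m
    return 2.0 * n_tissue * v * math.cos(math.radians(theta_deg)) / lam

# A capillary red-cell speed of ~1 mm/s gives a shift of only a few kHz,
# which is why the flowmeter detects it as a beat against unshifted light.
print(f"{doppler_shift_hz(1.0):.0f} Hz")
```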
Coronal Axis Measurement of the Optic Nerve Sheath Diameter Using a Linear Transducer.
Amini, Richard; Stolz, Lori A; Patanwala, Asad E; Adhikari, Srikar
2015-09-01
The true optic nerve sheath diameter cutoff value for detecting elevated intracranial pressure is variable. The variability may stem from the technique used to acquire sonographic measurements of the optic nerve sheath diameter as well as sonographic artifacts inherent to the technique. The purpose of this study was to compare the traditional visual axis technique to an infraorbital coronal axis technique for assessing the optic nerve sheath diameter using a high-frequency linear array transducer. We conducted a cross-sectional study at an academic medical center. Timed optic nerve sheath diameter measurements were obtained on both eyes of healthy adult volunteers with a 10-5-MHz broadband linear array transducer using both traditional visual axis and coronal axis techniques. Optic nerve sheath diameter measurements were obtained by 2 sonologists who graded the difficulty of each technique and were blinded to each other's measurements for each participant. A total of 42 volunteers were enrolled, yielding 84 optic nerve sheath diameter measurements. There were no significant differences in the measurements between the techniques on either eye (P = .23 [right]; P = .99 [left]). Additionally, there was no difference in the degree of difficulty obtaining the measurements between the techniques (P = .16). There was a statistically significant difference in the time required to obtain the measurements between the traditional and coronal techniques (P < .05). Infraorbital coronal axis measurements are similar to measurements obtained in the traditional visual axis. The infraorbital coronal axis technique is slightly faster to perform and is not technically challenging. © 2015 by the American Institute of Ultrasound in Medicine.
Attachment of Free Filament Thermocouples for Temperature Measurements on CMC
NASA Technical Reports Server (NTRS)
Lei, Jih-Fen; Cuy, Michael D.; Wnuk, Stephen P.
1997-01-01
Ceramic Matrix Composites (CMC) are being developed for use as enabling materials for advanced aeropropulsion engine and high speed civil transport applications. The characterization and testing of these advanced materials in hostile, high-temperature environments require accurate measurement of the material temperatures. Commonly used wire Thermo-Couples (TC) cannot be attached to this ceramic based material via conventional spot-welding techniques. Attachment of wire TC's with commercially available ceramic cements fails to provide sufficient adhesion at high temperatures. While advanced thin film TC technology provides minimally intrusive surface temperature measurement and has good adhesion on the CMC, its fabrication requires sophisticated and expensive facilities and is very time consuming. In addition, the durability of lead wire attachments to both thin film TC's and the substrate materials requires further improvement. This paper presents a newly developed attachment technique for installation of free filament wire TC's with a unique convoluted design on ceramic based materials such as CMC's. Three CMC's (SiC/SiC CMC and alumina/alumina CMC) instrumented with type K, R or S wire TC's were tested in a Mach 0.3 burner rig. The CMC temperatures measured from these wire TC's were compared to those from the facility pyrometer and thin film TC's. There was no sign of TC delamination even after several hours of exposure to 1200 C. The test results proved that this new technique can successfully attach wire TC's on CMC's and provide temperature data in hostile environments. The sensor fabrication process is less expensive and requires very little time compared to that of the thin film TC's. The same installation technique/process can also be applied to attach lead wires for thin film sensor systems.
Electromagnetic Counter-Counter Measure (ECCM) Techniques of the Digital Microwave Radio.
1982-05-01
Frequency hopping requires special synthesizers and filter banks. Large bandwidth expansion in a microwave radio relay application can best be achieved with... processing gain, performance as a function of jammer modulation type, pulse jammer performance, emission bandwidth and spectral shaping, ... spectral efficiency, implementation complexity, and suitability for ECCM techniques will be considered. A summary of the requirements and characteristics of
Sound Source Identification Through Flow Density Measurement and Correlation With Far Field Noise
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2001-01-01
Sound sources in the plumes of unheated round jets, in the Mach number range 0.6 to 1.8, were investigated experimentally using the "causality" approach, where air density fluctuations in the plumes were correlated with the far field noise. The air density was measured using a newly developed molecular Rayleigh scattering based technique, which did not require any seeding. The reference at the end provides a detailed description of the measurement technique.
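The causality approach can be sketched with synthetic signals: cross-correlating a point density fluctuation with the far-field pressure recovers the acoustic propagation delay, identifying that point as a source region. Signal lengths, delay, and noise level below are illustrative assumptions.

```python
import numpy as np

# Synthetic "causality" experiment: the far-field mic sees a delayed copy of
# the in-plume density fluctuation buried in noise; the cross-correlation
# peaks at a lag equal to the acoustic travel time.
rng = np.random.default_rng(7)
n, delay = 20000, 150                     # samples; acoustic delay in samples
rho = rng.normal(size=n)                  # density fluctuation at one point
noise = rng.normal(size=n)
p_far = np.roll(rho, delay) + 2.0 * noise # far-field signal: delayed copy + noise

lags = np.arange(-300, 301)
xcorr = np.array([np.dot(rho, np.roll(p_far, -L)) for L in lags]) / n
best = lags[np.argmax(xcorr)]
print(f"correlation peaks at lag {best} samples (true delay {delay})")
```

In the experiment above, the lag and the correlation amplitude together indicate where in the plume the radiated noise originates.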
An inexpensive active optical remote sensing instrument for assessing aerosol distributions.
Barnes, John E; Sharma, Nimmi C P
2012-02-01
Air quality studies on a broad variety of topics, from health impacts to source/sink analyses, require information on the distributions of atmospheric aerosols over both altitude and time. An inexpensive, simple to implement, ground-based optical remote sensing technique has been developed to assess aerosol distributions. The technique, called CLidar (Charge Coupled Device Camera Light Detection and Ranging), provides aerosol altitude profiles over time. In the CLidar technique a relatively low-power laser transmits light vertically into the atmosphere. The transmitted laser light scatters off of air molecules, clouds, and aerosols. The entire beam from ground to zenith is imaged using a CCD camera and wide-angle (100-degree) optics located a few hundred meters from the laser. The CLidar technique is optimized for low altitude (boundary layer and lower troposphere) measurements where most aerosols are found and where many other profiling techniques face difficulties. Currently the technique is limited to nighttime measurements. Using the CLidar technique, aerosols may be mapped over both altitude and time. The instrumentation required is portable and can easily be moved to locations of interest (e.g. downwind from factories or power plants, near highways). This paper describes the CLidar technique, its implementation, and data analysis, and offers specifics for users wishing to apply the technique for aerosol profiles.
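The CLidar geometry reduces to simple triangulation, which is why no pulse-timing hardware is needed. A sketch with an assumed camera-to-laser baseline:

```python
import math

def beam_altitude_m(elevation_deg, baseline_m=300.0):
    """Altitude on the vertical beam seen by a pixel at elevation angle theta,
    for a camera a horizontal distance baseline_m from the laser:
    z = D * tan(theta). The 300 m baseline is an illustrative assumption."""
    return baseline_m * math.tan(math.radians(elevation_deg))

for theta in (45.0, 80.0, 89.0):
    print(f"theta={theta:5.1f} deg -> z = {beam_altitude_m(theta):9.1f} m")
```

Note how the altitude per degree grows rapidly toward zenith: altitude resolution is finest at low altitude, consistent with the technique being optimized for boundary-layer measurements.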
HSR Model Deformation Measurements from Subsonic to Supersonic Speeds
NASA Technical Reports Server (NTRS)
Burner, A. W.; Erickson, G. E.; Goodman, W. L.; Fleming, G. A.
1999-01-01
This paper describes the video model deformation technique (VMD) used at five NASA facilities and the projection moire interferometry (PMI) technique used at two NASA facilities. Comparisons between the two techniques for model deformation measurements are provided. Facilities at NASA-Ames and NASA-Langley where deformation measurements have been made are presented. Examples of HSR model deformation measurements from the Langley Unitary Wind Tunnel, Langley 16-foot Transonic Wind Tunnel, and the Ames 12-foot Pressure Tunnel are presented. A study to improve and develop new targeting schemes at the National Transonic Facility is also described. The consideration of milled targets for future HSR models is recommended when deformation measurements are expected to be required. Finally, future development work for VMD and PMI is addressed.
NASA Astrophysics Data System (ADS)
LaBelle, Remi C.; Rochblatt, David J.
2018-06-01
The NASA Deep Space Network (DSN) has recently constructed two new 34-m antennas at the Canberra Deep Space Communications Complex (CDSCC). These new antennas are part of the larger DAEP project to add six new 34-m antennas to the DSN, including two in Madrid, three in Canberra, and one in Goldstone (California). The DAEP project included development and implementation of several new technologies for the X- and Ka-band (32 GHz) uplink and downlink electronics. The electronics upgrades were driven by several different considerations, including parts obsolescence, cost reduction, improved reliability and maintainability, and the capability to meet future performance requirements. The new antennas are required to support TT&C links for all of the NASA deep-space spacecraft, as well as for several international partners. Some of these missions, such as Voyager 1 and 2, have very limited link budgets, which results in demanding requirements for system G/T performance. These antennas are also required to support radio science missions with several spacecraft, which dictate demanding requirements for spectral purity, amplitude stability, and phase stability for both the uplink and downlink electronics. After completion of these upgrades, a comprehensive campaign of tests and measurements took place to characterize the electronics and calibrate the antennas. Radiometric measurement techniques, including optical and RF high-resolution holographic and total-power radiometry techniques, were applied to characterize, calibrate, and optimize the antenna parameters. The methodology and techniques utilized for the measurement and calibration of the antennas are described in this paper. Lessons learned (not all discussed in this paper) from the commissioning of the first antenna (DSS-35) were applied to the commissioning of the second antenna (DSS-36).
These efforts resulted in an antenna aperture efficiency of 66% (for DSS-36) at Ka-band (32 GHz), currently the highest operating frequency for these antennas. Other measurements and results described include antenna noise temperature; photogrammetry- and holography-based alignment of the antenna panels, beam-waveguide mirrors, and subreflector; antenna aperture efficiency and G/T versus frequency; and antenna pointing models. The first antenna (DSS-35) entered operations in October 2014 and the second antenna (DSS-36) in October 2016. This paper describes the measurement techniques and the results of the testing and calibration for both antennas, along with the driving requirements.
Bachim, Brent L; Gaylord, Thomas K
2005-01-20
A new technique, microinterferometric optical phase tomography, is introduced for use in measuring small, asymmetric refractive-index differences in the profiles of optical fibers and fiber devices. The method combines microscopy-based fringe-field interferometry with parallel projection-based computed tomography to characterize fiber index profiles. The theory relating interference measurements to the projection set required for tomographic reconstruction is given, and discrete numerical simulations are presented for three test index profiles that establish the technique's ability to characterize fiber with small, asymmetric index differences. An experimental measurement configuration and specific interferometry and tomography practices employed in the technique are discussed.
NASA Technical Reports Server (NTRS)
Sekula, Martin K.
2012-01-01
Projection moiré interferometry (PMI) was employed to measure blade deflections during a hover test of a generic model-scale rotor in the NASA Langley 14- by 22-Foot Subsonic Tunnel's hover facility. PMI was one of several optical measurement techniques tasked to acquire deflection and flow visualization data for a rotor at several distinct heights above a ground plane. Two of the main objectives of this test were to demonstrate that multiple optical measurement techniques can be used simultaneously to acquire data and to identify and address deficiencies in the techniques. Several PMI-specific technical challenges needed to be addressed during the test and in post-processing of the data. These challenges included developing an efficient and accurate calibration method for an extremely large (65 inch) height range; automating the analysis of the large amount of data acquired during the test; and developing a method to determine the absolute displacement of rotor blades without a required anchor-point measurement. The results indicate that the use of a single-camera/single-projector approach for the large height range reduced the accuracy of the PMI system compared to PMI systems designed for smaller height ranges. The lack of the anchor-point measurement (due to a technical issue with one of the other measurement techniques) limited the ability of the PMI system to correctly measure blade displacements to only one of the three rotor heights tested. The new calibration technique reduced the data required by 80 percent, while new post-processing algorithms successfully automated the process of locating rotor blades in images, determining the blade quarter-chord location, and calculating the blade root and blade tip heights above the ground plane.
Fast measurement of bacterial susceptibility to antibiotics
NASA Technical Reports Server (NTRS)
Chappelle, E. W.; Picciolo, G. L.; Schrock, C. G.
1977-01-01
Method, based on photoanalysis of adenosine triphosphate using light-emitting reaction with luciferase-luciferin technique, saves time by eliminating isolation period required by conventional methods. Technique is also used to determine presence of infection as well as susceptibilities to several antibiotics.
Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon
2016-01-01
Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
Monitoring fugitive methane and natural gas emissions, validation of measurement techniques.
NASA Astrophysics Data System (ADS)
Robinson, Rod; Innocenti, Fabrizio; Gardiner, Tom; Helmore, Jon; Finlayson, Andrew; Connor, Andy
2017-04-01
The detection and quantification of fugitive and diffuse methane emissions has become an increasing priority in recent years. As the requirements for routine measurement to support industry initiatives increase, there is a growing need to assess and validate the performance of fugitive emission measurement technologies. For reported emissions, traceability and comparability of measurements are important. This talk will present recent work addressing these needs. Differential Absorption Lidar (DIAL) is a laser-based remote sensing technology able to map the concentration of gases in the atmosphere and determine emission fluxes for fugitive emissions. A description of the technique and its application to determining fugitive emissions of methane from oil and gas operations and waste management sites will be given. As DIAL has gained acceptance as a powerful tool for the measurement and quantification of fugitive emissions, and given the rich data it produces, it is increasingly used to assess and validate other measurement approaches. In addition, to support the validation of technologies, we have developed a portable controlled release facility able to simulate the emissions from area sources. This has been used to assess and validate techniques which are used to monitor emissions. The development and capabilities of the controlled release facility will be described. This talk will report on recent studies using DIAL and the controlled release facility to validate fugitive emission measurement techniques. These include side-by-side comparisons of two DIAL systems; the application of both the DIAL technique and the controlled release facility in a major study carried out in 2015 by the South Coast Air Quality Management District (SCAQMD), in which a number of optical techniques were assessed; and the development of a prototype method validation approach for techniques used to measure methane emissions from shale gas sites.
In conclusion, the talk will provide an update on the current status of the development of a European Standard for the measurement of fugitive emissions of VOCs and the use of validation data in the standardisation process, and discuss the application of this work to methane measurement.
Evaluation of new laser spectrometer techniques for in-situ carbon monoxide measurements
NASA Astrophysics Data System (ADS)
Zellweger, C.; Steinbacher, M.; Buchmann, B.
2012-10-01
Long-term time series of the atmospheric composition are essential for environmental research and thus require compatible, multi-decadal monitoring activities. The current data quality objectives of the World Meteorological Organization (WMO) for carbon monoxide (CO) in the atmosphere are very challenging to meet with the measurement techniques that were in use until recently. During the past few years, new spectroscopic techniques have come to market with promising properties for trace gas analytics. The current study compares three instruments that have recently become commercially available (since 2011) with the best currently available technique (vacuum UV fluorescence) and provides a link to previous comparison studies. The instruments were investigated for their performance regarding repeatability, reproducibility, drift, temperature dependence, water vapour interference, and linearity. Finally, all instruments were examined during a short measurement campaign to assess their applicability for long-term field measurements. The new techniques were shown to perform considerably better than the previous ones, although some issues, such as temperature influence and cross sensitivities, need further attention.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least-squares-based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
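A single-plane unbalance force is a synchronous harmonic at the running speed, so once the input force has been reconstructed, its amplitude and phase can be extracted with a recursive least-squares fit against cosine/sine regressors. The following is a minimal sketch of that final fitting step only, not the authors' full Kalman filter/SEREP pipeline; the signal values and forgetting factor are hypothetical:

```python
import numpy as np

def rls_harmonic_fit(t, f_meas, omega, lam=0.99):
    """Recursive least-squares fit of f(t) ~ a*cos(w t) + b*sin(w t).

    For a force A*cos(w t + phi), a = A*cos(phi) and b = -A*sin(phi),
    so the amplitude is hypot(a, b) and the phase is atan2(-b, a).
    `lam` is the forgetting factor: closer to 1 means longer memory.
    """
    theta = np.zeros(2)      # [a, b]
    P = np.eye(2) * 1e6      # large initial covariance
    for ti, fi in zip(t, f_meas):
        phi = np.array([np.cos(omega * ti), np.sin(omega * ti)])
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (fi - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    a, b = theta
    return np.hypot(a, b), np.arctan2(-b, a)

# Hypothetical reconstructed force: 5 N at 25 Hz, 0.3 rad phase, plus noise.
omega = 2.0 * np.pi * 25.0
t = np.arange(0.0, 2.0, 1e-3)
rng = np.random.default_rng(0)
force = 5.0 * np.cos(omega * t + 0.3) + rng.normal(0.0, 0.1, t.size)
amp, phase = rls_harmonic_fit(t, force, omega)
```

The forgetting factor plays the role noted in the abstract: values closer to 1 average over more shaft revolutions, trading tracking speed for rejection of measurement noise.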
Measurements and Diagnostics of Diamond Films and Coatings
NASA Technical Reports Server (NTRS)
Miyoshi, Kazuhisa; Wu, Richard L. C.
1999-01-01
The commercial potential of chemical-vapor-deposited (CVD) diamond films has been established, and a number of applications have been identified through university, industry, and government research studies. This paper discusses the methodologies used for property measurement and diagnostics of CVD diamond films and coatings. Measurement and diagnostic techniques studied include scanning electron microscopy, transmission electron microscopy, atomic force microscopy, stylus profilometry, x-ray diffraction, electron diffraction, Raman spectroscopy, Rutherford backscattering, elastic recoil spectroscopy, and friction examination. Each measurement and diagnostic technique provides unique information. A combination of techniques can provide the technical information required to understand the quality and properties of CVD diamond films, which are important to their application in specific component systems and environments. In this study, the combination of measurement and diagnostic techniques was successfully applied to correlate deposition parameters with the resultant diamond film composition, crystallinity, grain size, surface roughness, and coefficient of friction.
Validation of helicopter noise prediction techniques
NASA Technical Reports Server (NTRS)
Succi, G. P.
1981-01-01
The current techniques of helicopter rotor noise prediction attempt to describe the details of the noise field precisely and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The purpose of this paper is to review those techniques in general and the Farassat/Nystrom analysis in particular. The predictions of the Farassat/Nystrom noise computer program, using both measured and calculated blade surface pressure data, are compared to measured noise level data. This study is based on a contract from NASA to Bolt Beranek and Newman Inc. with measured data from the AH-1G Helicopter Operational Loads Survey flight test program supplied by Bell Helicopter Textron.
Bistatic radar sea state monitoring
NASA Technical Reports Server (NTRS)
Ruck, G. T.; Barrick, D. E.; Kaliszewski, T.
1972-01-01
Bistatic radar techniques were examined for remote measurement of the two-dimensional surface wave height spectrum of the ocean. One technique operates at high frequencies (HF), 3-30 MHz, and the other at ultrahigh frequencies (UHF), approximately 1 GHz. Only a preliminary theoretical examination of the UHF technique was performed; however the principle underlying the HF technique was demonstrated experimentally with results indicating that an HF bistatic system using a surface transmitter and an orbital receiver would be capable of measuring the two-dimensional wave height spectrum in the vicinity of the transmitter. An HF bistatic system could also be used with an airborne receiver for ground truth ocean wave spectrum measurements. Preliminary system requirements and hardware configurations are discussed for both an orbital system and an aircraft verification experiment.
Sjodahl, Mikael; Amer, Eynas
2018-05-10
The two techniques of lateral shear interferometry and speckle deflectometry are analyzed in a common optical system for their ability to measure phase gradient fields of a thin phase object. The optical system is designed to introduce a shear in the frequency domain of a telecentric imaging system, which gives both techniques a sensitivity proportional to the defocus introduced. In this implementation, both techniques successfully measure the horizontal component of the phase gradient field. The response of both techniques scales linearly with the defocus distance, and their precision is comparable, with a random error on the order of a few rad/mm. It is further concluded that the precision of the two techniques relates to the transverse speckle size in opposite ways. While a large spatial coherence width, and correspondingly a large lateral speckle size, makes lateral shear interferometry less susceptible to defocus, a large lateral speckle size is detrimental for speckle correlation. The susceptibility to the magnitude of the defocus is larger for the lateral shear interferometry technique than for the speckle deflectometry technique. The two techniques provide the same type of information; however, there are a few fundamental differences. Lateral shear interferometry relies on a special hardware configuration in which the shear angle is intrinsically integrated into the system. The design of a system sensitive to both in-plane phase gradient components requires a more complex configuration and is not considered in this paper. Speckle deflectometry, on the other hand, requires no special hardware, and both components of the phase gradient field are given directly from the measured speckle deformation field.
Advanced techniques for determining long term compatibility of materials with propellants
NASA Technical Reports Server (NTRS)
Green, R. L.
1972-01-01
The search for advanced measurement techniques for determining long-term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from the existing techniques judged able to meet, or to be modified to meet, the requirements; refinements or changes were recommended to improve others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of a new technique, the volatile metal chelate analysis. Rivaling neutron activation analysis in sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.
Comparative evaluation of workload estimation techniques in piloting tasks
NASA Technical Reports Server (NTRS)
Wierwille, W. W.
1983-01-01
Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, representing psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected, and techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading; high sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.
Enterprise Professional Development--Evaluating Learning
ERIC Educational Resources Information Center
Murphy, Gerald A.; Calway, Bruce A.
2010-01-01
Whilst professional development (PD) is an activity required by many regulatory authorities, the value that enterprises obtain from PD is often unknown, particularly when it involves development of knowledge. This paper discusses measurement techniques and processes and provides a review of established evaluation techniques, highlighting…
A NOVEL TECHNIQUE APPLYING SPECTRAL ESTIMATION TO JOHNSON NOISE THERMOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, N Dianne Bull; Britton Jr, Charles L; Roberts, Michael
Johnson noise thermometry (JNT) is one of many important measurements used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is very dependent on the electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate, drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed in this document. Spectral estimation is a key component in the signal processing algorithm utilized for EMI removal and temperature calculation. Applying these techniques requires only the simple addition of the electronics and signal processing to existing resistive thermometers.
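JNT works because the mean-square Johnson noise voltage across a resistor follows the Nyquist relation <V^2> = 4 k_B T R Δf, so temperature falls out of a noise-power measurement once EMI has been removed from the band. A minimal sketch of that inversion (the sensor values are hypothetical):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def jnt_temperature(v2_mean, resistance_ohm, bandwidth_hz):
    """Invert the Nyquist relation <V^2> = 4 k_B T R df for temperature (K)."""
    return v2_mean / (4.0 * K_B * resistance_ohm * bandwidth_hz)

# Hypothetical 100-ohm sensing element read over a 100 kHz bandwidth:
# at 600 K the expected mean-square noise voltage is ~3.3e-13 V^2.
v2 = 4.0 * K_B * 600.0 * 100.0 * 1.0e5
print(jnt_temperature(v2, 100.0, 1.0e5))  # 600.0 K by construction
```

Because the noise power is this small, any residual EMI power in the measurement band biases the inferred temperature directly, which is why the spectral-estimation EMI removal described above is essential.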
NASA Astrophysics Data System (ADS)
Mariscal, Jean-François; Bruneau, Didier; Pelon, Jacques; Van Haecke, Mathilde; Blouzon, Frédéric; Montmessin, Franck; Chepfer, Hélène
2018-04-01
We present the measurement principle and the optical design of a Quad Mach-Zehnder (QMZ) interferometer as a high-spectral-resolution lidar (HSRL) technique, allowing simultaneous measurements of particle backscattering and wind velocity. Key features of this concept are operation with a multimodal laser and the absence of any frequency stabilization requirement. These features are especially relevant for space applications, for which a high technology readiness level is required.
NASA Astrophysics Data System (ADS)
OBrien, R. E.; Ridley, K. J.; Canagaratna, M. R.; Croteau, P.; Budisulistiorini, S. H.; Cui, T.; Green, H. S.; Surratt, J. D.; Jayne, J. T.; Kroll, J. H.
2016-12-01
A thorough understanding of the sources, evolution, and budgets of atmospheric organic aerosol requires widespread measurements of the amount and chemical composition of atmospheric organic carbon in the condensed phase (within particles and water droplets). Collecting such datasets requires substantial spatial and temporal (long-term) coverage, which can be challenging when relying on online measurements by state-of-the-art research-grade instrumentation (such as that used in atmospheric chemistry field studies). Instead, samples are routinely collected using relatively low-cost techniques, such as aerosol filters, for offline analysis of their chemical composition. However, measurements made by online and offline instruments can be fundamentally different, leading to disparities between data from field studies and those from more routine monitoring. To better connect these two approaches, and to take advantage of the benefits of each, we have developed a method to introduce collected samples into online aerosol instruments using nebulization. Because nebulizers typically require tens to hundreds of milliliters of solution, limiting this technique to large samples, we developed a new ultrasonic micro-nebulizer that requires only small volumes (tens of microliters) of sample for chemical analysis. The nebulized (resuspended) sample is then sent into a high-resolution Aerosol Mass Spectrometer (AMS), a widely used instrument that provides key information on the chemical composition of aerosol particulate matter (elemental ratios, carbon oxidation state, etc.), measurements that are not typically made for collected atmospheric samples. Here, we compare AMS data collected using standard online techniques with our offline analysis, demonstrating the utility of this new technique for aerosol filter samples. We then apply this approach to organic aerosol filter samples collected in remote regions, as well as rainwater samples from across the US.
These data provide information on sample composition and on changes in key chemical characteristics across locations and seasons.
Wireless passive radiation sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeifer, Kent B; Rumpf, Arthur N; Yelton, William G
2013-12-03
A novel measurement technique is employed using surface acoustic wave (SAW) devices, passive RF, and radiation-sensitive films to provide a wireless passive radiation sensor that requires no batteries, outside wiring, or regular maintenance. The sensor is small (<1 cm^2), physically robust, and will operate unattended for decades. In addition, the sensor can be insensitive to measurement position and read distance due to a novel self-referencing technique eliminating the need to measure absolute responses that are dependent on RF transmitter location and power.
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
NASA Technical Reports Server (NTRS)
Garmestai, H.; Harris, K.; Lourenco, L.
1997-01-01
Representation of the morphology and evolution of the microstructure during processing, and their relation to properties, requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and in the matrix of a layered structure or a functionally gradient material, and their variation, are among the parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials, including the stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength, and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. They are compared with the more conventional x-ray diffraction and indentation techniques.
Biomedical and Human Factors Requirements for a Manned Earth-Orbiting Station
NASA Technical Reports Server (NTRS)
Reynolds, J. B.
1963-01-01
The study reported here has presented a measurement data pool for the determination of biomedical and behavioral effects of long-term exposure to weightlessness. This includes measures, techniques, equipment, and requirements in terms of weight, power, volume, time, crew activities, subsystem interfaces and experimental programs and designs, and confidence ratings for their effectiveness for determining weightlessness effects.
Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S
2011-07-01
A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which the areal density in both warm-capsule and cryogenic-DT implosions has been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the level required for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.
NASA Technical Reports Server (NTRS)
Craig, Roger A.; Davy, William C.; Whiting, Ellis E.
1994-01-01
This paper describes the techniques developed for measuring stagnation-point radiation in NASA's cancelled Aeroassist Flight Experiment (AFE). It specifies the need for such a measurement; the types and requirements for the needed instruments; the Radiative Heating Experiment (RHE) developed for the AFE; the requirements, design parameters, and performance of the window developed for the RHE; the procedures and summary of the technique; and results of the arc-jet wind tunnel experiment conducted to demonstrate the overall concept. Subjects emphasized are the commercial implications of the knowledge to be gained by this experiment in connection with the Aeroassisted Space Transfer Vehicle (ASTV), the nonequilibrium nature of the radiation, concerns over the contribution of vacuum-ultraviolet radiation to the overall radiation, and the limit on the flight environment of the vehicle imposed by the limitations on the window material. Results show that a technique exists with which the stagnation-point radiation can be measured in flight in an environment of interest to commercial ASTV applications.
1997-12-01
implement, they may also serve as interim measures for reducing risk prior to subsequent phytoremediation or remediation by other techniques. 2.0...are essentially non-leachable and pass leachate and TCLP testing protocols. 2.6 Time Requirements Most of the stabilization techniques mentioned above...site. In addition to the attractive low cost of both phytoremediation techniques, these techniques may also be less invasive and more quickly
Measurement of water pressure and deformation with time domain reflectometry cables
NASA Astrophysics Data System (ADS)
Dowding, Charles H.; Pierce, Charles E.
1995-05-01
Time domain reflectometry (TDR) techniques can be deployed to measure water pressures and relative dam abutment displacement with an array of coaxial cables either drilled and grouted or retrofitted through existing passages. Application of TDR to dam monitoring requires determination of appropriate cable types and methods to install these cables in existing dams or during new construction. This paper briefly discusses currently applied and developing TDR techniques and describes initial design considerations for TDR-based dam instrumentation. Water pressure at the base of or within the dam can be determined by measuring the water level within a hollow or air-filled coaxial cable. The ability to retrofit existing porous stone-tipped piezometers is an attractive attribute of the TDR system. Measurement of relative lateral movement can be accomplished by monitoring local shearing of a solid polyethylene-filled coaxial cable at the interface of the dam base and foundation materials or along adversely oriented joints. Uplift can be recorded by measuring cable extension as the dam displaces upward off its foundation. Since each monitoring technique requires measurements with different types of coaxial cables, a variety may be installed within the array. Multiplexing of these cables will allow monitoring from a single pulser, and measurements can be recorded on site or remotely via a modem at any time.
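The water-pressure measurement described above amounts to locating the air/water interface from the round-trip time of a pulse reflected inside the air-filled cable. A minimal sketch of that conversion (the cable length and timing values are hypothetical):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_to_water(round_trip_s, velocity_factor=1.0):
    """Distance down the cable to the air/water interface, from the
    round-trip time of the reflected TDR pulse.  The velocity factor
    is ~1 for an air-filled coaxial cable."""
    return 0.5 * C * velocity_factor * round_trip_s

def pressure_head_m(cable_length_m, round_trip_s, velocity_factor=1.0):
    """Height of the water column above the bottom of the cable (head, m)."""
    return cable_length_m - depth_to_water(round_trip_s, velocity_factor)

# Hypothetical 50 m cable grouted into a dam: a reflection arriving after
# ~267 ns places the water surface 40 m down, leaving 10 m of head.
t_round_trip = 2.0 * 40.0 / C
head = pressure_head_m(50.0, t_round_trip)
```

The head in metres converts to water pressure at the cable bottom via p = ρ·g·h, so tracking the reflection time over time tracks the pressure.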
NASA Astrophysics Data System (ADS)
Richards, Simon D.; Leighton, Timothy G.; Brown, Niven R.
2003-10-01
Knowledge of the particle size distribution is required in order to predict ultrasonic absorption in polydisperse particulate suspensions. This paper shows that the method used to measure the particle size distribution can lead to important differences in the predicted absorption. A reverberation technique developed for measuring ultrasonic absorption by suspended particles is used to measure the absorption in suspensions of nonspherical particles. Two types of particulates are studied: (i) kaolin (china clay) particles which are platelike in form; and (ii) calcium carbonate particles which are more granular. Results are compared to theoretical predictions of visco-inertial absorption by suspensions of spherical particles. The particle size distributions, which are required for these predictions, are measured by laser diffraction, gravitational sedimentation and centrifugal sedimentation, all of which assume spherical particles. For a given sample, each sizing technique yields a different size distribution, leading to differences in the predicted absorption. The particle size distributions obtained by gravitational and centrifugal sedimentation are reinterpreted to yield a representative size distribution of oblate spheroids, and predictions for absorption by these spheroids are compared with the measurements. Good agreement between theory and measurement for the flat kaolin particles is obtained, demonstrating that these particles can be adequately represented by oblate spheroids.
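The sedimentation-based sizing methods mentioned above report an equivalent spherical (Stokes) diameter. As a hedged illustration of that spherical-particle assumption, the sketch below inverts Stokes' law for a settling particle; the kaolin density and the water properties are assumed values for illustration, not figures from the paper:

```python
import math

def stokes_diameter(v_settle, rho_p, rho_f=1000.0, eta=1.0e-3, g=9.81):
    """Equivalent spherical (Stokes) diameter in metres from a measured
    settling velocity v_settle (m/s), assuming laminar Stokes drag:
    v = (rho_p - rho_f) * g * d**2 / (18 * eta)."""
    return math.sqrt(18.0 * eta * v_settle / ((rho_p - rho_f) * g))

# Example: a kaolin particle (rho ~ 2600 kg/m^3, assumed) settling
# at 1 um/s in water at room temperature
d = stokes_diameter(1.0e-6, 2600.0)
```

For a platelike particle the actual settling is slower than for a sphere of the same volume, which is why the paper reinterprets these distributions as oblate spheroids.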
High-throughput electrical characterization for robust overlay lithography control
NASA Astrophysics Data System (ADS)
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
2017-03-01
Realizing sensitive, high-throughput, and robust overlay measurement is a challenge at the current 14 nm node and in advanced upcoming nodes, with the transition to 300 mm and upcoming 450 mm semiconductor manufacturing, where a slight deviation in overlay has a significant impact on reliability and yield [1]. The exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specifications [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques in meeting these stringent processing requirements due to sensitivity, throughput, and low contrast [3]. We demonstrate a new electrical measurement based technique in which resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance, where the minima and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating the electrical measurement based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for assessing overlay as well as process window and margins simultaneously from a robust, high-throughput electrical measurement approach.
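The parabolic-fit extraction described above can be sketched as follows; the resistance model, its coefficients, and the misalignment offsets are synthetic assumptions for illustration, not the authors' data:

```python
import numpy as np

# Programmed misalignment (nm) for each test macro, and measured
# resistance (ohm). Synthetic data: a true overlay error of +3 nm
# shifts the minimum of the resistance parabola away from zero.
offsets = np.arange(-20, 21, 5, dtype=float)
true_overlay = 3.0
resistance = 50.0 + 0.02 * (offsets - true_overlay) ** 2

# Fit R(x) = a*x**2 + b*x + c; the vertex -b/(2a) estimates overlay
a, b, c = np.polyfit(offsets, resistance, 2)
overlay = -b / (2 * a)
```

With noiseless quadratic data the fit recovers the programmed 3 nm shift exactly; in practice the fit quality around the minimum and inflection points sets the overlay and process-window sensitivity.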
Non-contact Creep Resistance Measurement for Ultra-High Temperature Materials
NASA Technical Reports Server (NTRS)
Lee, J.; Bradshaw, C.; Rogers, J. R.; Rathz, T. J.; Wall, J. J.; Choo, H.; Liaw, P. K.; Hyers, R. W.
2005-01-01
Conventional techniques for measuring creep are limited to about 1700 C, so a new technique is required for higher temperatures. This technique is based on electrostatic levitation (ESL) of a spherical sample, which is rotated quickly enough to cause creep deformation by centrifugal acceleration. Creep of samples has been demonstrated at up to 2300 C in the ESL facility at NASA MSFC, while ESL itself has been applied at over 3000 C, and has no theoretical maximum temperature. The preliminary results and future directions of this NASA-funded research collaboration will be presented.
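The stress driving creep in a rapidly rotating levitated sphere scales roughly as the centrifugal term ρω²R². A rough order-of-magnitude sketch follows; the sample material, size, and spin rate are assumed values, and the true stress distribution requires a full elasticity/creep solution rather than this scale estimate:

```python
import math

def centrifugal_stress_scale(rho, omega, radius):
    """Order-of-magnitude centrifugal stress scale (Pa) in a solid
    sphere of density rho (kg/m^3) and radius (m) spinning at
    angular velocity omega (rad/s). Not the full stress field."""
    return rho * omega ** 2 * radius ** 2

# Example (assumed): a 2.5 mm radius niobium sphere (rho ~ 8570 kg/m^3)
# spun at 30,000 rpm
omega = 30000.0 * 2.0 * math.pi / 60.0
sigma = centrifugal_stress_scale(8570.0, omega, 2.5e-3)
```

The estimate comes out near half a megapascal, illustrating why modest spin rates suffice to produce measurable creep deformation at the very high temperatures ESL permits.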
Fault detection techniques for complex cable shield topologies
NASA Astrophysics Data System (ADS)
Coonrod, Kurt H.; Davis, Stuart L.; McLemore, Donald P.
1994-09-01
This document presents the results of a basic principles study which investigated technical approaches for developing fault detection techniques for use on cables with complex shielding topologies. The study was limited to those approaches which could realistically be implemented on a fielded cable, i.e., approaches which would require partial disassembly of a cable were not pursued. The general approach used was to start with present transfer impedance measurement techniques and modify their use to achieve the best possible measurement range. An alternative test approach, similar to a sniffer type test, was also investigated.
Boccuni, Fabio; Gagliardi, Diana; Ferrante, Riccardo; Rondinone, Bruna Maria; Iavicoli, Sergio
2017-10-01
Nanotechnology offers many opportunities, but there is still considerable uncertainty about the health risks and how to assess them. In the field of risk analysis for workers potentially exposed to nano-objects and their agglomerates and aggregates (NOAA), different methodological approaches to measuring airborne NOAA have been proposed. This study presents a systematic review of the scientific literature on occupational exposure to NOAA in the workplace, with the aim of identifying exposure measurement techniques to be recommended in low- and medium-income countries. We gathered scientific papers reporting techniques for NOAA exposure measurement in the workplace, summarized the data for each eligible technique according to PRISMA guidelines, and rated the quality of evidence following an adapted GRADE approach. We found 69 eligible studies to include in the qualitative synthesis: the majority of studies reported moderate quality, and only two studies demonstrated the use of a high-quality exposure measurement technique. The review demonstrates that basic exposure measurement, i.e., evidence for the presence or absence of NOAA in the workplace air, can be achieved with moderate (40 techniques) to high (2 techniques) quality; comprehensive exposure measurement, which allows the quantification of NOAA in the workplace, can be achieved with moderate (11 techniques) to high (2 techniques) quality. The findings of the study also allowed us to finalize a list of requirements that must be fulfilled by an effective measurement technique (either basic or comprehensive) and to highlight the main weaknesses that need to be tackled for an effective affordability evaluation of measurement techniques to be recommended in low- and medium-income countries.
Satellite altimetric measurements of the ocean. Report of the TOPEX Science Working Group
NASA Technical Reports Server (NTRS)
Stewart, R.
1981-01-01
The scientific usefulness of satellite measurements of ocean topography for the study of ocean circulation was investigated. The following topics were studied: (1) scientific problems which use altimetric measurements of ocean topography; (2) the extent to which in situ measurements are complementary or required; (3) the accuracy, precision, and spatial and temporal resolutions required of the topographic measurements; (4) errors associated with measurement techniques; and (5) the influence of these errors on the scientific problems. An operational system for measuring ocean topography was defined, and the cost of conducting such a topographic experiment was estimated.
NASA Technical Reports Server (NTRS)
Flanagan, P. M.; Atherton, W. J.
1985-01-01
A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit, and system software. A commercial desktop computer, spectrum analyzer, and two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot while acoustic intensity data are collected, processed, and stored.
Muscle activity characterization by laser Doppler Myography
NASA Astrophysics Data System (ADS)
Scalise, Lorenzo; Casaccia, Sara; Marchionni, Paolo; Ercoli, Ilaria; Primo Tomasini, Enrico
2013-09-01
Electromyography (EMG) is the gold-standard technique for the evaluation of muscle activity. It is used in biomechanics, sports medicine, neurology, and rehabilitation therapy, and it records the electrical activity produced by skeletal muscles. Among the parameters measured with EMG are signal amplitude, duration of muscle contraction, muscle fatigue, and maximum muscle power. Recently, a new measurement procedure, named laser Doppler myography (LDMi), has been proposed for the noncontact assessment of muscle activity by measuring the vibro-mechanical behaviour of the muscle. The aim of this study is to present the LDMi technique and to evaluate its capacity to measure some characteristic features of the muscle. In this paper, LDMi is compared with standard surface EMG (sEMG), which requires the application of sensors on the skin of each patient. sEMG and LDMi signals have been simultaneously acquired and processed to test correlations. Three parameters have been analyzed to compare these techniques: muscle activation timing, signal amplitude, and muscle fatigue. LDMi appears to be a reliable and promising measurement technique, allowing measurements without contact with the patient's skin.
Dynamic measurements of CO diffusing capacity using discrete samples of alveolar gas.
Graham, B L; Mink, J T; Cotton, D J
1983-01-01
It has been shown that measurements of the diffusing capacity of the lung for CO made during a slow exhalation [DLCO(exhaled)] yield information about the distribution of the diffusing capacity in the lung that is not available from the commonly measured single-breath diffusing capacity [DLCO(SB)]. Current techniques of measuring DLCO(exhaled) require the use of a rapid-responding (less than 240 ms, 10-90%) CO meter to measure the CO concentration in the exhaled gas continuously during exhalation. DLCO(exhaled) is then calculated using two sample points in the CO signal. Because DLCO(exhaled) calculations are highly affected by small amounts of noise in the CO signal, filtering techniques have been used to reduce noise. However, these techniques reduce the response time of the system and may introduce other errors into the signal. We have developed an alternate technique in which DLCO(exhaled) can be calculated using the concentration of CO in large discrete samples of the exhaled gas, thus eliminating the requirement of a rapid response time in the CO analyzer. We show theoretically that this method is as accurate as other DLCO(exhaled) methods but is less affected by noise. These findings are verified in comparisons of the discrete-sample method of calculating DLCO(exhaled) to point-sample methods in normal subjects, patients with emphysema, and patients with asthma.
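The point-sample idea that the discrete-sample method generalizes can be sketched with a Krogh-type relation between two alveolar CO samples; the lung volume, CO fractions, and timing below are assumed example values, not the authors' data or algorithm:

```python
import math

def dlco_point_sample(va_ml, f_co_1, f_co_2, t1_s, t2_s, pb_mmhg=760.0):
    """Krogh-type point-sample estimate of CO diffusing capacity
    (ml/min/mmHg) from two alveolar CO fractions sampled at t1 and t2.
    va_ml: alveolar volume (ml); pb_mmhg: barometric pressure; 47 mmHg
    is subtracted for water vapour pressure at body temperature."""
    return (va_ml * 60.0 / ((pb_mmhg - 47.0) * (t2_s - t1_s))
            * math.log(f_co_1 / f_co_2))

# Example (assumed): VA = 5000 ml, alveolar CO falling from 0.30% to
# 0.24% over a 4 s interval of the slow exhalation
dl = dlco_point_sample(5000.0, 0.003, 0.0024, 0.0, 4.0)
```

Because the estimate depends on the logarithm of a ratio of concentrations, small noise on either sample point propagates strongly, which motivates the paper's use of large discrete gas samples instead of two points from a continuous, noisy CO trace.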
Interactive Tax Reform: Simulating Impacts of Legislative Proposals.
ERIC Educational Resources Information Center
Downing, Roger H.; And Others
1985-01-01
Pennsylvania's mathematical modeling technique to evaluate tax reform proposals requires the following data: personal income at the local level and measures of the breakdown of property tax payment by land use classification. The simulation technique could be readily adapted in reorganizing educational finance systems. (MLF)
Preliminary Findings of the Photovoltaic Cell Calibration Experiment on Pathfinder Flight 95-3
NASA Technical Reports Server (NTRS)
Vargas-Aburto, Carlos
1997-01-01
The objective of the photovoltaic (PV) cell calibration experiment for Pathfinder was to develop an experiment, compatible with an ultralight UAV, to predict the performance of PV cells at AM0, the solar spectrum in space, using the Langley plot technique. The Langley plot is a valuable technique for this purpose and requires accurate measurements of air mass (pressure), cell temperature, solar irradiance, and current-voltage (IV) characteristics with the cells directed normal to the direct ray of the sun. Pathfinder's mission objective (95-3) of a 65,000 ft maximum altitude is ideal for performing the Langley plot measurements. Miniaturization of electronic data acquisition equipment enabled the design and construction of an accurate and lightweight measurement system that meets Pathfinder's low payload weight requirements.
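The Langley plot extrapolation can be sketched as a linear fit of the logarithm of short-circuit current against air mass, with the intercept at zero air mass giving the AM0 value; the currents and atmospheric optical depth below are synthetic assumptions, not flight data:

```python
import numpy as np

# Hypothetical short-circuit currents (A) measured at several air
# masses during ascent; Beer-Lambert attenuation gives
# I(m) = I_AM0 * exp(-tau * m), so ln(I) is linear in air mass m.
airmass = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
i_am0_true, tau = 0.150, 0.12
i_sc = i_am0_true * np.exp(-tau * airmass)

slope, intercept = np.polyfit(airmass, np.log(i_sc), 1)
i_am0 = np.exp(intercept)   # extrapolation to zero air mass (AM0)
```

High-altitude flight helps because it samples small air masses, shortening the extrapolation from the fitted line to the m = 0 intercept.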
Skin Friction and Transition Location Measurement on Supersonic Transport Models
NASA Technical Reports Server (NTRS)
Kennelly, Robert A., Jr.; Goodsell, Aga M.; Olsen, Lawrence E. (Technical Monitor)
2000-01-01
Flow visualization techniques were used to obtain both qualitative and quantitative skin friction and transition location data in wind tunnel tests performed on two supersonic transport models at Mach 2.40. Oil-film interferometry was useful for verifying boundary layer transition, but careful monitoring of model surface temperatures and systematic examination of the effects of tunnel start-up and shutdown transients will be required to achieve high levels of accuracy for skin friction measurements. A more common technique, use of a subliming solid to reveal transition location, was employed to correct drag measurements to a standard condition of all-turbulent flow on the wing. These corrected data were then analyzed to determine the additional correction required to account for the effect of the boundary layer trip devices.
Microwave, Millimeter, Submillimeter, and Far Infrared Spectral Databases
NASA Technical Reports Server (NTRS)
Pearson, J. C.; Pickett, H. M.; Drouin, B. J.; Chen, P.; Cohen, E. A.
2002-01-01
The spectrum of most known astrophysical molecules is derived from transitions between a few hundred to a few hundred thousand energy levels populated at room temperature. In the microwave and millimeter wave regions, spectroscopy is almost always performed with traditional microwave techniques. In the submillimeter and far infrared, microwave technique becomes progressively more technologically challenging, and infrared techniques become more widely employed as the wavelength gets shorter. Infrared techniques are typically one to two orders of magnitude less precise, but they do capture all the strong features in the spectrum. With microwave technique, it is generally impossible and rarely necessary to measure every single transition of a molecular species, so careful fitting of quantum mechanical Hamiltonians to the measured transitions is required to produce the complete spectral picture of the molecule required by astronomers. The fitting process produces the most precise data possible and is required to interpret heterodyne observations. The drawback of traditional microwave technique is that precise knowledge of the band origins of low-lying excited states is rarely gained. The fitting of data interpolates well over the range of quantum numbers where there is laboratory data, but extrapolation is almost never precise. The majority of high resolution spectroscopic data is millimeter or longer in wavelength, and a very limited number of molecules have ever been studied with microwave techniques at wavelengths shorter than 0.3 millimeters. The situation with infrared technique is similarly dire in the submillimeter and far infrared because the blackbody sources used are competing with a very significant thermal background, making the signal to noise poor. Regardless of the technique used, the data must be archived in a way that is useful for the interpretation of observations.
Digital techniques for ULF wave polarization analysis
NASA Technical Reports Server (NTRS)
Arthur, C. W.
1979-01-01
Digital power spectral and wave polarization analysis are powerful techniques for studying ULF waves in the earth's magnetosphere. Four different techniques for using the spectral matrix to perform such an analysis have been presented in the literature. Three of these techniques are similar in that they require transformation of the spectral matrix to the principal axis system prior to performing the polarization analysis. The differences among the three techniques lie in the manner in which they determine this transformation. A comparative study of these three techniques using both simulated and real data has shown them to be approximately equal in quality of performance. The fourth technique does not require transformation of the spectral matrix. Rather, it uses the measured spectral matrix and state vectors for a desired wave type to design a polarization detector function in the frequency domain. The design of various detector functions and their application to both simulated and real data will be presented.
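A minimal sketch of the first type of analysis, finding the principal axes of the spectral matrix, is shown below; the synthetic circularly polarized wave and the minimum-variance eigenvector convention are illustrative assumptions, not the specific formulation of any of the four published techniques:

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 4000, 10.0                      # samples, sample rate (Hz)
t = np.arange(n) / fs
# Synthetic circularly polarized ULF wave in the x-y plane plus
# weak isotropic noise; the wave normal is along z.
bx = np.cos(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(n)
by = np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(n)
bz = 0.05 * rng.standard_normal(n)

# Spectral matrix S_ij(f) = X_i(f) X_j*(f) at the dominant wave line
X = np.fft.rfft(np.vstack([bx, by, bz]), axis=1)
k = np.argmax(np.abs(X[0]))             # bin of the dominant line
S = np.outer(X[:, k], np.conj(X[:, k]))

# Principal axes: eigenvectors of the real part of S; the eigenvector
# with the smallest eigenvalue estimates the wave-normal direction.
w, v = np.linalg.eigh(S.real)
normal = v[:, 0]                        # eigh sorts eigenvalues ascending
```

With the wave field confined to the x-y plane, the minimum-variance eigenvector recovers a normal close to the z axis; polarization parameters can then be read from the spectral matrix rotated into these axes.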
van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald
2017-12-04
Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. For this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that, as the CI formulas imply, precision continued to increase, but the gain was smaller for the same additional number of GPs. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study.
In this paper, we showed how the precision of the measurement of hours worked each week by GPs strongly varied according to the number of GPs included and the frequency of measurements per GP during the week measured. The best balance between both dimensions will depend upon different circumstances, such as the target group and the budget available.
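The interplay between the number of participants and the number of measurements per participant can be illustrated with a small Monte Carlo sketch; the variance components and hour values below are assumed for illustration and are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

def ci_halfwidth(n_gps, n_slots, sd_between=8.0, sd_within=12.0,
                 mean_hours=45.0, n_sim=2000):
    """Monte Carlo half-width of a 95% CI on mean weekly hours when
    each of n_gps practitioners contributes n_slots sampled time slots.
    sd_between: spread of true weekly hours across GPs;
    sd_within: per-measurement sampling fluctuation within a GP."""
    estimates = np.empty(n_sim)
    for s in range(n_sim):
        gp_means = mean_hours + sd_between * rng.standard_normal(n_gps)
        obs = gp_means[:, None] + sd_within * rng.standard_normal(
            (n_gps, n_slots))
        estimates[s] = obs.mean()
    return 1.96 * estimates.std()

# Adding GPs or adding measurements per GP both tighten the interval,
# but the between-GP component can only be reduced by adding GPs.
wide = ci_halfwidth(25, 10)
narrow = ci_halfwidth(100, 40)
```

The two-level structure is the point: the CI half-width behaves like 1.96·sqrt(sd_between²/n + sd_within²/(n·m)), so more measurements per GP shrink only the second term, matching the trade-off reported in the abstract.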
Coherent lidar wind measurements from the Space Station base using 1.5 m all-reflective optics
NASA Technical Reports Server (NTRS)
Bilbro, J. W.; Beranek, R. G.
1987-01-01
This paper discusses the space-based measurement of atmospheric winds from the point of view of the requirements of the optical system of a coherent CO2 lidar. A brief description of the measurement technique is given and a discussion of previous study results provided. The telescope requirements for a Space Station based lidar are arrived at through discussions of the desired system sensitivity and the need for lag angle compensation.
NASA Technical Reports Server (NTRS)
Rey, Charles A.
1991-01-01
The development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements are discussed. Efforts were directed towards the following task areas: design and development of a High Temperature Acoustic Levitator (HAL) for containerless processing and property measurements at high temperatures; testing of the HAL module to establish this technology for use as a positioning device for microgravity uses; construction and evaluation of a brassboard hot wall Acoustic Levitation Furnace; construction and evaluation of a noncontact temperature measurement (NCTM) system based on AGEMA thermal imaging camera; construction of a prototype Division of Amplitude Polarimetric Pyrometer for NCTM of levitated specimens; evaluation of and recommendations for techniques to control contamination in containerless materials processing chambers; and evaluation of techniques for heating specimens to high temperatures for containerless materials experimentation.
New signal processing technique for density profile reconstruction using reflectometry.
Clairet, F; Ricaud, B; Briolle, F; Heuraux, S; Bottereau, C
2011-08-01
Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal-to-noise ratio of the phase measurement, adequate data analysis is required. A new data processing method based on a time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.
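The component-separation idea can be conveyed with a generic time-frequency masking sketch; note this uses a plain STFT mask rather than the tomographic representation the authors employ, and the signals and frequencies are synthetic assumptions:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Synthetic measurement: the "relevant" 50 Hz component plus a
# spurious 200 Hz component standing in for multi-reflection effects
clean = np.sin(2 * np.pi * 50 * t)
signal = clean + 0.8 * np.sin(2 * np.pi * 200 * t)

# Short-time Fourier transform, mask out the spurious band, invert
f, tau, Z = stft(signal, fs=fs, nperseg=128)
Z[f > 100] = 0.0
_, separated = istft(Z, fs=fs, nperseg=128)
separated = separated[: len(clean)]
```

A tomographic time-frequency representation refines this idea when the components overlap or chirp, but the principle is the same: isolate the relevant ridge in the time-frequency plane before extracting the phase.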
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
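A 1D FFT-based matching step of the kind described can be sketched with circular cross-correlation; the intensity profile and the shift are synthetic, and the sub-pixel refinement used in practice is omitted:

```python
import numpy as np

def estimate_shift_1d(ref, shifted):
    """Integer-pixel displacement between two 1-D intensity profiles
    via FFT-based circular cross-correlation."""
    R = np.fft.fft(ref)
    S = np.fft.fft(shifted)
    xcorr = np.fft.ifft(S * np.conj(R)).real
    lag = int(np.argmax(xcorr))
    n = len(ref)
    return lag if lag <= n // 2 else lag - n   # wrap to a signed shift

# Synthetic 1-D intensity profile and a copy displaced by 17 pixels
x = np.linspace(0, 6 * np.pi, 512)
profile = np.sin(x) + 0.3 * np.sin(3 * x)
moved = np.roll(profile, 17)
shift = estimate_shift_1d(profile, moved)
```

The FFT reduces the matching cost from O(n²) for direct correlation to O(n log n), which is what makes per-frame displacement estimation cheap enough for online use.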
NASA Technical Reports Server (NTRS)
Lal, Ravindra
1994-01-01
The first technical report, covering the period 1 Jan. 1993 to 31 Dec. 1993, for the research entitled 'Direct Observation of Crystal Growth from Solution Using Optical Investigation of a Growing Crystal Face' is presented. Work on the project did not start until 1 June 1993 due to the non-availability of the required personnel. The progress of the work from 1 June 1993 through the end of 1993 is described. Significant progress was made in testing various optical diagnostic techniques for monitoring the crystal solution. The techniques being tested include: a heterodyne detection technique, in which changes in phase are measured as a function of time/crystal growth; a conventional interferometric technique, in which fringe brightness is measured as a function of crystal growth/time; and a Mach-Zehnder interferometric technique, in which fringe brightness is measured as a function of time to obtain information on concentration changes. During the second year, the best interferometric technique will be selected and incorporated along with the ellipsometric technique to obtain real-time in-situ growth rate measurements. Laboratory mock-ups of the first two techniques were made and tested.
NASA Technical Reports Server (NTRS)
Bridges, James
2002-01-01
As part of the Advanced Subsonic Technology Program, a series of experiments was conducted at NASA Glenn Research Center on the effect of mixing enhancement devices on the aeroacoustic performance of separate flow nozzles. Initial acoustic evaluations of the devices showed that they reduced jet noise significantly, while creating very little thrust loss. The explanation for the improvement required that turbulence measurements, namely single point mean and RMS statistics and two-point spatial correlations, be made to determine the change in the turbulence caused by the mixing enhancement devices that lead to the noise reduction. These measurements were made in the summer of 2000 in a test program called Separate Nozzle Flow Test 2000 (SFNT2K) supported by the Aeropropulsion Research Program at NASA Glenn Research Center. Given the hot high-speed flows representative of a contemporary bypass ratio 5 turbofan engine, unsteady flow field measurements required the use of an optical measurement method. To achieve the spatial correlations, the Particle Image Velocimetry technique was employed, acquiring high-density velocity maps of the flows from which the required statistics could be derived. This was the first successful use of this technique for such flows, and shows the utility of this technique for future experimental programs. The extensive statistics obtained were likewise unique and give great insight into the turbulence which produces noise and how the turbulence can be modified to reduce jet noise.
Technique for measurement of energy loss of proton in target medium
NASA Astrophysics Data System (ADS)
Khadke, U. V.
2018-05-01
Energy loss (EL) of charged particles in a target medium needs special attention when measurements must be repeated over periods of a couple of days. It is imperative to ensure that the measurements are not affected by long-term drifts of the accelerator beam energy and the associated electronic modules. For one such situation, the measurement of the EL of a proton beam in a thick target, we optimised and standardized a technique for measuring the most probable energy loss of 24.774 MeV protons in an aluminium target of thickness 330 mg/cm2. The paper describes the method we developed to ensure that our EL measurements were free from the effects of drifts in any associated electronic modules. The details of the energy spectrometer and the basic principle and technique for energy loss measurements in a target medium are described.
Global Atmosphere Watch Workshop on Measurement-Model ...
The World Meteorological Organization’s (WMO) Global Atmosphere Watch (GAW) Programme coordinates high-quality observations of atmospheric composition from global to local scales with the aim to drive high-quality and high-impact science while co-producing a new generation of products and services. In line with this vision, GAW’s Scientific Advisory Group for Total Atmospheric Deposition (SAG-TAD) has a mandate to produce global maps of wet, dry and total atmospheric deposition for important atmospheric chemicals to enable research into biogeochemical cycles and assessments of ecosystem and human health effects. The most suitable scientific approach for this activity is the emerging technique of measurement-model fusion for total atmospheric deposition. This technique requires global-scale measurements of atmospheric trace gases, particles, precipitation composition and precipitation depth, as well as predictions of the same from global/regional chemical transport models. The fusion of measurement and model results requires data assimilation and mapping techniques. The objective of the GAW Workshop on Measurement-Model Fusion for Global Total Atmospheric Deposition (MMF-GTAD), an initiative of the SAG-TAD, was to review the state-of-the-science and explore the feasibility and methodology of producing, on a routine retrospective basis, global maps of atmospheric gas and aerosol concentrations as well as wet, dry and total deposition via measurement-model
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at any instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capturing rate, while photodetector-based technology can only measure a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort made by researchers to improve the measurement capabilities of interferometry-based techniques to meet the requirements of industrial applications. PMID:24963503
NASA Technical Reports Server (NTRS)
Sacksteder, Kurt
1988-01-01
Current efforts of the Microgravity Combustion Working Group are summarized and the temperature measurement requirements for the combustion studies are defined. Many of the combustion systems that are studied in the low gravity environment are near-limit systems, that is, systems operating near the limit of flammability in terms of oxygen or fuel concentration. Systems of this type are normally weak in the sense that there is a delicate balance between the heat released in the flame and the heat required to sustain the flame. Intrusive or perturbative temperature measurement probes can be inaccurate in these situations and in the limiting case can extinguish the flame. Noncontact techniques then become the only way to obtain the required measurements. Noncontact measurement requirements for each of the three thermodynamic phases are described in terms of spatial and temporal resolution and temperature range.
Guide to measurement of winds with instrumented aircraft
NASA Technical Reports Server (NTRS)
Frost, Walter; Paige, Terry S.; Nelius, Andrew E.
1991-01-01
Aircraft measurement techniques are reviewed. A review of past and present applications of instrumented aircraft to atmospheric observations is presented. Questions relative to measuring mean wind profiles, as contrasted with turbulence measurements, are then addressed. Requirements for instrumentation and accuracy, data acquisition, data reduction, and theoretical and uncertainty analyses are considered.
High resolution spectroscopy in the microwave and far infrared
NASA Technical Reports Server (NTRS)
Pickett, Herbert M.
1990-01-01
High resolution rotational spectroscopy has long been central to remote sensing techniques in atmospheric sciences and astronomy. As such, laboratory measurements must supply the data required for direct interpretation of observations from instruments that sense atmospheres using rotational spectra. Spectral measurements in the microwave and far infrared regions are also very powerful tools when combined with infrared measurements for characterizing the rotational structure of vibrational spectra. In the past decade, new techniques have been developed that push high resolution spectroscopy into the wavelength region between 25 micrometers and 2 mm. Techniques to be described include: (1) harmonic generation of microwave sources, (2) infrared laser difference frequency generation, (3) laser sideband generation, and (4) ultrahigh resolution interferometers.
Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project
Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.
2011-01-01
Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.
Evaluation of three new laser spectrometer techniques for in-situ carbon monoxide measurements
NASA Astrophysics Data System (ADS)
Zellweger, C.; Steinbacher, M.; Buchmann, B.
2012-07-01
Long-term time series of atmospheric composition are essential for environmental research and thus require compatible, multi-decadal monitoring activities. However, the current data quality objectives of the World Meteorological Organization (WMO) for carbon monoxide (CO) in the atmosphere are very challenging to meet with the measurement techniques that were in use until recently. During the past few years, new spectroscopic techniques have come on the market with promising properties for trace gas analytics. The current study compares three recently commercialized instruments (available since 2011) with the best previously available technique (vacuum UV fluorescence) and provides a link to previous comparison studies. The instruments were investigated for their performance regarding repeatability, reproducibility, drift, temperature dependence, water vapour interference and linearity. Finally, all instruments were examined during a short measurement campaign to assess their applicability for long-term field measurements. It could be shown that the new techniques provide considerably better performance than previous techniques, although some issues such as temperature influence and cross sensitivities need further attention.
NASA Astrophysics Data System (ADS)
Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann; Bonin, Timothy A.; Hardesty, R. Michael; Lundquist, Julie K.; Delgado, Ruben; Valerio Iungo, G.; Ashton, Ryan; Debnath, Mithu; Bianco, Laura; Wilczak, James M.; Oncley, Steven; Wolfe, Daniel
2017-01-01
Accurate three-dimensional information of wind flow fields can be an important tool not only in visualizing complex flow but also in understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies. In this paper, measurement uncertainties for different single- and multi-Doppler strategies using simple scan geometries (conical, vertical plane and staring) are investigated. The tradeoffs (such as time-space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. Sensitivity of the single-/multi-Doppler measurement uncertainties to averaging period is investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty. It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to the additional assumptions of stationarity and horizontal homogeneity, together with less representative line-of-sight velocity statistics. It was also found that wind speed measurement uncertainty was lower during stable conditions than during unstable conditions.
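As a minimal sketch of how line-of-sight Doppler velocities from a simple scan geometry become a wind vector (assuming horizontal homogeneity, as the abstract notes), the least-squares retrieval below uses an invented conical-scan geometry and a synthetic 5 m/s westerly; it is illustrative, not the campaign's actual processing.

```python
import numpy as np

# Retrieve (u, v, w) from Doppler line-of-sight velocities by least squares,
# assuming the wind is horizontally homogeneous across the scan volume.
def retrieve_wind(azimuths_deg, elevations_deg, v_los):
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    # Geometry matrix: v_los = u*sin(az)cos(el) + v*cos(az)cos(el) + w*sin(el)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    uvw, *_ = np.linalg.lstsq(A, np.asarray(v_los), rcond=None)
    return uvw

# Synthetic test: a pure 5 m/s westerly (u = 5) seen by a conical scan
# at 60 degrees elevation with 8 azimuths.
az = np.arange(0.0, 360.0, 45.0)
el = np.full_like(az, 60.0)
v_los = 5.0 * np.sin(np.radians(az)) * np.cos(np.radians(el))
u, v, w = retrieve_wind(az, el, v_los)
print(round(float(u), 3) + 0, round(float(v), 3) + 0, round(float(w), 3) + 0)  # -> 5.0 0.0 0.0
```

Departures from the homogeneity assumption project directly into (u, v, w) errors, which is one reason uncertainty grows with spatial coverage.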
Direct measurement of local material properties within living embryonic tissues
NASA Astrophysics Data System (ADS)
Serwane, Friedhelm; Mongera, Alessandro; Rowghanian, Payam; Kealhofer, David; Lucio, Adam; Hockenbery, Zachary; Campàs, Otger
The shaping of biological matter requires the control of its mechanical properties across multiple scales, ranging from single molecules to cells and tissues. Despite their relevance, measurements of the mechanical properties of sub-cellular, cellular and supra-cellular structures within living embryos pose severe challenges to existing techniques. We have developed a technique that uses magnetic droplets to measure the mechanical properties of complex fluids, including in situ and in vivo measurements within living embryos, across multiple length and time scales. By actuating the droplets with magnetic fields and recording their deformation, we probe the local mechanical properties at any length scale we choose by varying the droplets' diameter. We use the technique to determine the subcellular mechanics of individual blastomeres of zebrafish embryos, and bridge the gap to the tissue scale by measuring the local viscosity and elasticity of zebrafish embryonic tissues. Using this technique, we show that embryonic zebrafish tissues are viscoelastic, with fluid-like behavior at long time scales. This technique will enable mechanobiology and mechanotransduction studies in vivo, including the study of diseases correlated with tissue stiffness, such as cancer.
Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao
2018-01-01
This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths, based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal condition of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique is capable of detecting damage in an STS structure with a high level of accuracy and fewer required measurements, making it more convenient and efficient than the traditional UCT technique.
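The ℓ1-minimization recovery step can be sketched with a toy iterative soft-thresholding (ISTA) solver. The paper does not specify its solver, and the problem sizes, random measurement matrix, and damage pattern below are invented for illustration of the compressive sampling idea: a sparse vector is recovered from far fewer random measurements than unknowns.

```python
import numpy as np

# ISTA for min 0.5*||A x - y||^2 + lam*||x||_1 (a standard l1 solver,
# used here as a stand-in for the paper's l1-minimization step).
def ista(A, y, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 100, 40                              # 100 unknowns, only 40 random paths
x_true = np.zeros(n)
x_true[[7, 42, 77]] = [1.0, -0.8, 0.5]      # invented sparse "damage" pattern
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                              # noiseless measurements
x_hat = ista(A, y, lam=0.01, n_iter=2000)
print(np.flatnonzero(np.abs(x_hat) > 0.2))  # indices of recovered damage
```

With 40 measurements for 100 unknowns, the three damaged locations are still recovered, which is the efficiency gain the abstract claims over measuring the whole net.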
Experimental measurement of structural power flow on an aircraft fuselage
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1991-01-01
An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with the four-transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer measurements, two sets of data are collected for each position, with the four-transducer platform rotated by 180 degrees between the sets, and an average is taken of the two sets of data. The results of these measurements are presented, together with a discussion of the suitability of the approach for measuring structural intensity on a real structure.
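The 180-degree rotation trick described above can be illustrated numerically. In the standard cross-spectral formulation, structural intensity is proportional to Im{G12}, the imaginary part of the cross-spectrum between a transducer pair; an inter-channel phase mismatch biases the estimate, and averaging the normal and rotated measurements cancels the first-order bias. The cross-spectrum and mismatch values below are invented, and proportionality constants are omitted.

```python
import numpy as np

# Intensity estimate ~ Im{G12 * exp(j*phi_e)}, where phi_e is an unwanted
# phase mismatch between the two measurement channels.
def intensity_estimate(G12, phi_e):
    return np.imag(G12 * np.exp(1j * phi_e))

G12 = 2.0 + 1.0j          # made-up true cross-spectrum: Im part = 1.0
phi_e = 0.05              # 0.05 rad inter-channel phase mismatch
fwd = intensity_estimate(G12, phi_e)            # biased by Re{G12}*sin(phi_e)
rev = -intensity_estimate(np.conj(G12), phi_e)  # pair rotated 180 degrees
print(round(float(fwd), 4), round(float((fwd + rev) / 2), 4))  # -> 1.0987 0.9988
```

The single estimate is biased by about 10%, while the average recovers Im{G12}·cos(phi_e), essentially the true value for small mismatch.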
A method for the measurement and analysis of ride vibrations of transportation systems
NASA Technical Reports Server (NTRS)
Catherines, J. J.; Clevenson, S. A.; Scholl, H. F.
1972-01-01
The measurement and recording of ride vibrations which affect passenger comfort in transportation systems and the subsequent data-reduction methods necessary for interpreting the data present exceptional instrumentation requirements and necessitate the use of computers for specialized analysis techniques. A method is presented for both measuring and analyzing ride vibrations of the type encountered in ground and air transportation systems. A portable system for measuring and recording low-frequency, low-amplitude accelerations and specialized data-reduction procedures are described. Sample vibration measurements in the form of statistical parameters representative of typical transportation systems are also presented to demonstrate the utility of the techniques.
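A minimal sketch of the kind of statistical reduction described above: an overall RMS level and a crest factor computed from a recorded acceleration trace. The 5 Hz tone-plus-noise signal, sample rate, and amplitudes are synthetic assumptions, not data from the paper.

```python
import numpy as np

# Synthetic low-frequency, low-amplitude ride-vibration record:
# a 5 Hz sinusoid with broadband noise, sampled at 200 Hz for 10 s.
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
accel = 0.1 * np.sin(2 * np.pi * 5.0 * t) + 0.02 * rng.standard_normal(t.size)

rms = np.sqrt(np.mean(accel ** 2))    # overall vibration level
crest = np.max(np.abs(accel)) / rms   # peak-to-RMS ratio (signal "peakiness")
print(round(float(rms), 3), round(float(crest), 2))
```

Parameters like these, computed band-by-band, are the statistical descriptors typically compared against ride-comfort criteria.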
Detection of boundary-layer transitions in wind tunnels
NASA Technical Reports Server (NTRS)
Wood, W. R.; Somers, D. M.
1978-01-01
Accelerometer replaces stethoscope in technique for detection of laminar-to-turbulent boundary-layer transitions on wind-tunnel models. Technique allows measurements above or below atmospheric pressure because human operator is not required within tunnel. Data may be taken from accelerometer and pressure transducer simultaneously and delivered to systems for analysis.
Development of a robust field technique to quantify the air-void distribution in fresh concrete.
DOT National Transportation Integrated Search
2013-07-01
To make concrete frost durable, it is common to provide a system of small, well-distributed air voids. Current measuring techniques require weeks to complete on hardened and polished samples of concrete. This report presents the results of a n...
Note: development of high speed confocal 3D profilometer.
Ang, Kar Tien; Fang, Zhong Ping; Tay, Arthur
2014-11-01
A high-speed confocal 3D profilometer based on chromatic confocal technology and the spinning Nipkow disk technique has been developed and tested. It can measure a whole surface topography by taking only one image, which requires less than 0.3 s. Surface height information is retrieved based on the ratios of red, green, and blue color information. A new vector projection technique has been developed to enhance the vertical resolution of the measurement. The measurement accuracy of the prototype system has been verified via different test samples.
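The note does not detail the vector projection step, but the ratio-based height decoding can be sketched as follows: a calibration table maps height to a normalized (R, G, B) response, and a measured pixel is assigned the height whose calibration color vector projects most strongly onto it. The calibration curves below are invented for illustration.

```python
import numpy as np

heights = np.linspace(0.0, 100.0, 501)        # micrometres, 0.2 um steps
# Toy spectral responses: each channel peaks at a different focal height,
# mimicking chromatic dispersion along the optical axis.
R = np.exp(-((heights - 20.0) / 15.0) ** 2)
G = np.exp(-((heights - 50.0) / 15.0) ** 2)
B = np.exp(-((heights - 80.0) / 15.0) ** 2)
table = np.stack([R, G, B], axis=1)
table /= np.linalg.norm(table, axis=1, keepdims=True)  # unit colour vectors

def decode_height(rgb):
    """Assign the calibration height whose colour vector best matches rgb."""
    v = np.array(rgb, dtype=float)
    v /= np.linalg.norm(v)
    return heights[np.argmax(table @ v)]      # largest projection wins

# A pixel matching the calibration entry at index 250 (about 50 um) decodes
# back to that height.
print(round(float(decode_height(table[250])), 6))  # -> 50.0
```

Because the decoding is a table lookup per pixel, the whole topography can be recovered from the single captured image.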
NASA Technical Reports Server (NTRS)
Seshadri, K.; Rosner, D. E.
1985-01-01
An application of an optical polarization technique in a combustion environment is demonstrated by following, in real-time, growth rates of boric oxide condensate on heated platinum ribbons exposed to seeded propane-air combustion gases. The results obtained agree with the results of earlier interference measurements and also with theoretical chemical vapor deposition predictions. In comparison with the interference method, the polarization technique places less stringent requirements on surface quality, which may justify the added optical components needed for such measurements.
Introduction: Photons and Ground-Based
NASA Technical Reports Server (NTRS)
Spann, James; Moore, Thomas
2017-01-01
A Conference on Measurement Techniques for Solar and Space Physics was held on 20-24 April 2015 in Boulder, Colorado, at the National Center for Atmospheric Research Center Green Campus. The present volume collects together the conference papers for photons and ground-based categories. This gathering of over 200 scientists and instrumentalists was born out of the desire to collect in one place the latest experiment and instrument technologies required for advancement of scientific knowledge in the disciplines of solar and space physics. The two goals for this conference and the subsequent publication of its content are (a) to describe measurement techniques and technology development needed to advance high priority science in the fields of solar and space physics; and (b) to provide a survey or reference of techniques for in situ measurement and remote sensing of space plasmas. Towards this end, our goal has always been inspired by the two 1998 Geophysical Monographs (Nos. 102 and 103) entitled, "Measurement Techniques in Space Plasmas" (particles and fields) [Pfaff et al., 1998a, 1998b], which have served as a reference and resource for advanced students, engineers, and scientists who wish to learn the fundamentals of measurement techniques and technology in this field. Those monographs were the product of an American Geophysical Union Chapman Conference that took place in Santa Fe, NM, in 1995: "Measurement Techniques in Space Plasmas-What Works, What Doesn't." Two decades later, we believe that it is appropriate to revisit this subject, in light of recent advances in technology, research platforms, and analysis techniques. Moreover, we now include direct measurements of neutral gases in the upper atmosphere, optical imaging techniques, and remote observations in space and on the ground. Accordingly, the workshop was organized among four areas of measurement techniques: particles, fields, photons, and ground-based. 
This two-set volume is largely composed of the content of that workshop. Special attention is given to those techniques and technologies that demonstrate promise of significant advancement in measurements that will enable the highest priority science as described in the 2012 National Research Council Decadal Survey [Baker and Zurbuchen et al., 2013]. Additionally, a broad tutorial survey of the current technologies is provided to serve as reference material and as a basis from which advanced and innovative ideas can be discussed and pursued. Included are instrumentation and techniques to observe the solar environment from its interior to its outer atmosphere, the heliosphere out to the interstellar regions, in geospace, and other planetary magnetospheres and atmospheres. To make significant progress in priority science as expressed in the National Research Council solar and space physics decadal survey and recent NASA Heliophysics roadmaps, identification of enabling new measurement techniques and technologies to be developed is required. Also, it is valuable to the community and future scientists and engineers to have a complete survey of the techniques and technologies used by the practitioners of solar and space physics. As with the 1995 conference and subsequent 1998 publication, it is incumbent on the community to identify those measurements that are particularly challenging and still require new techniques to be identified and tested to enable the necessary accuracy and resolution of certain parameters to be achieved. 
The following is a partial list of the measurement technique categories that are featured in these special publications: Particles; Thermal plasma to MeV energetic particles, neutral gas properties including winds, density, temperature, and composition, and enhanced neutral atom imaging; Fields; DC electric and magnetic fields, plasma waves, and electron drift instruments from which the plasma velocity information provides a measure of the DC electric field; Photons; Instruments sensitive from the near-infrared to X-rays; Contributions of techniques and technology for optical design, optical components, sensors, material selection for cameras, telescopes, and spectrographs; Ground based; Remote sensing methods for solar and geospace activity and space weather. The focus includes solar observatories, all-sky cameras, lidars, and ionosphere thermosphere mesosphere observatory systems such as radars, ionosondes, GPS receivers, magnetometers, conjugate observations, and airborne campaigns. The present volume collects together the papers for photons and ground-based categories. The companion volume collects together the papers for particles and fields categories. It is recognized that there are measurement techniques that overlap among the four categories. For example, use of microchannel plate detectors is used in photon and particle measurement techniques or the observation of visible photons and magnetic fields in space and on the ground share common technologies. Therefore, the reader should consider the entire collection of papers as they seek to understand particular applications. We hope that these volumes will be as valuable as a reference for our community as the earlier 1998 volumes have been.
NASA Technical Reports Server (NTRS)
Korb, C. L.; Gentry, Bruce M.
1995-01-01
The goal of the Army Research Office (ARO) Geosciences Program is to measure the three dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50 meter spatial resolution and with measurement accuracies of the order of 20 cm/sec. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy and which will provide a unique capability for atmospheric wind studies.
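The core of the edge technique can be sketched in a few lines: with the laser parked on the steep slope of a high-resolution filter, a small Doppler shift Δν produces a signal change ΔS ≈ (dS/dν)·Δν, so Δν = ΔS/(dS/dν) and the line-of-sight velocity follows from v = (λ/2)·Δν. The filter shape, operating point, and wavelength below are invented for illustration.

```python
import numpy as np

wavelength = 1.064e-6                     # m, an assumed lidar wavelength

def filter_transmission(nu):
    """Toy filter edge: smooth ramp with a ~500 MHz width around nu = 0."""
    return 1.0 / (1.0 + np.exp(-nu / 125e6))

nu0 = 100e6                               # operating point on the slope (Hz)
d = 1e3                                   # small step for a numerical slope
slope = (filter_transmission(nu0 + d) - filter_transmission(nu0 - d)) / (2 * d)

v_true = 10.0                             # m/s line-of-sight wind
d_nu = 2.0 * v_true / wavelength          # Doppler shift of the backscatter (Hz)
d_signal = filter_transmission(nu0 + d_nu) - filter_transmission(nu0)
v_est = (d_signal / slope) * wavelength / 2.0
print(round(float(v_est), 2))             # -> 9.7 (slightly low: filter curvature)
```

The steep slope is what converts tiny Doppler shifts into large signal changes; the residual error here comes from the curvature of the toy filter, which a real calibration would remove.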
Shuttle Tethered Aerothermodynamics Research Facility (STARFAC) Instrumentation Requirements
NASA Technical Reports Server (NTRS)
Wood, George M.; Siemers, Paul M.; Carlomagno, Giovanni M.; Hoffman, John
1986-01-01
The instrumentation requirements for the Shuttle Tethered Aerothermodynamic Research Facility (STARFAC) are presented. The typical physical properties of the terrestrial atmosphere are given, along with representative atmospheric daytime ion concentrations and a comparison of equilibrium and nonequilibrium gas properties at a point away from a wall. STARFAC science and engineering measurements are given, as is the TSS free-stream gas analysis. The potential nonintrusive measurement techniques for hypersonic boundary layer research are outlined, along with the quantitative physical measurement methods for aerothermodynamic studies.
[Methods for measuring skin aging].
Zieger, M; Kaatz, M
2016-02-01
Aging affects human skin and is becoming increasingly important with regard to medical, social and aesthetic issues. Detection of intrinsic and extrinsic components of skin aging requires reliable measurement methods. Modern techniques, e.g., based on direct imaging, spectroscopy or skin physiological measurements, provide a broad spectrum of parameters for different applications.
Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements
NASA Astrophysics Data System (ADS)
Campbell, J. F.; Lin, B.; Obland, M. D.; Liu, Z.; Kooi, S. A.; Fan, T. F.; Nehrir, A. R.; Meadows, B.; Browell, E. V.
2016-12-01
Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and the Atmospheric Carbon and Transport (ACT) - America project are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the ASCENDS and ACT-America science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new sub-meter hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth.
These techniques are used in a new data processing architecture written in the C language to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs. This software is about an order of magnitude faster than the Mathematica code previously used and employs multithreaded parallel processing to take advantage of multicore processors.
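The auto-correlation property that makes BPSK attractive for IM-CW ranging can be demonstrated with a short pseudo-noise code experiment. The 127-chip m-sequence (LFSR with taps for x^7 + x^6 + 1) and the simulated delay are illustrative choices, not parameters from the ASCENDS instrument.

```python
import numpy as np

# Generate a 127-chip maximal-length sequence with a Fibonacci LFSR
# (feedback taps 7 and 6, i.e. the primitive polynomial x^7 + x^6 + 1),
# then map {0,1} -> {-1,+1} for BPSK.
def m_sequence(taps=(7, 6), length=127):
    state = [1] * 7
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out) * 2 - 1

code = m_sequence()
delay = 37                                  # simulated round-trip delay, chips
received = np.roll(code, delay)             # noiseless surface echo
# Circular cross-correlation against all shifts of the transmitted code:
corr = np.array([np.dot(received, np.roll(code, k)) for k in range(code.size)])
print(np.argmax(corr))                      # -> 37 (peak at the true delay)
```

The correlation peaks at 127 at the true delay and sits at -1 everywhere else, which is the sidelobe-free behavior exploited to separate surface returns from intermediate cloud returns.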
Validation of two innovative methods to measure contaminant mass flux in groundwater
NASA Astrophysics Data System (ADS)
Goltz, Mark N.; Close, Murray E.; Yoon, Hyouk; Huang, Junqi; Flintoft, Mark J.; Kim, Sehjong; Enfield, Carl
2009-04-01
The ability to quantify the mass flux of a groundwater contaminant that is leaching from a source area is critical to enable us to: (1) evaluate the risk posed by the contamination source and prioritize cleanup, (2) evaluate the effectiveness of source remediation technologies or natural attenuation processes, and (3) quantify a source term for use in models that may be applied to predict maximum contaminant concentrations in downstream wells. Recently, a number of new methods have been developed and subsequently applied to measure contaminant mass flux in groundwater in the field. However, none of these methods has been validated at larger than the laboratory scale through a comparison of measured mass flux and a known flux introduced into flowing groundwater. Two innovative flux measurement methods, the tandem circulation well (TCW) and modified integral pumping test (MIPT) methods, have recently been proposed. The TCW method can measure mass flux integrated over a large subsurface volume without extracting water. The TCW method may be implemented using two different techniques. One technique, the multi-dipole technique, is relatively simple and inexpensive, requiring only measurement of heads, while the second technique requires conducting a tracer test. The MIPT method is an easily implemented method of obtaining volume-integrated flux measurements. In the current study, flux measurements obtained using these two methods are compared with known mass fluxes in a three-dimensional, artificial aquifer. Experiments in the artificial aquifer show that the TCW multi-dipole and tracer test techniques accurately estimated flux, within 2% and 16%, respectively, although the good results obtained using the multi-dipole technique may be fortuitous. The MIPT method was not as accurate as the TCW method, underestimating flux by as much as 70%.
MIPT method inaccuracies may arise because the method's assumptions (two-dimensional steady groundwater flow to fully-screened wells) were not well approximated in the artificial aquifer. While fluxes measured using the MIPT method were consistently underestimated, the method's simplicity and field applicability may compensate for the inaccuracies observed in this artificial aquifer test.
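Whichever measurement method is used, the quantity being compared above reduces to a concentration-weighted integral of the Darcy flux over a control plane downgradient of the source. A minimal numerical sketch (grid dimensions, concentrations, and fluxes are all made-up illustration values, not data from the study):

```python
import numpy as np

# Illustrative sketch (not from the paper): the mass discharge through a
# control plane is the concentration-weighted Darcy flux integrated over
# the plane, Md = sum_i C_i * q_i * A_i.

ny, nz = 10, 5                  # control-plane discretization (assumed)
cell_area = 0.5 * 0.5           # m^2 per cell (assumed)

rng = np.random.default_rng(0)
conc = rng.uniform(0.0, 2.0, (ny, nz))      # g/m^3 (i.e. mg/L), assumed
darcy = rng.uniform(0.05, 0.15, (ny, nz))   # Darcy flux, m/day, assumed

mass_discharge = np.sum(conc * darcy * cell_area)   # g/day
print(mass_discharge)
```

The TCW and MIPT methods are, in effect, different ways of estimating this integral without sampling every cell of the plane.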
Lidar Measurements for Desert Dust Characterization: An Overview
NASA Technical Reports Server (NTRS)
Mona, L.; Liu, Z.; Mueller, D.; Omar, A.; Papayannis, A.; Pappalardo, G.; Sugimoto, N.; Vaughan, M.
2012-01-01
We provide an overview of light detection and ranging (lidar) capability for describing and characterizing desert dust. This paper summarizes lidar techniques, observations, and the principal findings of desert dust lidar measurements. The main objective is to provide the scientific community, including non-practitioners of lidar observations, with a reference paper on dust lidar measurements. In particular, it fills the current gap in communication between the research-oriented lidar community and potential desert dust data users, such as air quality monitoring agencies and aviation advisory centers. The current capability of the different lidar techniques for the characterization of aerosol in general, and desert dust in particular, is presented. Technical aspects and required assumptions of these techniques are discussed, providing readers with the pros and cons of each technique. Information about desert dust collected to date using lidar techniques is reviewed. Lidar techniques for aerosol characterization have a maturity level appropriate for addressing air quality and transportation issues, as demonstrated by the first results reported in this paper.
The 'sniffer-patch' technique for detection of neurotransmitter release.
Allen, T G
1997-05-01
A wide variety of techniques have been employed for the detection and measurement of neurotransmitter release from biological preparations. Whilst many of these methods offer impressive levels of sensitivity, few are able to combine sensitivity with the necessary temporal and spatial resolution required to study quantal release from single cells. One detection method that is seeing a revival of interest and has the potential to fill this niche is the so-called 'sniffer-patch' technique. In this article, specific examples of the practical aspects of using this technique are discussed along with the procedures involved in calibrating these biosensors to extend their applications to provide quantitative, in addition to simple qualitative, measurements of quantal transmitter release.
Design of an Ultra-High Efficiency GaN High-Power Amplifier for SAR Remote Sensing
NASA Technical Reports Server (NTRS)
Thrivikraman, Tushar; Hoffman, James
2013-01-01
This work describes the development of a high-power amplifier for use with a remote sensing SAR system. The amplifier is intended to meet the requirements of the Sweep-SAR technique for the proposed DESDynI SAR instrument. To optimize the amplifier design, an active load-pull technique is employed for harmonic tuning, yielding efficiency improvements. In addition, some of the techniques used to overcome the challenges of load-pulling high-power devices are presented. The designed amplifier was measured to deliver 49 dBm of output power at 75% PAE, which is suitable to meet the proposed system requirements.
NASA Technical Reports Server (NTRS)
Burk, S. M., Jr.; Wilson, C. F., Jr.
1975-01-01
A relatively inexpensive radio-controlled model stall/spin test technique was developed. Operational experiences using the technique are presented. A discussion of model construction techniques, spin-recovery parachute system, data recording system, and movie camera tracking system is included. Also discussed are a method of measuring moments of inertia, scaling of engine thrust, cost and time required to conduct a program, and examples of the results obtained from the flight tests.
Thermophysical Property Measurements in the MSFC ESL
NASA Technical Reports Server (NTRS)
Hyers, R. W.; Rogers, J. R.; Robinson, M. B.; Rathz, T. J.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Electrostatic Levitation (ESL) is an advanced technique for containerless processing of metals, ceramics, and semiconductors. Because no container is required, there is no contamination from reaction with a crucible, allowing processing of high-temperature, highly reactive melts. The high-vacuum processing environment further reduces possible contamination of the samples. Finally, there is no container to provide heterogeneous nucleation sites, so the undercooled range is also accessible for many materials. For these reasons, ESL provides a unique environment for measuring thermophysical properties of liquid materials. The properties that can be measured in ESL include density, surface tension, viscosity, electrical and thermal conductivity, specific heat, phase diagrams, TTT- and CCT-curves, and other thermodynamic properties. In this paper, we present surface tension and viscosity data obtained by the oscillating drop technique, and density data obtained by an automated photographic technique, all measured in the ESL at NASA Marshall Space Flight Center.
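As a sketch of the oscillating drop technique mentioned above: for the fundamental (l = 2) shape oscillation of a levitated spherical drop, the standard Rayleigh relation gives the surface tension directly from the drop mass and oscillation frequency. The mass and frequency below are assumed example values, not MSFC measurements:

```python
import numpy as np

# Oscillating-drop relation (standard Rayleigh result, not the MSFC analysis
# code): for the l = 2 mode of a free spherical drop of mass m oscillating
# at frequency f, the surface tension is sigma = 3*pi*m*f**2 / 8.

m = 40e-6          # drop mass, kg (~40 mg metal sample, assumed)
f = 150.0          # l = 2 oscillation frequency, Hz (assumed)

sigma = 3 * np.pi * m * f**2 / 8
print(sigma)       # surface tension in N/m
```

Viscosity follows analogously from the damping time of the same oscillation, which is why a single excite-and-release experiment yields both properties.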
A measurement of time-averaged aerosol optical depth using air-showers observed in stereo by HiRes
NASA Astrophysics Data System (ADS)
High Resolution Fly'S Eye Collaboration; Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Belov, K.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Cannon, C. T.; Cao, Z.; Connolly, B. M.; Fedorova, Y.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Manago, N.; Mannel, E. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Reil, K.; Roberts, M. D.; Schnetzer, S. R.; Seman, M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2006-03-01
Air fluorescence measurements of cosmic ray energy must be corrected for attenuation of the atmosphere. In this paper, we show that the air-showers themselves can yield a measurement of the aerosol attenuation in terms of optical depth, time-averaged over extended periods. Although the technique lacks statistical power to make the critical hourly measurements that only specialized active instruments can achieve, we note the technique does not depend on absolute calibration of the detector hardware, and requires no additional equipment beyond the fluorescence detectors that observe the air showers. This paper describes the technique, and presents results based on analysis of 1258 air-showers observed in stereo by the High Resolution Fly’s Eye over a four year span.
NASA Technical Reports Server (NTRS)
Hayakawa, K. K.; Udell, D. R.; Iwata, M. M.; Lytle, C. F.; Chrisco, R. M.; Greenough, C. S.; Walling, J. A.
1972-01-01
The results are presented of an investigation into the availability and performance capability of measurement components in the areas of cryogenic temperature, pressure, flow, and liquid detection components and high-temperature strain gages. In addition, technical subjects allied to the components were researched and discussed. These selected areas of investigation were: (1) high-pressure flange seals, (2) hydrogen embrittlement of pressure transducer diaphragms, (3) the effects of close-coupled versus remote transducer installation on pressure measurement, (4) temperature transducer configuration effects on measurements, and (5) techniques in temperature compensation of strain gage pressure transducers. The purpose of the program was to investigate the latest design and application techniques in measurement component technology and to document this information along with recommendations for upgrading measurement component designs for future S-2 derivative applications. Recommendations are provided for upgrading the existing state of the art in component design, where required, to satisfy performance requirements of S-2 derivative vehicles.
Parametric studies and characterization measurements of x-ray lithography mask membranes
NASA Astrophysics Data System (ADS)
Wells, Gregory M.; Chen, Hector T. H.; Engelstad, Roxann L.; Palmer, Shane R.
1991-08-01
The techniques used in the experimental characterization of thin membranes being considered as mask blanks for x-ray lithography are described. Among the parameters of interest for this evaluation are the film's stress, fracture strength, uniformity of thickness, absorption in the x-ray and visible spectral regions, and the modulus and grain structure of the material. The experimental techniques used for measuring these properties are described. The accuracy and applicability of the assumptions used to derive the formulas that relate the experimental measurements to the parameters of interest are considered. Experimental results for silicon carbide and diamond films are provided. Another characteristic needed for an x-ray mask carrier is radiation stability. The number of x-ray exposures expected to be performed in the lifetime of an x-ray mask on a production line is on the order of 10^7. The dimensional stability requirements placed on the membranes during this period are discussed. Interferometric techniques that provide sufficient sensitivity for these stability measurements are described. A comparison is made between the different techniques that have been developed in terms of the information that each technique provides, the accuracy of the various techniques, and the implementation issues involved with each technique.
Diffraction based overlay metrology for α-carbon applications
NASA Astrophysics Data System (ADS)
Saravanan, Chandra Saru; Tan, Asher; Dasari, Prasad; Goelzer, Gary; Smith, Nigel; Woo, Seouk-Hoon; Shin, Jang Ho; Kang, Hyun Jae; Kim, Ho Chul
2008-03-01
Applications that require overlay measurement between layers separated by absorbing interlayer films (such as α-carbon) pose significant challenges for sub-50nm processes. In this paper, scatterometry methods are investigated as an alternative that meets these stringent overlay metrology requirements: a spectroscopic Diffraction Based Overlay (DBO) measurement technique is used in which registration errors are extracted from specially designed diffraction targets. DBO measurements are performed on a detailed set of wafers with varying α-carbon layer (ACL) thicknesses, and the correlation in overlay values between wafers with varying ACL thicknesses is discussed. The total measurement uncertainty (TMU) requirements for these layers are discussed, and the DBO TMU results from sub-50nm samples are reviewed.
Intensity Modulation Techniques for Continuous-Wave Lidar for Column CO2 Measurements
NASA Astrophysics Data System (ADS)
Campbell, J. F.; Lin, B.; Obland, M. D.; Kooi, S. A.; Fan, T. F.; Meadows, B.; Browell, E. V.; Erxleben, W. H.; McGregor, D.; Dobler, J. T.; Pal, S.; O'Dell, C.
2017-12-01
Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and the Atmospheric Carbon and Transport (ACT) - America project are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the ASCENDS and ACT-America science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) and Linear Swept Frequency modulations to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. We compare BPSK to linear swept frequency and introduce a new technique that eliminates the sidelobes of linear swept frequency in high-SNR situations, with results that rival BPSK. We also investigate the effects of non-linear modulators, which can in some circumstances degrade the orthogonality of the waveforms, and show how to avoid this.
These techniques are used in a new data processing architecture written in the C language to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.
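The sidelobe suppression claimed for PN/BPSK modulation can be sketched with a maximal-length code: the matched-filter range response of a periodic code is its circular autocorrelation, which is impulse-like for an m-sequence. This is an illustrative stand-in (code length and taps are assumed), not the ASCENDS processing chain:

```python
import numpy as np

# Sketch: why a PN/BPSK modulation gives a sharp, low-sidelobe range profile
# in an IM-CW lidar. A +/-1 m-sequence of length 2**10 - 1 is generated with
# a Fibonacci LFSR (taps for the primitive polynomial x^10 + x^3 + 1); its
# circular autocorrelation is ~1 at zero lag and ~ -1/N everywhere else.

def mls(nbits=10, taps=(10, 3)):
    """Maximal-length +/-1 sequence from a simple LFSR; length 2**nbits - 1."""
    state = [1] * nbits
    out = []
    for _ in range(2**nbits - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        out.append(1.0 if state[-1] else -1.0)
        state = [fb] + state[:-1]
    return np.array(out)

code = mls()
# Circular autocorrelation = matched-filter range response for a periodic code.
acf = np.fft.ifft(np.abs(np.fft.fft(code))**2).real / len(code)

peak = acf[0]                  # zero-lag peak (the target range bin)
sidelobe = np.abs(acf[1:]).max()
print(peak, sidelobe)          # peak ~1.0, sidelobes ~1/N
```

A linear swept-frequency (chirp) waveform, by contrast, has a sinc-like response whose sidelobes can mask weak intermediate returns, which is the motivation for the sidelobe-elimination techniques discussed above.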
NASA Astrophysics Data System (ADS)
McCarren, Dustin; Vandervort, Robert; Carr, Jerry, Jr.; Scime, Earl
2012-10-01
In this work, we compare two spectroscopic methods for measuring the velocity distribution functions (VDFs) of argon ions and neutrals in a helicon plasma: laser-induced fluorescence (LIF) and continuous-wave cavity ring-down spectroscopy (CW-CRDS). An established and powerful technique, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density. In most cases, this requirement limits LIF to ions and atoms with large metastable-state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower-density populations of ion and atom states. However, CRDS is a line-integrated technique that lacks the spatial resolution of LIF. CRDS is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique, and when combined with a CW diode laser that has a sufficiently narrow linewidth, the Doppler-broadened absorption line, i.e., the VDF, can be measured. We present CW-CRDS and LIF measurements of the VDFs in an argon plasma using the 668.614 nm (in vacuum) line of Ar II and the 667.9125 nm (in vacuum) line of Ar I.
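As a sketch of how a Doppler-broadened line maps to the underlying VDF: for a Maxwellian distribution, the Gaussian FWHM of the absorption line gives the ion temperature via the standard Doppler-width relation. The linewidth below is an assumed example value, not a measurement from this work:

```python
import numpy as np

# Standard Doppler relation (not the authors' analysis code): for a
# Maxwellian VDF, FWHM/lambda0 = sqrt(8*ln2*kB*T / (M*c^2)), so a measured
# Gaussian linewidth yields the ion temperature directly.

kB = 1.380649e-23                 # Boltzmann constant, J/K
c = 2.99792458e8                  # speed of light, m/s
M = 39.948 * 1.66053906660e-27    # argon mass, kg

lam0 = 668.614e-9                 # Ar II line used in the paper, m
fwhm = 4.0e-12                    # 4 pm measured FWHM (assumed example)

T = M * c**2 * (fwhm / lam0)**2 / (8 * np.log(2) * kB)
print(T)                          # ion temperature in kelvin
```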
Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Long-term, time-integrated exposure measures would be desirable to address the problem of developing appropriate residential childhood exposure classifications. ...
Direct shear mapping - a new weak lensing tool
NASA Astrophysics Data System (ADS)
de Burgh-Day, C. O.; Taylor, E. N.; Webster, R. L.; Hopkins, A. M.
2015-08-01
We have developed a new technique called direct shear mapping (DSM) to measure gravitational lensing shear directly from observations of a single background source. The technique assumes the velocity map of an unlensed, stably rotating galaxy will be rotationally symmetric. Lensing distorts the velocity map, making it asymmetric. The degree of lensing can be inferred by determining the transformation required to restore axisymmetry. This technique is in contrast to traditional weak lensing methods, which require averaging an ensemble of background galaxy ellipticity measurements to obtain a single shear measurement. We have tested the efficacy of our fitting algorithm with a suite of systematic tests on simulated data. We demonstrate that we are in principle able to measure shears as small as 0.01. In practice, we have fitted for the shear in very low-redshift (and hence unlensed) velocity maps, and have obtained a null result with an error of ±0.01. This high sensitivity results from analysing spatially resolved spectroscopic images (i.e. 3D data cubes), including not just shape information (as in traditional weak lensing measurements) but velocity information as well. Spirals and rotating ellipticals are ideal targets for this new technique. Data from any large Integral Field Unit (IFU) or radio telescope are suitable, or indeed any instrument with spatially resolved spectroscopy such as the Sydney-Australian-Astronomical Observatory Multi-Object Integral Field Spectrograph (SAMI), the Atacama Large Millimeter/submillimeter Array (ALMA), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and the Square Kilometer Array (SKA).
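The core DSM idea, recovering the shear as the de-shearing that restores the symmetry of a rotating disc's velocity map, can be sketched in a toy model. Everything below (disc profile, grid, shear values) is assumed; the sketch uses only the cross-shear component and a simple reflection-asymmetry statistic, not the authors' full fitting algorithm:

```python
import numpy as np

# Toy DSM sketch: the line-of-sight velocity map of an axisymmetric rotating
# disc, v(x, y) = V(r) * x/r, is even in y. A cross-shear distortion breaks
# that reflection symmetry; scanning trial de-shears and minimizing the
# residual asymmetry recovers the input shear.

def v_src(x, y, v0=200.0, rd=3.0):
    """Axisymmetric disc: V(r) * cos(azimuth); even under y -> -y."""
    r = np.hypot(x, y) + 1e-12
    return v0 * (1.0 - np.exp(-r / rd)) * x / r

def shear(g2):
    """Lensing distortion matrix with cross-shear g2 only (kappa, g1 = 0)."""
    return np.array([[1.0, -g2], [-g2, 1.0]])

g_true = 0.02
A_true = shear(g_true)

xs = np.linspace(-10.0, 10.0, 41)   # symmetric grid containing 0
X, Y = np.meshgrid(xs, xs)

def asymmetry(g_trial):
    # Observed map: v_obs(theta) = v_src(A_true @ theta). Evaluate it at
    # de-sheared coordinates and measure the residual reflection asymmetry.
    P = A_true @ np.linalg.inv(shear(g_trial))
    v = v_src(P[0, 0] * X + P[0, 1] * Y, P[1, 0] * X + P[1, 1] * Y)
    return np.sum((v - v[::-1, :])**2)   # zero when symmetry is restored

trials = np.linspace(-0.05, 0.05, 201)
g_hat = trials[np.argmin([asymmetry(g) for g in trials])]
print(g_hat)   # recovers g_true = 0.02
```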
Method and apparatus for phase and amplitude detection
Cernosek, Richard W.; Frye, Gregory C.; Martin, Stephen J.
1998-06-09
A new class of techniques has been developed which allows inexpensive application of SAW-type chemical sensor devices while retaining high (ppm-level) sensitivity for chemical detection. The new techniques do not require that the sensor be part of an oscillatory circuit, allowing large concentrations of, e.g., chemical vapors in air, to be accurately measured without compromising the capacity to measure trace concentrations. Such devices have numerous potential applications in environmental monitoring, from manufacturing environments to environmental restoration.
NMR studies of multiphase flows II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altobelli, S.A.; Caprihan, A.; Fukushima, E.
NMR techniques for measurements of the spatial distribution of material phase, velocity, and velocity fluctuation are being developed and refined. Versions of these techniques which provide time-averaged liquid fraction and fluid-phase velocity have been applied to several concentrated suspension systems which will not be discussed extensively here. Technical developments required to further extend the use of NMR to the multi-phase flow arena and to provide measurements of previously unobtainable parameters are the focus of this report.
Two color holographic interferometry for microgravity application
NASA Technical Reports Server (NTRS)
Trolinger, James D.; Weber, David C.
1995-01-01
Holographic interferometry is a primary candidate for determining temperature and concentration in crystal growth experiments designed for space. The method measures refractive index changes within the fluid of an experimental test cell resulting from temperature and/or concentration changes. When the refractive index changes are caused by simultaneous temperature and concentration changes, the contributions of the two effects cannot be separated by single wavelength interferometry. By using two wavelengths, however, two independent interferograms can provide the additional independent equation required to determine the two unknowns. There is no other technique available that provides this type of information. The primary objectives of this effort were to experimentally verify the mathematical theory of two color holographic interferometry (TCHI) and to determine the practical value of this technique for space application. In the foregoing study, the theory of TCHI has been tested experimentally over a range of interest for materials processing in space where measurements of temperature and concentration in a solution are required. New techniques were developed and applied to stretch the limits beyond what could be done with existing procedures. The study resulted in the production of one of the most advanced, enhanced sensitivity holographic interferometers in existence. The interferometric measurements made at MSFC represent what is believed to be the most accurate holographic interferometric measurements made in a fluid to date. The tests have provided an understanding of the limitations of the technique in practical use.
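The two-unknown separation described above can be sketched as a 2x2 linear system: at each wavelength, the measured refractive-index change mixes temperature and concentration contributions. All coefficient and measurement values below are made-up illustration numbers, not MSFC data:

```python
import numpy as np

# Sketch of the two-color separation (assumed coefficients): at wavelength i,
#   dn_i = (dn/dT)_i * dT + (dn/dC)_i * dC,
# so measuring dn at two wavelengths gives two equations in the two unknowns
# dT and dC, solvable as long as the coefficient matrix is non-singular.

A = np.array([[-1.0e-4, 1.5e-3],     # [dn/dT, dn/dC] at wavelength 1 (assumed)
              [-1.3e-4, 2.4e-3]])    # [dn/dT, dn/dC] at wavelength 2 (assumed)
dn = np.array([2.0e-4, 1.1e-4])      # measured index changes (assumed)

dT, dC = np.linalg.solve(A, dn)
print(dT, dC)                        # temperature and concentration changes
```

Single-wavelength interferometry corresponds to having only the first row of this system, which is why the two contributions cannot then be separated.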
Vector Doppler: spatial sampling analysis and presentation techniques for real-time systems
NASA Astrophysics Data System (ADS)
Capineri, Lorenzo; Scabia, Marco; Masotti, Leonardo F.
2001-05-01
The aim of the vector Doppler (VD) technique is the quantitative reconstruction of a velocity field independently of the ultrasonic probe axis to flow angle. In particular, vector Doppler is interesting for studying vascular pathologies related to complex blood flow conditions. Clinical applications require a real-time operating mode and the capability to perform Doppler measurements over a defined volume. The combination of these two characteristics produces a real-time vector velocity map. In previous work, the authors investigated the theory of pulsed wave (PW) vector Doppler and developed an experimental system capable of producing off-line 3D vector velocity maps. Afterwards, for producing dynamic velocity vector maps, we realized a new 2D vector Doppler system based on a modified commercial echograph. The measurement and presentation of a vector velocity field requires a correct spatial sampling that must satisfy the Shannon criterion. In this work we tackled this problem, establishing a relationship between sampling steps and scanning system characteristics. Another problem posed by the vector Doppler technique is the real-time data representation, which should be easy for the physician to interpret. With this in mind, we attempted a multimedia solution that uses both interpolated images and sound to represent the information of the measured vector velocity map. These presentation techniques were tested in real-time scanning on flow phantoms and in preliminary measurements in vivo on a human carotid artery.
Review of chemical separation techniques applicable to alpha spectrometric measurements
NASA Astrophysics Data System (ADS)
de Regge, P.; Boden, R.
1984-06-01
Prior to alpha-spectrometric measurements several chemical manipulations are usually required to obtain alpha-radiating sources with the desired radiochemical and chemical purity. These include sampling, dissolution or leaching of the elements of interest, conditioning of the solution, chemical separation, and preparation of the alpha-emitting source. The choice of a particular method is dependent on different criteria but always involves aspects of the selectivity or the quantitative nature of the separations. The availability of suitable tracers or spikes and modern high-resolution instruments resulted in the widespread application of isotopic dilution techniques to the problems associated with quantitative chemical separations. This encouraged the development of highly selective methods and reagents, which led to important simplifications in the separation schemes. The chemical separation methods commonly used in connection with alpha-spectrometric measurements involve precipitation with selected scavenger elements, solvent extraction, ion exchange, and electrodeposition techniques, or any combination of them. Depending on the purpose of the final measurement and the type of sample available, the chemical separation methods have to be adapted to the particular needs of environment monitoring, nuclear chemistry and metrology, safeguards and safety, waste management, and requirements in the nuclear fuel cycle. Against the background of separation methods available in the literature, the present paper highlights the current developments and trends in the chemical techniques applicable to alpha spectrometry.
Minimum data requirement for neural networks based on power spectral density analysis.
Deng, Jiamei; Maass, Bastian; Stobart, Richard
2012-04-01
One of the most critical challenges ahead for diesel engines is to identify new techniques for fuel economy improvement without compromising emissions regulations. One technique is the precise control of air/fuel ratio, which requires the measurement of instantaneous fuel consumption. Measurement accuracy and repeatability for fuel rate are the key to successfully controlling the air/fuel ratio and real-time measurement of fuel consumption. The volumetric and gravimetric measurement principles are well-known methods for measurement of fuel consumption in internal combustion engines. However, the fuel flow rate measured by these methods is not suitable for either real-time control or real-time measurement purposes because of the intermittent nature of the measurements. This paper describes a technique that can be used to find the minimum data [consisting of data from just 2.5% of the non-road transient cycle (NRTC)] to solve the problem concerning discontinuous data of fuel flow rate measured using an AVL 733S fuel meter for a medium- or heavy-duty diesel engine using neural networks. Only torque and speed are used as the input parameters for the fuel flow rate prediction. Power spectral density analysis is used to find the minimum amount of the data. The results show that the nonlinear autoregressive model with exogenous inputs could successfully predict the fuel flow rate with R(2) above 0.96 using 2.5% NRTC data with only torque and speed as inputs.
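The prediction setup described above can be sketched with a linear-in-features NARX stand-in (the paper uses a neural network; the synthetic data, coefficients, and lag structure here are all assumed):

```python
import numpy as np

# Minimal NARX-style sketch: predict a fuel-rate-like signal from lagged
# torque and speed (exogenous inputs) plus one autoregressive output lag,
# fitted by least squares on synthetic engine-like data.

rng = np.random.default_rng(0)
n = 2000
torque = rng.uniform(50, 400, n)      # N*m (synthetic)
speed = rng.uniform(800, 2200, n)     # rpm (synthetic)

# Synthetic fuel rate: responds to the previous sample's operating point
# plus a weak autoregressive term and measurement noise.
fuel = np.zeros(n)
for t in range(1, n):
    fuel[t] = 1e-3 * torque[t-1] * speed[t-1] + 0.3 * fuel[t-1] \
              + rng.normal(0.0, 5.0)

# NARX regressors: one exogenous lag of torque, speed, their product,
# and one autoregressive lag of the output.
X = np.column_stack([np.ones(n - 1), torque[:-1], speed[:-1],
                     torque[:-1] * speed[:-1], fuel[:-1]])
y = fuel[1:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ coef)**2) / np.sum((y - y.mean())**2)
print(r2)   # high R^2, since the regressors match the generating model
```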
Optical techniques for biological triggers and identifiers
NASA Astrophysics Data System (ADS)
Grant, Bruce A. C.
2004-12-01
Optical techniques for the classification and identification of biological particles provide a number of advantages over traditional "wet chemistry" methods, amongst which are speed of response and the reduction/elimination of consumables. These techniques can be employed in both "trigger" and "identifier" systems. Trigger systems monitor environmental particulates with the aim of detecting "unusual" changes in the overall environmental composition and providing an indication of threat. At the present time there is no single optical measurement that can distinguish between benign and hostile events. Therefore, in order to distinguish between these two classifications, a number of different measurements must be effected and a decision made on the basis of the "integrated" data. Smiths Detection have developed a data-gathering platform capable of measuring multiple optical, physical and electrical parameters of individual airborne biological particles. The data from all these measurements are combined in a hazard classification algorithm based on Bayesian inference techniques. Identifier systems give a greater level of information and confidence than triggers -- although they require reagents and are therefore much more expensive to operate -- and typically take upwards of 20 minutes to respond. Ideally, in a continuous flow mode, identifier systems would respond in real time and identify a range of pathogens specifically and simultaneously. The results of recent development work -- carried out by Smiths Detection and its collaborators -- to develop an optical device that meets most of these requirements, and has the stretch potential to meet all of the requirements in a 3-5 year time frame, will be presented. This technology enables continuous stand-alone operation for both civil and military defense applications, and significant miniaturisation can be achieved with further development.
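The hazard-classification step can be sketched as Bayesian evidence accumulation: each sensing channel contributes a likelihood ratio, and the posterior odds of a threat are updated channel by channel. All priors and likelihoods below are made-up illustration values, not Smiths Detection parameters:

```python
import numpy as np

# Sketch of Bayesian-inference hazard classification (assumed numbers):
# combine per-channel likelihood ratios in log-odds space, then convert
# the accumulated log-odds back to a posterior threat probability.

prior_threat = 0.01
log_odds = np.log(prior_threat / (1 - prior_threat))

# Per-channel likelihoods (P(measurement | threat), P(measurement | benign)),
# assumed values for three independent sensing channels.
channels = [
    (0.80, 0.30),   # e.g. a fluorescence band ratio
    (0.60, 0.40),   # e.g. an elastic-scatter size class
    (0.70, 0.20),   # e.g. an electrical-parameter class
]
for p_threat, p_benign in channels:
    log_odds += np.log(p_threat / p_benign)

posterior = 1.0 / (1.0 + np.exp(-log_odds))
print(posterior)   # P(threat | all measurements)
```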
Characterizing Terrestrial Exoplanets
NASA Astrophysics Data System (ADS)
Meadows, V. S.; Lustig-Yaeger, J.; Lincowski, A.; Arney, G. N.; Robinson, T. D.; Schwieterman, E. W.; Deming, L. D.; Tovar, G.
2017-11-01
We will provide an overview of the measurements, techniques, and upcoming missions required to characterize terrestrial planet environments and evolution, and search for signs of habitability and life.
Advances in the Surface Renewal Flux Measurement Method
NASA Astrophysics Data System (ADS)
Shapland, T. M.; McElrone, A.; Paw U, K. T.; Snyder, R. L.
2011-12-01
The measurement of ecosystem-scale energy and mass fluxes between the planetary surface and the atmosphere is crucial for understanding geophysical processes. Surface renewal is a flux measurement technique based on analyzing the turbulent coherent structures that interact with the surface. It is a less expensive technique because it does not require fast-response velocity measurements, but only a fast-response scalar measurement. It is therefore also a useful tool for the study of the global cycling of trace gases. Currently, surface renewal requires calibration against another flux measurement technique, such as eddy covariance, to account for the linear bias of its measurements. We present two advances in the surface renewal theory and methodology that bring the technique closer to becoming a fully independent flux measurement method. The first advance develops the theory of turbulent coherent structure transport associated with the different scales of coherent structures. A novel method was developed for identifying the scalar change rate within structures at different scales. Our results suggest that for canopies less than one meter in height, the second smallest coherent structure scale dominates the energy and mass flux process. Using the method for resolving the scalar exchange rate of the second smallest coherent structure scale, calibration is unnecessary for surface renewal measurements over short canopies. This study forms the foundation for analysis over more complex surfaces. The second advance is a sensor frequency response correction for measuring the sensible heat flux via surface renewal. Inexpensive fine-wire thermocouples are frequently used to record high frequency temperature data in the surface renewal technique. The sensible heat flux is used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. 
The robust thermocouples commonly used in field experiments underestimate the sensible heat flux, yielding results that are less than 50% of the sensible heat flux measured with finer sensors. We present the methodology for correcting the thermocouple signal to avoid underestimating the heat flux at both the smallest and the second smallest coherent structure scale.
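The kind of frequency-response correction described above can be sketched with a first-order sensor model: a slow thermocouple low-pass filters the true air temperature with time constant tau, so inverting that model recovers most of the lost ramp amplitude. Sampling rate, time constant, and the idealized ramp signal are all assumed values, not the authors' calibration:

```python
import numpy as np

# Sketch of a first-order thermocouple correction (assumed sensor model):
# if dT_m/dt = (T_true - T_m)/tau, then T_true = T_m + tau * dT_m/dt, so
# differentiating the measured trace restores the attenuated ramp amplitude.

fs = 20.0                          # sampling rate, Hz (assumed)
tau = 1.0                          # slow-sensor time constant, s (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)

# Idealized surface-renewal ramps: temperature builds, then sharply resets.
T_true = 0.5 * ((t % 3.0) / 3.0)

# Simulate the slow sensor (forward Euler integration of the filter ODE).
T_m = np.zeros_like(T_true)
for i in range(1, len(t)):
    T_m[i] = T_m[i-1] + (T_true[i-1] - T_m[i-1]) / (fs * tau)

# Invert the sensor model: T_c = T_m + tau * dT_m/dt.
T_c = T_m + tau * np.gradient(T_m, 1.0 / fs)

skip = int(10 * fs)                # drop the spin-up transient
amp = lambda x: np.std(x[skip:])
print(amp(T_m) / amp(T_true), amp(T_c) / amp(T_true))
```

The uncorrected ratio illustrates the underestimation the abstract describes; the corrected ratio returns close to unity apart from residual smoothing of the sharp ramp resets.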
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
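The compressive measurement and recovery of a spike-sparse signal can be sketched as follows. For a numpy-only example, Orthogonal Matching Pursuit stands in for the ℓ1-norm (or Justice Pursuit) reconstruction discussed above; signal length, measurement count, and sparsity are assumed values:

```python
import numpy as np

# Compressed-sensing sketch for spike-sparse AE-like signals: take m << n
# random projections of a signal with a few spikes, then recover it with
# Orthogonal Matching Pursuit (a greedy stand-in for l1 minimization).

rng = np.random.default_rng(1)
n, m, k = 512, 128, 4                # signal length, measurements, spikes

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)  # AE "events"

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = Phi @ x                                    # compressed measurements

def omp(Phi, y, k):
    """Recover a k-sparse signal from y = Phi @ x by greedy selection."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.max(np.abs(x_hat - x)))   # near-exact recovery in the noiseless case
```

The point of the sketch is the data reduction: 128 stored numbers suffice to recover a 512-sample record, which is the relaxation of sampling-rate and memory demands the abstract refers to.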
Use of high-order spectral moments in Doppler weather radar
NASA Astrophysics Data System (ADS)
di Vito, A.; Galati, G.; Veredice, A.
Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated. These are: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function with a least-squares polynomial and differentiating it, and (3) autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computational burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.
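For context, the classical pulse-pair technique that option (1) extends estimates the first two spectral moments from the complex autocorrelation at lags 0 and 1. The sketch below simulates a Gaussian precipitation spectrum and applies the textbook estimators; wavelength, PRT, and the true moments are assumed values, not data from the study:

```python
import numpy as np

# Pulse-pair sketch (textbook first- and second-moment estimators): for a
# Gaussian Doppler spectrum, the mean velocity follows from the phase of
# R(1) and the spectrum width from the ratio |R(1)|/R(0).

rng = np.random.default_rng(2)
lam, prt = 0.10, 2e-3               # wavelength (m), pulse repetition time (s)
v_mean, v_width = 6.0, 4.0          # true spectral moments (m/s), assumed

n, ns = 4096, 256
v = rng.normal(v_mean, v_width, ns)           # scatterer radial velocities
phi0 = rng.uniform(0.0, 2 * np.pi, ns)        # random initial phases
t = np.arange(n) * prt
z = np.exp(1j * ((4 * np.pi / lam) * v[:, None] * t[None, :]
                 + phi0[:, None])).sum(axis=0)

R0 = np.mean(np.abs(z)**2)                    # lag-0 autocorrelation (power)
R1 = np.mean(z[1:] * np.conj(z[:-1]))         # lag-1 autocorrelation

v_hat = lam / (4 * np.pi * prt) * np.angle(R1)                      # mean
w_hat = lam / (4 * np.pi * prt) * np.sqrt(2 * np.log(R0 / np.abs(R1)))  # width
print(v_hat, w_hat)
```

Skewness and kurtosis involve third and fourth derivatives of the ACF at lag zero, which is why the crude finite-difference extension in option (1) fails where this two-lag estimator succeeds.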
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michelsen, H. A.; Schulz, C.; Smallwood, G. J.
The understanding of soot formation in combustion processes and the optimization of practical combustion systems require in situ measurement techniques that can provide important characteristics, such as particle concentrations and sizes, under a variety of conditions. Of equal importance are techniques suitable for characterizing soot particles produced from incomplete combustion and emitted into the environment. Also, the production of engineered nanoparticles, such as carbon blacks, may benefit from techniques that allow for online monitoring of these processes.
Performance Analysis of Ranging Techniques for the KPLO Mission
NASA Astrophysics Data System (ADS)
Park, Sungjoon; Moon, Sangman
2018-03-01
In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique, incorporating all error sources such as those from the ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant accuracy error is independent of the thermal noise of the ranging technique being used in the system. Building on this understanding of ranging accuracy, the measurement times of PN and sequential ranging are further investigated and compared under the condition that both techniques satisfy the accuracy and acquisition requirements. We demonstrated that PN ranging performs better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.
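The acquisition step of PN ranging can be illustrated by correlating a received code against a local replica; the correlation peak gives the round-trip delay in chips. The short LFSR-generated m-sequence, the assumed delay, and the noise level below are illustrative, not the T2B code used for KPLO:

```python
# PN-ranging acquisition sketch: circular correlation against a local replica.
# The length-127 m-sequence and noise level are illustrative assumptions.
import numpy as np

def lfsr_msequence(taps, nbits):
    """Return a +/-1 m-sequence from a Fibonacci LFSR with the given taps."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return 1.0 - 2.0 * np.array(seq)

code = lfsr_msequence(taps=(7, 6), nbits=7)      # length-127 m-sequence
true_delay = 43                                  # round-trip delay in chips (assumed)
rng = np.random.default_rng(1)
received = np.roll(code, true_delay) + 0.5 * rng.standard_normal(code.size)

# Circular correlation with the local replica; the argmax estimates the delay.
corr = np.array([np.dot(received, np.roll(code, lag))
                 for lag in range(code.size)])
delay_est = int(np.argmax(corr))
```

An m-sequence's near-ideal circular autocorrelation (peak N, sidelobes -1) is what makes the delay estimate robust at the low SNRs where PN ranging outperforms sequential ranging.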
Next Generation NASA Initiative for Space Geodesy
NASA Technical Reports Server (NTRS)
Merkowitz, S. M.; Desai, S.; Gross, R. S.; Hilliard, L.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry J. F.; Murphy, D.; Noll, C. E.;
2012-01-01
Space geodesy measurement requirements have become increasingly stringent as our understanding of the physical processes and our modeling techniques have improved. In addition, current and future spacecraft will have ever-increasing measurement capability and will lead to increasingly sophisticated models of changes in the Earth system. Ground-based space geodesy networks with enhanced measurement capability will be essential to meeting these oncoming requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. These requirements have been articulated by the Global Geodetic Observing System (GGOS). The NASA Space Geodesy Project (SGP) is developing a prototype core site as the basis for a next-generation Space Geodetic Network (SGN) that would be NASA's contribution to a global network designed to produce the higher-quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth-observing spacecraft. Each of the sites in the SGN would include co-located, state-of-the-art systems from all four space geodetic observing techniques (GNSS, SLR, VLBI, and DORIS). The prototype core site is being developed at NASA's Geophysical and Astronomical Observatory at Goddard Space Flight Center. The project commenced in 2011 and is scheduled for completion in late 2013. In January 2012, two multiconstellation GNSS receivers, GODS and GODN, were established at the prototype site as part of the local geodetic network. Development and testing are also underway on the next-generation SLR and VLBI systems along with a modern DORIS station.
An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being performed to define the appropriate number and distribution of these next generation space geodetic core sites that are required to achieve the driving ITRF requirements. We present the status of this prototype next generation space geodetic core site, results from the analysis of data from the established geodetic stations, and results from the ongoing network design studies.
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurements of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment to the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. 
The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for the future research.
Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography
NASA Astrophysics Data System (ADS)
Hahn, Bernadette N.
2017-12-01
A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impede reliable diagnostics. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.
Evaluation of Flow Biosensor Technology in a Chronically-Instrumented Non-Human Primate Model
NASA Technical Reports Server (NTRS)
Koenig, S. C.; Reister, C.; Schaub, J.; Muniz, G.; Ferguson, T.; Fanton, J. W.
1995-01-01
The Physiology Research Branch of Brooks AFB conducts both human and non-human primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative research efforts in a non-human primate model require the calculation of total peripheral resistance (TPR), systemic arterial compliance (SAC), and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. We have evaluated commercially available electromagnetic (EMF) and transit-time flow measurement techniques. In vivo and in vitro experiments demonstrated that the average error of these techniques is less than 25 percent for EMF and less than 10 percent for transit-time.
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Dwoyer, Douglas L.
1992-01-01
The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated based on sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper will discuss the facility and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experimental and direct numerical simulation in CFD model development and validation for hypersonic perfect gas flows.
Fischer, Andreas; König, Jörg; Haufe, Daniel; Schlüssler, Raimund; Büttner, Lars; Czarske, Jürgen
2013-08-01
To reduce the noise of machines such as aircraft engines, the development and propagation of sound has to be investigated. Since the applicability of microphones is limited due to their intrusiveness, contactless measurement techniques are required. For this reason, the present study describes an optical method based on the Doppler effect and its application for acoustic particle velocity (APV) measurements. Whereas previous APV measurements with Doppler techniques were point measurements, the applied system is capable of simultaneous measurements at multiple points. In its current state, the system provides linear array measurements of one component of the APV, demonstrated by multi-tone experiments with tones up to 17 kHz for the first time.
Characterization of fluid flow by digital correlation of scattered light
NASA Technical Reports Server (NTRS)
Gilbert, John A.; Matthys, Donald R.
1989-01-01
The objective is to produce a physical system suitable for a space environment that can measure fluid velocities in a three-dimensional volume by the development of a particle correlation velocimetry technique. Experimental studies were conducted on a field test cell to demonstrate the suitability and accuracy of digital correlation techniques for measuring two-dimensional fluid flows. This objective was satisfied by: (1) the design of an appropriate illumination and detection system for making velocity measurements within a test cell; (2) the design and construction of a test cell; (3) the preliminary evaluations on fluid and seeding requirements; and (4) the performance of controlled tests using a multiple exposure correlation technique. This presentation is represented by viewgraphs with very little text.
LSU: The Library Space Utilization Methodology.
ERIC Educational Resources Information Center
Hall, Richard B.
A computerized research technique for measuring the space utilization of public library facilities provides a behavioral activity and occupancy analysis for library planning purposes. The library space utilization (LSU) methodology demonstrates that significant information about the functional requirements of a library can be measured and…
Local Guided Wavefield Analysis for Characterization of Delaminations in Composites
NASA Technical Reports Server (NTRS)
Rogge, Matthew D.; Campbell Leckey, Cara A.
2012-01-01
Delaminations in composite laminates resulting from impact events may be accompanied by minimal indication of damage at the surface. As such, inspection techniques are required to ensure defects are within allowable limits. Conventional ultrasonic scanning techniques have been shown to effectively characterize the size and depth of delaminations but require physical contact with the structure. Alternatively, a noncontact scanning laser vibrometer may be used to measure guided wave propagation in the laminate structure. A local Fourier domain analysis method is presented for processing guided wavefield data to estimate spatially dependent wavenumber values, which can be used to determine delamination depth. The technique is applied to simulated wavefields and results are analyzed to determine limitations of the technique with regard to determining defect size and depth. Finally, experimental wavefield data obtained in quasi-isotropic carbon fiber reinforced polymer (CFRP) laminates with impact damage are analyzed, and wavenumber is measured to an accuracy of 8.5% in the region of shallow delaminations.
Keywords: Ultrasonic wavefield imaging, Windowed Fourier transforms, Guided waves, Structural health monitoring, Nondestructive evaluation
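A simplified 1-D analog of the local Fourier domain analysis: slide a window along a wavefield snapshot and take the dominant wavenumber in each window; the thickness reduction over a delamination appears as a local wavenumber increase. The wavenumbers, window length, and defect extent below are illustrative:

```python
# Local wavenumber estimation by a sliding windowed Fourier transform.
# Wavenumbers, scan pitch, window length, and defect extent are illustrative.
import numpy as np

dx = 1e-3                                   # m, scan pitch
x = np.arange(0, 0.4, dx)
k1, k2 = 200.0, 500.0                       # rad/m: pristine vs. delaminated
field = np.where((x > 0.15) & (x < 0.25),
                 np.sin(k2 * x), np.sin(k1 * x))

def local_wavenumber(u, dx, win=64):
    """Dominant wavenumber (rad/m) in a window centered on each sample."""
    half = win // 2
    k_axis = 2 * np.pi * np.fft.rfftfreq(win, d=dx)
    out = np.full(u.size, np.nan)
    for i in range(half, u.size - half):
        seg = u[i - half:i + half] * np.hanning(win)
        out[i] = k_axis[np.argmax(np.abs(np.fft.rfft(seg)))]
    return out

k_est = local_wavenumber(field, dx)
# k_est sits near k1 away from the defect and near k2 inside it.
```

Mapping the estimated wavenumber to delamination depth then requires dispersion curves for the laminate, which the paper obtains from the material layup.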
Videometric Applications in Wind Tunnels
NASA Technical Reports Server (NTRS)
Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu
1997-01-01
Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.
Aeroelastic Deformation Measurements of Flap, Gap, and Overhang on a Semispan Model
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu; Garg, Sanjay; Ghee, Terence A.; Taylor, Nigel J.
2001-01-01
Single-camera, single-view videogrammetry has been used for the first time to determine static aeroelastic deformation of a slotted flap configuration on a semispan model at the National Transonic Facility (NTF). Deformation was determined by comparing wind-off to wind-on spatial data from targets placed on the main element, shroud, and flap of the model. Digitized video images from a camera were recorded and processed to automatically determine target image plane locations that were then corrected for sensor, lens, and frame grabber spatial errors. The videogrammetric technique used for the measurements presented here has been established at NASA facilities as the technique of choice when high-volume static aeroelastic data with minimum impact on data taking is required. However, the primary measurement at the NTF with this technique in the past has been the measurement of the static aeroelastic wing twist of the main wing element on full span models rather than for the measurement of component deformation. Considerations for using the videogrammetric technique for semispan component deformation measurements as well as representative results are presented.
NASA Astrophysics Data System (ADS)
Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato
2018-02-01
We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method, or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements required in conventional methods are not necessary. Also, the internal quantum efficiency can be derived from conversion efficiencies by using the average photon energy. Agreement of the measured data with the values estimated from the nominal values supports the validity of this technique.
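A schematic reading of the energy balance behind the electrical substitution method, under the assumption that the absorbed radiant power equals the electrical output plus the heat dissipated in the cell, the latter quantified by substituting an equal electrical heating power; the numbers are illustrative, not measured data:

```python
# Internal conversion efficiency from an electrical-substitution measurement.
# Assumption: P_absorbed = P_out + P_heat, with P_heat measured by matching
# the calorimeter response with a known electrical heating power.
def internal_efficiency(p_out_w, p_substitution_w):
    """Return P_out / P_absorbed, with P_absorbed = P_out + substituted heat."""
    p_absorbed = p_out_w + p_substitution_w
    return p_out_w / p_absorbed

# Illustrative numbers: 0.25 W electrical output, 0.75 W substituted heat.
eta = internal_efficiency(p_out_w=0.25, p_substitution_w=0.75)   # -> 0.25
```

Because the absorbed power is obtained calorimetrically, no reflectance or transmittance measurement enters the efficiency, which is the advantage the abstract notes.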
Measurement of lung volumes from supine portable chest radiographs.
Ries, A L; Clausen, J L; Friedman, P J
1979-12-01
Lung volumes in supine nonambulatory patients are physiological parameters often difficult to measure with current techniques (plethysmograph, gas dilution). Existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low radiation dose methods, which delivered less than 10% of that from standard portable X-ray technique, were utilized. Correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.
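The calibration-and-prediction scheme described above can be sketched as a least-squares fit of lung volume against planimetric lung field area (LFA); the synthetic area/volume data below are illustrative stand-ins for the study's 31-subject measurements:

```python
# Least-squares calibration of lung volume against planimetric lung field
# area, then prediction for new radiographs. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
lfa = rng.uniform(400.0, 900.0, 31)                       # cm^2, synthetic
volume = 8.0 * lfa + 150.0 + rng.normal(0.0, 40.0, 31)    # mL, synthetic

slope, intercept = np.polyfit(lfa, volume, 1)             # regression line

predicted = slope * lfa + intercept
r = np.corrcoef(volume, predicted)[0, 1]                  # cf. reported r = 0.96
```

The study additionally found that adding height and chest-diameter terms to such a regression reduced the residual variance, while weight and magnification factors did not.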
3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples
NASA Technical Reports Server (NTRS)
Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.
2015-01-01
In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically utilized sub-mm bead immersion techniques extensively, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and long processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.
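The porosity calculation that motivates these volume measurements is simple arithmetic once both densities are in hand: bulk density from the mass and the laser-scanned envelope volume, porosity from the ratio of bulk to grain density. The sample numbers below are illustrative, not Apollo data:

```python
# Porosity from a laser-scanned bulk volume and an independently measured
# grain density: phi = 1 - rho_bulk / rho_grain. Numbers are illustrative.
def bulk_density(mass_g, scanned_volume_cm3):
    """Bulk density (g/cm^3) from sample mass and scanned envelope volume."""
    return mass_g / scanned_volume_cm3

def porosity(rho_bulk, rho_grain):
    """Fractional porosity from bulk and grain densities."""
    return 1.0 - rho_bulk / rho_grain

rho_b = bulk_density(mass_g=61.5, scanned_volume_cm3=25.0)   # 2.46 g/cm^3
phi = porosity(rho_b, rho_grain=3.32)                        # ~0.26
```

The scanned volume includes vesicles and cracks that the grain-density measurement excludes, which is exactly the difference the porosity captures.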
Choukulkar, Aditya; Brewer, W. Alan; Sandberg, Scott P.; ...
2017-01-23
Accurate three-dimensional information of wind flow fields can be an important tool in not only visualizing complex flow but also understanding the underlying physical processes and improving flow modeling. However, a thorough analysis of the measurement uncertainties is required to properly interpret results. The XPIA (eXperimental Planetary boundary layer Instrumentation Assessment) field campaign conducted at the Boulder Atmospheric Observatory (BAO) in Erie, CO, from 2 March to 31 May 2015 brought together a large suite of in situ and remote sensing measurement platforms to evaluate complex flow measurement strategies. In this paper, measurement uncertainties for different single- and multi-Doppler strategies using simple scan geometries (conical, vertical plane, and staring) are investigated. The tradeoffs (such as time–space resolution vs. spatial coverage) among the different measurement techniques are evaluated using co-located measurements made near the BAO tower. Sensitivity of the single-/multi-Doppler measurement uncertainties to averaging period is investigated using the sonic anemometers installed on the BAO tower as the standard reference. Finally, the radiometer measurements are used to partition the measurement periods as a function of atmospheric stability to determine their effect on measurement uncertainty. It was found that with an increase in spatial coverage and measurement complexity, the uncertainty in the wind measurement also increased. For multi-Doppler techniques, the increase in uncertainty for temporally uncoordinated measurements is possibly due to requiring additional assumptions of stationarity along with horizontal homogeneity and less representative line-of-sight velocity statistics. It was also found that wind speed measurement uncertainty was lower during stable conditions compared to unstable conditions.
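The multi-Doppler retrieval underlying such comparisons can be sketched as a least-squares inversion of radial (line-of-sight) velocities measured along several beam directions; the beam geometry and the "true" wind below are illustrative assumptions:

```python
# Multi-Doppler wind retrieval sketch: each beam measures the projection of
# the wind onto its line-of-sight unit vector; invert by least squares.
# Beam azimuths/elevation and the "true" wind are illustrative.
import numpy as np

az = np.deg2rad([0.0, 120.0, 240.0])     # beam azimuths
el = np.deg2rad(15.0)                    # common elevation angle

# Unit line-of-sight vectors in (east, north, up) components.
los = np.column_stack([np.cos(el) * np.sin(az),
                       np.cos(el) * np.cos(az),
                       np.full(az.size, np.sin(el))])

wind_true = np.array([5.0, -2.0, 0.1])   # u, v, w in m/s (assumed)
v_radial = los @ wind_true               # simulated radial velocities

wind_est, *_ = np.linalg.lstsq(los, v_radial, rcond=None)
```

With real instruments the radial velocities carry noise and are not sampled simultaneously, so the inversion additionally assumes stationarity and horizontal homogeneity over the scan, which is the uncertainty source the paper highlights for temporally uncoordinated measurements.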
NASA Astrophysics Data System (ADS)
Smith, V.
2000-11-01
This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.
NASA Technical Reports Server (NTRS)
Smith, V.; Minor, J. L. (Technical Monitor)
2000-01-01
This report documents the development of analytical techniques required for interpreting and comparing space systems electromagnetic interference test data with commercial electromagnetic interference test data using NASA Specification SSP 30237A "Space Systems Electromagnetic Emission and Susceptibility Requirements for Electromagnetic Compatibility." The PSpice computer simulation results and the laboratory measurements for the test setups under study compare well. The study results, however, indicate that the transfer function required to translate test results of one setup to another is highly dependent on cables and their actual layout in the test setup. Since cables are equipment specific and are not specified in the test standards, developing a transfer function that would cover all cable types (random, twisted, or coaxial), sizes (gauge number and length), and layouts (distance from the ground plane) is not practical.
A 4-spot time-of-flight anemometer for small centrifugal compressor velocity measurements
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Skoch, Gary J.
1992-01-01
The application of laser anemometry techniques in turbomachinery facilities is a challenging dilemma requiring an anemometer system with special qualities. Here, we describe the use of a novel laser anemometry technique applied to a small 4.5 kg/s, 4:1 pressure ratio centrifugal compressor. Sample velocity profiles across the blade pitch are presented for a single location along the rotor. The results of the intra-blade passage velocity measurements will ultimately be used to verify CFD 3-D viscous code predictions.
A 4-spot time-of-flight anemometer for small centrifugal compressor velocity measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wernet, M.P.; Skoch, G.J.
1992-07-01
The application of laser anemometry techniques in turbomachinery facilities is a challenging dilemma requiring an anemometer system with special qualities. Here, we describe the use of a novel laser anemometry technique applied to a small 4.5 kg/s, 4:1 pressure ratio centrifugal compressor. Sample velocity profiles across the blade pitch are presented for a single location along the rotor. The results of the intra-blade passage velocity measurements will ultimately be used to verify CFD 3-D viscous code predictions.
SAR antenna calibration techniques
NASA Technical Reports Server (NTRS)
Carver, K. R.; Newell, A. C.
1978-01-01
Calibration of SAR antennas requires a measurement of gain, elevation and azimuth pattern shape, boresight error, cross-polarization levels, and phase vs. angle and frequency. For spaceborne SAR antennas of SEASAT size operating at C-band or higher, some of these measurements can become extremely difficult using conventional far-field antenna test ranges. Near-field scanning techniques offer an alternative approach and, for C-band or X-band SARs, give much improved accuracy and precision compared with a far-field approach.
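The principle behind planar near-field scanning can be sketched in one dimension: the far-field angular spectrum is, up to a probe correction omitted here, the Fourier transform of the sampled aperture field. Aperture size, wavelength, and sampling below are illustrative; for a uniform aperture the pattern should show its first nulls near sin(theta) = ±lambda/D:

```python
# 1-D near-field-to-far-field sketch: FFT of the sampled aperture field
# gives the angular spectrum. Aperture, wavelength, sampling illustrative.
import numpy as np

lam = 0.03                                 # m (10 GHz)
D = 0.3                                    # m, aperture width
dx = lam / 4                               # sample spacing (< lambda/2)
n_pad = 1024

x = np.arange(-67, 68) * dx                # scan plane, symmetric about 0
aperture = np.where(np.abs(x) <= D / 2, 1.0, 0.0)   # uniform aperture field

spectrum = np.fft.fftshift(np.fft.fft(aperture, n_pad))
kx = np.fft.fftshift(np.fft.fftfreq(n_pad, d=dx)) * 2 * np.pi
visible = np.abs(kx) <= 2 * np.pi / lam    # propagating (visible) region
sin_theta = kx[visible] * lam / (2 * np.pi)
pattern = np.abs(spectrum[visible])
pattern /= pattern.max()
# First nulls fall near sin(theta) = +/- lambda/D = +/-0.1.
```

A real near-field range adds probe pattern compensation and phase-accurate two-dimensional scanning, but the Fourier relationship above is why a compact indoor scan can substitute for a far-field range at C-band and X-band.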
NASA Technical Reports Server (NTRS)
Stauter, R. C.; Fleeter, S.
1982-01-01
Three-dimensional aerodynamic data, required to validate and/or indicate necessary refinements to inviscid and viscous analyses of the flow through turbomachine blade rows, are discussed. Instrumentation and capabilities for pressure measurement, probe insertion and traversing, and flow visualization are reviewed. Advanced measurement techniques, including laser Doppler anemometers, are considered. Data processing is reviewed. Predictions were correlated with the experimental data. A flow visualization technique using helium-filled soap bubbles was demonstrated.
Method and apparatus for phase and amplitude detection
Cernosek, R.W.; Frye, G.C.; Martin, S.J.
1998-06-09
A new class of techniques has been developed which allow inexpensive application of SAW-type chemical sensor devices while retaining high sensitivity (ppm) to chemical detection. The new techniques do not require that the sensor be part of an oscillatory circuit, allowing large concentrations of, e.g., chemical vapors in air, to be accurately measured without compromising the capacity to measure trace concentrations. Such devices have numerous potential applications in environmental monitoring, from manufacturing environments to environmental restoration. 12 figs.
Photography and imagery: a clarification of terms
Robinove, Charles J.
1963-01-01
The increased use of pictorial displays of data in the fields of photogrammetry and photo interpretation has led to some confusion of terms, not so much by photogrammetrists as by users and interpreters of pictorial data. The terms "remote sensing" and "remote sensing of environment" are being used as general terms to describe "the measurement of some property of an object without having the measuring device physically in contact with the object" (Parker, 1962). Measurements of size and shape by photogrammetric and optical means are common examples of remote sensing and therefore require no elaboration. Other techniques of remote sensing of electromagnetic radiation in and beyond the limits of the visible spectrum require some explanation and differentiation from the techniques used in the visible spectrum. The following definitions of "photography" and "imagery" are proposed to clarify these two terms, in the hope that this will lead to more precise understanding and explanation of the processes.
NASA Astrophysics Data System (ADS)
Stankunas, Gediminas; Batistoni, Paola; Sjöstrand, Henrik; Conroy, Sean; JET Contributors
2015-07-01
The neutron activation technique is routinely used in fusion experiments to measure neutron yields. This paper investigates the uncertainty in these measurements due to the uncertainties in the dosimetry and activation reactions. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) have been calculated using the neutron flux spectra at the JET vacuum vessel, for both DD and DT plasmas, calculated by MCNP in the required 640-energy-group format. The related uncertainties for the JET neutron spectra are evaluated as well, using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of the fusion neutron yields.
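The reaction-rate calculation and its covariance-based uncertainty can be sketched group-wise: the rate is the flux-weighted sum of group cross sections, and its variance follows from the cross-section covariance matrix. The 3-group fluxes, cross sections, and uncorrelated covariance below are illustrative, not IRDFF or JET values:

```python
# Group-wise activation rate R = sum_g sigma_g * phi_g and its variance
# var(R) = phi^T C phi from a cross-section covariance C. Values illustrative.
import numpy as np

phi = np.array([1.0e10, 5.0e9, 2.0e9])          # group fluxes (n/cm^2/s)
sigma = np.array([1.2e-24, 3.0e-24, 8.0e-24])   # group cross sections (cm^2)

rel_unc = np.array([0.03, 0.02, 0.05])          # relative sigma uncertainties
corr = np.eye(3)                                # assume uncorrelated groups
cov = np.outer(rel_unc * sigma, rel_unc * sigma) * corr

rate = sigma @ phi                              # reactions per nucleus per s
var = phi @ cov @ phi
rel_rate_unc = np.sqrt(var) / rate              # relative rate uncertainty
```

In the actual analysis the same propagation runs over 640 groups, with off-diagonal correlations taken from the library's covariance files rather than assumed away.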
Near-field measurement facility plans at Lewis Research Center
NASA Technical Reports Server (NTRS)
Sharp, R. G.
1983-01-01
The direction of future antenna technology will be toward antennas which are large, both physically and electrically, will operate at frequencies up to 60 GHz, and are non-reciprocal and complex, implementing multiple-beam and scanning-beam concepts and monolithic semiconductor devices and techniques. The acquisition of accurate antenna performance measurements is a critical part of the advanced antenna research program and represents a substantial antenna measurement technology challenge, considering the special characteristics of future spacecraft communications antennas. Comparison of various antenna testing techniques and their relative advantages and disadvantages shows that the near-field approach is necessary to meet immediate and long-term testing requirements. The LeRC facilities, the 22 ft x 22 ft horizontal antenna boresight planar scanner and the 60 ft x 60 ft vertical antenna boresight planar scanner (with a 60 GHz frequency and D/lambda = 3000 electrical size capability), will meet future program testing requirements.
Application of the SEM to the measurement of solar cell parameters
NASA Technical Reports Server (NTRS)
Weizer, V. G.; Andrews, C. W.
1977-01-01
A pair of techniques are described which make use of the SEM to measure, respectively, the minority carrier diffusion length and the metallurgical junction depth in silicon solar cells. The former technique permits the measurement of the true bulk diffusion length through the application of highly doped field layers to the back surfaces of the cells being investigated. The technique yields an absolute value of the diffusion length from a knowledge of the collected fraction of the injected carriers and the cell thickness. It is shown that the secondary emission contrast observed in the SEM on a reverse-biased diode can depict the location of the metallurgical junction if the diode has been prepared with the proper beveled geometry. The SEM provides the required contrast and the option of high magnification, permitting the measurement of extremely shallow junction depths.
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canadian Center for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.
Bistatic radar sea state monitoring system design
NASA Technical Reports Server (NTRS)
Ruck, G. T.; Krichbaum, C. K.; Everly, J. O.
1975-01-01
Remote measurement of the two-dimensional surface wave height spectrum of the ocean by bistatic radar techniques was examined. The approach appears feasible, and experimental verification by a field experiment is suggested. The required experimental hardware is defined, along with the design, assembly, and testing of several of its components.
NASA Technical Reports Server (NTRS)
Steffes, Paul G.
1989-01-01
Accurate data on the microwave and millimeter-wave properties of potential planetary atmospheric constituents are critical for the proper interpretation of radio occultation measurements and of radio astronomical observations of both continuum and spectral line emissions. Such data are also needed to correct for atmospheric effects on radar studies of surface reflectivity. Since the refractive and absorptive properties of atmospheric constituents often deviate drastically from theoretically predicted profiles, especially under the extreme conditions characteristic of planetary atmospheres, laboratory measurements under simulated planetary conditions are required. This paper reviews the instrumentation and techniques used for laboratory measurement of the refractivity and absorptivity of atmospheric constituents at wavelengths longward of 1 mm under simulated planetary conditions (temperature, pressure, and broadening gases). Techniques for measuring both gases and condensates are considered, and the relative accuracies of the various techniques are reviewed. Laboratory measurements that have already been made are surveyed, and additional measurements needed for interpretation of data from Venus and the outer planets are highlighted.
NCTM of liquids at high temperatures using polarization techniques
NASA Technical Reports Server (NTRS)
Krishnan, Shankar; Weber, J. K. Richard; Nordine, Paul C.; Schiffman, Robert A.
1990-01-01
Temperature measurement and control is extremely important in any materials processing application. However, conventional techniques for non-contact temperature measurement (mainly optical pyrometry) are very uncertain because of unknown or varying surface emittance. Optical properties, like other material properties, change during processing, so a dynamic, in-situ measurement of optical properties, including the emittance, is required. Intersonics is developing new technologies that use polarized laser light scattering to determine the surface emittance of freely radiating bodies concurrently with conventional optical pyrometry; together these are sufficient to determine the true surface temperature of the target. Intersonics is currently developing the Division of Amplitude Polarimetric Pyrometer (DAPP), which uses polarization information to measure the true thermodynamic temperature of freely radiating objects. This instrument has potential use in materials processing applications in ground- and space-based equipment. Results of thermophysical and thermodynamic measurements using laser reflection as a temperature measuring tool are presented. The impact of these techniques on thermophysical property measurements at high temperature is discussed.
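The emissivity correction at the heart of such pyrometry can be illustrated with the Wien approximation, which relates the measured brightness temperature to the true temperature once the spectral emittance is known. The sketch below is a generic illustration, not the DAPP algorithm; the 650 nm wavelength and 0.35 emittance are assumed example values.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def true_temperature(t_brightness, emissivity, wavelength):
    """Emissivity-corrected temperature via the Wien approximation.
    t_brightness: radiance (brightness) temperature in K
    emissivity: spectral emittance at the pyrometer wavelength (0..1)
    wavelength: pyrometer operating wavelength in m
    """
    # Since ln(emissivity) < 0, the true temperature exceeds the brightness temperature.
    inv_t = 1.0 / t_brightness + (wavelength / C2) * math.log(emissivity)
    return 1.0 / inv_t

# Example: 1900 K brightness temperature at 650 nm with an emittance of 0.35
t = true_temperature(1900.0, 0.35, 650e-9)
```

Once the polarimetric measurement supplies the emittance, the same pyrometer reading yields the true thermodynamic temperature.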
Measurement of total-body cobalt-57 vitamin B12 absorption with a gamma camera.
Cardarelli, J A; Slingerland, D W; Burrows, B A; Miller, A
1985-08-01
Previously described techniques for the measurement of the absorption of [57Co]vitamin B12 by total-body counting have required an iron room equipped with scanning or multiple detectors. The present study uses simplifying modifications that make the technique more widely available: static counting geometry, measurement of body thickness to correct for attenuation, a simple formula to convert the capsule-in-air count to a 100% absorption count, and the use of an adequately shielded gamma camera, which obviates the need for an iron room.
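The count-to-absorption conversion can be sketched as follows. The study's exact formula and attenuation coefficient are not given in the abstract, so the mid-depth exponential correction and the 0.12 cm^-1 value below are illustrative assumptions only.

```python
import math

# Assumed effective linear attenuation coefficient for 122 keV photons in
# tissue, cm^-1 (illustrative value, not from the paper)
MU_EFF = 0.12

def percent_absorption(body_count, capsule_air_count, body_thickness_cm):
    """Express the attenuation-corrected whole-body count as a percentage of
    the capsule-in-air count, which represents 100% absorption."""
    # Correct for self-attenuation, assuming the activity lies at mid-depth.
    corrected = body_count * math.exp(MU_EFF * body_thickness_cm / 2.0)
    return 100.0 * corrected / capsule_air_count

# Example: 1000 net body counts, 5000 capsule-in-air counts, 20 cm body thickness
p = percent_absorption(1000.0, 5000.0, 20.0)
```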
Skin-friction measurements by laser interferometry
NASA Technical Reports Server (NTRS)
Kim, K.-S.; Settles, G. S.
1989-01-01
The measurement of skin friction in rapidly distorted compressible flows is difficult, and very few reliable techniques are available. A recent development, the laser interferometer skin friction (LISF) meter, promises to be useful for this purpose. This technique interferometrically measures the time rate of thinning of an oil film applied to an aerodynamic surface. Under the proper conditions the wall shear stress may thus be found directly, without reference to flow properties. The applicability of the LISF meter to supersonic boundary layers is examined experimentally. Its accuracy and repeatability are assessed, and conditions required for its successful application are considered.
NASA Astrophysics Data System (ADS)
Abu-Farha, Fadi; Hu, Xiaohua; Sun, Xin; Ren, Yang; Hector, Louis G.; Thomas, Grant; Brown, Tyson W.
2018-05-01
Austenite mechanical stability, i.e., retained austenite volume fraction (RAVF) variation with strain, and transformation behavior were investigated for two third-generation advanced high-strength steels (3GAHSS) under quasi-static uniaxial tension: a 1200 grade, two-phase medium Mn (10 wt pct) TRIP steel, and a 980 grade, three-phase TRIP steel produced with a quenching and partitioning heat treatment. The medium Mn (10 wt pct) TRIP steel deforms inhomogeneously via propagative instabilities (Lüders and Portevin Le Châtelier-like bands), while the 980 grade TRIP steel deforms homogeneously up to necking. The dramatically different deformation behaviors of these steels required the development of a new in situ experimental technique that couples volumetric synchrotron X-ray diffraction measurement of RAVF with surface strain measurement using stereo digital image correlation over the beam impingement area. Measurement results with the new technique are compared to those from a more conventional approach wherein strains are measured over the entire gage region, while RAVF measurement is the same as that in the new technique. A determination is made as to the appropriateness of the different measurement techniques in measuring the transformation behaviors for steels with homogeneous and inhomogeneous deformation behaviors. Extension of the new in situ technique to the measurement of austenite transformation under different deformation modes and to higher strain rates is discussed.
Stress Measurement by Geometrical Optics
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Rossnagel, S. M.
1986-01-01
A fast, simple technique measures stresses in thin films. The sample disk is bowed by stress into an approximately spherical shape, and the reflected image of the disk is magnified by an amount related to the curvature and, therefore, the stress. The method requires only a sample substrate (such as an inexpensive microscope cover slide), two mirrors, a laser beam, and a screen.
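The abstract does not give the working equation; the standard Stoney relation below is a plausible sketch of the curvature-to-stress step once the radius of curvature has been inferred from the magnification of the reflected image. All numerical values are assumed for illustration.

```python
def film_stress(e_sub, nu_sub, t_sub, t_film, radius):
    """Stoney formula: biaxial film stress (Pa) from substrate curvature.
    e_sub, nu_sub: substrate Young's modulus (Pa) and Poisson ratio
    t_sub, t_film: substrate and film thicknesses (m)
    radius: radius of curvature of the bowed substrate (m)
    """
    return e_sub * t_sub ** 2 / (6.0 * (1.0 - nu_sub) * t_film * radius)

# Example: glass cover slide (assumed E = 70 GPa, nu = 0.22, 150 um thick)
# carrying a 500 nm film, bowed to a 10 m radius of curvature
sigma = film_stress(70e9, 0.22, 150e-6, 500e-9, 10.0)  # Pa, tens of MPa
```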
A Practical Method for Identifying Significant Change Scores
ERIC Educational Resources Information Center
Cascio, Wayne F.; Kurtines, William M.
1977-01-01
A test of significance for identifying individuals who are most influenced by an experimental treatment as measured by pre-post test change score is presented. The technique requires true difference scores, the reliability of obtained differences, and their standard error of measurement. (Author/JKS)
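A reliable-change-style statistic illustrates the ingredients the abstract lists (difference scores, their reliability, and the standard error of measurement). This is a generic construction under those assumptions, not necessarily the exact test of Cascio and Kurtines.

```python
import math

def reliable_change(pre, post, sd, reliability, z_crit=1.96):
    """Flag a pre-post change as significant when it exceeds what measurement
    error alone would plausibly produce.
    sd: standard deviation of the test scores
    reliability: test reliability used to form the standard error of measurement
    """
    sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem            # standard error of a difference score
    z = (post - pre) / se_diff
    return z, abs(z) > z_crit

# A 10-point gain on a test with SD = 10 and reliability 0.90
z, significant = reliable_change(40.0, 50.0, 10.0, 0.90)
```

Individuals whose standardized change exceeds the critical value are the ones "most influenced by the treatment" in this sense.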
Spacecraft Communications System Verification Using On-Axis Near Field Measurement Techniques
NASA Technical Reports Server (NTRS)
Keating, Thomas; Baugh, Mark; Gosselin, R. B.; Lecha, Maria C.; Krebs, Carolyn A. (Technical Monitor)
2000-01-01
Determination of the readiness of a spacecraft for launch is a critical requirement. The final assembly of all subsystems must be verified. Testing of a communications system can mostly be done using closed circuits (cabling to/from test ports), but the final connections to the antenna require radiation tests. The Tropical Rainfall Measuring Mission (TRMM) Project used a readily available 'near-field on-axis' equation to predict the values to be used for comparison with those obtained in a test program. Tests were performed in a clean-room environment at both Goddard Space Flight Center (GSFC) and in Japan at the Tanegashima Space Center (TnSC) launch facilities. Most of the measured values agreed with the predicted values to within 0.5 dB. This demonstrates that relatively simple techniques can sometimes be used to make antenna performance measurements when far-field ranges, anechoic chambers, or precision near-field ranges are unavailable or impractical. Test data and photographs are provided.
Electro optical system to measure strains at high temperature
NASA Technical Reports Server (NTRS)
Sciammarella, Cesar A.
1991-01-01
The measurement of strains at temperatures of the order of 1000 C has become a very important field of research. Technological advances in areas such as the analysis of high-speed aircraft structures and high-efficiency thermal engines require operational temperatures of this order of magnitude. Current techniques for the measurement of strains, such as electrical strain gages, are at the limit of their useful range, and new methods need to be developed. Optical techniques are very attractive in this type of application because of their noncontacting nature. Holography is of particular interest because minimal preparation of the surfaces is required, and optoelectronic holography is especially well suited to industrial use. There are a number of technical problems that must be overcome to measure strains using holographic interferometry at high temperatures. Some of these problems are discussed, and solutions are given. A specimen instrumented with high-temperature strain gages is used to compare the results of the two technologies.
Atom-optics knife-edge: Measuring sub-nanokelvin momentum distributions
NASA Astrophysics Data System (ADS)
Ramos, Ramon; Spierings, David; Steinberg, Aephraim
2017-04-01
Temperatures below 1 nanokelvin have been achieved in recent years, enabling new classes of experiments that benefit from the resulting long coherence times. This achievement comes hand in hand with the challenge of measuring such low temperatures. By employing the equivalent of a knife-edge measurement for matter waves, we have been able to characterize ultra-low momentum widths. We measured a momentum width corresponding to an effective temperature of 900 +/- 200 pK, limited only by our cooling performance. We show that this technique compares favourably with more traditional methods, which would require expansion times of hundreds of ms or frequency stability of tens of Hz. Finally, we show that the effective knife-edge, created by a potential barrier, becomes 'blunt' due to tunneling for thin barriers, and we obtain quantitative agreement with a theoretical model. This method is a useful tool for atom interferometry and other areas in ultracold atoms where a robust and precise technique for characterizing the momentum distribution is required.
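In optics, a knife-edge scan recovers a Gaussian width from the integrated transmission profile: the 16% and 84% transmission points of the resulting error-function profile are separated by two standard deviations. The sketch below demonstrates that estimator on synthetic data; it illustrates the knife-edge principle, not the authors' atom-optics implementation.

```python
import math

def knife_edge_sigma(positions, transmissions):
    """Estimate the rms width (sigma) of a Gaussian from monotonic knife-edge
    data: the 16% and 84% transmission points are separated by ~2 sigma."""
    def crossing(level):
        pairs = list(zip(positions, transmissions))
        for (x0, t0), (x1, t1) in zip(pairs, pairs[1:]):
            if (t0 - level) * (t1 - level) <= 0 and t0 != t1:
                # Linear interpolation between the bracketing samples.
                return x0 + (level - t0) * (x1 - x0) / (t1 - t0)
        raise ValueError("transmission level not crossed")
    return 0.5 * (crossing(0.84) - crossing(0.16))

# Synthetic scan across a Gaussian distribution with sigma = 2.0
xs = [i * 0.25 - 8.0 for i in range(65)]
ts = [0.5 * (1.0 + math.erf(x / (2.0 * math.sqrt(2.0)))) for x in xs]
sigma_est = knife_edge_sigma(xs, ts)
```

For matter waves, the recovered spatial or velocity width maps to an effective temperature through the momentum spread of the cloud.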
Modulation and synchronization technique for MF-TDMA system
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Sayegh, Soheil
1994-01-01
This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
NASA Technical Reports Server (NTRS)
Succi, G. P.
1983-01-01
The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high-speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound fields. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.
Zhu, Ping; Jafari, Rana; Jones, Travis; Trebino, Rick
2017-10-02
We introduce a simple delay-scanned complete spatiotemporal intensity-and-phase measurement technique based on wavelength-multiplexed holography to characterize long, complex pulses in space and time. We demonstrate it using pulses emerging from multi-mode fiber. This technique extends the temporal range and spectral resolution of the single-frame STRIPED FISH technique without using an otherwise-required expensive ultranarrow-bandpass filter. With this technique, we measured the complete intensity and phase of up to ten fiber modes from a multi-mode fiber (normalized frequency V ≈10) over a ~3ps time range. Spatiotemporal complexities such as intermodal delay, modal dispersion, and material dispersion were also intuitively displayed by the retrieved results. Agreement between the reconstructed color movies and the monitored time-averaged spatial profiles confirms the validity to this delay-scanned STRIPED FISH method.
Implementation of and measurement with the LIPA technique in a subsonic jet
NASA Technical Reports Server (NTRS)
Falco, R. E.
1994-01-01
LIPA (Laser Induced Photochemical Anemometry) was used to measure velocity, vorticity, Reynolds stress, and turbulent intensity distributions in a subsonic jet. The jet region of interest was the area close to the jet orifice. The LIPA technique is a nonintrusive quantitative flow visualization technique consisting of tracking a phosphorescing grid of fluid particles impressed by laser beams directed into the flow. The phosphorescence of biacetyl gas was used to enable tracking of the impressed light grid. In order to perform measurements in a jet, LIPA was developed and implemented for the specific flow requirements. Nitrogen was used as the carrier gas to avoid quenching of the phosphorescent radiation of the tracer gas biacetyl by ambient oxygen. The use of sulfur dioxide to sensitize phosphorescent emission of biacetyl was examined. Preliminary data were used to discuss the potential of the LIPA technique.
Assessing atrophy measurement techniques in dementia: Results from the MIRIAD atrophy challenge.
Cash, David M; Frost, Chris; Iheme, Leonardo O; Ünay, Devrim; Kandemir, Melek; Fripp, Jurgen; Salvado, Olivier; Bourgeat, Pierrick; Reuter, Martin; Fischl, Bruce; Lorenzi, Marco; Frisoni, Giovanni B; Pennec, Xavier; Pierson, Ronald K; Gunter, Jeffrey L; Senjem, Matthew L; Jack, Clifford R; Guizard, Nicolas; Fonov, Vladimir S; Collins, D Louis; Modat, Marc; Cardoso, M Jorge; Leung, Kelvin K; Wang, Hongzhi; Das, Sandhitsu R; Yushkevich, Paul A; Malone, Ian B; Fox, Nick C; Schott, Jonathan M; Ourselin, Sebastien
2015-12-01
Structural MRI is widely used for investigating brain atrophy in many neurodegenerative disorders, with several research groups developing and publishing techniques to provide quantitative assessments of this longitudinal change. Often techniques are compared through computation of required sample size estimates for future clinical trials. However interpretation of such comparisons is rendered complex because, despite using the same publicly available cohorts, the various techniques have been assessed with different data exclusions and different statistical analysis models. We created the MIRIAD atrophy challenge in order to test various capabilities of atrophy measurement techniques. The data consisted of 69 subjects (46 Alzheimer's disease, 23 control) who were scanned multiple (up to twelve) times at nine visits over a follow-up period of one to two years, resulting in 708 total image sets. Nine participating groups from 6 countries completed the challenge by providing volumetric measurements of key structures (whole brain, lateral ventricle, left and right hippocampi) for each dataset and atrophy measurements of these structures for each time point pair (both forward and backward) of a given subject. From these results, we formally compared techniques using exactly the same dataset. First, we assessed the repeatability of each technique using rates obtained from short intervals where no measurable atrophy is expected. For those measures that provided direct measures of atrophy between pairs of images, we also assessed symmetry and transitivity. Then, we performed a statistical analysis in a consistent manner using linear mixed effect models. The models, one for repeated measures of volume made at multiple time-points and a second for repeated "direct" measures of change in brain volume, appropriately allowed for the correlation between measures made on the same subject and were shown to fit the data well. 
From these models, we obtained estimates of the distribution of atrophy rates in the Alzheimer's disease (AD) and control groups and of required sample sizes to detect a 25% treatment effect, in relation to healthy ageing, with 95% significance and 80% power over follow-up periods of 6, 12, and 24 months. Uncertainty in these estimates, and head-to-head comparisons between techniques, were carried out using the bootstrap. The lateral ventricles provided the most stable measurements, followed by the brain. The hippocampi had much more variability across participants, likely because of differences in segmentation protocol and less distinct boundaries. Most methods showed no indication of bias based on the short-term interval results, and direct measures provided good consistency in terms of symmetry and transitivity. The resulting annualized rates of change derived from the model ranged from, for whole brain: -1.4% to -2.2% (AD) and -0.35% to -0.67% (control); for ventricles: 4.6% to 10.2% (AD) and 1.2% to 3.4% (control); and for hippocampi: -1.5% to -7.0% (AD) and -0.4% to -1.4% (control). There were large and statistically significant differences in the sample size requirements between many of the techniques. The lowest sample sizes for each of these structures, for a trial with a 12-month follow-up period, were 242 (95% CI: 154 to 422) for whole brain, 168 (95% CI: 112 to 282) for ventricles, 190 (95% CI: 146 to 268) for left hippocampi, and 158 (95% CI: 116 to 228) for right hippocampi. This analysis represents one of the most extensive statistical comparisons of a large number of different atrophy measurement techniques from around the globe. The challenge data will remain online and publicly available so that other groups can assess their methods. Copyright © 2015. Published by Elsevier Inc.
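The sample-size calculation described here follows a standard two-arm power computation on the disease-related excess atrophy rate. The sketch below uses the normal-approximation formula with illustrative whole-brain rates drawn from the reported ranges and an assumed between-subject SD of 1.0%/yr; the paper's own estimates came from linear mixed models with bootstrap uncertainty, which this does not reproduce.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(rate_ad, rate_ctrl, sd_ad,
                        effect=0.25, alpha=0.05, power=0.80):
    """Subjects per arm to detect a fractional slowing `effect` of the
    disease-related excess atrophy rate (relative to healthy ageing),
    two-sided test, normal approximation."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    delta = effect * abs(rate_ad - rate_ctrl)   # detectable rate difference
    return math.ceil(2.0 * (z_a + z_b) ** 2 * sd_ad ** 2 / delta ** 2)

# Illustrative whole-brain rates: -1.8%/yr (AD), -0.5%/yr (control), SD 1.0%/yr
n = sample_size_per_arm(-1.8, -0.5, 1.0)
```

Because the detectable difference enters squared, small changes in the assumed rates or SD move the required sample size substantially, which is why the techniques compared above differ so much.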
The interfacial strength of carbon nanofiber epoxy composite using single fiber pullout experiments.
Manoharan, M P; Sharma, A; Desai, A V; Haque, M A; Bakis, C E; Wang, K W
2009-07-22
Carbon nanotubes and nanofibers are extensively researched as reinforcing agents in nanocomposites for their multifunctionality, light weight and high strength. However, it is the interface between the nanofiber and the matrix that dictates the overall properties of the nanocomposite. The current trend is to measure elastic properties of the bulk nanocomposite and then compare them with theoretical models to extract the information on the interfacial strength. The ideal experiment is single fiber pullout from the matrix because it directly measures the interfacial strength. However, the technique is difficult to apply to nanocomposites because of the small size of the fibers and the requirement for high resolution force and displacement sensing. We present an experimental technique for measuring the interfacial strength of nanofiber-reinforced composites using the single fiber pullout technique and demonstrate the technique for a carbon nanofiber-reinforced epoxy composite. The experiment is performed in situ in a scanning electron microscope and the interfacial strength for the epoxy composite was measured to be 170 MPa.
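The quoted interfacial strength follows from the standard pullout relation: peak pullout force divided by the embedded fiber surface area. The fiber diameter, embedded length, and force below are assumed example values, not the paper's measured geometry.

```python
import math

def interfacial_shear_strength(f_peak, d_fiber, l_embedded):
    """Average interfacial shear strength (Pa) from a single-fiber pullout:
    peak force (N) over the embedded cylindrical surface area (m^2)."""
    return f_peak / (math.pi * d_fiber * l_embedded)

# Example: 53 uN peak force, 100 nm diameter fiber, 1 um embedded length
tau = interfacial_shear_strength(53e-6, 100e-9, 1e-6)  # ~170 MPa scale
```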
Aeroelastic Deformation Measurements of Flap, Gap, and Overhang on a Semispan Model
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tianshu; Garg, Sanjay; Ghee, Terence A.; Taylor, Nigel J.
2000-01-01
Single-camera, single-view videogrammetry has been used to determine static aeroelastic deformation of a slotted flap configuration on a semispan model at the National Transonic Facility (NTF). Deformation was determined by comparing wind-off to wind-on spatial data from targets placed on the main element, shroud, and flap of the model. Digitized video images from a camera were recorded and processed to automatically determine target image plane locations that were then corrected for sensor, lens, and frame grabber spatial errors. The videogrammetric technique has been established at NASA facilities as the technique of choice when high-volume static aeroelastic data with minimum impact on data taking is required. The primary measurement at the NTF with this technique in the past has been the measurement of static aeroelastic wing twist on full span models. The first results using the videogrammetric technique for the measurement of component deformation during semispan testing at the NTF are presented.
Multidirectional mobilities: Advanced measurement techniques and applications
NASA Astrophysics Data System (ADS)
Ivarsson, Lars Holger
Today, high noise-and-vibration comfort has become a hallmark of product quality in sectors such as the automotive industry, aircraft, components, households, and manufacturing. Consequently, already in the design phase of products, tools are required to predict the final vibration and noise levels. These tools have to be applicable over a wide frequency range with sufficient accuracy. During recent decades a variety of tools have been developed, such as transfer path analysis (TPA), input force estimation, substructuring, coupling by frequency response functions (FRF), and hybrid modelling. While these methods have a well-developed theoretical basis, their application to experimental data often suffers from a lack of information concerning rotational DOFs. In order to measure response in all 6 DOFs (including rotation), a sensor has been developed whose special features are discussed in the thesis. This transducer simplifies the response measurements, although in practice the excitation of moments remains more difficult. Several excitation techniques have been developed to enable measurement of multidirectional mobilities. For rapid and simple measurement of the loaded mobility matrix, a MIMO (Multiple Input Multiple Output) technique is used. The technique has been tested and validated on several structures of different complexity. A second technique for measuring the loaded 6-by-6 mobility matrix has been developed. This technique employs a model of the excitation set-up, and with this model the mobility matrix is determined from sequential measurements. Measurements on 'real' structures show that both techniques give results of similar quality, and both are recommended for practical use. As a further step, a technique for measuring the unloaded mobilities is presented.
It employs the measured loaded mobility matrix in order to calculate compensation forces and moments, which are later applied in order to compensate for the loading of the measurement equipment. The developed measurement techniques have been used in a hybrid coupling of a plate-and-beam structure to study different aspects of the coupling technique. Results show that RDOFs are crucial and have to be included in this case. The importance of stiffness residuals when mobilities are estimated from modal superposition is demonstrated. Finally it is shown that proper curve fitting can correct errors from inconsistently measured data.
Satellite stratospheric aerosol measurement validation
NASA Technical Reports Server (NTRS)
Russell, P. B.; Mccormick, M. P.
1984-01-01
The validity of the stratospheric aerosol measurements made by the satellite sensors SAM II and SAGE was tested by comparing their results with each other and with results obtained by other techniques (lidar, dustsonde, filter, and impactor). The latter type of comparison required the development of special techniques that convert the quantity measured by the correlative sensor (e.g., particle backscatter, number, or mass) to that measured by the satellite sensor (extinction) and quantitatively estimate the uncertainty in the conversion process. The results of both types of comparisons show agreement within the measurement and conversion uncertainties. Moreover, the satellite uncertainty is small compared to natural aerosol variability (caused by seasonal changes, volcanoes, sudden warmings, and vortex structure). It was concluded that the satellite measurements are valid.
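The backscatter-to-extinction conversion described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual conversion procedure: the extinction-to-backscatter ("lidar") ratio value and the fractional uncertainty used here are assumptions for demonstration only.

```python
# Hedged sketch: converting a lidar-measured particle backscatter coefficient
# to the extinction measured by a satellite sensor, assuming a fixed
# extinction-to-backscatter ("lidar") ratio S. The value S = 50 sr and the
# 30% ratio uncertainty are illustrative assumptions, not from the paper.

def backscatter_to_extinction(beta, beta_err, S=50.0, S_err_frac=0.3):
    """Convert backscatter beta [1/(m sr)] to extinction [1/m].

    The conversion uncertainty combines the measurement error and the
    lidar-ratio uncertainty in quadrature (independent-error assumption).
    """
    sigma = S * beta
    rel_err = ((beta_err / beta) ** 2 + S_err_frac ** 2) ** 0.5
    return sigma, sigma * rel_err

sigma, sigma_err = backscatter_to_extinction(2.0e-6, 0.2e-6)
```

The quadrature sum mirrors the abstract's point that the conversion step itself contributes a quantifiable uncertainty on top of the correlative measurement error.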
Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao
2018-01-01
This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths, based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect underlying damage in a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover information about the microstructure from measurement data related to the internal condition of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were then inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique can detect damage in an STS structure with a high level of accuracy and with fewer required measurements, making it more convenient and efficient than the traditional UCT technique. PMID:29293593
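The compressive-sampling idea behind the rapid UCT method can be sketched as recovering a sparse damage map from a few random path measurements by ℓ1-regularized least squares. The following is a hedged toy illustration: ISTA is used here as a simple stand-in for whatever ℓ1 solver the authors used, and all sizes, parameters, and the random "path-selection" matrix are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of CS recovery: a sparse damage map x is recovered from a few
# random ray-path measurements y = A x via l1-regularized least squares,
# solved with ISTA (iterative shrinkage-thresholding). All dimensions and
# parameters are illustrative, not taken from the paper.

def ista(A, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 30, 3                          # cells, measured paths, damaged cells
A = rng.standard_normal((m, n)) / np.sqrt(m)  # toy random path-sensitivity matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 1.0                         # unit "damage" at k locations
y = A @ x_true                                # the few path measurements
x_hat = ista(A, y)
```

The point of the sketch is the measurement economy the abstract describes: 30 random paths suffice to localize 3 damaged cells out of 100, far fewer measurements than a full tomographic net.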
Wind Gust Measurement Techniques-From Traditional Anemometry to New Possibilities.
Suomi, Irene; Vihma, Timo
2018-04-23
Information on wind gusts is needed for the assessment of wind-induced damage and risks to safety. The measurement of wind gust speed requires a high temporal resolution of the anemometer system, because a gust is defined as a short-duration (seconds) maximum of the fluctuating wind speed. Until the digitalization of wind measurements in the 1990s, wind gust measurements suffered from limited recording and data-processing resources. Therefore, the majority of continuous wind gust records extend back only about 30 years. Although the response characteristics of anemometer systems are good enough today, the traditional measurement techniques at weather stations based on cup and sonic anemometers are limited to heights and regions that supporting structures can reach. Therefore, existing measurements are mainly concentrated over densely populated land areas, whereas from remote locations, such as the marine Arctic, wind gust information is available only from sparse coastal sites. Recent developments in wind gust measurement techniques based on turbulence measurements from research aircraft and from Doppler lidar can potentially provide new information from heights and locations unreachable by traditional techniques. Moreover, fast-developing measurement methods based on Unmanned Aircraft Systems (UASs) may further improve the coverage of wind gust measurements in the future. In this paper, we provide an overview of the history and current status of anemometry from the perspective of wind gusts, together with a discussion of potential future directions for wind gust measurement techniques.
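The gust definition used above, a short-duration maximum of the fluctuating wind speed, can be computed as the peak of a short moving average of the sampled record. This is a hedged sketch of the standard approach; the 4 Hz sampling rate, 3 s window, and 10 min averaging period are common conventions assumed for illustration, not parameters from the paper.

```python
import numpy as np

# Hedged sketch: the gust is taken as the maximum of a short (here 3 s) moving
# average of the wind-speed samples, and the gust factor is that maximum
# divided by the record mean (here a 10 min record). Sampling rate and window
# length are illustrative assumptions.

def gust_and_factor(u, fs=4.0, window_s=3.0):
    """Return (gust speed, gust factor) for wind-speed samples u at fs Hz."""
    w = int(round(window_s * fs))
    kernel = np.ones(w) / w
    running = np.convolve(u, kernel, mode="valid")   # 3 s moving average
    gust = running.max()
    return gust, gust / u.mean()

# Synthetic record: steady 10 m/s with one 5 s burst at 15 m/s.
u = np.full(2400, 10.0)          # 10 min at 4 Hz
u[1000:1020] = 15.0
gust, gf = gust_and_factor(u)
```

The example makes the resolution requirement concrete: a slow-responding or coarsely sampled anemometer would smear the 5 s burst and underestimate both the gust and the gust factor.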
Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorrer, C.; Kang, I.
2008-04-04
Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ziemer, B; Hubbard, L; Groves, E
2015-06-15
Purpose: To evaluate a first pass analysis (FPA) technique for CT perfusion measurement in a swine animal model and to validate it using fractional flow reserve (FFR) as a reference standard. Methods: Swine were placed under anesthesia and relevant physiologic parameters were continuously recorded. Intra-coronary adenosine was administered to induce maximum hyperemia. A pressure wire was advanced distal to the first diagonal branch of the left anterior descending (LAD) artery for FFR measurements, and a balloon dilation catheter was inserted over the pressure wire into the proximal LAD to create varying levels of stenosis. Images were acquired with a 320-row wide-volume CT scanner. Three main coronary perfusion beds were delineated in the myocardium using arteries extracted from CT angiography images using a minimum-energy hypothesis. The integrated density in the perfusion bed was used to calculate perfusion with the FPA technique. The perfusion in the LAD bed was measured over a range of stenosis severity. The measured fractional perfusion was compared to FFR and linear regression was performed. Results: The measured fractional perfusion using the FPA technique (P-FPA) and FFR were related as P-FPA = 1.06 FFR − 0.06 (r² = 0.86). The perfusion measurements were calculated from only three to five total CT volume scans, which drastically reduces the radiation dose compared with existing techniques requiring 15-20 volume scans. Conclusion: The measured perfusion using the first pass analysis technique showed good correlation with FFR measurements as a reference standard. The technique can potentially provide a substantial reduction in radiation dose compared with existing techniques.
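The validation step, regressing the fractional perfusion against FFR and reporting slope, intercept, and r², is ordinary least squares and can be sketched directly. The paired values below are synthetic points placed on the reported relation purely for illustration; they are not the study's measurements.

```python
import numpy as np

# Hedged sketch of the validation regression: ordinary least squares of the
# fractional perfusion P-FPA against FFR, reporting slope, intercept and r^2.
# The paired values below are synthetic illustrations, not the study's data.

def fit_line(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

ffr = np.array([0.45, 0.60, 0.70, 0.80, 0.90, 1.00])
p_fpa = 1.06 * ffr - 0.06          # points lying exactly on the reported relation
slope, intercept, r2 = fit_line(ffr, p_fpa)
```

With real, noisy data the fit would yield r² below 1 (the study reports 0.86); a slope near 1 and intercept near 0 are what indicate agreement with the FFR reference standard.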
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali
2015-04-01
Fast-growing urbanization, industrialization and military developments increase the risks to the human environment and ecology, as realized in several past disasters, for instance the Chernobyl nuclear accident (Ukraine), the Bhopal gas leak (India) and the Fukushima Daiichi radionuclide release (Japan). To reduce the threat and exposure from hazardous contaminants, a fast, preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. It requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models; this is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to a lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. 
It is based on an adjoint representation of the source-receptor relationship and the use of a weight function which encodes a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994) and the Fusion Field Trials (2007). In the point-release experiments, the retrieved source parameters are mostly close to the true source parameters, with minimal error. The proposed technique overcomes two major difficulties in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge of measurement and background errors required by Bayesian-inference-based techniques, which must be assumed hypothetically when no prior knowledge exists.
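The core of the source-receptor relationship can be illustrated with a much-simplified point-source retrieval: if a_i(x) is the (adjoint) sensitivity giving the concentration at sensor i per unit release from location x, a least-squares strength estimate follows at each candidate location, and the location minimizing the residual is kept. This generic sketch is an assumption-laden stand-in, it does not reproduce the paper's weight-function formulation, and the toy sensitivity matrix replaces a real adjoint dispersion model.

```python
import numpy as np

# Hedged sketch of point-source retrieval from limited sensor data using the
# source-receptor relationship: A[i, j] is the concentration at sensor i per
# unit release from grid cell j. For each candidate cell, the best-fit release
# strength follows by least squares; the cell with the smallest residual wins.
# The random sensitivity matrix is an illustrative stand-in for an adjoint model.

def retrieve_point_source(A, mu):
    """A: (n_sensors, n_cells) sensitivities; mu: measured concentrations."""
    q = (A * mu[:, None]).sum(axis=0) / (A ** 2).sum(axis=0)  # per-cell strength
    resid = ((mu[:, None] - q[None, :] * A) ** 2).sum(axis=0)
    j = int(np.argmin(resid))
    return j, q[j]

rng = np.random.default_rng(1)
n_sensors, n_cells = 8, 50
A = rng.random((n_sensors, n_cells)) + 0.1     # toy sensitivity matrix
true_cell, true_q = 17, 2.5
mu = true_q * A[:, true_cell]                  # noise-free synthetic measurements
cell, q = retrieve_point_source(A, mu)
```

Note that this brute-force scan needs no initialization of the source parameters, which echoes (in a very crude way) the first difficulty the paper's technique is designed to overcome.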
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
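Spectral decorrelation can be illustrated with a principal-component (Karhunen-Loève) transform along the band axis, which concentrates the energy of correlated bands into a few components that can then be coded preferentially. PCA is used here as a simple stand-in for the paper's wavelet-based spectral transform; the cube dimensions and band statistics are synthetic illustrations.

```python
import numpy as np

# Hedged sketch of spectral decorrelation for multispectral compression: a PCA
# transform along the spectral axis packs the energy of correlated bands into a
# few components. PCA stands in for the paper's spectral wavelet transform;
# all data here are synthetic.

rng = np.random.default_rng(2)
bands, npix = 8, 1000
base = rng.standard_normal(npix)
# Strongly correlated bands: a shared signal plus small band-specific noise.
cube = np.stack([base * (1 + 0.1 * b) + 0.05 * rng.standard_normal(npix)
                 for b in range(bands)])

X = cube - cube.mean(axis=1, keepdims=True)
cov = X @ X.T / npix                       # (bands x bands) spectral covariance
w, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
coeffs = V.T @ X                           # decorrelated spectral components
keep = 2                                   # keep only the strongest components
mask = np.zeros(bands)
mask[-keep:] = 1.0
recon = V @ (coeffs * mask[:, None])       # reconstruction from kept components
mse = float(((X - recon) ** 2).mean())
energy_kept = float(w[-keep:].sum() / w.sum())
```

Because the bands share most of their variance, two of eight components retain nearly all the energy: this energy compaction is what makes spectral decorrelation pay off before spatial (wavelet) coding.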
NASA Astrophysics Data System (ADS)
Sulkosky, V.; Jin, G.; Long, E.; Zhang, Y.-W.; Mihovilovic, M.; Kelleher, A.; Anderson, B.; Higinbotham, D. W.; Širca, S.; Allada, K.; Annand, J. R. M.; Averett, T.; Bertozzi, W.; Boeglin, W.; Bradshaw, P.; Camsonne, A.; Canan, M.; Cates, G. D.; Chen, C.; Chen, J.-P.; Chudakov, E.; De Leo, R.; Deng, X.; Deur, A.; Dutta, C.; El Fassi, L.; Flay, D.; Frullani, S.; Garibaldi, F.; Gao, H.; Gilad, S.; Gilman, R.; Glamazdin, O.; Golge, S.; Gomez, J.; Hansen, J.-O.; Holmstrom, T.; Huang, J.; Ibrahim, H.; de Jager, C. W.; Jensen, E.; Jiang, X.; Jones, M.; Kang, H.; Katich, J.; Khanal, H. P.; King, P.; Korsch, W.; LeRose, J.; Lindgren, R.; Lu, H.-J.; Luo, W.; Markowitz, P.; Meekins, D.; Meziane, M.; Michaels, R.; Moffit, B.; Monaghan, P.; Muangma, N.; Nanda, S.; Norum, B. E.; Pan, K.; Parno, D.; Piasetzky, E.; Posik, M.; Punjabi, V.; Puckett, A. J. R.; Qian, X.; Qiang, Y.; Qui, X.; Riordan, S.; Saha, A.; Sawatzky, B.; Shabestari, M.; Shahinyan, A.; Shoenrock, B.; John, J. St.; Subedi, R.; Tobias, W. A.; Tireman, W.; Urciuoli, G. M.; Wang, D.; Wang, K.; Wang, Y.; Watson, J.; Wojtsekhowski, B.; Ye, Z.; Zhan, X.; Zhang, Y.; Zheng, X.; Zhao, B.; Zhu, L.; Jefferson Lab Hall A Collaboration
2017-12-01
Background: Measurements of the neutron charge form factor, GEn, are challenging because the neutron has no net charge. In addition, measurements of the neutron form factors must use nuclear targets, which requires accurately accounting for nuclear effects. Extracting GEn with different targets and techniques provides an important test of our handling of these effects. Purpose: The goal of the measurement was to use an inclusive asymmetry measurement technique to extract the neutron charge form factor at a four-momentum transfer of 1 (GeV/c)². This technique has very different systematic uncertainties than traditional exclusive measurements and thus serves as an independent check of whether nuclear effects have been taken into account correctly. Method: The inclusive quasielastic reaction ³He(e,e′), with polarized beam and target, was measured at Jefferson Laboratory. The neutron electric form factor, GEn, was extracted at Q² = 0.98 (GeV/c)² from ratios of electron-polarization asymmetries measured for two orthogonal target spin orientations. This Q² is high enough that the sensitivity to GEn is not overwhelmed by the neutron magnetic contribution, and yet low enough that explicit neutron detection is not required to suppress pion production. Results: The neutron electric form factor, GEn, was determined to be 0.0414 ± 0.0077 (stat) ± 0.0022 (syst), providing the first high-precision inclusive extraction of the neutron's charge form factor. Conclusions: The inclusive quasielastic reaction ³He(e,e′), with polarized beam and target, at a four-momentum transfer near 1 (GeV/c)² has been used to provide a unique measurement of GEn. This new result provides a systematically independent validation of the exclusive extraction technique results and implies that the nuclear corrections are understood. This is contrary to the proton form factor, where asymmetry and differential cross section measurements have been shown to have large systematic differences.
ASD FieldSpec Calibration Setup and Techniques
NASA Technical Reports Server (NTRS)
Olive, Dan
2001-01-01
This paper describes the Analytical Spectral Devices (ASD) FieldSpec calibration setup and techniques. The topics include: 1) ASD FieldSpec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment List; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric Setup; 8) Datasets Required; 9) Data Files; and 10) Field of View Measurement. This paper is in viewgraph form.
Develop real-time dosimetry concepts and instrumentation for long term missions
NASA Technical Reports Server (NTRS)
Braby, L. A.
1981-01-01
The development of a rugged, portable dosimetry system, based on microdosimetry techniques, which will measure dose and evaluate dose equivalent in a mixed radiation field, is described. Progress is reported in three distinct areas: development of the radiation detector, the associated electronic system, and the mathematical techniques required.
First experimental demonstration of self-synchronous phase locking of an optical array
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Baker, J. T.; Ward, Benjamin; Sanchez, Anthony D.; Culpepper, Mark A.; Pilkington, D.; Spring, Justin; Nelson, Douglas J.; Lu, Chunte A.
2006-12-01
A novel, highly accurate, all electronic technique for phase locking arrays of optical fibers is demonstrated. We report the first demonstration of the only electronic phase locking technique that doesn’t require a reference beam. The measured phase error is λ/20. Excellent phase locking has been demonstrated for fiber amplifier arrays.
NASA Astrophysics Data System (ADS)
Thouvenin, Olivier; Fink, Mathias; Boccara, A. Claude
2017-02-01
Understanding volume regulation during mitosis is technically challenging: very sensitive, noninvasive imaging over time scales ranging from seconds to hours and over large fields is required. Quantitative Phase Imaging (QPI) would therefore be a natural tool for such a project. However, because of asymmetric protein segregation during mitosis, an efficient separation of the refractive index and the height in the phase signal is required. Even though many strategies for such a separation have been developed, they are usually difficult to implement, have poor sensitivity, or cannot be performed in living cells or in a single shot. In this paper, we discuss the use of a technique called fluorescence exclusion to perform volume measurements. By coupling this technique with a simultaneous phase measurement, we were also able to recover the refractive index inside the cells. Fluorescence exclusion is a versatile and powerful technique that allows the volume measurement of many types of cells. A fluorescent dye, which cannot penetrate the cells, is mixed with the external medium in a confined environment; the fluorescent signal therefore depends on the inverse of the object's height. We demonstrate both experimentally and theoretically that fluorescence exclusion can accurately measure cell volumes, even for cells much taller than the depth of focus of the objective. Accurate local height and refractive index (RI) measurements can also be obtained for smaller cells. We also discuss how to optimize the confinement of the observation chamber, either mechanically or optically.
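The fluorescence-exclusion principle lends itself to a short worked example: with a non-permeant dye filling a chamber of known height H, the fluorescence at each pixel scales with the dye column, so the local object height is h = H·(1 − I/I0) and the volume is the sum over pixels. The chamber height, pixel area, and synthetic "cell" below are illustrative assumptions, not the paper's experimental values.

```python
import numpy as np

# Hedged sketch of the fluorescence-exclusion principle: the dye column above
# an object of height h in a chamber of height H gives fluorescence
# I = I0 * (1 - h/H), so h = H * (1 - I/I0) and volume is the pixel sum.
# Chamber height, pixel area and the synthetic image are illustrative.

def volume_from_exclusion(I, I0, chamber_h_um, pixel_area_um2):
    """Integrate the fluorescence deficit into an object volume (um^3)."""
    height = chamber_h_um * (1.0 - I / I0)
    return float(height.sum() * pixel_area_um2)

H, I0 = 10.0, 1000.0                       # chamber height (um), background signal
img = np.full((64, 64), I0)                # dye-only background
img[20:40, 20:40] = I0 * (1 - 4.0 / H)     # 20x20 px region of 4 um height
vol = volume_from_exclusion(img, I0, H, pixel_area_um2=0.25)
```

Because only the integrated fluorescence deficit matters, the measurement stays accurate even when the object is much taller than the depth of focus, which is the property the abstract emphasizes.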
Wang, Jue; Maier, Robert L
2006-08-01
The requirements for optical components have drastically increased for the deep-ultraviolet and vacuum-ultraviolet spectral regions. Low-optical-loss, high-laser-damage-threshold, long-lifetime fluoride optics are required for microlithographic applications. A nondestructive quasi-Brewster angle technique (qBAT) has been developed for evaluating the quality of optical surfaces, including both top-surface and subsurface information. Using an effective medium approximation, the negative quasi-Brewster angle shift at wavelengths longer than 200 nm has been used to model the distribution of subsurface damage, whereas the positive quasi-Brewster angle shift at wavelengths shorter than 200 nm has been explained by subsurface contamination. The top-surface roughness depicted by the qBAT is consistent with atomic force microscopy measurements. The depth and the microporous structure of the subsurface damage measured by the qBAT have been confirmed by magnetorheological finishing. The technique has been extended to evaluate both polished and antireflection-coated CaF₂ components.
Improved dewpoint-probe calibration
NASA Technical Reports Server (NTRS)
Stephenson, J. G.; Theodore, E. A.
1978-01-01
Relatively simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.
Gräfe, James L; McNeill, Fiona E
2018-06-28
This article briefly reviews the main measurement techniques for the non-invasive detection of residual gadolinium (Gd) in those exposed to gadolinium-based contrast agents (GBCAs). Approach and Main results: The current status of in vivo Gd measurement is discussed and is put into the context of concerns within the radiology community. The main techniques are based on applied atomic/nuclear medicine utilizing the characteristic atomic and nuclear spectroscopic signature of Gd. The main emission energies are in the 40-200 keV region and require spectroscopic detectors with good energy resolution. The two main techniques, prompt gamma neutron activation analysis and x-ray fluorescence, provide adequate detection limits for in vivo measurement, whilst delivering a low effective radiation dose on the order of a few µSv. Gadolinium is being detected in measurable quantities in people with healthy renal function who have received FDA approved GBCAs. The applied atomic/nuclear medicine techniques discussed in this review will be useful in determining the significance of this retention, and will help on advising future administration protocols.
ERIC Educational Resources Information Center
Yildirim, Kamil; Arastaman, Gökhan; Dasci, Elif
2016-01-01
Problem Statement: The quality of teaching at schools mostly depends on the teachers' competencies. One of these competencies is measurement and evaluation (MaE). Evaluation of the students' cognitive, affective, and psychomotor development requires skills and knowledge about various measuring tools and techniques. It is essential for a teacher to…
Preliminary study of temperature measurement techniques for Stirling engine reciprocating seals
NASA Technical Reports Server (NTRS)
Wilcock, D. F.; Hoogenboom, L.; Meinders, M.; Winer, W. O.
1981-01-01
Methods of determining the contact surface temperature in reciprocating seals are investigated. Direct infrared measurement of surface temperatures of a rod exiting a loaded cap seal or simulated seal are compared with surface thermocouple measurements. Significant cooling of the surface requires several milliseconds so that exit temperatures may be considered representative of internal contact temperatures.
Pitfalls in the measurement of muscle mass: a need for a reference standard
Landi, Francesco; Cesari, Matteo; Fielding, Roger A.; Visser, Marjolein; Engelke, Klaus; Maggi, Stefania; Dennison, Elaine; Al‐Daghri, Nasser M.; Allepaerts, Sophie; Bauer, Jurgen; Bautmans, Ivan; Brandi, Maria Luisa; Bruyère, Olivier; Cederholm, Tommy; Cerreta, Francesca; Cherubini, Antonio; Cooper, Cyrus; Cruz‐Jentoft, Alphonso; McCloskey, Eugene; Dawson‐Hughes, Bess; Kaufman, Jean‐Marc; Laslop, Andrea; Petermans, Jean; Reginster, Jean‐Yves; Rizzoli, René; Robinson, Sian; Rolland, Yves; Rueda, Ricardo; Vellas, Bruno; Kanis, John A.
2018-01-01
Abstract Background All proposed definitions of sarcopenia include the measurement of muscle mass, but the techniques and threshold values used vary. Indeed, the literature does not establish consensus on the best technique for measuring lean body mass. Thus, the objective measurement of sarcopenia is hampered by limitations intrinsic to assessment tools. The aim of this study was to review the methods to assess muscle mass and to reach consensus on the development of a reference standard. Methods Literature reviews were performed by members of the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis working group on frailty and sarcopenia. Face‐to‐face meetings were organized for the whole group to make amendments and discuss further recommendations. Results A wide range of techniques can be used to assess muscle mass. Cost, availability, and ease of use can determine whether the techniques are better suited to clinical practice or are more useful for research. No one technique subserves all requirements but dual energy X‐ray absorptiometry could be considered as a reference standard (but not a gold standard) for measuring muscle lean body mass. Conclusions Based on the feasibility, accuracy, safety, and low cost, dual energy X‐ray absorptiometry can be considered as the reference standard for measuring muscle mass. PMID:29349935
RESIDUAL LIMB VOLUME CHANGE: SYSTEMATIC REVIEW OF MEASUREMENT AND MANAGEMENT
Sanders, JE; Fatone, S
2014-01-01
Management of residual limb volume affects decisions regarding timing of fit of the first prosthesis, when a new prosthetic socket is needed, design of a prosthetic socket, and prescription of accommodation strategies for daily volume fluctuations. The purpose of this systematic review was to assess what is known about measurement and management of residual limb volume change in persons with lower-limb amputation. Publications that met inclusion criteria were grouped into three categories: (I) descriptions of residual limb volume measurement techniques; (II) studies on people with lower-limb amputation investigating the effect of residual limb volume change on clinical care; and (III) studies of residual limb volume management techniques or descriptions of techniques for accommodating or controlling residual limb volume. The review showed that many techniques for the measurement of residual limb volume have been described but clinical use is limited largely because current techniques lack adequate resolution and in-socket measurement capability. Overall, there is limited evidence regarding the management of residual limb volume, and the evidence available focuses primarily on adults with trans-tibial amputation in the early post-operative phase. While we can draw some insights from the available research about residual limb volume measurement and management, further research is required. PMID:22068373
NASA Astrophysics Data System (ADS)
Bi, ChuanXing; Jing, WenQian; Zhang, YongBin; Xu, Liang
2015-02-01
Conventional nearfield acoustic holography (NAH) is usually based on the assumption of free-field conditions, and it also requires that the measurement aperture be larger than the actual source. This paper focuses on the problem where neither of these requirements can be met, and examines the feasibility of reconstructing the sound field radiated by a partial source, based on double-layer pressure measurements made in a non-free field using patch NAH combined with a sound field separation technique. In addition, the sensitivity of the reconstructed result to measurement error is analyzed in detail. Two experiments, involving two speakers in an exterior space and one speaker inside a car cabin, are presented. The experimental results demonstrate that patch NAH based on single-layer pressure measurements cannot obtain a satisfactory result due to the influences of disturbing sources and reflections, whereas patch NAH based on double-layer pressure measurements can successfully remove these influences and reconstruct the patch sound field effectively.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1995-01-01
Particle Image Velocimetry (PIV) provides a means of measuring the instantaneous two-component velocity field across a planar region of a seeded flowfield. In this work, only two-camera, single-exposure images are considered, where both cameras have the same view of the illumination plane. Two competing techniques which yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple-image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the correlation peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovered from the correlation operation, maintaining the number of independent measurements and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus velocity. 
The advantage of this technique is the improved spatial resolution which is available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two staged approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
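The cross-correlation step at the heart of the method can be sketched with an FFT-based correlation of two interrogation windows: the location of the correlation peak gives the average particle displacement. The synthetic particle image and integer shift below are illustrative; the fuzzy-logic peak validation that is the paper's contribution is not reproduced here.

```python
import numpy as np

# Hedged sketch of the PIV cross-correlation step: the displacement of the
# particle pattern between two single-exposure interrogation windows is found
# from the peak of their FFT-based circular cross-correlation. The synthetic
# image and shift are illustrative; no fuzzy-logic peak validation is done.

def correlation_displacement(win_a, win_b):
    """Integer pixel displacement of win_b relative to win_a via FFT correlation."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(win_a)) * np.fft.fft2(win_b)).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    n = win_a.shape[0]
    wrap = lambda d: d - n if d > n // 2 else d   # map wrap-around to signed shift
    return wrap(dy), wrap(dx)

rng = np.random.default_rng(3)
frame = (rng.random((64, 64)) > 0.97).astype(float)   # sparse "particle" image
shifted = np.roll(frame, shift=(3, 5), axis=(0, 1))   # uniform flow of (3, 5) px
dy, dx = correlation_displacement(frame, shifted)
```

In real images, noise and particle dropout produce competing peaks on the correlation plane, and simply taking the argmax, as done here, is exactly the failure mode the paper's fuzzy-logic peak identification is designed to avoid.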
Optimum radars and filters for the passive sphere system
NASA Technical Reports Server (NTRS)
Luers, J. K.; Soltes, A.
1971-01-01
Studies have been conducted to determine the influence of the tracking radar and data reduction technique on the accuracy of the meteorological measurements made in the 30 to 100 kilometer altitude region by the ROBIN passive falling sphere. A survey of accuracy requirements was made of agencies interested in data from this region of the atmosphere. In light of these requirements, various types of radars were evaluated to determine the tracking system most applicable to the ROBIN, and methods were developed to compute the errors in wind and density that arise from noise errors in the radar supplied data. The effects of launch conditions on the measurements were also examined. Conclusions and recommendations have been made concerning the optimum tracking and data reduction techniques for the ROBIN falling sphere system.
Dynamic photogrammetric calibration of industrial robots
NASA Astrophysics Data System (ADS)
Maas, Hans-Gerd
1997-07-01
Today's developments in industrial robots focus on aims like gain of flexibility, improvement of the interaction between robots and reduction of down-times. A very important method to achieve these goals is off-line programming. In contrast to conventional teach-in robot programming techniques, where sequences of actions are defined step-by-step via remote control on the real object, off-line programming techniques design complete robot (inter-)action programs in a CAD/CAM environment. This poses high requirements on the geometric accuracy of a robot. While the repeatability of robot poses in the teach-in mode is often better than 0.1 mm, the absolute pose accuracy of industrial robots is usually much worse due to tolerances, eccentricities, elasticities, play, wear, load, temperature and insufficient knowledge of model parameters for the transformation from poses into robot axis angles. This fact necessitates robot calibration techniques, including the formulation of a robot model describing the kinematics and dynamics of the robot, and a measurement technique to provide reference data. Digital photogrammetry, as an accurate, economical technique with realtime potential, offers itself for this purpose. The paper analyzes the requirements posed on a measurement technique by industrial robot calibration tasks. After an overview of measurement techniques used for robot calibration in the past, a photogrammetric robot calibration system based on off-the-shelf low-cost hardware components is shown, and results of pilot studies are discussed. Besides aspects of accuracy, reliability and self-calibration in a fully automatic dynamic photogrammetric system, realtime capabilities are discussed. In the pilot studies, standard deviations of 0.05-0.25 mm in the three coordinate directions could be achieved over a robot work range of 1.7 × 1.5 × 1.0 m³. 
The realtime capabilities of the technique allow to go beyond kinematic robot calibration and perform dynamic robot calibration as well as photogrammetric on-line control of a robot in action.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
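The Kruskal-Wallis test used above to compare the three hydrographers can be sketched in a few lines. The daily-error values below are hypothetical illustrations, not data from the study, and the test is hand-rolled (no tie correction) so the sketch stays self-contained.

```python
# Hand-rolled Kruskal-Wallis one-way analysis of variance, as used in the
# study to compare hydrographers. The daily-error values are made up.

def kruskal_wallis_h(groups):
    """Return the Kruskal-Wallis H statistic (assumes no tied values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N
    n = len(pooled)
    s = sum(len(g) * (sum(rank[v] for v in g) / len(g)) ** 2 for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

errors_a = [2.1, -1.5, 3.2, 0.8, -2.0, 1.1]      # hydrographer A, percent error
errors_b = [5.5, 4.1, 6.2, 3.9, 5.0, 4.8]        # hydrographer B
errors_c = [-3.0, -2.2, -4.1, -1.9, -3.5, -2.8]  # hydrographer C

H = kruskal_wallis_h([errors_a, errors_b, errors_c])
# chi-square critical value for df = 2 at alpha = 0.05 is 5.991
print(f"H = {H:.2f}; significant: {H > 5.991}")
```

With these illustrative numbers the hydrographers' error distributions differ clearly, so H far exceeds the critical value.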
Introduction to total- and partial-pressure measurements in vacuum systems
NASA Technical Reports Server (NTRS)
Outlaw, R. A.; Kern, F. A.
1989-01-01
An introduction to the fundamentals of total- and partial-pressure measurement in the vacuum regime (760 to 10^-16 Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.
The Evolving Field of Wound Measurement Techniques: A Literature Review.
Khoo, Rachel; Jansen, Shirley
2016-06-01
Wound healing is a complex and multifactorial process that requires a multidisciplinary approach. Methods of wound measurement have been developed and continually refined with the purpose of ensuring precision in wound measurement and documentation as the primary indicator of healing. This review aims to ascertain the efficacies of current wound area measurement techniques, and to highlight any perceived gaps in the literature so as to develop suggestions for future studies and practice. Medline, PubMed, CliniKey, and CINAHL were searched using the terms "wound/ulcer measurement techniques," "wound assessment," "digital planimetry," and "structured light." Articles between 2000 and 2014 were selected, and secondary searches were carried out by examining the references of relevant articles. Only papers written in English were included. A universal, standardized method of wound assessment has not been established or proposed. At present, techniques range from the simple to the more complex - most of which have characteristics that allow for applicability in both rural and urban settings. Techniques covered are: ruler measurements, acetate tracings/contact planimetry, digital planimetry, and structured light devices. In reviewing the literature, the precision and reliability of digital planimetry over the more conventional methods of ruler measurements and acetate tracings are consistently demonstrated. The advent and utility of the laser or structured light approach, however, is promising, has only been analyzed by a few, and opens up the scope for further evaluation of this technique.
NASA Technical Reports Server (NTRS)
Claassen, J. P.; Fung, A. K.
1977-01-01
The radar equation for incoherent scenes is derived and scattering coefficients are introduced in a systematic way to account for the complete interaction between the incident wave and the random scene. Intensity (power) and correlation techniques similar to those for coherent targets are proposed to measure all the scattering parameters. The sensitivity of the intensity technique to various practical realizations of the antenna polarization requirements is evaluated by means of computer-simulated measurements, conducted with a scattering characteristic similar to that of the sea. It is shown that for scenes satisfying reciprocity one must admit three new cross-correlation scattering coefficients in addition to the commonly measured autocorrelation coefficients.
Application of the SEM to the measurement of solar cell parameters
NASA Technical Reports Server (NTRS)
Weizer, V. G.; Andrews, C. W.
1977-01-01
Techniques are described which make use of the SEM to measure the minority carrier diffusion length and the metallurgical junction depth in silicon solar cells. The former technique permits the measurement of the true bulk diffusion length through the application of highly doped field layers to the back surfaces of the cells being investigated. It is shown that the secondary emission contrast observed in the SEM on a reverse-biased diode can depict the location of the metallurgical junction if the diode has been prepared with the proper beveled geometry. The SEM provides the required contrast and the option of high magnification, permitting the measurement of extremely shallow junction depths.
Three dimensional profile measurement using multi-channel detector MVM-SEM
NASA Astrophysics Data System (ADS)
Yoshikawa, Makoto; Harada, Sumito; Ito, Keisuke; Murakawa, Tsutomu; Shida, Soichi; Matsumoto, Jun; Nakamura, Takayuki
2014-07-01
In next-generation lithography (NGL) for the 1x nm node and beyond, three-dimensional (3D) shape measurements such as the side wall angle (SWA) and height of features on the photomask become more critical for process control. To date, AFM (Atomic Force Microscope), X-SEM (cross-section Scanning Electron Microscope) and TEM (Transmission Electron Microscope) tools have normally been used for 3D measurements; however, these techniques require time-consuming preparation and observation, and both X-SEM and TEM are destructive measurement techniques. This paper presents a technology for quick and non-destructive 3D shape analysis using the multi-channel detector MVM-SEM (Multi Vision Metrology SEM), and also reports its accuracy and precision.
Review on the importance of measurement technique in micromachine technology
NASA Astrophysics Data System (ADS)
Umeda, Akira
1996-09-01
In the initial stage of the MITI micromachine project, the committee on standardization established in the Micromachine Center recognized the importance of measurement techniques for the promotion and systemization of micromachine technology. The Micromachine Center is the organizing body for the private sectors working in the MITI micromachine project, which started in 1991. MITI stands for the Ministry of International Trade and Industry in Japan. In order to know the requirements for measurement technologies, a questionnaire was organized by the measurement working group in the committee. This talk covers the questionnaire and its results, and some research results obtained at the National Research Laboratory of Metrology, which works as a member of the project.
Proceedings of a Workshop on Assessment of Techniques for Measuring Tropospheric NxOy
NASA Technical Reports Server (NTRS)
1983-01-01
Human impact on the troposphere, particularly on the regional to global scale is assessed. One area of required research is instrumentation development, which is aimed at improving the capability to measure important trace gases and aerosols which are key species in the major atmospheric biogeochemical cycles. To focus on specific needs, the Instrumentation Workshop for NxOy Tropospheric Species was conducted. The workshop discussed measurement needs and instrument capabilities for NxOy species, including NO, NO2, HNO3, HNO2, PAN, and NO3 aerosols. The status and measurement capabilities of various techniques (operational as well as conceptual) were discussed, along with future instrument and technology needs.
Laboratory requirements for in-situ and remote sensing of suspended material
NASA Technical Reports Server (NTRS)
Kuo, C. Y.; Cheng, R. Y. K.
1978-01-01
Recommendations for laboratory and in-situ measurements required for remote sensing of suspended material are presented. This study investigates the properties of the suspended materials, factors influencing the upwelling radiance, and the various types of remote sensing techniques. Calibration and correlation procedures are given to obtain the accuracy necessary to quantify the suspended materials by remote sensing. In addition, the report presents a survey of the national need for sediment data, the agencies that deal with and require the data of suspended sediment, and a summary of some recent findings of sediment measurements.
3D shape measurement of automotive glass by using a fringe reflection technique
NASA Astrophysics Data System (ADS)
Skydan, O. A.; Lalor, M. J.; Burton, D. R.
2007-01-01
In automotive and glass making industries, there is a need for accurately measuring the 3D shapes of reflective surfaces to speed up and ensure product development and manufacturing quality by using non-contact techniques. This paper describes a technique for the measurement of non-full-field reflective surfaces of automotive glass by using a fringe reflection technique. Physical properties of the measurement surfaces do not allow us to apply optical geometries used in existing techniques for surface measurement based upon direct fringe pattern illumination. However, this property of surface reflectivity can be used to implement similar ideas from existing techniques in a new improved method. In other words, the reflective surface can be used as a mirror to reflect illuminated fringe patterns onto a screen behind. It has been found that in the case of implementing the reflective fringe technique, the phase-shift distribution depends not only on the height of the object but also on the slope at each measurement point. This requires the solving of differential equations to find the surface slope and height distributions in the x and y directions and the development of additional height reconstruction algorithms. The main focus has been on developing a mathematical model of the optical sub-system and discussing ways for its practical implementation including calibration procedures. A number of implemented image processing algorithms for system calibration and data analysis are discussed and two experimental results are given for automotive glass surfaces with different shapes and defects. The proposed technique showed the ability to provide accurate non-destructive measurement of 3D shapes of the reflective automotive glass surfaces and can be used as a key element for a glass shape quality control system on-line or in a laboratory environment.
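The abstract notes that reflection-based fringe techniques measure surface slope rather than height, so a height map must be reconstructed by integration. A minimal sketch of that step, assuming hypothetical slope fields sampled on a regular grid (real systems need calibrated geometry and more robust path-independent integration):

```python
# Reconstructing a height map z(x, y) from measured slope fields, the
# integration step the fringe-reflection technique requires. Illustrative
# only: slopes are synthetic and trapezoidal path integration is used.

def height_from_slopes(sx, sy, dx, dy):
    """Integrate slope fields sx = dz/dx, sy = dz/dy into heights z."""
    rows, cols = len(sx), len(sx[0])
    z = [[0.0] * cols for _ in range(rows)]
    for j in range(1, cols):            # integrate along the first row in x
        z[0][j] = z[0][j - 1] + 0.5 * (sx[0][j - 1] + sx[0][j]) * dx
    for i in range(1, rows):            # then down each column in y
        for j in range(cols):
            z[i][j] = z[i - 1][j] + 0.5 * (sy[i - 1][j] + sy[i][j]) * dy
    return z

# Hypothetical test surface z = 0.5*x^2, so dz/dx = x and dz/dy = 0
dx = dy = 0.1
sx = [[j * dx for j in range(5)] for _ in range(4)]
sy = [[0.0] * 5 for _ in range(4)]
z = height_from_slopes(sx, sy, dx, dy)
print(f"recovered height at x=0.4: {z[0][4]:.4f}")  # exact value is 0.0800
```

Trapezoidal integration is exact for the linear slope field here; measured data would also need noise handling and a reference point to fix the integration constant.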
Structured illumination assisted microdeflectometry with optical depth scanning capability
Lu, Sheng-Huei; Hua, Hong
2018-01-01
Microdeflectometry is a powerful noncontact tool for measuring nanometer defects on a freeform surface. However, it requires a time-consuming process to take measurements at different depths for an extended depth of field (EDOF) and lacks the surface information needed for integrating the measured gradient data to height. We propose an optical depth scanning technique to speed up the measurement process and introduce a structured illumination technique to efficiently determine the in-focus data within the 3D observation volume and provide surface orientations for reconstructing an unknown surface shape. We demonstrated 3D measurements with an equivalent surface height sensitivity of 7.21 nm and an EDOF of at least 250 μm, which is 15 times the diffraction-limited depth range. PMID:27607986
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-27
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Some common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams and complicated pattern recognition methods but just a few simple estimating formulas. From the experimental results, the proposed robot localization method is reliable and effective in an indoor environment.
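The paper's IBDMS/PLDMS formulas are not reproduced in the abstract, but the underlying idea of single-camera ranging from a reference pattern of known size can be sketched with the generic pinhole model. All numbers below are made-up illustrations:

```python
# Pinhole-camera distance estimate from a single webcam and one reference
# pattern of known size (e.g., a floor tile). A generic sketch of the idea,
# not the paper's actual IBDMS/PLDMS formulation.

def focal_length_px(known_width_m, known_distance_m, measured_width_px):
    """Calibrate: apparent pixel size of a known object at a known distance."""
    return measured_width_px * known_distance_m / known_width_m

def distance_m(known_width_m, focal_px, measured_width_px):
    """Range a same-size object from its apparent width in pixels."""
    return known_width_m * focal_px / measured_width_px

# Calibration: a 0.30 m tile appears 180 px wide when seen from 2.0 m away
f_px = focal_length_px(0.30, 2.0, 180)                             # 1200 px
# Later the same tile appears 120 px wide:
print(f"estimated distance: {distance_m(0.30, f_px, 120):.2f} m")  # 3.00 m
```

The robot's image position, extracted by background subtraction, would then be mapped through the same calibrated geometry to a floor coordinate.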
Setting up a Rayleigh Scattering Based Flow Measuring System in a Large Nozzle Testing Facility
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Gomez, Carlos R.
2002-01-01
A molecular Rayleigh scattering based air density measurement system has been built in a large nozzle testing facility at NASA Glenn Research Center. The technique depends on the light scattering by gas molecules present in air; no artificial seeding is required. Light from a single mode, continuous wave laser was transmitted to the nozzle facility by optical fiber, and light scattered by gas molecules, at various points along the laser beam, is collected and measured by photon-counting electronics. By placing the laser beam and collection optics on synchronized traversing units, the point measurement technique is made effective for surveying density variation over a cross-section of the nozzle plume. Various difficulties associated with dust particles, stray light, high noise level and vibration are discussed. Finally, a limited amount of data from an underexpanded jet are presented and compared with expected variations to validate the technique.
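Because the Rayleigh-scattered photon count rate is proportional to the gas density along the probe volume, density follows from a calibration that removes the stray-light background. A minimal two-point-calibration sketch; the count rates and reference density below are assumed for illustration, not values from the facility:

```python
# Rayleigh-scattering densitometry: counts = background + slope * density,
# so two reference conditions fix the calibration. Numbers are made up.

def calibrate(count_vac, count_amb, rho_amb):
    """Return (background, counts-per-density slope) from two references."""
    background = count_vac                      # counts at (near) zero density
    slope = (count_amb - count_vac) / rho_amb   # counts per kg/m^3
    return background, slope

def density(count, background, slope):
    return (count - background) / slope

# Hypothetical references: evacuated cell and ambient air at 1.20 kg/m^3
bg, k = calibrate(count_vac=500.0, count_amb=12500.0, rho_amb=1.20)
print(f"density: {density(6500.0, bg, k):.2f} kg/m^3")  # 0.60
```

In practice the background term must be re-checked whenever the stray-light environment changes, which is one of the difficulties the paper discusses.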
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
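The bootstrap wrapper described above — resampling lesions to attach a sampling uncertainty to a figure of merit — can be sketched generically. Here the "FoM" is a stand-in (sample standard deviation of hypothetical measured volumes), not the paper's actual NGS precision estimator:

```python
# Bootstrap over lesions to put a confidence interval on a figure of merit,
# mirroring the paper's patient-sampling bootstrap. Data are hypothetical.
import random
import statistics

def bootstrap_fom(values, fom, n_boot=2000, seed=7):
    """Return a 95% percentile bootstrap CI for fom(values)."""
    rng = random.Random(seed)
    stats = [fom([rng.choice(values) for _ in values]) for _ in range(n_boot)]
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Hypothetical metabolic tumor volumes (mL) from one segmentation method
volumes = [11.2, 9.8, 14.5, 10.1, 12.7, 8.9, 13.3, 10.8, 9.5, 12.0]
lo, hi = bootstrap_fom(volumes, statistics.stdev)
print(f"95% bootstrap CI for the FoM: [{lo:.2f}, {hi:.2f}] mL")
```

A narrowing interval as more lesions are added is the behavior the paper reports for the NGS FoMs.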
Objective Measurement of Emerging Affective Traits in Preschool Children.
ERIC Educational Resources Information Center
Adkins, Dorothy C.
An objective measure of motivation to achieve for preschool children called Gumpgookies is described. It is an objective-projective technique that requires choice between two alternate types of behavior portrayed in pictures and accompanying verbal descriptions. Gumpgookies are amoeba-like creatures who behave in ways intended to show differences…
USDA-ARS?s Scientific Manuscript database
Sensible heat flux measurements are used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. Surface renewal is a relatively inexpensive technique for sensible heat flux estimation because it requires only a fast-resp...
Nuclear magnetic resonance for measurement of body composition in infants and children
USDA-ARS?s Scientific Manuscript database
Measurement of body composition in infants and children is currently challenging. Air Displacement Plethysmography (ADP) has not been validated between ages 6 mo and 6 y and the requirement for stillness of the Dual-energy X-ray Absorptiometry (DXA) technique limits its use. Quantitative Nuclear Ma...
Study of the Open Loop and Closed Loop Oscillator Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imel, George R.; Baker, Benjamin; Riley, Tony
This report presents the progress and completion of a five-year study undertaken at Idaho State University of the measurement of very small worth reactivity samples comparing open and closed loop oscillator techniques. The study conclusively demonstrated the equivalency of the two techniques with regard to uncertainties in reactivity values, i.e., limited by reactor noise. As those results are thoroughly documented in recent publications, in this report we will concentrate on the support work that was necessary. For example, we describe in some detail the construction and calibration of a pilot rod for the closed loop system. We discuss the campaign to measure the required reactor parameters necessary for inverse-kinetics. Finally, we briefly discuss the transfer of the open loop technique to other reactor systems.
Study of the open loop and closed loop oscillator techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Benjamin; Riley, Tony; Langbehn, Adam
This paper presents some aspects of a five-year study undertaken at Idaho State University of the measurement of very small worth reactivity samples comparing open and closed loop oscillator techniques. The study conclusively demonstrated the equivalency of the two techniques with regard to uncertainties in reactivity values, i.e., limited by reactor noise. As those results are thoroughly documented in recent publications, in this paper we will concentrate on the support work that was necessary. For example, we describe in some detail the construction and calibration of a pilot rod for the closed loop system. We discuss the campaign to measure the required reactor parameters necessary for inverse-kinetics. Finally, we briefly discuss the transfer of the open loop technique to other reactor systems. (authors)
Relaxation-based distance measurements between a nitroxide and a lanthanide spin label
NASA Astrophysics Data System (ADS)
Jäger, H.; Koch, A.; Maus, V.; Spiess, H. W.; Jeschke, G.
2008-10-01
Distance measurements by electron paramagnetic resonance techniques between labels attached to biomacromolecules provide structural information on systems that cannot be crystallized or are too large to be characterized by NMR methods. However, existing techniques are limited in their distance range and sensitivity. It is anticipated by theoretical considerations that these limits could be extended by measuring the enhancement of longitudinal relaxation of a nitroxide label due to a lanthanide complex label at cryogenic temperatures. The relaxivity of the dysprosium complex with the macrocyclic ligand DOTA can be determined without direct measurements of longitudinal relaxation rates of the lanthanide and without recourse to model compounds with well defined distance by analyzing the dependence of relaxation enhancement on either temperature or concentration in homogeneous glassy frozen solutions. Relaxivities determined by the two calibration techniques are in satisfying agreement with each other. Error sources for both techniques are examined. A distance of about 2.7 nm is measured in a model compound of the type nitroxide-spacer-lanthanide complex and is found in good agreement with the distance in a modeled structure. Theoretical considerations suggest that an increase of the upper distance limit requires measurements at lower fields and temperatures.
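The dipolar relaxation enhancement underlying this method scales as r^-6, so once the relaxivity is calibrated, a distance follows from the ratio of a measured enhancement to a reference. A minimal sketch with illustrative numbers (the paper's DOTA relaxivity calibration is not reproduced):

```python
# Distance from longitudinal relaxation enhancement, using the r^-6
# dipolar scaling. Reference values below are assumed for illustration.

def distance_nm(rate_enh, ref_rate_enh, ref_distance_nm):
    """r = r_ref * (k_ref / k)^(1/6) for dipolar (r^-6) relaxation."""
    return ref_distance_nm * (ref_rate_enh / rate_enh) ** (1.0 / 6.0)

# A pair showing 1/64 of the reference enhancement sits at twice the distance:
print(f"{distance_nm(1.0, 64.0, 1.35):.2f} nm")  # 2.70 nm
```

The sixth-root dependence is why modest errors in the measured rates translate into small distance errors, but also why the upper distance limit is hard to extend.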
Noncontact temperature measurements in the microgravity fluids and transport phenomena discipline
NASA Technical Reports Server (NTRS)
Salzman, Jack
1988-01-01
The program of activities within the Microgravity Fluids and Transport Phenomena Discipline has been structured to enable the systematic pursuit of an increased understanding of low gravity fluid behavior/phenomena in a way which ensures that the results are appropriate to the widest range of applications. This structure is discussed and an overview of some of the activities which are underway is given. Of significance is the fact that in the majority of the current and planned activities, the measurement and/or control of the fluid temperature is a key experiment requirement. In addition, many of the experiments require that the temperature measurement be nonintrusive. These requirements, together with the current techniques being employed or under study to make these measurements, are also discussed.
NASA Astrophysics Data System (ADS)
Coventry, M. D.; Krites, A. M.
Measurements to determine the absolute D-D and D-7Li neutron production rates with a neutron generator running at 100-200 kV acceleration potential were performed using the threshold activation foil technique. This technique provides a clear measure of fast neutron flux and, with a suitable model, the neutron output. This approach requires little specialized equipment and is used to calibrate real-time neutron detectors and to verify neutron output. We discuss the activation foil measurement technique and describe its use in determining the relative contributions of D-D and D-7Li reactions to the total neutron yield and real-time detector response, and compare to model predictions. The D-7Li reaction produces neutrons with a continuum of energies and a sharp peak around 13.5 MeV, enabling measurement techniques beyond what D-D generators alone can perform. The ability to perform measurements with D-D neutrons alone, then add D-7Li neutrons for inelastic gamma production, presents additional measurement modalities with the same neutron source without the use of tritium. Typically, D-T generators are employed for inelastic scattering applications but have a high regulatory burden from a radiological aspect (tritium inventory, liability concerns) and are export-controlled. D-D and D-7Li generators avoid these issues completely.
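The threshold activation foil technique rests on the standard activation relation: the foil's induced activity is set by the flux, the reaction cross-section, and the build-up/decay factors. A generic textbook inversion of that relation, with illustrative (not measured) numbers:

```python
# Inverting the activation equation to get fast-neutron flux from a foil's
# measured activity. Generic textbook form of the threshold-foil method;
# all parameter values below are illustrative assumptions.
import math

def flux_from_activity(activity_bq, n_atoms, sigma_cm2, half_life_s,
                       t_irradiate_s, t_cool_s):
    """phi = A / (N * sigma * (1 - exp(-lam*t_irr)) * exp(-lam*t_cool))"""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irradiate_s)  # build-up factor
    decay = math.exp(-lam * t_cool_s)                  # cooling decay
    return activity_bq / (n_atoms * sigma_cm2 * saturation * decay)

phi = flux_from_activity(activity_bq=250.0,
                         n_atoms=1.0e22,
                         sigma_cm2=100e-27,     # 100 mb threshold cross-section
                         half_life_s=9.0 * 60,  # short-lived activation product
                         t_irradiate_s=30 * 60, # 30 min irradiation
                         t_cool_s=5 * 60)       # 5 min transfer to counter
print(f"fast-neutron flux ~ {phi:.3e} n/cm^2/s")
```

Choosing foils whose reaction thresholds bracket the D-D (~2.5 MeV) and D-7Li (~13.5 MeV) peaks is what lets the technique separate the two contributions.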
Pulsed infrared thermography for assessment of ultrasonic welds
NASA Astrophysics Data System (ADS)
McGovern, Megan E.; Rinker, Teresa J.; Sekol, Ryan C.
2018-03-01
Battery packs are a critical component in electric vehicles. During pack assembly, the battery cell tab and busbar are ultrasonically welded. The properties of the welds ultimately affect battery pack durability. Quality inspection of these welds is important to ensure durable battery packs. Pack failure is detrimental economically and could also pose a safety hazard, such as thermal runaway. Ultrasonic welds are commonly checked by measuring electrical resistance or auditing using destructive mechanical testing. Resistance measurements are quick, but sensitive to set-up changes. Destructive testing cannot represent the entire weld set. It is possible for a weak weld to satisfy the electrical requirement check, because only sufficient contact between the tabs and busbar is required to yield a low resistance measurement. Laboratory techniques are often not suitable for inline inspection, as they may be time-consuming, use couplant, or are only suitable for coupons. The complex surface geometry also poses difficulties for conventional nondestructive techniques. A method for inspection of ultrasonic welds is proposed using pulsed infrared thermography to identify discrepant welds in a manufacturing environment. Thermal measurements of welds were compared to electrical and mechanical measurements. The heat source distribution was calculated to obtain thermal images with high temporal and spatial resolution. All discrepant welds were readily identifiable using two thermographic techniques: pixel counting and the gradient image. A positive relationship between pixel count and mechanical strength was observed. The results demonstrate the potential of pulsed thermography for inline inspection, which can complement, or even replace, conventional electrical resistance measurements.
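The pixel-counting metric mentioned above can be sketched simply: pixels whose post-flash thermal contrast stays above a threshold approximate bonded weld area, so a low count flags a discrepant weld. The tiny "images" and threshold below are schematic, not real thermography data:

```python
# Pixel counting on thermal-contrast frames: count pixels above a contrast
# threshold as a proxy for bonded weld area. Data are made-up illustrations.

def pixel_count(image, threshold):
    return sum(1 for row in image for t in row if t > threshold)

# Hypothetical post-flash temperature-contrast frames (arbitrary units)
good_weld = [[5, 5, 9, 9, 5, 5],
             [5, 9, 9, 9, 9, 5],
             [5, 9, 9, 9, 9, 5],
             [5, 5, 9, 9, 5, 5]]
weak_weld = [[5, 5, 5, 5, 5, 5],
             [5, 5, 9, 9, 5, 5],
             [5, 5, 9, 5, 5, 5],
             [5, 5, 5, 5, 5, 5]]

good, weak = pixel_count(good_weld, 7), pixel_count(weak_weld, 7)
print(f"good weld: {good} px, weak weld: {weak} px, flag: {weak < 0.5 * good}")
```

The paper's observed positive relationship between pixel count and mechanical strength is what makes such a count usable as an inline pass/fail criterion.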
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm; a two order of magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line-shapes.
Evaluating diffraction based overlay metrology for double patterning technologies
NASA Astrophysics Data System (ADS)
Saravanan, Chandra Saru; Liu, Yongdong; Dasari, Prasad; Kritsun, Oleg; Volkman, Catherine; Acheta, Alden; La Fontaine, Bruno
2008-03-01
Demanding sub-45 nm node lithographic methodologies such as double patterning (DPT) pose significant challenges for overlay metrology. In this paper, we investigate scatterometry methods as an alternative approach to meet these stringent new metrology requirements. We used a spectroscopic diffraction-based overlay (DBO) measurement technique in which registration errors are extracted from specially designed diffraction targets for double patterning. The results of overlay measurements are compared to traditional bar-in-bar targets. A comparison between DBO measurements and CD-SEM measurements is done to show the correlation between the two approaches. We discuss the total measurement uncertainty (TMU) requirements for sub-45 nm nodes and compare TMU from the different overlay approaches.
NASA Technical Reports Server (NTRS)
Cosgrove, D. J.
1987-01-01
This study was carried out to develop improved methods for measuring in-vivo stress relaxation of growing tissues and to compare relaxation in the stems of four different species. When water uptake by growing tissue is prevented, in-vivo stress relaxation occurs because continued wall loosening reduces wall stress and cell turgor pressure. With this procedure one may measure the yield threshold for growth (Y), the turgor pressure in excess of the yield threshold (P-Y), and the physiological wall extensibility (phi). Three relaxation techniques proved useful: "turgor-relaxation", "balance-pressure" and "pressure-block". In the turgor-relaxation method, water is withheld from growing tissue and the reduction in turgor is measured directly with the pressure probe. This technique gives absolute values for P and Y, but requires tissue excision. In the balance-pressure technique, the excised growing region is sealed in a pressure chamber, and the subsequent reduction in water potential is measured as the applied pressure needed to return xylem sap to the cut surface. This method is simple, but only measures (P-Y), not the individual values of P and Y. In the pressure-block technique, the growing tissue is sealed into a pressure chamber, growth is monitored continuously, and just sufficient pressure is applied to the chamber to block growth. The method gives high-resolution kinetics of relaxation and does not require tissue excision, but only measures (P-Y). The three methods gave similar results when applied to the growing stems of pea (Pisum sativum L.), cucumber (Cucumis sativus L.), soybean (Glycine max (L.) Merr.) and zucchini (Cucurbita pepo L.) seedlings. Values for (P-Y) averaged between 1.4 and 2.7 bar, depending on species. Yield thresholds averaged between 1.3 and 3.0 bar. Compared with the other methods, relaxation by pressure-block was faster and exhibited dynamic changes in wall-yielding properties.
The two pressure-chamber methods were also used to measure the internal water-potential gradient (between the xylem and the epidermis) which drives water uptake for growth. For the four species it was small, between 0.3 and 0.6 bar, and so did not limit growth substantially.
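The quantities these relaxation methods estimate (phi, P, Y) are exactly the terms of the Lockhart growth equation, relative growth rate = phi * (P - Y). A small sketch: (P - Y) is taken within the 1.4-2.7 bar range the abstract reports, while the phi value is an assumed illustration:

```python
# Lockhart growth equation relating the measured wall parameters to growth.
# phi is an assumed illustrative value; P and Y are chosen so that (P - Y)
# falls in the range the study reports.

def growth_rate(phi, turgor_bar, yield_threshold_bar):
    """r = phi * (P - Y); no growth below the yield threshold."""
    return phi * max(0.0, turgor_bar - yield_threshold_bar)

phi = 0.02       # wall extensibility, 1/(bar*h), assumed
P, Y = 4.5, 2.5  # turgor and yield threshold, bar
print(f"relative growth rate: {growth_rate(phi, P, Y):.3f} 1/h")
# Stress relaxation: when water uptake stops, P falls toward Y and growth stops
print(f"rate at P = Y: {growth_rate(phi, Y, Y):.3f} 1/h")
```

This is why in-vivo relaxation halts at the yield threshold: once P has decayed to Y, wall loosening can no longer reduce turgor through growth.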
SAR calibration technology review
NASA Technical Reports Server (NTRS)
Walker, J. L.; Larson, R. W.
1981-01-01
Synthetic Aperture Radar (SAR) calibration technology including a general description of the primary calibration techniques and some of the factors which affect the performance of calibrated SAR systems are reviewed. The use of reference reflectors for measurement of the total system transfer function along with an on-board calibration signal generator for monitoring the temporal variations of the receiver to processor output is a practical approach for SAR calibration. However, preliminary error analysis and previous experimental measurements indicate that reflectivity measurement accuracies of better than 3 dB will be difficult to achieve. This is not adequate for many applications and, therefore, improved end-to-end SAR calibration techniques are required.
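The reference-reflector approach above amounts to a radiometric scaling: a deployed reflector of known radar cross-section fixes the end-to-end system constant, which then converts image power to calibrated RCS. A schematic sketch with made-up numbers, not a full SAR error analysis:

```python
# External SAR radiometric calibration against a reference reflector of
# known radar cross-section (RCS). Numbers are illustrative assumptions.
import math

def calibration_constant(p_ref, rcs_ref_m2):
    """Image power per unit RCS, from the reference reflector response."""
    return p_ref / rcs_ref_m2

def rcs_m2(p_target, k):
    return p_target / k

def dbsm(rcs):
    return 10.0 * math.log10(rcs)

# Hypothetical trihedral corner reflector: known RCS of 100 m^2
k = calibration_constant(p_ref=8.0e4, rcs_ref_m2=100.0)
sigma = rcs_m2(p_target=2.0e3, k=k)
print(f"calibrated RCS: {sigma:.2f} m^2 ({dbsm(sigma):.1f} dBsm)")
```

The 3 dB accuracy limit the review cites corresponds to roughly a factor-of-two uncertainty in this linear scaling, which is why temporal monitoring of the receiver chain is paired with the external reflectors.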
NASA Technical Reports Server (NTRS)
Barranger, John P.
1990-01-01
A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes are determined by a variation of Yamaguchi's laser-speckle strain gage technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.
Ductile film delamination from compliant substrates using hard overlayers
Cordill, M.J.; Marx, V.M.; Kirchlechner, C.
2014-01-01
Flexible electronic devices call for copper and gold metal films to adhere well to polymer substrates. Measuring the interfacial adhesion of these material systems is often challenging, requiring the formulation of different techniques and models. Presented here is a strategy to induce well defined areas of delamination to measure the adhesion of copper films on polyimide substrates. The technique utilizes a stressed overlayer and tensile straining to cause buckle formation. The described method allows one to examine the effects of thin adhesion layers used to improve the adhesion of flexible systems. PMID:25641995
A study of optical scattering methods in laboratory plasma diagnosis
NASA Technical Reports Server (NTRS)
Phipps, C. R., Jr.
1972-01-01
Electron velocity distributions are deduced along axes parallel and perpendicular to the magnetic field in a pulsed, linear Penning discharge in hydrogen by means of a laser Thomson scattering experiment. Results obtained are numerical averages of many individual measurements made at specific space-time points in the plasma evolution. Because of the high resolution in k-space and the relatively low maximum electron density (2 × 10^13 cm^-3), special techniques were required to obtain measurable scattering signals. These techniques are discussed and experimental results are presented.
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Baker, J. T.; Ward, Benjamin; Sanchez, Anthony D.; Culpepper, Mark A.; Pilkington, D.; Spring, Justin; Nelson, Douglas J.; Lu, Chunte A.
2006-08-01
A novel high accuracy, all-electronic technique for phase locking arrays of optical fibers is demonstrated. We report the first demonstration of the only electronic phase-locking technique that does not require a reference beam. The measured phase error is λ/20. Excellent phase locking has been demonstrated for fiber amplifier arrays.
MacDonald, Sharyn L S; Cowan, Ian A; Floyd, Richard A; Graham, Rob
2013-10-01
Accurate and transparent measurement and monitoring of radiologist workload is highly desirable for management of daily workflow in a radiology department and for informing decisions on department staffing needs. It offers the potential for benchmarking between departments and assessing future national workforce and training requirements. We describe a technique for quantifying, with minimum subjectivity, all the work carried out by radiologists in a tertiary department. Six broad categories of clinical activities contributing to radiologist workload were identified: reporting, procedures, trainee supervision, clinical conferences and teaching, informal case discussions, and administration related to referral forms. Time required for reporting was measured using data from the radiology information system. Other activities were observed and timed, and based on these results and extensive consultation, the time requirement and frequency of each activity were agreed on. An activity list was created to record this information and to calculate the total clinical hours required to meet the demand for radiologist services. Diagnostic reporting accounted for approximately 35% of radiologist clinical time; procedures, 23%; trainee supervision, 15%; conferences and tutorials, 14%; informal case discussions, 10%; and referral-related administration, 3%. The derived data have proven reliable for workload planning over the past 3 years. A transparent and robust method of measuring radiologists' workload has been developed, with subjective assessments kept to a minimum. The technique has value for daily workload and longer term planning and could be adapted for widespread use.
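The activity-list calculation described above amounts to summing frequency times duration over the activity categories. A minimal sketch; the frequencies and timings below are entirely hypothetical (the study derived its values from the radiology information system and observer timing):

```python
# Hypothetical activity list: (activity, events per week, minutes per event).
activities = [
    ("reporting",            400, 9.0),
    ("procedures",            60, 40.0),
    ("trainee supervision",   50, 30.0),
    ("conferences/teaching",  12, 120.0),
    ("case discussions",     100, 10.0),
    ("referral admin",        60, 5.0),
]

# Total radiologist clinical hours required per week, and each activity's share.
total_hours = sum(n * minutes for _, n, minutes in activities) / 60.0
print(f"clinical hours required per week: {total_hours:.1f}")
for name, n, minutes in activities:
    share = n * minutes / 60.0 / total_hours
    print(f"  {name:22s} {share:5.1%}")
```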
Influence and measurement of mass ablation in ICF implosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, B K; Hicks, D; Velsko, C
2007-09-05
Point design ignition capsules designed for the National Ignition Facility (NIF) currently use an x-ray-driven Be(Cu) ablator to compress the DT fuel. Ignition specifications require that the mass of unablated Be(Cu), called residual mass, be known to within 1% of the initial ablator mass when the fuel reaches peak velocity. The specifications also require that the implosion bang time, a surrogate measurement for implosion velocity, be known to +/- 50 ps RMS. These specifications guard against several capsule failure modes associated with low implosion velocity or low residual mass. Experiments designed to measure and to tune experimentally the amount of residual mass are being developed as part of the National Ignition Campaign (NIC). Tuning adjustments of the residual mass and peak velocity can be achieved using capsule and laser parameters. We currently plan to measure the residual mass using streaked radiographic imaging of surrogate tuning capsules. Alternative techniques to measure residual mass using activated Cu debris collection and proton spectrometry have also been developed. These developing techniques, together with bang time measurements, will allow us to tune ignition capsules to meet NIC specifications.
The stonehenge technique: a new method of crystal alignment for coherent bremsstrahlung experiments
NASA Astrophysics Data System (ADS)
Livingston, Kenneth
2005-08-01
In the coherent bremsstrahlung technique, a thin diamond crystal oriented correctly in an electron beam can produce photons with a high degree of linear polarization [1]. The crystal is mounted on a goniometer to control its orientation, and it is necessary to measure the angular offsets a) between the crystal axes and the goniometer axes and b) between the goniometer and the electron beam axis. A method for measuring these offsets and aligning the crystal was developed by Lohman et al. and has been used successfully in Mainz [2]. However, recent attempts to investigate new crystals have shown that this approach has limitations which become more serious at higher beam energies, where more accurate setting of the crystal angles, which scale with 1/E_beam, is required (e.g., the recent installation of a coherent bremsstrahlung facility at JLab, with E_beam = 6 GeV). This paper describes a new, more general alignment technique which overcomes these limitations. The technique is based on scans where the horizontal and vertical rotation axes of the goniometer are adjusted in a series of steps to make the normal to the crystal describe a cone of a given angle. For each step in the scan, the photon energy spectrum is measured using a tagging spectrometer, and the offsets between the electron beam and the crystal lattice are inferred from the resulting 2D plot. Using this method, it is possible to align the crystal with the beam quickly, and hence to set any desired orientation of the crystal relative to the beam. This is essential for any experiment requiring linearly polarized photons produced via coherent bremsstrahlung, and is also required for a systematic study of the channeling radiation produced by the electron beam incident on the crystal.
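The cone scan at the heart of the technique can be sketched as follows: in the small-angle limit, stepping the two goniometer axes sinusoidally makes the crystal normal sweep a cone about the beam axis. The step layout below is an illustrative assumption, not the published goniometer geometry:

```python
import math

def cone_scan(theta_c_mrad, n_steps):
    """Goniometer settings for a cone scan (small-angle sketch).

    Steps the horizontal and vertical rotation axes so that the crystal
    normal sweeps a cone of half-angle theta_c around the beam axis.
    Angles in milliradians; assumes the offsets are small enough that
    the two rotations simply add.
    """
    settings = []
    for k in range(n_steps):
        a = 2.0 * math.pi * k / n_steps
        h = theta_c_mrad * math.cos(a)   # horizontal axis setting
        v = theta_c_mrad * math.sin(a)   # vertical axis setting
        settings.append((h, v))
    return settings

for h, v in cone_scan(60.0, 8):          # 60 mrad cone, 8 steps
    print(f"h = {h:+7.2f} mrad   v = {v:+7.2f} mrad")
```

At each (h, v) setting the tagged photon spectrum would be recorded; the offsets are then read off the resulting 2D plot.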
Venturi Air-Jet Vacuum Ejector For Sampling Air
NASA Technical Reports Server (NTRS)
Hill, Gerald F.; Sachse, Glen W.; Burney, L. Garland; Wade, Larry O.
1990-01-01
Venturi air-jet vacuum ejector pump light in weight, requires no electrical power, does not contribute heat to aircraft, and provides high pumping speeds at moderate suctions. High-pressure motive gas required for this type of pump bled from compressor of aircraft engine with negligible effect on performance of engine. Used as source of vacuum for differential-absorption CO-measurement (DACOM), modified to achieve in situ measurements of CO at frequency response of 10 Hz. Provides improvement in spatial resolution and potentially leads to capability to measure turbulent flux of CO by use of eddy-correlation technique.
Strategies for In situ and Sample Return Analyses
NASA Astrophysics Data System (ADS)
Papanastassiou, D. A.
2006-12-01
There is general agreement that planetary exploration proceeds from orbital reconnaissance of a planet, to surface and near-surface in situ exploration, to sample return missions, which bring back samples for investigations in terrestrial laboratories, using the panoply of state-of-the-art analytical techniques. The applicable techniques may depend on the nature of the returned material, and complementary and multidisciplinary techniques can be used to best advantage. High precision techniques also serve to provide the "ground truth" and calibrate past and future orbital and in situ measurements on a planet. It is also recognized that returned samples may continue to be analyzed by novel techniques as the techniques become developed, in part to address specific characteristics of returned samples. There are geophysical measurements, such as those of the moment of inertia of a planet, seismic activity, and surface morphology, that depend on orbital and in situ science. Other characteristics, such as isotopic ages and isotopic compositions (e.g., initial Sr and Nd) as indicators of planetary mantle or crust evolution and sample provenance, require returned samples. In situ analyses may be useful for preliminary characterization and for optimization of sample selection for sample return. In situ analyses by Surveyor on the Moon helped identify the major element chemistry of lunar samples and the need for high precision mass spectrometry (e.g., for Rb-Sr ages, based on extremely low alkali contents). The discussion of in situ investigations vs. investigations on returned samples must be directly related to available instrumentation and to instrumentation that can be developed in the foreseeable future. The discussion of choices is not a philosophical but instead a very practical issue: what precision is required for key investigations and what is the instrumentation that meets or exceeds the required precision.
This must be applied to potential in situ instruments and to laboratory instruments. Age determinations and use of isotopes for deciphering planetary evolution are viewed as off-limits for in situ determinations, as they require: a) typically high precision mass spectrometry (at 0.01% and below); b) the determination of parent-daughter element ratios at least at the percent level; c) the measurement of coexisting minerals (for internal isochron determinations); d) low contamination (e.g., for U-Pb and Pb-Pb); and e) removal of adhering phases and contaminants not related to the samples to be analyzed. Total K-Ar age determinations are subject to fewer requirements and may be feasible in situ, but in the absence of neutron activation, as required for 39Ar-40Ar, the expected precision is at the level of ~20%, with trapped Ar in the samples introducing further uncertainty. Precision of 20% for K-Ar may suffice to address some key cratering rate uncertainties on Mars, especially as applicable to the Middle Amazonian (1). For in situ measurements, the key issues, which must be addressed for all measurements, are: what precision is required and are there instruments available at the required precision levels. These issues must be addressed many years before a mission gets defined. Low precision instruments on several in situ missions that do not address key scientific questions may in fact be more expensive, in their sum, than a sample return mission. In summary, all missions should undergo similar intense scrutiny with regard to desired science and feasibility, based on available instrumentation (with demonstrated and known capabilities) and cost. 1. P. T. Doran et al. (2004) Earth Sci. Rev. 67, 313-337.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lammertsma, A.A.; Baron, J.C.; Jones, T.
1987-06-01
The oxygen-15 steady-state technique to measure the regional cerebral metabolic rate for oxygen requires a correction for the nonextracted intravascular molecular oxygen-15. To perform this correction, an additional procedure is carried out using RBCs labeled with ¹¹CO or C¹⁵O. The previously reported correction method, however, required knowledge of the regional cerebral to large vessel hematocrit ratio. A closer examination of the underlying model eliminated this ratio. Both molecular oxygen and carbon monoxide are carried by RBCs and are therefore similarly affected by a change in hematocrit.
Lidar Measurements of Tropospheric Wind Profiles with the Double Edge Technique
NASA Technical Reports Server (NTRS)
Gentry, Bruce M.; Li, Steven X.; Korb, C. Laurence; Mathur, Savyasachee; Chen, Huailin
1998-01-01
Research has established the importance of global tropospheric wind measurements for large scale improvements in numerical weather prediction. In addition, global wind measurements provide data that are fundamental to the understanding and prediction of global climate change. These tasks are closely linked with the goals of the NASA Earth Science Enterprise and Global Climate Change programs. NASA Goddard has been actively involved in the development of direct detection Doppler lidar methods and technologies to meet the wind observing needs of the atmospheric science community. A variety of direct detection Doppler wind lidar measurements have recently been reported indicating the growing interest in this area. Our program at Goddard has concentrated on the development of the edge technique for lidar wind measurements. Implementations of the edge technique using either the aerosol or molecular backscatter for the Doppler wind measurement have been described. The basic principles have been verified in lab and atmospheric lidar wind experiments. The lidar measurements were obtained with an aerosol edge technique lidar operating at 1064 nm. These measurements demonstrated high spatial resolution (22 m) and high velocity sensitivity (rms variances of 0.1 m/s) in the planetary boundary layer (PBL). The aerosol backscatter is typically high in the PBL and the effects of the molecular backscatter can often be neglected. However, as was discussed in the original edge technique paper, the molecular contribution to the signal is significant above the boundary layer and a correction for the effects of molecular backscatter is required to make wind measurements. In addition, the molecular signal is a dominant source of noise in regions where the molecular to aerosol ratio is large since the energy monitor channel used in the single edge technique measures the sum of the aerosol and molecular signals. 
To extend the operation of the edge technique into the free troposphere we have developed a variation of the edge technique called the double edge technique. In this paper a ground based aerosol double edge lidar is described and the first measurements of wind profiles in the free troposphere obtained with this lidar will be presented.
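The Doppler relation that the edge filter converts into an intensity measurement is simple to state. A minimal sketch, assuming the 1064 nm aerosol system mentioned above (the numerical shift value is illustrative):

```python
def doppler_wind_speed(delta_f_hz, wavelength_m=1064e-9):
    """Line-of-sight wind speed from the lidar Doppler shift.

    For backscatter lidar the round-trip Doppler shift is
    delta_f = 2*v/lambda, so v = lambda*delta_f/2.
    """
    return wavelength_m * delta_f_hz / 2.0

# A ~1.88 MHz shift of the 1064 nm return corresponds to ~1 m/s along the
# line of sight; the edge filter turns this small frequency shift into a
# measurable change in transmitted intensity.
print(doppler_wind_speed(1.88e6))
```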
Forest Structure Retrieval From EcoSAR P-Band Single-Pass Interferometry
NASA Technical Reports Server (NTRS)
Osmanoglu, Batuhan; Rincon, Rafael; Lee, Seung Kuk; Fatoyinbo, Temilola; Bollian, Tobias
2017-01-01
EcoSAR is a single-pass (dual antenna) digital beamforming, P-band radar system that is designed for remote sensing of dense forest structure. Forest structure retrievals require the measurement related to the vertical dimension, for which several techniques have been developed over the years. These techniques use polarimetric and interferometric aspects of the SAR data, which can be collected using EcoSAR. In this paper we describe EcoSAR system in light of its interferometric capabilities and investigate forest structure retrieval techniques.
Michelsen, H. A.; Schulz, C.; Smallwood, G. J.; ...
2015-09-09
The understanding of soot formation in combustion processes and the optimization of practical combustion systems require in situ measurement techniques that can provide important characteristics, such as particle concentrations and sizes, under a variety of conditions. Of equal importance are techniques suitable for characterizing soot particles produced from incomplete combustion and emitted into the environment. Also, the production of engineered nanoparticles, such as carbon blacks, may benefit from techniques that allow for online monitoring of these processes.
NASA Technical Reports Server (NTRS)
Bentley, P. B.
1975-01-01
The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
NASA Technical Reports Server (NTRS)
Goodman, P.
1973-01-01
A study has been conducted to determine the feasibility of increasing sensitivity for ozone detection. The detection technique employed is the chemiluminescent reaction of ozone with a rhodamine-B impregnated disk. Previously achieved sensitivities had to be increased by a factor of about 20 to permit measurements at altitudes of 80 km. Sensitivity was increased by using a more sensitive photomultiplier tube, by increasing the gas velocity past the disk, by different disk preparation techniques, and by using reflective coatings in the disk chamber and on the uncoated side of the glass disk. Reflective coatings provided the largest sensitivity increase. Together, these changes increased sensitivity by an estimated factor of 70, more than sufficient to permit measurement of ambient ozone concentrations at altitudes of 80 km.
A technique for estimating dry deposition velocities based on similarity with latent heat flux
NASA Astrophysics Data System (ADS)
Pleim, Jonathan E.; Finkelstein, Peter L.; Clarke, John F.; Ellestad, Thomas G.
Field measurements of chemical dry deposition are needed to assess impacts and trends of airborne contaminants on the exposure of crops and unmanaged ecosystems as well as for the development and evaluation of air quality models. However, accurate measurements of dry deposition velocities require expensive eddy correlation measurements and can only be practically made for a few chemical species such as O3 and CO2. On the other hand, operational dry deposition measurements such as those used in large area networks involve relatively inexpensive standard meteorological and chemical measurements but rely on less accurate deposition velocity models. This paper describes an intermediate technique which can give accurate estimates of dry deposition velocity for chemical species whose deposition is dominated by stomatal uptake, such as O3 and SO2. This method can give results that are nearly the quality of eddy correlation measurements of trace gas fluxes at much lower cost. The concept is that bulk stomatal conductance can be accurately estimated from measurements of latent heat flux combined with standard meteorological measurements of humidity, temperature, and wind speed. The technique is tested using data from a field experiment where high quality eddy correlation measurements were made over soybeans. Over a four month period, which covered the entire growth cycle, this technique showed very good agreement with eddy correlation measurements for O3 deposition velocity.
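The core idea can be sketched with the standard resistance analogy: invert a big-leaf latent-heat relation for bulk stomatal conductance, then combine it with aerodynamic resistances. The formulas below are a simplified textbook version, not the paper's exact formulation, and all numbers are illustrative:

```python
def stomatal_conductance(LE, dq, rho_air=1.2, Lv=2.45e6):
    """Bulk stomatal (canopy) conductance, m/s, from latent heat flux.

    Simplified big-leaf inversion: LE = rho*Lv*g_s*dq, where dq is the
    specific-humidity deficit (kg/kg) between the saturated leaf interior
    and the air. Ignores aerodynamic and soil-evaporation terms.
    """
    return LE / (rho_air * Lv * dq)

def deposition_velocity(r_a, r_b, g_s, diffusivity_ratio=1.0):
    """Dry deposition velocity, m/s, from the resistance analogy.

    v_d = 1/(r_a + r_b + r_c), with the surface resistance dominated by
    stomatal uptake: r_c ~ diffusivity_ratio/g_s, where the ratio scales
    the water-vapour conductance to the gas of interest (e.g. O3, SO2).
    """
    r_c = diffusivity_ratio / g_s
    return 1.0 / (r_a + r_b + r_c)

# Illustrative midday values: LE = 300 W/m2, humidity deficit 8 g/kg.
g_s = stomatal_conductance(LE=300.0, dq=0.008)
print(f"g_s     ~ {g_s * 100:.2f} cm/s")
print(f"v_d(O3) ~ {deposition_velocity(30.0, 20.0, g_s, 1.6) * 100:.2f} cm/s")
```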
Kretzschmar, Moritz; Schilling, Thomas; Vogt, Andreas; Rothen, Hans Ulrich; Borges, João Batista; Hachenberg, Thomas; Larsson, Anders; Baumgardner, James E; Hedenstierna, Göran
2013-10-15
The mismatching of alveolar ventilation and perfusion (VA/Q) is the major determinant of impaired gas exchange. The gold standard for measuring VA/Q distributions is based on measurements of the elimination and retention of infused inert gases. Conventional multiple inert gas elimination technique (MIGET) uses gas chromatography (GC) to measure the inert gas partial pressures, which requires tonometry of blood samples with a gas that can then be injected into the chromatograph. The method is laborious and requires meticulous care. A new technique based on micropore membrane inlet mass spectrometry (MMIMS) facilitates the handling of blood and gas samples and provides nearly real-time analysis. In this study we compared MIGET by GC and MMIMS in 10 piglets: 1) 3 with healthy lungs; 2) 4 with oleic acid injury; and 3) 3 with isolated left lower lobe ventilation. The different protocols ensured a large range of normal and abnormal VA/Q distributions. Eight inert gases (SF6, krypton, ethane, cyclopropane, desflurane, enflurane, diethyl ether, and acetone) were infused; six of these gases were measured with MMIMS, and six were measured with GC. We found close agreement of retention and excretion of the gases and the constructed VA/Q distributions between GC and MMIMS, and predicted PaO2 from both methods compared well with measured PaO2. VA/Q by GC produced more widely dispersed modes than MMIMS, explained in part by differences in the algorithms used to calculate VA/Q distributions. In conclusion, MMIMS enables faster measurement of VA/Q, is less demanding than GC, and produces comparable results.
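The inert-gas relation underlying both variants of MIGET is the steady-state retention equation. A minimal sketch; the two-compartment lung and the partition coefficients below are hypothetical, chosen only to span the range from insoluble SF6 to highly soluble acetone:

```python
def retention(lam, va_q):
    """Steady-state inert-gas retention for a single lung compartment.

    R = Pa/Pv = lambda / (lambda + VA/Q), where lambda is the blood-gas
    partition coefficient. This relation underlies both the GC and the
    MMIMS variants of MIGET.
    """
    return lam / (lam + va_q)

def mixed_retention(lam, compartments):
    """Perfusion-weighted retention over a VA/Q distribution.

    compartments: list of (blood-flow fraction, VA/Q) pairs.
    """
    return sum(q * retention(lam, v) for q, v in compartments)

# Hypothetical two-compartment lung: 30% of flow to shunt-like units.
lung = [(0.3, 0.05), (0.7, 1.0)]
for gas, lam in [("SF6", 0.005), ("ethane", 0.09), ("acetone", 300.0)]:
    print(f"{gas:8s} R = {mixed_retention(lam, lung):.3f}")
```

Recovering the full VA/Q distribution is the corresponding inverse problem: given measured retentions of several gases with different lambda, find the flow fractions that best reproduce them.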
Assessing estimation techniques for missing plot observations in the U.S. forest inventory
Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield
2012-01-01
The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...
Evaluation of a new arterial pressure-based cardiac output device requiring no external calibration
Prasser, Christopher; Bele, Sylvia; Keyl, Cornelius; Schweiger, Stefan; Trabold, Benedikt; Amann, Matthias; Welnhofer, Julia; Wiesenack, Christoph
2007-01-01
Background Several techniques have been discussed as alternatives to the intermittent bolus thermodilution cardiac output (COPAC) measurement by the pulmonary artery catheter (PAC). However, these techniques usually require a central venous line, an additional catheter, or a special calibration procedure. A new arterial pressure-based cardiac output (COAP) device (FloTrac™, Vigileo™; Edwards Lifesciences, Irvine, CA, USA) only requires access to the radial or femoral artery using a standard arterial catheter and does not need an external calibration. We validated this technique in critically ill patients in the intensive care unit (ICU) using COPAC as the method of reference. Methods We studied 20 critically ill patients, aged 16 to 74 years (mean, 55.5 ± 18.8 years), who required both arterial and pulmonary artery pressure monitoring. COPAC measurements were performed at least every 4 hours and calculated as the average of 3 measurements, while COAP values were taken immediately at the end of bolus determinations. Accuracy of measurements was assessed by calculating the bias and limits of agreement using the method described by Bland and Altman. Results A total of 164 coupled measurements were obtained. Absolute values of COPAC ranged from 2.80 to 10.80 l/min (mean 5.93 ± 1.55 l/min). The bias and limits of agreement between COPAC and COAP for unequal numbers of replicates was 0.02 ± 2.92 l/min. The percentage error between COPAC and COAP was 49.3%. The bias between percentage changes in COPAC (ΔCOPAC) and percentage changes in COAP (ΔCOAP) for consecutive measurements was -0.70% ± 32.28%. COPAC and COAP showed a Pearson correlation coefficient of 0.58 (p < 0.01), while the correlation coefficient between ΔCOPAC and ΔCOAP was 0.46 (p < 0.01). Conclusion Although the COAP algorithm shows a minimal bias with COPAC over a wide range of values in an inhomogeneous group of critically ill patients, the scattering of the data remains relatively wide.
Therefore, the algorithm used (V 1.03) failed to demonstrate acceptable accuracy in comparison with the clinical standard of cardiac output determination. PMID:17996086
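The Bland-Altman analysis used in the study can be sketched in a few lines. The paired readings below are hypothetical, not the study data (which reported a bias of 0.02 l/min and a percentage error of 49.3%):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman bias, limits of agreement, and percentage error.

    a, b: paired cardiac-output readings (e.g. COPAC vs COAP), l/min.
    Percentage error = 1.96 * SD of the differences divided by the mean
    cardiac output, expressed in percent.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pe = 1.96 * sd / mean([(x + y) / 2 for x, y in zip(a, b)]) * 100
    return bias, loa, pe

# Hypothetical paired readings, l/min:
copac = [4.1, 5.0, 6.2, 7.8, 5.5, 4.9]
coap  = [4.5, 4.6, 6.9, 7.1, 6.0, 4.2]
bias, loa, pe = bland_altman(copac, coap)
print(f"bias {bias:+.2f} l/min, LoA {loa[0]:.2f} to {loa[1]:.2f} l/min, PE {pe:.1f}%")
```

A percentage error below about 30% is the usual acceptance criterion for cardiac output monitors, which is why the study's 49.3% led to a negative conclusion.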
Chemiluminescence and bioluminescence microbe detection
NASA Technical Reports Server (NTRS)
Taylor, R. E.; Chappelle, E.; Picciolo, G. L.; Jeffers, E. L.; Thomas, R. R.
1978-01-01
Automated biosensors for online use with the NASA Water Monitoring System employ bioluminescence and chemiluminescence techniques to rapidly measure microbial contamination of water samples. The system eliminates standard laboratory procedures requiring 24 hours or longer.
Progress in the Determination of the Earth's Gravity Field
NASA Technical Reports Server (NTRS)
Rapp, Richard H. (Editor)
1989-01-01
Topics addressed include: global gravity model development; methods for approximation of the gravity field; gravity field measuring techniques; global gravity field applications and requirements in geophysics and oceanography; and future gravity missions.
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethod reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41 mm) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared to the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
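The agreement statistic used above can be sketched directly from its definition: Lin's concordance correlation coefficient penalizes both scatter and systematic offset between raters. The TT-TG values below are hypothetical:

```python
from statistics import mean, pvariance

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two raters.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Values > 0.75 were taken as excellent agreement in the study.
    """
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (pvariance(x) + pvariance(y) + (mx - my) ** 2)

# Hypothetical TT-TG distances (mm) from the two measuring techniques:
radiologist  = [12.1, 14.7, 17.9, 10.3, 20.5, 15.2]
orthopaedist = [12.6, 15.1, 18.4, 10.9, 21.2, 15.8]
print(f"CCC = {concordance_cc(radiologist, orthopaedist):.3f}")
```

Unlike the Pearson correlation, the CCC drops below 1 for a constant offset between raters, which is exactly the kind of systematic disagreement an intermethod comparison needs to detect.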
NASA Astrophysics Data System (ADS)
Diekmann, Christian; Troebs, Michael; Steier, Frank; Bykov, Iouri; Heinzel, Gerhard; Danzmann, Karsten
The space-based interferometric gravitational-wave detector Laser Interferometer Space Antenna (LISA) requires interferometry with subpicometer and nanoradian sensitivity in the frequency range between 3 mHz and 1 Hz. Currently, a first prototype of the optical bench for LISA is being designed. We report on a pre-experiment with the aim to demonstrate the required sensitivities and to thoroughly characterise the equipment. For this purpose, a quasi-monolithic optical setup has been built with two Mach-Zehnder interferometers (MZI) on an optical bench made of Zerodur. In a first step the relative length change between these two MZI will be measured with a heterodyne modulation scheme in the kHz-range and the angle between two laser beams will be read out via quadrant photodiodes and a technique called differential wavefront sensing. These techniques have already been used for the LISA predecessor mission LISA Pathfinder and their sensitivity needs to be further improved to fulfill the requirements of the LISA mission. We describe the experiment and the characterization of the basic components. Measurements of the length and angular noise will be presented.
Gelfusa, M; Gaudio, P; Malizia, A; Murari, A; Vega, J; Richetta, M; Gonzalez, S
2014-06-01
Recently, surveying large areas in an automatic way, for early detection of both harmful chemical agents and forest fires, has become a strategic objective of defence and public health organisations. The lidar and DIAL techniques are widely recognized as a cost-effective alternative for monitoring large portions of the atmosphere. To maximize the effectiveness of the measurements and to guarantee reliable monitoring of large areas, new data analysis techniques are required. In this paper, an original tool, the Universal Multi Event Locator, is applied to the problem of automatically identifying the time location of peaks in lidar and DIAL measurements for environmental physics applications. This analysis technique improves various aspects of the measurements, ranging from resilience to drift in the laser sources to increased system sensitivity. The method is also fully general and purely software-based, and can therefore be applied to a large variety of problems without any additional cost. The potential of the proposed technique is exemplified with the help of data from various instruments acquired during several experimental campaigns in the field.
Comandini, A; Malewicki, T; Brezinsky, K
2012-03-01
The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly influenced by the limited understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry in combustion devices that produces the PM emissions. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed constitutes an optimal, though not always feasible, approach. Nevertheless, a detailed description of a new online sampling system is provided which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of the single species of around 7% for the offline technique. Although both techniques proved to be suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique represents a valuable solution in those cases where the online technique cannot be implemented.
[Technological innovations in radiation oncology require specific quality controls].
Lenaerts, E; Mathot, M
2014-01-01
During the last decade, the field of radiotherapy has benefited from major technological innovations that continuously improve treatment efficacy and patient comfort and safety. These mainly concern imaging techniques, such as 4D CT scanning that records the respiratory phases; on-board imaging on linear accelerators, which ensures accurate positioning of the patient for treatment; and irradiation techniques that very significantly reduce the duration of treatment sessions without compromising the quality of the treatment plan, including IMRT (Intensity Modulated Radiation Therapy) and VMAT (Volumetric Modulated Arc Therapy). In this context of rapid technological change, it is the responsibility of medical physicists to regularly and precisely verify the correct functioning of new techniques to ensure patient safety. This requires the use of the specific quality control equipment best suited to these new techniques. We briefly describe the Delta4 measurement system used to control the individualized treatment plan of each patient treated with the VMAT technique.
Xavier, Pedro; Ayres-De-Campos, Diogo; Reynolds, Ana; Guimarães, Mariana; Costa-Santos, Cristina; Patrício, Belmiro
2005-09-01
Modifications to the classic cesarean section technique described by Pfannenstiel and Kerr have been proposed in the last few years. The objective of this trial was to compare intraoperative and short-term postoperative outcomes between the Pfannenstiel-Kerr and the modified Misgav-Ladach (MML) techniques for cesarean section. This prospective randomized trial involved 162 patients undergoing transverse lower uterine segment cesarean section. Patients were allocated to one of two arms: 88 to the MML technique and 74 to the Pfannenstiel-Kerr technique. Main outcome measures were defined as the duration of surgery, analgesic requirements, and bowel restitution by the second postoperative day. Additional outcomes evaluated were febrile morbidity, postoperative antibiotic use, postpartum endometritis, and wound complications. Student's t, Mann-Whitney, and chi-square tests were used for statistical analysis, with p < 0.05 considered statistically significant. No differences between groups were noted in analgesic requirements, bowel restitution by the second postoperative day, febrile morbidity, antibiotic requirements, endometritis, or wound complications. The MML technique took on average 12 min less to complete (p = 0.001). The MML technique is faster to perform and similar in terms of febrile morbidity, time to bowel restitution, and need for postoperative medications. It is likely to be more cost-effective.
Computer-assisted techniques to evaluate fringe patterns
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1992-01-01
Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. Availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one dimensional and two dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
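As a rough illustration of the spatial-heterodyning (Fourier-transform) idea described above, and not the authors' implementation, the following sketch recovers a wrapped phase from a one-dimensional fringe signal with a known carrier frequency; the synthetic signal, carrier value, and filter width are all invented for the example:

```python
import numpy as np

def carrier_fringe_phase(fringe, carrier_freq):
    """Fourier-transform fringe analysis in 1-D: isolate the sideband at
    the carrier frequency, inverse-transform it, remove the carrier
    ramp, and take the argument to obtain the wrapped phase."""
    n = len(fringe)
    spectrum = np.fft.fft(fringe)
    freqs = np.fft.fftfreq(n)
    # Band-pass filter: keep only a window around the +carrier sideband
    mask = np.abs(freqs - carrier_freq) < carrier_freq / 2
    analytic = np.fft.ifft(np.where(mask, spectrum, 0))
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_freq * x))

# Synthetic fringes: carrier plus a slowly varying phase term
x = np.arange(1024)
true_phase = 0.5 * np.sin(2 * np.pi * x / 1024)
fringe = np.cos(2 * np.pi * 0.125 * x + true_phase)
recovered = carrier_fringe_phase(fringe, 0.125)
```

Because the modulation here is small and smooth, the recovered phase needs no unwrapping; a real interferogram would additionally require a 2-D transform and phase unwrapping, as discussed in the paper.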
Measurement Consistency from Magnetic Resonance Images
Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.
2010-01-01
Rationale and Objectives: In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods: We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements of head and neck structures from magnetic resonance images (MRI), obtained by two raters. Results: Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion: In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs best. PMID:18790405
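To make two of the reviewed techniques concrete, here is a minimal sketch of the Bland-Altman bias with 95% limits of agreement and the paired t statistic; the rater data below are invented, not the study's MRI measurements:

```python
import numpy as np

def bland_altman(rater1, rater2):
    """Bland-Altman statistics for two raters measuring the same items:
    mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences)."""
    d = np.asarray(rater1, float) - np.asarray(rater2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def paired_t_statistic(rater1, rater2):
    """t statistic of the paired t-test for zero mean difference."""
    d = np.asarray(rater1, float) - np.asarray(rater2, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Hypothetical length measurements (mm) of the same 8 structures
r1 = np.array([41.2, 38.5, 44.1, 40.0, 39.7, 42.3, 43.0, 37.9])
r2 = np.array([41.0, 38.9, 43.8, 40.4, 39.5, 42.6, 43.3, 38.2])
bias, (lo, hi) = bland_altman(r1, r2)
t = paired_t_statistic(r1, r2)
```

As the abstract notes, the Bland-Altman result is mainly a visualization aid (the limits are plotted against the pairwise means), whereas the t statistic gives a numerical test of bias.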
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
1995-01-01
This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
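The vector versus scalar averaging comparison discussed in the study can be sketched as follows (synthetic data, not the SLF records): in steady flow the two averages coincide, while direction fluctuations lower the vector average below the scalar one.

```python
import numpy as np

def scalar_average(speeds):
    """Scalar average: mean of the speed magnitudes."""
    return float(np.mean(speeds))

def vector_average(speeds, directions_deg):
    """Vector average: average the u/v wind components, then take the
    magnitude of the mean vector (meteorological from-direction
    convention assumed)."""
    theta = np.radians(directions_deg)
    u = -speeds * np.sin(theta)
    v = -speeds * np.cos(theta)
    return float(np.hypot(u.mean(), v.mean()))

rng = np.random.default_rng(0)
speeds = np.full(600, 5.0)                       # steady 5 m/s flow
steady_dirs = np.full(600, 270.0)                # constant westerly
gusty_dirs = 270.0 + rng.normal(0.0, 30.0, 600)  # fluctuating direction

steady_vec = vector_average(speeds, steady_dirs)
gusty_vec = vector_average(speeds, gusty_dirs)
```

For the reasonably steady regimes the study describes, the two averages differ negligibly, which is consistent with its finding of no significant difference between them.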
"Open-Box" Approach to Measuring Fluorescence Quenching Using an iPad Screen and Digital SLR Camera
ERIC Educational Resources Information Center
Koenig, Michael H.; Yi, Eun P.; Sandridge, Matthew J.; Mathew, Alexander S.; Demas, James N.
2015-01-01
Fluorescence quenching is an analytical technique and a common undergraduate laboratory exercise. Unfortunately, a typical quenching experiment requires the use of an expensive fluorometer that measures the relative fluorescence intensity of a single sample in a closed compartment unseen by the experimenter. To overcome these shortcomings, we…
Why You Should Believe Cold Fusion is Real
NASA Astrophysics Data System (ADS)
Storms, Edmund K.
2005-03-01
Nuclear reactions are now claimed to be initiated in certain solid materials at an energy too low to overcome the Coulomb barrier. These reactions include fusion, accelerated radioactive decay, and transmutation involving heavy elements. Evidence is based on hundreds of measurements of anomalous energy using a variety of calorimeters at levels far in excess of error, measurement of nuclear products using many normally accepted techniques, observations of many patterns of behavior common to all studies, measurement of anomalous energetic emissions using accepted techniques, and an understanding of most variables that have hindered reproducibility in the past. This evidence can be found at www.LENR-CANR.org. Except for an accepted theory, the claims have met all the requirements normally imposed before a new idea is accepted by conventional science, yet rejection continues. How long can the US afford to reject a clean and potentially cheap source of energy, especially when other nations are attempting to develop this energy and the need for such an energy source is so great?
OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.
Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L
2017-10-05
The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutron measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed in 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Laser tracker orientation in confined space using on-board targets
NASA Astrophysics Data System (ADS)
Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui
2016-08-01
This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.
Pitfalls in the measurement of muscle mass: a need for a reference standard.
Buckinx, Fanny; Landi, Francesco; Cesari, Matteo; Fielding, Roger A; Visser, Marjolein; Engelke, Klaus; Maggi, Stefania; Dennison, Elaine; Al-Daghri, Nasser M; Allepaerts, Sophie; Bauer, Jurgen; Bautmans, Ivan; Brandi, Maria Luisa; Bruyère, Olivier; Cederholm, Tommy; Cerreta, Francesca; Cherubini, Antonio; Cooper, Cyrus; Cruz-Jentoft, Alphonso; McCloskey, Eugene; Dawson-Hughes, Bess; Kaufman, Jean-Marc; Laslop, Andrea; Petermans, Jean; Reginster, Jean-Yves; Rizzoli, René; Robinson, Sian; Rolland, Yves; Rueda, Ricardo; Vellas, Bruno; Kanis, John A
2018-04-01
All proposed definitions of sarcopenia include the measurement of muscle mass, but the techniques and threshold values used vary. Indeed, the literature does not establish consensus on the best technique for measuring lean body mass. Thus, the objective measurement of sarcopenia is hampered by limitations intrinsic to assessment tools. The aim of this study was to review the methods used to assess muscle mass and to reach consensus on the development of a reference standard. Literature reviews were performed by members of the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis working group on frailty and sarcopenia. Face-to-face meetings were organized for the whole group to make amendments and discuss further recommendations. A wide range of techniques can be used to assess muscle mass. Cost, availability, and ease of use can determine whether a technique is better suited to clinical practice or more useful for research. No single technique subserves all requirements, but dual-energy X-ray absorptiometry could be considered as a reference standard (but not a gold standard) for measuring muscle lean body mass. Based on feasibility, accuracy, safety, and low cost, dual-energy X-ray absorptiometry can be considered as the reference standard for measuring muscle mass. © 2018 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
Fabrication of glass gas cells for the HALOE and MAPS satellite experiments
NASA Technical Reports Server (NTRS)
Sullivan, E. M.; Walthall, H. G.
1984-01-01
The Halogen Occultation Experiment (HALOE) and the Measurement of Air Pollution from Satellites (MAPS) experiment are satellite-borne experiments which measure trace constituents in the Earth's atmosphere. The instruments which obtain the data for these experiments are based on the gas filter correlation radiometer measurement technique. In this technique, small samples of the gases of interest are encapsulated in glass cylinders, called gas cells, which act as very selective optical filters. This report describes the techniques employed in the fabrication of the gas cells for the HALOE and MAPS instruments. Details of the method used to fuse the sapphire windows (required for IR transmission) to the glass cell bodies are presented along with detailed descriptions of the jigs and fixtures used during the assembly process. The techniques and equipment used for window inspection and for pairing the HALOE windows are discussed. Cell body materials and the steps involved in preparing the cell bodies for the glass-to-sapphire fusion process are given.
Two microstrip arrays for interferometric SAR applications
NASA Technical Reports Server (NTRS)
Huang, J.
1993-01-01
Two types of C-band aircraft interferometric Synthetic Aperture Radar (SAR) are being developed at JPL to measure ocean wave characteristics. Each type requires two identical antennas, each having a long rectangular aperture to radiate fan-shaped beam(s). One type of these radars requires each of its antennas to radiate a broadside beam that will measure the target's cross-track velocity. The other type, having each of its antennas radiate two off-broadside pointed beams, will allow the measurement of both the cross-track and the along-track velocities of the target. Because flush mounting of the antenna on the aircraft fuselage is desirable, a microstrip patch array was selected for these interferometric SAR antennas. To meet the radar system requirement, each array needs a total of 76 microstrip patches, arranged in a 38 x 2 rectangular aperture with a physical size of 1.6 m x 16.5 cm. To minimize the insertion loss and physical real estate of this relatively long array, a combined series/parallel feed technique is used. Techniques to suppress cross-pol radiation and to effectively utilize the RF power are also implemented. A cross-pol level of lower than -30 dB from the co-pol peak and a low insertion loss of 0.36 dB have been achieved for both types of arrays. For the type of radar that requires two off-broadside pointed beams, a simple phasing technique is used to achieve this dual-beam capability with adequate antenna gain (20 dBi) and sidelobe level (-14 dB). Both radar arrays have been flight tested on aircraft with excellent antenna performance demonstrated.
Slit Function Measurement of An Imaging Spectrograph Using Fourier Transform Techniques
NASA Technical Reports Server (NTRS)
Park, Hongwoo; Swimyard, Bruce; Jakobsen, Peter; Moseley, Harvey; Greenhouse, Matthew
2004-01-01
Knowledge of a spectrograph's slit function is necessary to interpret the unresolved lines in an observed spectrum. A theoretical slit function can be calculated from the sizes of the entrance slit, the detector aperture when it functions as an exit slit, the dispersion characteristic of the disperser, and the point spread function of the spectrograph. A measured slit function is preferred to the theoretical one for the correct interpretation of spectral data. In a scanning spectrometer with a single exit slit, the slit function is easily measured. In a fixed-grating (or fixed-disperser) spectrograph, the slit function can be measured by illuminating the entrance slit with near-monochromatic light from a pre-monochromator or a tunable laser and varying the wavelength of the incident light. Even though the latter technique has been used successfully for slit function measurements, it is very laborious and would be prohibitive for an imaging spectrograph or a multi-object spectrograph with a large field of view. We explore an alternative technique that is manageable for these measurements. In the proposed technique, the imaging spectrograph is used as the detector of a Fourier transform spectrometer. This method can be applied not only to an IR spectrograph but potentially also to a visible/UV spectrograph, including a wedge filter spectrograph. The technique requires a blackbody source of known temperature and a bolometer to characterize the interferometer part of the Fourier transform spectrometer. This paper describes the alternative slit function measurement technique using a Fourier transform spectrometer.
Gosálbez, J; Wright, W M D; Jiang, W; Carrión, A; Genovés, V; Bosch, I
2018-08-01
In this paper, the study of frequency-dependent ultrasonic attenuation in strongly heterogeneous cementitious materials is addressed. To accurately determine the attenuation over a wide frequency range, it is necessary to have suitable excitation techniques. We have analysed two kinds of ultrasound techniques: contact ultrasound and airborne non-contact ultrasound. The mathematical formulation for frequency-dependent attenuation has been established, revealing that the two techniques can achieve similar results but require different, technique-specific calibration processes. In particular, the airborne non-contact technique suffers high attenuation due to energy losses at the air-material interfaces. Thus, its bandwidth is limited to low frequencies, but it does not require physical contact between transducer and specimen. In contrast, the classical contact technique can manage higher frequencies, but the measurement depends on the pressure between the transducer and the specimen. Cement specimens have been tested with both techniques and the frequency dependence of the attenuation has been estimated. Similar results were achieved over the overlapping bandwidth, and it has been demonstrated that the airborne non-contact ultrasound technique could be a viable alternative to the classical contact technique. Copyright © 2018 Elsevier B.V. All rights reserved.
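A common way to estimate frequency-dependent attenuation, regardless of the excitation technique, is the spectral ratio of two propagation paths of different length, which cancels the transducer and coupling response common to both measurements. The sketch below is a generic illustration of that idea, not the paper's calibration procedure; the two-thickness setup and the flat 50 dB/m material are invented:

```python
import numpy as np

def attenuation_db_per_m(spec_short, spec_long, d_short, d_long):
    """Attenuation coefficient alpha(f) in dB/m from amplitude spectra
    measured over a short and a long propagation path; responses common
    to both paths cancel in the spectral ratio."""
    ratio = np.abs(spec_short) / np.abs(spec_long)
    return 20.0 * np.log10(ratio) / (d_long - d_short)

# Synthetic check: a material with a flat 50 dB/m attenuation
freqs = np.linspace(0.1e6, 1.0e6, 10)   # Hz, for labelling only
alpha_true = 50.0                        # dB/m
d_short, d_long = 0.02, 0.05             # m
spec_short = 10.0 ** (-alpha_true * d_short / 20.0) * np.ones_like(freqs)
spec_long = 10.0 ** (-alpha_true * d_long / 20.0) * np.ones_like(freqs)
alpha_est = attenuation_db_per_m(spec_short, spec_long, d_short, d_long)
```

With real spectra the ratio would vary with frequency, giving the frequency-dependent attenuation curve within the usable bandwidth of each technique.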
Visible near-diffraction-limited lucky imaging with full-sky laser-assisted adaptive optics
NASA Astrophysics Data System (ADS)
Basden, A. G.
2014-08-01
Both lucky imaging techniques and adaptive optics require natural guide stars, limiting sky-coverage, even when laser guide stars are used. Lucky imaging techniques become less successful on larger telescopes unless adaptive optics is used, as the fraction of images obtained with well-behaved turbulence across the whole telescope pupil becomes vanishingly small. Here, we introduce a technique combining lucky imaging techniques with tomographic laser guide star adaptive optics systems on large telescopes. This technique does not require any natural guide star for the adaptive optics, and hence offers full sky-coverage adaptive optics correction. In addition, we introduce a new method for lucky image selection based on residual wavefront phase measurements from the adaptive optics wavefront sensors. We perform Monte Carlo modelling of this technique, and demonstrate I-band Strehl ratios of up to 35 per cent in 0.7 arcsec mean seeing conditions with 0.5 m deformable mirror pitch and full adaptive optics sky-coverage. We show that this technique is suitable for use with lucky imaging reference stars as faint as magnitude 18, and fainter if more advanced image selection and centring techniques are used.
Underwater 3D Surface Measurement Using Fringe Projection Based Scanning Devices
Bräuer-Burchardt, Christian; Heinze, Matthias; Schmidt, Ingo; Kühmstedt, Peter; Notni, Gunther
2015-01-01
In this work we show the principle of optical 3D surface measurements based on the fringe projection technique for underwater applications. The challenges of underwater use of this technique are shown and discussed in comparison with the classical application. We describe an extended camera model which takes refraction effects into account as well as a proposal of an effective, low-effort calibration procedure for underwater optical stereo scanners. This calibration technique combines a classical air calibration based on the pinhole model with ray-based modeling and requires only a few underwater recordings of an object of known length and a planar surface. We demonstrate a new underwater 3D scanning device based on the fringe projection technique. It has a weight of about 10 kg and the maximal water depth for application of the scanner is 40 m. It covers an underwater measurement volume of 250 mm × 200 mm × 120 mm. The surface of the measurement objects is captured with a lateral resolution of 150 μm in a third of a second. Calibration evaluation results are presented and examples of first underwater measurements are given. PMID:26703624
NASA Technical Reports Server (NTRS)
Bowhill, S. A. (Editor); Edwards, B. (Editor)
1984-01-01
Various topics related to middle atmosphere research were discussed, including meteorological and aeronomical requirements for mesosphere-stratosphere-troposphere (MST) radar networks, the general circulation of the middle atmosphere, the interpretation of radar returns from clear air, spaced-antenna and Doppler techniques for velocity measurement, and techniques for the study of gravity waves and turbulence.
Noninterferometric Two-Dimensional Fourier-Transform Spectroscopy of Multilevel Systems
NASA Astrophysics Data System (ADS)
Davis, J. A.; Dao, L. V.; Do, M. T.; Hannaford, P.; Nugent, K. A.; Quiney, H. M.
2008-06-01
We demonstrate a technique that determines the phase of the photon-echo emission from spectrally resolved intensity data without requiring phase-stabilized input pulses. The full complex polarization of the emission is determined from spectral intensity measurements. The validity of this technique is demonstrated using simulated data, and is then applied to the analysis of two-color data obtained from the light-harvesting molecule lycopene.
Stratospheric measurement requirements and satellite-borne remote sensing capabilities
NASA Technical Reports Server (NTRS)
Carmichael, J. J.; Eldridge, R. G.; Frey, E. J.; Friedman, E. J.; Ghovanlou, A. H.
1976-01-01
The capabilities of specific NASA remote sensing systems to provide appropriate measurements of stratospheric parameters for potential user needs were assessed. This assessment was used to evaluate the capabilities of the remote sensing systems to perform global monitoring of the stratosphere. The following conclusions were reached: (1) The performance of current remote stratospheric sensors, in some cases, compares quite well with identified measurement requirements. Their ability to measure other species has not been demonstrated. (2) None of the current in-situ methods has the capability to satisfy the requirements for global monitoring and the temporal constraints derived from the user-needs portion of the study. (3) Existing non-remote techniques will continue to play an important role in stratospheric investigations, both for corroboration of remotely collected data and in the evolutionary development of future remote sensors.
Modulation Transfer Function (MTF) measurement techniques for lenses and linear detector arrays
NASA Technical Reports Server (NTRS)
Schnabel, J. J., Jr.; Kaishoven, J. E., Jr.; Tom, D.
1984-01-01
The application is the determination of the Modulation Transfer Function (MTF) for linear detector arrays. A system setup requires knowledge of the MTF of the imaging lens. The procedure for this measurement using standard optical laboratory equipment is described. Given this information, various possible approaches to MTF measurement for linear arrays are described. The knife-edge method is then described in detail.
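The knife-edge method can be sketched numerically: differentiate the measured edge spread function (ESF) to obtain the line spread function (LSF), then take the normalised magnitude of its Fourier transform. The Gaussian-blurred synthetic edge below is illustrative only, not data from the report:

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Knife-edge MTF: the derivative of the edge spread function gives
    the line spread function; its Fourier-transform magnitude,
    normalised to unity at zero frequency, is the MTF."""
    lsf = np.gradient(np.asarray(edge_profile, float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic edge: a step blurred by a Gaussian of sigma = 2 samples,
# whose analytic MTF is exp(-2 * (pi * sigma * f)**2), f in cycles/sample
x = np.arange(-128, 128)
sigma = 2.0
esf = np.cumsum(np.exp(-x**2 / (2 * sigma**2)))
esf /= esf[-1]
mtf = mtf_from_edge(esf)
```

In practice the measured ESF is noisy, so the derivative step usually needs smoothing or fitting before the transform; the report's detailed procedure addresses such practicalities.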
Surface and downhole shear wave seismic methods for thick soil site investigations
Hunter, J.A.; Benjumea, B.; Harris, J.B.; Miller, R.D.; Pullan, S.E.; Burns, R.A.; Good, R.L.
2002-01-01
Shear wave velocity-depth information is required for predicting the ground motion response to earthquakes in areas where significant soil cover exists over firm bedrock. Rather than estimating this critical parameter, it can be reliably measured using a suite of surface (non-invasive) and downhole (invasive) seismic methods. Shear wave velocities from surface measurements can be obtained using SH refraction techniques. Array lengths as large as 1000 m and depth of penetration to 250 m have been achieved in some areas. High resolution shear wave reflection techniques utilizing the common midpoint method can delineate the overburden-bedrock surface as well as reflecting boundaries within the overburden. Reflection data can also be used to obtain direct estimates of fundamental site periods from shear wave reflections without the requirement of measuring average shear wave velocity and total thickness of unconsolidated overburden above the bedrock surface. Accurate measurements of vertical shear wave velocities can be obtained using a seismic cone penetrometer in soft sediments, or with a well-locked geophone array in a borehole. Examples from thick soil sites in Canada demonstrate the type of shear wave velocity information that can be obtained with these geophysical techniques, and show how these data can be used to provide a first look at predicted ground motion response for thick soil sites. © 2002 Published by Elsevier Science Ltd.
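The conventional estimate of fundamental site period that the reflection approach avoids is the quarter-wavelength formula T0 = 4H/Vs, which requires the overburden thickness H and average shear wave velocity Vs. A trivial sketch (the 100 m / 400 m/s values are illustrative, not from the paper):

```python
def fundamental_site_period(thickness_m, avg_shear_velocity_mps):
    """Quarter-wavelength estimate of the fundamental period T0 (s) of a
    soil column of thickness H (m) over firm bedrock: T0 = 4 * H / Vs."""
    return 4.0 * thickness_m / avg_shear_velocity_mps

t0 = fundamental_site_period(100.0, 400.0)  # 100 m of soil, Vs = 400 m/s
```

The appeal of the reflection method described in the abstract is precisely that it yields the site period directly, without separately measuring H and Vs.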
Pfeiffer, Tobias; Weber, Stefan; Klier, Jens; Bachtler, Sebastian; Molter, Daniel; Jonuscheit, Joachim; Von Freymann, Georg
2018-05-14
In many industrial fields, such as the automotive and painting industries, the thickness of thin layers is a crucial parameter for quality control. Hence, the demand for thickness measurement techniques continuously grows. In particular, non-destructive and contact-free terahertz techniques give access to a wide range of thickness determination applications. However, terahertz time-domain spectroscopy based systems perform the measurement in a sampling manner, requiring fixed distances between measurement head and sample. In harsh industrial environments, vibrations of sample and measurement head distort the time base and decrease measurement accuracy. We present an interferometer-based vibration correction for terahertz time-domain measurements, able to reduce thickness distortion by one order of magnitude for vibrations with frequencies up to 100 Hz and amplitudes up to 100 µm. We further verify the experimental results by numerical calculations and find very good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, E.F.; Yule, T.J.
1984-01-01
Measurements of degraded fission-neutron spectra using recoil proportional counters are done routinely for studies involving fast reactor mockups. The same techniques are applicable to measurements of the neutron spectra required for personnel dosimetry in fast-neutron environments. A brief discussion of current applications of these methods, together with the results of a measurement made on the LITTLE BOY assembly at Los Alamos, is presented here.
NASA Technical Reports Server (NTRS)
Roder, H. M.
1974-01-01
Information is presented on instrumentation for density measurement, liquid level measurement, quantity gauging, and phase measurement. Coverage of existing information directly concerned with oxygen was given primary emphasis. A description of the physical principle of measurement for each instrumentation type is included. The basic materials of construction are listed if available from the source document for each instrument discussed. Cleaning requirements, procedures, and verification techniques are included.
Wind Gust Measurement Techniques—From Traditional Anemometry to New Possibilities
2018-01-01
Information on wind gusts is needed for assessment of wind-induced damage and risks to safety. The measurement of wind gust speed requires a high temporal resolution of the anemometer system, because the gust is defined as a short-duration (seconds) maximum of the fluctuating wind speed. Until the digitalization of wind measurements in the 1990s, wind gust measurements suffered from limited recording and data processing resources. Therefore, the majority of continuous wind gust records date back at most 30 years. Although the response characteristics of anemometer systems are good enough today, the traditional measurement techniques at weather stations based on cup and sonic anemometers are limited to heights and regions that the supporting structures can reach. Therefore, existing measurements are mainly concentrated over densely populated land areas, whereas from remote locations, such as the marine Arctic, wind gust information is available only from sparse coastal locations. Recent developments of wind gust measurement techniques based on turbulence measurements from research aircraft and from Doppler lidar can potentially provide new information from heights and locations unreachable by traditional measurement techniques. Moreover, fast-developing measurement methods based on Unmanned Aircraft Systems (UASs) may lead to better coverage of wind gust measurements in the future. In this paper, we provide an overview of the history and current status of anemometry from the perspective of wind gusts. Furthermore, a discussion of potential future directions of wind gust measurement techniques is provided. PMID:29690647
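The gust definition used in the overview (a short-duration maximum of the fluctuating wind speed) can be sketched as the maximum of a short moving average. The 3-sample window below stands in for a 3 s gust on 1 Hz data, a common WMO-style convention, and the wind record is invented:

```python
import numpy as np

def peak_gust(samples, gust_window=3):
    """Peak gust from a record of equally spaced wind-speed samples:
    the maximum of a `gust_window`-sample moving average (e.g. 3 s of
    1 Hz data under a 3 s gust definition)."""
    kernel = np.ones(gust_window) / gust_window
    smoothed = np.convolve(samples, kernel, mode="valid")
    return float(smoothed.max())

# A brief 9 m/s burst embedded in a steady 5 m/s record
record = np.array([5.0, 5.0, 5.0, 9.0, 9.0, 9.0, 5.0, 5.0, 5.0])
gust = peak_gust(record)
```

Note how the temporal resolution enters: if the anemometer system cannot resolve the burst within the gust window, the reported gust is biased low, which is the core instrumentation issue the paper discusses.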
A New Electromagnetic Instrument for Thickness Gauging of Conductive Materials
NASA Technical Reports Server (NTRS)
Fulton, J. P.; Wincheski, B.; Nath, S.; Reilly, J.; Namkung, M.
1994-01-01
Eddy current techniques are widely used to measure the thickness of electrically conducting materials. The approach, however, requires an extensive set of calibration standards and can be quite time consuming to set up and perform. Recently, an electromagnetic sensor was developed which eliminates the need for impedance measurements. The ability to monitor the magnitude of a voltage output independent of the phase enables the use of extremely simple instrumentation. Using this new sensor, a portable hand-held instrument was developed. The device makes single point measurements of the thickness of nonferromagnetic conductive materials. The technique utilized by this instrument requires calibration with two samples of known thicknesses that are representative of the upper and lower thickness values to be measured. The accuracy of the instrument depends upon the calibration range, with a larger range giving a larger error. The measured thicknesses are typically within 2-3% of the calibration range (the difference between the thin and thick sample) of their actual values. In this paper the design, operational, and performance characteristics of the instrument, along with a detailed description of the thickness gauging algorithm used in the device, are presented.
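The two-sample calibration described above can be illustrated by a simple linear interpolation between the two standards; this is an assumed, minimal stand-in, since the instrument's actual thickness-gauging algorithm is detailed in the paper and may differ. The voltages and thicknesses below are invented:

```python
def two_point_calibration(v_thin, t_thin, v_thick, t_thick):
    """Return a voltage-to-thickness map calibrated on two standards of
    known thickness, assuming an approximately linear sensor response
    between them (the real instrument's algorithm may differ)."""
    slope = (t_thick - t_thin) / (v_thick - v_thin)
    return lambda v: t_thin + slope * (v - v_thin)

# Hypothetical standards: 1.0 mm reads 0.20 V, 4.0 mm reads 0.80 V
to_thickness = two_point_calibration(0.20, 1.0, 0.80, 4.0)
```

This also illustrates why a wider calibration range gives a larger error: the further apart the two standards, the more any nonlinearity of the true response deviates from the straight-line assumption.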
DOE R&D Accomplishments Database
Phelps, M. E.; Hoffman, E. J.; Huang, S. C.; Schelbert, H. R.; Kuhl, D. E.
1978-01-01
Emission computed tomography (ECT) can provide a quantitative in vivo measurement of regional tissue radionuclide tracer concentrations. This capability, when combined with physiologic models and radioactively labeled physiologic tracers that behave in a predictable manner, allows measurement of a wide variety of physiologic variables. This integrated technique has been referred to as Physiologic Tomography (PT). PT requires labeled compounds that trace physiologic processes in a known and predictable manner, and physiologic models that are appropriately formulated and validated to derive physiologic variables from ECT data. To achieve this goal effectively, PT requires an ECT system that is capable of performing truly quantitative or analytical measurements of tissue tracer concentrations and that has been well characterized in terms of spatial resolution, sensitivity, and signal-to-noise ratios in the tomographic image. This paper illustrates the capabilities of emission computed tomography and provides examples of physiologic tomography for the regional measurement of cerebral and myocardial metabolic rate for glucose, regional measurement of cerebral blood volume, gated cardiac blood pools, and capillary perfusion in brain and heart. Studies on patients with stroke and myocardial ischemia are also presented.
[The requirements of standard and conditions of interchangeability of medical articles].
Men'shikov, V V; Lukicheva, T I
2013-11-01
The article deals with the possibility of applying specific approaches to the evaluation of interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating the requirements of standards addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagent quality under particular conditions of transportation and storage. The validity of requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques, and reference laboratories. This approach is supported by data from the evaluation of testing systems for measuring levels of thyrotropic hormone, thyroid hormones, and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent sets to apply certified reference materials in developing the manufacture of sets for a large list of analytes.
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto; Imel, David; Houshmand, Bijan; Carande, Richard
1994-01-01
The structure of surface currents in the coastal environment can be very complex, as it is governed by a multitude of factors such as local bathymetry and sea state. Knowledge of the structure of coastal currents is a key requirement for carrying out safe maneuvers and landings in an unknown coastal situation. Furthermore, it is desirable to be able to obtain such information by remote sensing and in a timely manner. We present a remote sensing technique with the potential to meet these requirements. We first give a theoretical discussion of the measurement technique, then demonstrate it using previously acquired data and compare the results against conventional along-track interferometric measurements.
Radius of Curvature Measurement of Large Optics Using Interferometry and Laser Tracker
NASA Technical Reports Server (NTRS)
Hagopian, John; Connelly, Joseph
2011-01-01
The determination of the radius of curvature (ROC) of an optic typically uses a phase-measuring interferometer on an adjustable stage to locate the positions of the ROC and of the optical surface under test; alternatively, a spherometer or a profilometer is used. The difficulty with the interferometric approach is that for large optics, translation of the interferometer or of the optic under test is problematic because of the translation distance required and the mass of the optic. Profilometry and spherometry can work, but require a profilometer or measurements of subapertures of the optic. The proposed approach allows a measurement of the optic's figure simultaneously with the full-aperture radius of curvature.
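For reference, the spherometry alternative mentioned above infers the ROC from a sagitta measurement over a known chord; a minimal sketch of the standard geometric relation (function and parameter names are illustrative):

```python
def roc_from_sagitta(half_chord, sagitta):
    """Radius of curvature of a spherical surface from a
    spherometer-style measurement: R = (h^2 + s^2) / (2 s), where
    h is the half-chord spanned by the gauge feet and s the sagitta
    (depth of the surface below the chord). Standard spherometer
    geometry, offered as an illustrative sketch."""
    return (half_chord ** 2 + sagitta ** 2) / (2.0 * sagitta)
```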
Continuous Wave Ring-Down Spectroscopy for Velocity Distribution Measurements in Plasma
NASA Astrophysics Data System (ADS)
McCarren, Dustin W.
Cavity ring-down spectroscopy (CRDS) is a proven, ultra-sensitive, cavity-enhanced absorption spectroscopy technique. When combined with a continuous-wave (CW) diode laser of sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the velocity distribution function (VDF) of the absorbing species, can be measured. Measurements of VDFs can be made using established techniques such as laser-induced fluorescence (LIF). However, LIF suffers from the requirements that the initial state of the LIF sequence have a substantial density and that the excitation scheme fluoresce at an easily detectable wavelength. This usually limits LIF to ions and atoms with large metastable-state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. Also, as a direct absorption technique, CW-CRDS needs only be concerned with the species' absorption wavelength, and it provides an absolute measure of the line-integrated initial-state density. Presented in this work are measurements of argon ion and neutral VDFs in a helicon plasma using CW-CRDS and LIF.
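The ring-down measurement underlying CRDS can be sketched with the standard relation between ring-down times and absorption, alpha = (1/c)(1/tau - 1/tau0); this is a generic illustration of the technique, not code from the work above:

```python
C = 299_792_458.0  # speed of light (m/s)

def absorption_coefficient(tau, tau_empty):
    """Cavity-averaged absorption coefficient (per meter) from the
    ring-down times measured with (tau) and without (tau_empty) the
    absorber, via the standard CRDS relation
    alpha = (1/c) * (1/tau - 1/tau_empty). SI units throughout."""
    return (1.0 / C) * (1.0 / tau - 1.0 / tau_empty)
```

Scanning the CW laser across the line and repeating this at each wavelength traces out the Doppler-broadened profile, i.e., the VDF.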
Techniques For Measuring Absorption Coefficients In Crystalline Materials
NASA Astrophysics Data System (ADS)
Klein, Philipp H.
1981-10-01
Absorption coefficients smaller than 0.001 cm⁻¹ can, with more or less difficulty, be measured by several techniques. With diligence, all methods can be refined to permit measurement of absorption coefficients as small as 0.00001 cm⁻¹. Spectral data are most readily obtained by transmission (spectrophotometric) methods, using multiple internal reflection to increase effective sample length. Emissivity measurements, requiring extreme care in the elimination of detector noise and stray light, nevertheless afford the most accessible spectral data in the 0.0001 to 0.00001 cm⁻¹ range. Single-wavelength information is most readily obtained with modifications of laser calorimetry. Thermocouple detection of energy absorbed from a laser beam is convenient, but involves dc amplification techniques and is susceptible to stray-light problems. Photoacoustic detection, using ac methods, tends to diminish errors of these types, but at some expense in experimental complexity. Laser calorimetry has been used for measurements of absorption coefficients as small as 0.000003 cm⁻¹. Both transmission and calorimetric data, taken as functions of intensity, have been used for measurement of nonlinear absorption coefficients.
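As a point of reference for the transmission methods above, the absorption coefficient follows from the Beer-Lambert law. A minimal sketch, assuming surface-reflection losses have already been removed from the measured transmittance (the names are illustrative):

```python
import math

def alpha_from_transmission(t_internal, path_cm):
    """Absorption coefficient (cm^-1) from internal transmittance
    over a path length in cm, via Beer-Lambert: T = exp(-alpha * L),
    so alpha = -ln(T) / L. Assumes t_internal is the transmittance
    after correcting for surface (Fresnel) losses."""
    return -math.log(t_internal) / path_cm
```

For the very small coefficients discussed above, T is close to 1, which is why long effective paths (multiple internal reflection) are needed to make the loss measurable.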
Tao, Feifei; Ngadi, Michael
2018-06-13
Conventional methods for determining fat content and fatty acid (FA) composition are generally based on solvent extraction and gas chromatography, respectively, which are time consuming, laborious, destructive to samples, and require the use of hazardous solvents. These disadvantages make them unsuitable for large-scale detection or for deployment on the production lines of meat factories. In this context, the great need for rapid and nondestructive techniques for fat and FA analyses has been highlighted. Measurement techniques based on near-infrared spectroscopy, Raman spectroscopy, nuclear magnetic resonance, and hyperspectral imaging have provided interesting and promising results for fat and FA prediction in a variety of foods. Thus, the goal of this article is to give an overview of current research progress in the application of these four important techniques to fat and FA analyses of muscle foods, comprising pork, beef, lamb, chicken meat, fish, and fish oil. The measurement techniques are described in terms of their working principles, features, and application advantages. Research advances for these techniques for specific foods are summarized in detail, and the factors influencing their modeling results are discussed. Perspectives on the current situation, future trends, and challenges associated with the measurement techniques are also discussed.
Non-Intrusive Measurement Techniques Applied to the Hybrid Solid Fuel Degradation
NASA Astrophysics Data System (ADS)
Cauty, F.
2004-10-01
Knowledge of the solid fuel regression rate and the time evolution of the grain geometry is required for hybrid motor design and for control of its operating conditions. Two non-intrusive techniques have been applied to hybrid propulsion; both are based on wave propagation through the materials: X-rays and ultrasound. X-ray techniques allow local thickness measurements (attenuated signal level) using small probes, or 2D images (real-time radiography), with a trade-off between the size of the field of view and accuracy. Besides the safety hazards associated with high-intensity X-ray systems, the image analysis requires quite complex post-processing techniques. The ultrasound technique is more widely used in energetic-material applications, including hybrid fuels. Depending on the transducer size and the associated equipment, the application domain is large, from tiny samples to the quad-port wagon-wheel grain of the 1.1 MN thrust HPDP motor. The effect of the physical quantities has to be taken into account in the wave propagation analysis. With respect to the various applications, there is no unique and perfect experimental method to measure the fuel regression rate. The best solution could be obtained by combining two techniques at the same time, each technique enhancing the quality of the global data.
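At its core, the ultrasound technique above reduces to pulse-echo time-of-flight: the remaining web thickness follows from the round-trip travel time of the echo. A deliberately simplified sketch (the corrections for temperature and pressure effects on the sound speed, noted above, are omitted; names are illustrative):

```python
def web_thickness(echo_time_s, sound_speed_m_s):
    """Remaining fuel web thickness (m) from an ultrasonic pulse-echo
    round-trip time: thickness = c * t / 2, the factor 2 accounting
    for the out-and-back path. The sound speed in the fuel must in
    practice be corrected for its dependence on temperature and
    pressure, as the text notes."""
    return sound_speed_m_s * echo_time_s / 2.0
```

Differencing successive thickness readings over time then yields the regression rate sought for motor design.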
Technology and techniques for parity experiments at Mainz: Past, Present and Future
NASA Astrophysics Data System (ADS)
Diefenbach, Juergen
2016-03-01
For almost 20 years the Mainz accelerator facility MAMI delivered polarized electron beam to the parity-violation experiment A4, which measured the contributions of strange sea quarks to the proton's electromagnetic form factors. Parity-violating asymmetries were of the order of A ~ 5 ppm. Currently the A1 collaboration carries out single-spin asymmetry measurements at MAMI (A ~ 20 ppm) to prepare for a measurement of the neutron skin depth of lead (A ~ 1 ppm). For such high-precision experiments, active stabilization and precise determination of beam parameters such as current, energy, position, and angle are essential requirements, in addition to precision electron beam polarimetry. For the future P2 experiment at the planned superconducting accelerator MESA in Mainz, the requirements for beam quality will be even higher. P2 will measure the weak mixing angle with 0.15 percent total uncertainty and, in addition, the neutron skin depth of lead as well as parity violation in electron scattering off 12C. A tiny asymmetry of only -0.03 ppm creates the need to combine digital feedback with feedforward stabilizations, along with new polarimetry developments such as a hydro-Møller and a double-Mott polarimeter, to meet the goals for systematic uncertainty. This talk gives an overview of our experience with polarimetry, analog feedbacks, and compensation techniques for apparative asymmetries at the A4 experiment. It then leads to the requirements and new techniques for the pioneering P2 experiment at MESA. First results from beam tests currently being carried out at the existing MAMI accelerator, employing high-speed analog/digital conversion and FPGAs for control of beam parameters, will be presented. Supported by the cluster of excellence PRISMA and the Deutsche Forschungsgemeinschaft in the framework of SFB 1044.
Requirements for Calibration in Noninvasive Glucose Monitoring by Raman Spectroscopy
Lipson, Jan; Bernhardt, Jeff; Block, Ueyn; Freeman, William R.; Hofmeister, Rudy; Hristakeva, Maya; Lenosky, Thomas; McNamara, Robert; Petrasek, Danny; Veltkamp, David; Waydo, Stephen
2009-01-01
Background In the development of noninvasive glucose monitoring technology, it is highly desirable to derive a calibration that relies on neither person-dependent calibration information nor supplementary calibration points furnished by an existing invasive measurement technique (universal calibration). Method By appropriate experimental design and associated analytical methods, we establish the sufficiency of multiple factors required to permit such a calibration. Factors considered are the discrimination of the measurement technique, stabilization of the experimental apparatus, physics–physiology-based measurement techniques for normalization, the sufficiency of the size of the data set, and appropriate exit criteria to establish the predictive value of the algorithm. Results For noninvasive glucose measurements, using Raman spectroscopy, the sufficiency of the scale of data was demonstrated by adding new data into an existing calibration algorithm and requiring that (a) the prediction error should be preserved or improved without significant re-optimization, (b) the complexity of the model for optimum estimation not rise with the addition of subjects, and (c) the estimation for persons whose data were removed entirely from the training set should be no worse than the estimates on the remainder of the population. Using these criteria, we established guidelines empirically for the number of subjects (30) and skin sites (387) for a preliminary universal calibration. We obtained a median absolute relative difference for our entire data set of 30 mg/dl, with 92% of the data in the Clarke A and B ranges. Conclusions Because Raman spectroscopy has high discrimination for glucose, a data set of practical dimensions appears to be sufficient for universal calibration. 
Improvements based on reducing the variance of blood perfusion are expected to reduce the prediction errors substantially, and the inclusion of supplementary calibration points for the wearable device under development will be permissible and beneficial. PMID:20144354
PeerShield: determining control and resilience criticality of collaborative cyber assets in networks
NASA Astrophysics Data System (ADS)
Cam, Hasan
2012-06-01
As attackers become more coordinated and advanced in cyber attacks, cyber assets are required to have much more resilience, control effectiveness, and collaboration in networks. Such a requirement makes it essential to take a comprehensive and objective approach to measuring the individual and relative performances of cyber security assets in network nodes. To this end, this paper presents four techniques for measuring the relative importance of cyber assets more comprehensively and objectively, by considering together the main variables of risk assessment (e.g., threats, vulnerabilities), multiple attributes (e.g., resilience, control, and influence), and network connectivity and controllability among collaborative cyber assets in networks. In the first technique, a Bayesian network is used to include the random variables for the control, recovery, and resilience attributes of nodes, in addition to the random variables of threats, vulnerabilities, and risk. The second technique shows how graph matching and coloring can be utilized to form collaborative pairs of nodes that shield together against threats and vulnerabilities. The third technique ranks the security assets of nodes by incorporating multiple weights and thresholds of attributes into a decision-making algorithm. In the fourth technique, the hierarchically well-separated tree is enhanced to first identify the critical nodes of a network with respect to their attributes and network connectivity, and then to select some nodes as driver nodes for network controllability.
An experimental study of nonlinear dynamic system identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1990-01-01
A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included in which the fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model and with respect to measurement noise, measurement frequency, and measurement record length.
Dynamic Measurement of Low Contact Angles by Optical Microscopy.
Campbell, James M; Christenson, Hugo K
2018-05-16
Precise measurement of contact angles is an important challenge in surface science, in the design and characterization of materials and in many crystallization experiments. Here we present a novel technique for measuring the contact angles of droplets between about 2° and 30°, with the lowest experimental uncertainty at the lower end of this range, typically ±0.1°. The lensing effect of a droplet interface produces the appearance of bright circles in low-aperture light, whose diameter is related to the contact angle. The technique requires no specialized equipment beyond an ordinary optical microscope, and may be used to study the dynamic evolution of the contact angle in situ during an experiment.
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made, as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome and usually inaccurate or inconsistent. On the other hand, the mean square error (MSE), the most common objective criterion, correlates poorly with the viewer's response. It is now understood that a reliable quality measure requires a representative model of the complex human visual system. In this paper, we survey and classify the criteria for the evaluation of monochrome image quality.
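For concreteness, the MSE criterion discussed above is trivially computed over the pixel values of the original and compressed images; a minimal sketch over flattened gray-scale arrays:

```python
def mse(original, compressed):
    """Mean square error between two equal-size gray-scale images,
    given as flat sequences of pixel intensities. This is the common
    objective criterion criticized above for its poor correlation
    with perceived quality."""
    assert len(original) == len(compressed)
    return sum((a - b) ** 2
               for a, b in zip(original, compressed)) / len(original)
```

Identical images give an MSE of 0; the measure weights every pixel error equally, which is precisely why it ignores the spatial and contrast sensitivities of the human visual system.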
NASA Technical Reports Server (NTRS)
Noel, G. T.; Sliemers, F. A.; Derringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.
1978-01-01
Accelerated life-prediction test methodologies have been developed for the validation of a 20-year service life for low-cost photovoltaic arrays. Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Measurements must provide sufficient confidence to permit selection among alternative designs and materials and to stimulate widespread deployment of such arrays. Furthermore, the diversity of candidate materials and designs, and the variety of potential environmental stress combinations, degradation mechanisms and failure modes require that combinations of measurement techniques be identified which are suitable for the characterization of various encapsulation system-cell structure-environment combinations.
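Accelerated life-prediction testing of the kind described above commonly relies on an Arrhenius acceleration factor for thermally driven degradation. This generic relation is offered as an illustrative sketch only and is not taken from the paper; the activation energy and temperatures are hypothetical inputs:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant (eV/K)

def arrhenius_acceleration(ea_ev, t_use_k, t_stress_k):
    """Arrhenius acceleration factor for a thermally activated
    degradation mechanism:
        AF = exp[(Ea / k) * (1/T_use - 1/T_stress)],
    i.e., how much faster degradation proceeds at the elevated test
    temperature T_stress than at the use temperature T_use, for an
    activation energy Ea in eV. Generic accelerated-life relation,
    not the paper's specific methodology."""
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))
```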
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulkosky, V.; Jin, G.; Long, E.
2017-12-26
Background: Measurements of the neutron charge form factor, G_E^n, are challenging because the neutron has no net charge. Additionally, measurements of the neutron form factors must use nuclear targets, which requires accurately accounting for nuclear effects. Extracting G_E^n with different targets and techniques provides an important test of our handling of these effects. Purpose: The goal of the measurement was to use an inclusive asymmetry measurement technique to extract the neutron charge form factor at a four-momentum transfer of 1 (GeV/c)². This technique has very different systematic uncertainties than traditional exclusive measurements and thus serves as an independent check of whether nuclear effects have been taken into account correctly. Method: The inclusive quasielastic reaction ³He(e, e′) with polarized beam and target was measured at Jefferson Laboratory. The neutron electric form factor, G_E^n, was extracted at Q² = 0.98 (GeV/c)² from ratios of electron-polarization asymmetries measured for two orthogonal target spin orientations. This Q² is high enough that the sensitivity to G_E^n is not overwhelmed by the neutron magnetic contribution, yet low enough that explicit neutron detection is not required to suppress pion production. Results: The neutron electric form factor, G_E^n, was determined to be 0.0414 ± 0.0077 (stat) ± 0.0022 (syst), providing the first high-precision inclusive extraction of the neutron's charge form factor. Conclusion: The inclusive quasielastic ³He(e, e′) reaction with polarized beam and target at a four-momentum transfer near 1 (GeV/c)² has been used to provide a unique measurement of G_E^n. This new result provides a systematically independent validation of the exclusive extraction technique results and implies that the nuclear corrections are understood. This is contrary to the proton form factor, where asymmetry and differential cross section measurements have been shown to have large systematic differences.
Measuring the free neutron lifetime to <= 0.3s via the beam method
NASA Astrophysics Data System (ADS)
Fomin, Nadia
2017-09-01
Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. While of interest as a fundamental particle property, a precise value for the neutron lifetime is also required for consistency tests of the Standard Model as well as to calculate the primordial 4He abundance in Big Bang Nucleosynthesis models. An effort has begun to develop an in-beam measurement of the neutron lifetime with a projected <= 0.3s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques as well as new large area silicon detector technology address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3s measurement will be discussed.
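The in-beam (Sussex-ILL-NIST) method described above estimates the lifetime from the mean number of neutrons in the decay volume, obtained from the fluence measurement, and the detected decay-product rate. A deliberately simplified sketch (detection efficiencies, backgrounds, and dead-time corrections are omitted; the numbers used in the test are illustrative):

```python
def neutron_lifetime(mean_neutrons_in_volume, decay_rate_per_s):
    """Beam-method estimate of the neutron lifetime, tau = N / R:
    N is the mean number of neutrons inside the fiducial decay
    volume (from the absolute fluence measurement) and R is the
    detected decay rate. Real analyses fold in trap and detector
    efficiencies, which are omitted in this sketch."""
    return mean_neutrons_in_volume / decay_rate_per_s
```

The two inputs are exactly the quantities targeted by the advances mentioned above: improved neutron fluence measurement (for N) and large-area silicon detectors (for R).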
Measuring the free neutron lifetime to <= 0.3s via the beam method
NASA Astrophysics Data System (ADS)
Mulholland, Jonathan; Fomin, Nadia; BL3 Collaboration
2015-10-01
Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is needed to predict the primordial 4He abundance from the theory of Big Bang Nucleosynthesis. An effort has begun for an in-beam measurement of the neutron lifetime with a projected <= 0.3 s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques as well as new large-area silicon detector technology address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3 s measurement will be discussed.
Does Nursing Facility Use of Habilitation Therapy Improve Performance on Quality Measures?
Fitzler, Sandra; Raia, Paul; Buckley, Fredrick O; Wang, Mei
2016-12-01
The purpose of this Centers for Medicare & Medicaid Services (CMS) Innovation study was to evaluate the impact of habilitation therapy techniques and a behavior team for managing dementia-related behaviors on 12 quality measures, including 10 Minimum Data Set (MDS) publicly reported measures and 2 nursing home process measures. A prospective design was used to assess changes in the measures. A total of 30 Massachusetts nursing homes participated in the project over a 12-month period. Project participation required the creation of an interdisciplinary behavior team, habilitation therapy training, a facility visit by the program coordinator, attendance at bimonthly support-and-sharing calls, and monthly collection of process measure data. Participating facilities showed improvement in 9 of the 12 reported measures. Findings indicate potential quality improvement when nursing homes learn habilitation therapy techniques and know how to use the interdisciplinary team to manage problem behaviors. © The Author(s) 2016.
Emerging optical nanoscopy techniques
Montgomery, Paul C; Leong-Hoi, Audrey
2015-01-01
To face the challenges of modern health care, new imaging techniques with subcellular resolution or detection over wide fields are required. Far-field optical nanoscopy presents many new solutions, providing high resolution or detection at high speed. We present a new classification scheme to help appreciate the growing number of optical nanoscopy techniques. We underline an important distinction between superresolution techniques that provide improved resolving power and nanodetection techniques for characterizing unresolved nanostructures. Some of the emerging techniques within these two categories are highlighted with applications in biophysics and medicine. Recent techniques employing wider-angle imaging by digital holography and scattering-lens microscopy allow superresolution to be achieved for subcellular, and even in vivo, imaging without labeling. Nanodetection techniques are divided into four subcategories using contrast, phase, deconvolution, and nanomarkers. Contrast enhancement is illustrated by means of a polarized light-based technique and with strobed phase-contrast microscopy to reveal nanostructures. Very high sensitivity phase measurement using interference microscopy is shown to provide nanometric surface roughness measurement or to reveal internal nanometric structures. Finally, the use of nanomarkers is illustrated with stochastic fluorescence microscopy for mapping intracellular structures. We also present some of the future perspectives of optical nanoscopy. PMID:26491270
Reliability of Two Smartphone Applications for Radiographic Measurements of Hallux Valgus Angles.
Mattos E Dinato, Mauro Cesar; Freitas, Marcio de Faria; Milano, Cristiano; Valloto, Elcio; Ninomiya, André Felipe; Pagnano, Rodrigo Gonçalves
The objective of the present study was to assess the reliability of 2 smartphone applications, compared with the traditional goniometer technique, for the measurement of radiographic angles in hallux valgus, and the time required for analysis with the different methods. The radiographs of 31 patients (52 feet) with a diagnosis of hallux valgus were analyzed. Four observers, 2 with >10 years' experience in foot and ankle surgery and 2 in-training surgeons, measured the hallux valgus angle and intermetatarsal angle using a manual goniometer technique and 2 smartphone applications (Hallux Angles and iPinPoint). The interobserver and intermethod reliability were estimated using intraclass correlation coefficients (ICCs), and the time required for measurement of the angles among the 3 methods was compared using the Friedman test. Very good or good interobserver reliability was found among the 4 observers measuring the hallux valgus angle and intermetatarsal angle using the goniometer (ICC 0.913 and 0.821, respectively) and iPinPoint (ICC 0.866 and 0.638, respectively). Using the Hallux Angles application, very good interobserver reliability was found for measurements of the hallux valgus angle (ICC 0.962) and intermetatarsal angle (ICC 0.935) only among the more experienced observers. The time required for the measurements was significantly shorter using both smartphone applications than with the goniometer method. One smartphone application (iPinPoint) was reliable for measurements of hallux valgus angles by either experienced or nonexperienced observers. The use of these tools might save time in the evaluation of radiographic angles in hallux valgus. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
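Interobserver reliability statistics of the kind reported above can be computed from a two-way ANOVA decomposition; a minimal sketch of the average-measures consistency ICC, often written ICC(3,k) in the Shrout-Fleiss convention (data layout and values are illustrative, not the study's data):

```python
def icc3k(data):
    """Two-way mixed, consistency, average-measures intraclass
    correlation ICC(3,k) for an n-subjects x k-raters table:
        ICC(3,k) = (MS_rows - MS_err) / MS_rows,
    with MS_rows the between-subjects mean square and MS_err the
    residual mean square from the two-way ANOVA."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows
```

A constant offset between raters (a systematic bias) does not reduce this consistency form of the ICC, which is one reason the exact ICC variant used should always be reported, as the study does.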
Measurement of segmental lumbar spine flexion and extension using ultrasound imaging.
Chleboun, Gary S; Amway, Matthew J; Hill, Jesse G; Root, Kara J; Murray, Hugh C; Sergeev, Alexander V
2012-10-01
Clinical measurement, technical note. To describe a technique to measure interspinous process distance using ultrasound (US) imaging, to assess the reliability of the technique, and to compare the US imaging measurements to magnetic resonance imaging (MRI) measurements in 3 different positions of the lumbar spine. Segmental spinal motion has been assessed using various imaging techniques, as well as surgically inserted pins. However, some imaging techniques are costly (MRI) and some require ionizing radiation (radiographs and fluoroscopy), and surgical procedures have limited use because of the invasive nature of the technique. Therefore, it is important to have an easily accessible and inexpensive technique for measuring lumbar segmental motion to more fully understand spine motion in vivo, to evaluate the changes that occur with various interventions, and to be able to accurately relate the changes in symptoms to changes in motion of individual vertebral segments. Six asymptomatic subjects participated. The distance between spinous processes at each lumbar segment (L1-2, L2-3, L3-4, L4-5) was measured digitally using MRI and US imaging. The interspinous distance was measured with subjects supine and the lumbar spine in 3 different positions (resting, lumbar flexion, and lumbar extension) for both MRI and US imaging. The differences in distance from neutral to extension, neutral to flexion, and extension to flexion were calculated. The measurement methods had excellent reliability for US imaging (intraclass correlation coefficient [ICC(3,3)] = 0.94; 95% confidence interval: 0.85, 0.97) and MRI (ICC(3,3) = 0.98; 95% confidence interval: 0.95, 0.99). The distance measured was similar between US imaging and MRI (P>.05), except at L3-4 flexion-extension (P = .003). On average, the MRI measurements were 1.3 mm greater than the US imaging measurements. 
This study describes a new method for the measurement of lumbar spine segmental flexion and extension motion using US imaging. The US method may offer an alternative to other imaging techniques to monitor clinical outcomes because of its ease of use and the consistency of measurements compared to MRI.
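The ICC(3,3) reliability figures quoted above come from a two-way mixed-effects, consistency, average-measures model. As a minimal sketch (the function and the toy matrix below are illustrative, not the study's data), the coefficient can be computed directly from a subjects-by-raters matrix:

```python
import numpy as np

def icc_3k(X):
    """ICC(3,k): two-way mixed effects, consistency, average of k measurements.

    X is an (n_subjects, k_raters) array; "consistency" means a constant offset
    between raters does not reduce the coefficient.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * np.sum((X.mean(axis=1) - grand) ** 2)   # between-subjects
    ss_cols = n * np.sum((X.mean(axis=0) - grand) ** 2)   # between-raters
    ss_err = np.sum((X - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Toy data: rater 2 reads 1 mm higher than rater 1 on every subject,
# so consistency is perfect and ICC(3,k) = 1.
print(icc_3k([[1, 2], [2, 3], [3, 4]]))  # → 1.0
```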
NASA Technical Reports Server (NTRS)
Mcpeters, Richard
1993-01-01
Satellite measurements of the atmospheric backscattered UV albedo have been used for more than 20 years to measure ozone. The longest continuous record has come from the Solar Backscattered Ultraviolet (SBUV) instrument and TOMS on the Nimbus 7 satellite, which have been in operation since November 1978. Because of degradation in space of the diffuser plate used to measure extraterrestrial solar flux, it has been necessary to develop new techniques to maintain the calibration of these instruments. Calibration is maintained by requiring that ozone measured by different wavelength pairs be consistent, and by requiring that ozone measured at different solar zenith angles be consistent. This technique of using a geophysical quantity, ozone, as a transfer standard for wavelength calibration is very powerful. The recalibrated data have been used to measure total ozone trends to an accuracy of +/- 1.3 percent (2 sigma) over ten years. No significant trends are found near the equator, but significant trends larger than predicted by homogeneous chemistry are found at middle to high latitudes in both hemispheres. In addition, UV albedo data have been used to measure SO2 using band structure in the 300-310 nm range, and to measure nitric oxide in the upper stratosphere and mesosphere using the (1,0) and (0,2) NO gamma band fluorescence features.
Wide-Field Imaging Using Nitrogen Vacancies
NASA Technical Reports Server (NTRS)
Englund, Dirk Robert (Inventor); Trusheim, Matthew Edwin (Inventor)
2017-01-01
Nitrogen vacancies (NVs) in bulk diamonds and nanodiamonds can be used to sense temperature, pressure, electromagnetic fields, and pH. Unfortunately, conventional sensing techniques use gated detection and confocal imaging, limiting the measurement sensitivity and precluding wide-field imaging. Conversely, the present sensing techniques do not require gated detection or confocal imaging and can therefore be used to image temperature, pressure, electromagnetic fields, and pH over wide fields of view. In some cases, wide-field imaging supports spatial localization of the NVs to precisions at or below the diffraction limit. Moreover, the measurements can span an extremely wide dynamic range at very high sensitivity.
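As an illustrative aside (not part of the patented technique above), one common NV sensing mode maps a measured shift of the zero-field splitting D onto a temperature change. The slope used below is a room-temperature literature value, and the function is a hypothetical sketch:

```python
# NV thermometry sketch: the zero-field splitting D (~2.87 GHz) shifts with
# temperature at roughly dD/dT ~ -74 kHz/K near room temperature (literature
# value, used here only as an illustrative constant).
DD_DT_HZ_PER_K = -74e3

def delta_temperature_K(delta_D_hz):
    """Infer a temperature change from a measured shift of D, given in Hz."""
    return delta_D_hz / DD_DT_HZ_PER_K

print(delta_temperature_K(-148e3))  # a -148 kHz downshift → 2.0 K of warming
```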
A new method for flight test determination of propulsive efficiency and drag coefficient
NASA Technical Reports Server (NTRS)
Bull, G.; Bridges, P. D.
1983-01-01
A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
Precise measurement of the half-life of the Fermi {beta} decay of {sup 26}Al{sup m}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.
2011-08-15
State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi {beta} decay of {sup 26}Al{sup m}. The half-life was determined to be 6347.8 {+-} 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
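A half-life measurement of this kind ultimately reduces to fitting an exponential decay to the digitized count record. A minimal sketch on synthetic, noise-free data with the half-life reported above (a real analysis fits detector waveforms and propagates statistical and systematic uncertainties):

```python
import numpy as np

T_HALF_MS = 6347.8  # reported half-life of the 26Al-m Fermi beta decay

def fit_half_life(t_ms, counts):
    """Recover the half-life from a log-linear least-squares fit of counts vs. time."""
    slope, _ = np.polyfit(t_ms, np.log(counts), 1)   # ln N(t) = ln N0 - (ln 2 / T) t
    return -np.log(2) / slope

t = np.linspace(0.0, 30000.0, 60)                    # time grid in ms
counts = 1e6 * 0.5 ** (t / T_HALF_MS)                # ideal exponential decay
print(round(fit_half_life(t, counts), 1))            # → 6347.8
```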
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert P. Lucht
Laser-induced polarization spectroscopy (LIPS), degenerate four-wave mixing (DFWM), and electronic-resonance-enhanced (ERE) coherent anti-Stokes Raman scattering (CARS) are techniques that shows great promise for sensitive measurements of transient gas-phase species, and diagnostic applications of these techniques are being pursued actively at laboratories throughout the world. However, significant questions remain regarding strategies for quantitative concentration measurements using these techniques. The primary objective of this research program is to develop and test strategies for quantitative concentration measurements in flames and plasmas using these nonlinear optical techniques. Theoretically, we are investigating the physics of these processes by direct numerical integration (DNI) of the time-dependentmore » density matrix equations that describe the wave-mixing interaction. Significantly fewer restrictive assumptions are required when the density matrix equations are solved using this DNI approach compared with the assumptions required to obtain analytical solutions. For example, for LIPS calculations, the Zeeman state structure and hyperfine structure of the resonance and effects such as Doppler broadening can be included. There is no restriction on the intensity of the pump and probe beams in these nonperturbative calculations, and both the pump and probe beam intensities can be high enough to saturate the resonance. As computer processing speeds have increased, we have incorporated more complicated physical models into our DNI codes. During the last project period we developed numerical methods for nonperturbative calculations of the two-photon absorption process. Experimentally, diagnostic techniques are developed and demonstrated in gas cells and/or well-characterized flames for ease of comparison with model results. 
The techniques of two-photon, two-color H-atom LIPS and three-laser ERE CARS for NO and C{sub 2}H{sub 2} were demonstrated during the project period, and nonperturbative numerical models of both of these techniques were developed. In addition, we developed new single-mode, injection-seeded optical parametric laser sources (OPLSs) that will be used to replace multi-mode commercial dye lasers in our experimental measurements. The use of single-mode laser radiation in our experiments will significantly increase the rigor with which theory and experiment are compared.
Microbial Burden Approach: New Monitoring Approach for Measuring Microbial Burden
NASA Technical Reports Server (NTRS)
Venkateswaran, Kasthuri; Vaishampayan, Parag; Barmatz, Martin
2013-01-01
Advantages of new approach for differentiating live cells/spores from dead cells/spores. Four examples of Salmonella outbreaks leading to costly destruction of dairy products. List of possible collaboration activities between JPL and other industries (for future discussion). Limitations of traditional microbial monitoring approaches. Introduction to new approach for rapid measurement of viable (live) bacterial cells/spores and its areas of application. Detailed example for determining live spores using new approach (similar procedure for determining live cells). JPL has developed a patented approach for measuring amounts of live and dead cells/spores. This novel "molecular" method takes only 5 to 7 hours, compared to the seven days required using conventional techniques. Conventional "molecular" techniques cannot discriminate live cells/spores from dead cells/spores. The JPL-developed novel method eliminates the false positive results obtained from conventional "molecular" techniques, which lead to unnecessary delays in processing and to unnecessary destruction of food products.
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three-dimensional perception is essential to the development of versatile robotic systems that can handle complex manufacturing tasks in future factories and provide the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three-dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation capability is being developed to provide full-field range measurement and three-dimensional scene analysis.
Measurement of precipitation induced FUV emission and Geocoronal Lyman Alpha from the IMI mission
NASA Technical Reports Server (NTRS)
Mende, Stephen B.; Fuselier, S. A.; Rairden, R. L.
1995-01-01
This final report describes the activities of the Lockheed Martin Palo Alto Research Laboratory in studying the measurement of ion and electron precipitation induced Far Ultra-Violet (FUV) emissions and geocoronal Lyman alpha for the NASA Inner Magnetospheric Imager (IMI) mission. This study examined promising techniques that may allow combining several FUV instruments that would separately measure proton aurora, electron aurora, and geocoronal Lyman alpha into a single instrument operated on a spinning spacecraft. The study consisted of two parts. First, the geocoronal Lyman alpha, proton aurora, and electron aurora emissions were modeled to determine instrument requirements. Second, several promising techniques were investigated to determine if they were suitable for use in an IMI-type mission. Among the techniques investigated were the hydrogen gas cell for eliminating cold geocoronal Lyman alpha emissions, and a coded aperture spectrometer with sufficient resolution to separate Doppler-shifted Lyman alpha components.
NASA Technical Reports Server (NTRS)
Wong, R. C.; Owen, H. A., Jr.; Wilson, T. G.; Rodriguez, G. E.
1980-01-01
Small-signal modeling techniques are used in a system stability analysis of a breadboard version of a complete functional electrical power system. The system consists of a regulated switching dc-to-dc converter, a solar-cell-array simulator, a solar-array EMI filter, battery chargers and linear shunt regulators. Loss mechanisms in the converter power stage, including switching-time effects in the semiconductor elements, are incorporated into the modeling procedure to provide an accurate representation of the system without requiring frequency-domain measurements to determine the damping factor. The small-signal system model is validated by the use of special measurement techniques which are adapted to the poor signal-to-noise ratio encountered in switching-mode systems. The complete electrical power system with the solar-array EMI filter is shown to be stable over the intended range of operation.
NASA Technical Reports Server (NTRS)
Linford, R. M. F.; Allen, T. H.; Dillow, C. F.
1975-01-01
A program is described to design, fabricate and install an experimental work chamber assembly (WCA) to provide a wide range of experimental capability. The WCA incorporates several techniques for studying the kinetics of contaminant films and their effect on optical surfaces. It incorporates the capability for depositing both optical and contaminant films on temperature-controlled samples, and for in-situ measurements of the vacuum ultraviolet reflectance. Ellipsometer optics are mounted on the chamber for film thickness determinations, and other features include access ports for radiation sources and instrumentation. Several supporting studies were conducted to define specific chamber requirements, to determine the sensitivity of the measurement techniques to be incorporated in the chamber, and to establish procedures for handling samples prior to their installation in the chamber. A bibliography and literature survey of contamination-related articles is included.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
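The correlation step can be caricatured as follows. The cubic "true" nonlinearity and the candidate list are invented for illustration, and the arrays merely stand in for the MME optimal state and model-error estimates rather than being produced by an actual MME estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 500)   # stand-in for the MME optimal state estimate
d = 0.7 * x**3                    # stand-in for the MME optimal model-error estimate

# Candidate nonlinear terms; the one most correlated with the model-error
# estimate is taken as the model form.
candidates = {"x": x, "x^2": x**2, "x^3": x**3, "sign(x)": np.sign(x)}
scores = {name: abs(np.corrcoef(term, d)[0, 1]) for name, term in candidates.items()}
print(max(scores, key=scores.get))  # → x^3
```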
A noncontact laser technique for circular contouring accuracy measurement
NASA Astrophysics Data System (ADS)
Wang, Charles; Griffin, Bob
2001-02-01
The worldwide competition in manufacturing frequently requires high-speed machine tools to deliver contouring accuracy on the order of a few micrometers while moving at relatively high feed rates. Traditional test equipment is rather limited in its capability to measure contours of small radius at high speed. Described here is a new noncontact laser measurement technique for the test of circular contouring accuracy. This technique is based on a single-aperture laser Doppler displacement meter with a flat mirror as the target. It is noncontact and can vary the circular path radius continuously at data rates of up to 1000 Hz. Using this instrument, the actual radius, feed rate, velocity, and acceleration profiles can also be determined. The basic theory of operation, the hardware setup, the data collection, the data processing, and the error budget are discussed.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
NASA Astrophysics Data System (ADS)
Li, Zhi-Guo; Chen, Qi-Feng; Gu, Yun-Jun; Zheng, Jun; Chen, Xiang-Rong
2016-10-01
The accurate hydrodynamic description of an event or system that addresses the equations of state, phase transitions, dissociations, ionizations, and compressions, determines how materials respond to a wide range of physical environments. To understand dense matter behavior in extreme conditions requires the continual development of diagnostic methods for accurate measurements of the physical parameters. Here, we present a comprehensive diagnostic technique that comprises optical pyrometry, velocity interferometry, and time-resolved spectroscopy. This technique was applied to shock compression experiments of dense gaseous deuterium-helium mixtures driven by a two-stage light gas gun. The advantage of this approach lies in providing measurements of multiple physical parameters in a single experiment, such as light radiation histories, particle velocity profiles, and time-resolved spectra, which enables simultaneous measurements of shock velocity, particle velocity, pressure, density, and temperature and expands understanding of dense matter under high-pressure shock conditions. The combination of multiple diagnostics also allows different experimental observables to be measured and cross-checked. Additionally, it enables an accurate measurement of the principal Hugoniots of deuterium-helium mixtures, which provides a benchmark for the impedance matching measurement technique.
Reports of workshops on Probe Measurements of Particles and Radiation in the Atmosphere of Titan
NASA Technical Reports Server (NTRS)
Ragent, Boris (Compiler); Swenson, Byron L. (Compiler)
1990-01-01
The planned 1995 joint ESA-NASA Cassini mission to the Saturnian system will include an atmospheric probe to be dropped into the atmosphere of Titan for in situ measurements during descent. Because of the unique properties of the Titan atmosphere it is necessary to consider the peculiar requirements for such measurements and applicable techniques. The proceedings of two workshops dealing with the measurement of particles and radiation in the atmosphere of Titan are presented in two parts. The first part dealt with the measurement of particulate matter in the atmosphere of Titan. The second part dealt with the measurement of radiation in the atmosphere of Titan. The proceedings were first published and distributed informally, and are presented with only minor editorial changes. In the report of the particulate matter workshop, discussions of the mission background, the importance of the measurements, and descriptions of the desired information are followed by a description of appropriate measurement techniques and conclusions and recommendations. The proceedings for the workshop on radiation measurement and imaging contain a discussion of the importance of radiation measurements and imaging, and present a summary of participants' experience with such measurements made from entry probes. This is followed by a description of appropriate measurement techniques and conclusions and recommendations.
Spatial correlation in matter-wave interference as a measure of decoherence, dephasing, and entropy
NASA Astrophysics Data System (ADS)
Chen, Zilin; Beierle, Peter; Batelaan, Herman
2018-04-01
The loss of contrast in double-slit electron diffraction due to dephasing and decoherence processes is studied. It is shown that the spatial intensity correlation function of diffraction patterns can be used to distinguish between dephasing and decoherence. This establishes a measure of time reversibility that does not require the determination of coherence terms of the density matrix, while von Neumann entropy, another measure of time reversibility, does require coherence terms. This technique is exciting in view of the need to understand and control the detrimental experimental effect of contrast loss and for fundamental studies on the transition from the classical to the quantum regime.
NASA Technical Reports Server (NTRS)
Zebker, Howard A.; Rosen, Paul A.; Goldstein, Richard M.; Gabriel, Andrew; Werner, Charles L.
1994-01-01
We present a map of the coseismic displacement field resulting from the Landers, California, June 28, 1992, earthquake, derived using data acquired from an orbiting high-resolution radar system. We achieve results more accurate than previous space studies and similar in accuracy to those obtained by conventional field survey techniques. Data from the ERS 1 synthetic aperture radar instrument acquired in April, July, and August 1992 are used to generate a high-resolution, wide-area map of the displacements. The data represent the motion in the direction of the radar line of sight to centimeter-level precision of each 30-m resolution element in a 113 km by 90 km image. Our coseismic displacement contour map gives a lobed pattern consistent with theoretical models of the displacement field from the earthquake. Fine structure observed as displacement tiling in regions several kilometers from the fault appears to be the result of local surface fracturing. Comparison of these data with Global Positioning System and electronic distance measurement survey data yields a correlation of 0.96; thus the radar measurements are a means to extend the point measurements acquired by traditional techniques to an area map format. The technique we use is (1) more automatic, (2) more precise, and (3) better validated than previous similar applications of differential radar interferometry. Since we require only remotely sensed satellite data, with no additional requirements for ancillary information, the technique is well suited for global seismic monitoring and analysis.
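The phase-to-displacement conversion underlying such maps is simple. A sketch, assuming the ERS-1 C-band wavelength of about 5.66 cm and an illustrative sign convention (conventions vary by processor):

```python
import math

WAVELENGTH_M = 0.0566  # ERS-1 C-band radar wavelength, ~5.66 cm

def los_displacement(delta_phase_rad):
    """Line-of-sight displacement from differential interferometric phase.

    Because the radar path is two-way, each 2*pi fringe corresponds to
    lambda/2 of motion along the line of sight.
    """
    return WAVELENGTH_M * delta_phase_rad / (4.0 * math.pi)

print(round(los_displacement(2.0 * math.pi) * 100, 2))  # one fringe → 2.83 cm
```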
The detection of oral cancer using differential pathlength spectroscopy
NASA Astrophysics Data System (ADS)
Sterenborg, H. J. C. M.; Kanick, S.; de Visscher, S.; Witjes, M.; Amelink, A.
2010-02-01
The development of optical techniques for non-invasive diagnosis of cancer is an ongoing challenge to biomedical optics. For head and neck cancer we see two main fields of potential application: (1) screening for second primaries in patients with a history of oral cancer, which requires imaging techniques or an approach where a larger area can be scanned quickly; and (2) distinguishing potentially malignant visible primary lesions from benign ones, where fiberoptic point measurements can be used because the location of the lesion is known. This presentation will focus on point measurement techniques. Various techniques for point measurements have been developed and investigated clinically for different applications. Differential Pathlength Spectroscopy (DPS) is a recently developed fiberoptic point measurement technique that measures scattered light over a broad spectrum. Due to the specific fiberoptic geometry, we measure only scattered photons that have travelled a predetermined pathlength. This allows us to analyse the spectrum mathematically and translate the measured curve into a set of parameters that are related to the microvasculature and to the intracellular morphology. DPS has been extensively evaluated on optical phantoms and tested in various clinical applications. The first measurements in biopsy-proven squamous cell carcinoma showed significant changes in both vascular and morphological parameters. Measurements on thick keratinized lesions, however, failed to generate any vascular signatures. This is related to the sampling depth of the standard optical fibers used. Recently we developed a fiberoptic probe with a ~1 mm sampling depth. Measurements on several leukoplakias showed that with this new probe we sample just below the keratin layer and can obtain vascular signatures. The results of a first set of clinical measurements will be presented and the significance for clinical diagnostics will be discussed.
NASA Astrophysics Data System (ADS)
Nogueira, A. M.; Paço, T. A.; Silvestre, J. C.; Gonzalez, L. F.; Santos, F. L.; Pereira, L. S.
2012-04-01
The water footprint of a crop is the volume of water that is necessary to produce it, therefore relating crop water requirements and yield. The components of the water footprint (blue, green, and grey) refer to the volumes of, respectively, surface water and groundwater, rainfall, and the water required to assimilate pollution that are used to produce the crop yield. Determining blue and green water footprints is generally achieved using estimates of evapotranspiration obtained with a crop coefficient approach and of a water use ratio. In the present study we used evapotranspiration measurements to estimate the water footprint of a super-intensive olive grove in southern Portugal (cv. Arbequina, drip irrigated, 1975 trees ha-1) during 2011. Crop water use was measured using a heat dissipation sap flow technique to determine transpiration, and using the eddy covariance method, which allowed the direct measurement of evapotranspiration under non-flat terrain conditions. The eddy covariance technique was used for a short period, from the end of July till the end of August, while the sap flow measurements were performed from May to December, hence allowing the extension of the data series; for other periods estimates were used. Evapotranspiration measured directly with the eddy covariance method was on average close to 3 mm d-1, and the ratio of evapotranspiration to reference evapotranspiration approached 0.6 for the same period. Plants were under moderate water stress, as confirmed with predawn leaf water potential measurements. The water footprint of the olive crop under study was lower than the water footprint simulations reported in the literature. A possible reason relates to planting density, yield, and irrigation management. The irrigated olive grove under study had a high yield, which compensates for a high water consumption, leading to a water footprint lower than those of rainfed or less dense groves.
Furthermore, as evapotranspiration measurements were used to calculate water footprint instead of the common procedure (using evapotranspiration estimates), this might have also introduced some differences. The potential of using remote sensing techniques for the assessment of water footprint of crops has been discussed in recent literature. It can provide estimates of actual evapotranspiration, of precipitation, of surface runoff and of irrigation needs when associated with modelling. In this study we further compare the water footprint estimates using in situ evapotranspiration measurements and water footprint estimates using remote sensing techniques. A comparison with the irrigation records for this particular olive orchard will be used to validate the approaches.
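The evapotranspiration-to-water-footprint arithmetic is straightforward. A sketch with purely illustrative numbers (not the study's measured values):

```python
def water_footprint_m3_per_ton(et_mm, yield_t_per_ha):
    """Crop water footprint as ET volume per unit yield.

    1 mm of ET over 1 ha equals 10 m^3 of water,
    so WF (m^3/t) = 10 * ET_mm / yield (t/ha).
    """
    return 10.0 * et_mm / yield_t_per_ha

# Illustrative only: ~3 mm/day over a hypothetical 180-day season, 10 t/ha yield.
print(water_footprint_m3_per_ton(3.0 * 180, 10.0))  # → 540.0
```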
Application of polarization in high speed, high contrast inspection
NASA Astrophysics Data System (ADS)
Novak, Matthew J.
2017-08-01
Industrial optical inspection often requires high speed and high throughput of materials. Engineers use a variety of techniques to handle these inspection needs. Some examples include line scan cameras, high-speed multi-spectral systems, and laser-based systems. High-volume manufacturing presents different challenges for inspection engineers. For example, manufacturers produce some components in quantities of millions per month, per week, or even per day. Quality control of so many parts requires creativity to meet the measurement needs. At times, traditional vision systems lack the contrast to provide the data required. In this paper, we show how dynamic polarization imaging captures high-contrast images. These images are useful for engineers performing inspection tasks in cases where optical contrast is low. We will cover the basic theory of polarization. We show how to exploit polarization as a contrast enhancement technique. We also show results of modeling for a polarization inspection application. Specifically, we explore polarization techniques for inspection of adhesives on glass.
Laboratory for Atmospheres: Instrument Systems Report
NASA Technical Reports Server (NTRS)
2011-01-01
Studies of the atmospheres of our solar system's planets including our own require a comprehensive set of observations, relying on instruments on spacecraft, aircraft, balloons, and on the surface. Laboratory personnel define requirements, conceive concepts, and develop instrument systems for spaceflight missions, and for balloon, aircraft, and ground-based observations. Laboratory scientists also participate in the design of data processing algorithms, calibration techniques, and data processing systems. The instrument sections of this report are organized by measurement technique: lidar, passive, in situ and microwave. A number of instruments in various stages of development or modification are also described. This report will be updated as instruments evolve.
NuMI Beam Flux Studies for MINERvA
NASA Astrophysics Data System (ADS)
Aliaga Soplin, Leonidas
2012-03-01
MINERνA is a few-GeV neutrino scattering experiment that must understand the neutrino beam flux in order to make absolute cross-section measurements. We have three techniques for constraining the flux: in situ measurements, external hadron production data, and muon monitors. In this presentation I will discuss the details of and our progress on these efforts.
Total energy expenditure in burned children using the doubly labeled water technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goran, M.I.; Peters, E.J.; Herndon, D.N.
Total energy expenditure (TEE) was measured in 15 burned children with the doubly labeled water technique. Application of the technique in burned children required evaluation of potential errors resulting from nutritional intake altering background enrichments during studies and from the high rate of water turnover relative to CO2 production. Five studies were discarded because of these potential problems. TEE was 1.33 +/- 0.27 times predicted basal energy expenditure (BEE), and in studies where resting energy expenditure (REE) was simultaneously measured, TEE was 1.18 +/- 0.17 times REE, which in turn was 1.16 +/- 0.10 times predicted BEE. TEE was significantly correlated with measured REE (r2 = 0.92) but not with predicted BEE. These studies substantiate the advantage of measuring REE to predict TEE in severely burned patients, as opposed to relying on standardized equations. Therefore, we recommend that optimal nutritional support be achieved in convalescent burned children by multiplying REE by an activity factor of 1.2.
Xin, Xia; Wan, Yinglang; Wang, Wenjun; Yin, Guangkun; McLamore, Eric S; Lu, Xinxiong
2013-10-28
Quantifying seed viability is required for seed bank maintenance. The classical methods for detecting seed viability are time consuming and frequently cause seed damage and unwanted germination. We have established a novel micro-optrode technique (MOT) to measure seed viability in a quick and non-invasive manner by measuring the oxygen influxes of intact seeds, taking approximately 10 seconds to screen one seed. Here, we used soybean, wheat, and oilseed rape as models to test our method. After a 3-hour imbibition, oxygen influxes were recorded in real time, with the total measurement taking less than 5 minutes. The results indicated a significantly positive correlation between oxygen influx and viability in all 3 seed types. We also established a linear equation between oxygen influx and seed viability for each seed type. During measurements, seeds were kept in the early imbibition stage without germination. Thus, MOT is a reliable, quick, and low-cost seed viability detection technique.
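The per-species linear calibration described above amounts to an ordinary least-squares fit. A sketch with invented calibration numbers (not MOT data):

```python
import numpy as np

# Hypothetical calibration set: oxygen influx readings vs. germination-test
# viability (%); the values are invented to illustrate the linear fit.
influx = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
viability = np.array([20.0, 40.0, 60.0, 80.0, 100.0])

# Degree-1 least-squares fit gives the species-specific calibration line,
# which then converts a new influx reading into a viability estimate.
slope, intercept = np.polyfit(influx, viability, 1)

def predict_viability(flux):
    return slope * flux + intercept

print(round(predict_viability(12.5), 1))  # → 50.0
```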
NASA Astrophysics Data System (ADS)
Dusanter, S.; Lew, M.; Bottorff, B.; Bechara, J.; Mielke, L. H.; Berke, A.; Raff, J. D.; Stevens, P. S.; Afif, C.
2013-12-01
A good understanding of the oxidative capacity of the atmosphere is important to tackle fundamental issues related to climate change and air quality. The hydroxyl radical (OH) is the dominant oxidant in the daytime troposphere and an accurate description of its sources in atmospheric models is of utmost importance. Recent field studies indicate higher-than-expected concentrations of HONO during the daytime, suggesting that the photolysis of HONO may be an important underestimated source of OH. Understanding the tropospheric HONO budget requires confidence in analytical instrumentation capable of selectively measuring HONO. In this presentation, we discuss an intercomparison study of HONO measurements performed during summer 2013 at the edge of a hardwood forest in Southern Indiana. This exercise involved a wet chemical technique (NITROMAC), an Incoherent Broad-Band Cavity Enhanced Absorption Spectroscopy instrument (IBBCEAS), and a Laser-Photofragmentation/Fluorescence Assay by Gas Expansion instrument (LP/FAGE). The agreement observed between the three techniques will be discussed for both ambient measurements and cross calibration experiments.
Non-contact method for characterization of small size thermoelectric modules.
Manno, Michael; Yang, Bao; Bar-Cohen, Avram
2015-08-01
Conventional techniques for characterization of thermoelectric performance require bringing measurement equipment into direct contact with the thermoelectric device, which is increasingly error prone as device size decreases. Therefore, the novel work presented here describes a non-contact technique capable of accurately measuring the maximum ΔT and maximum heat pumping of mini to micro sized thin film thermoelectric coolers. The non-contact characterization method eliminates the measurement errors associated with using thermocouples and traditional heat flux sensors to test small samples and large heat fluxes. In the non-contact approach, an infrared camera, rather than thermocouples, measures the temperature of the hot and cold sides of the device to determine the device ΔT, and a laser is used to heat the cold side of the thermoelectric module to characterize its heat pumping capacity. As a demonstration of the general applicability of the non-contact characterization technique, testing of a thin film thermoelectric module is presented and the results agree well with those published in the literature.
NASA Astrophysics Data System (ADS)
Mann, J. L.; Kelly, W. R.
2006-05-01
A new analytical technique for the determination of δ34S will be described. The technique is based on the production of singly charged arsenic sulfide molecular ions (AsS+) by thermal ionization using silica gel as an emitter and combines multiple-collector thermal ionization mass spectrometry (MC-TIMS) with a 33S/36S double spike to correct instrumental fractionation. Because the double spike is added to the sample before chemical processing, both the isotopic composition and sulfur concentration are measured simultaneously. The accuracy and precision of the double spike technique are comparable to or better than modern gas source mass spectrometry, but it requires about a factor of 10 less sample. Δ33S effects can be determined directly in an unspiked sample without any assumptions about the value of k (the mass-dependent fractionation factor), which is currently required by gas source mass spectrometry. Three international sulfur standards (IAEA-S-1, IAEA-S-2, and IAEA-S-3) were measured to evaluate the precision and accuracy of the new technique and to evaluate the consensus values for these standards. Two different double spike preparations were used. The δ34S values, reported relative to Vienna Canyon Diablo Troilite (VCDT), where δ34S (‰) = [(34S/32S)sample/(34S/32S)VCDT − 1] × 1000 and (34S/32S)VCDT = 0.0441626, were -0.32‰ ± 0.04‰ (1σ, n=4) and -0.31‰ ± 0.13‰ (1σ, n=8) for IAEA-S-1, 22.65‰ ± 0.04‰ (1σ, n=7) and 22.60‰ ± 0.06‰ (1σ, n=5) for IAEA-S-2, and -32.47‰ ± 0.07‰ (1σ, n=8) for IAEA-S-3. The amount of natural sample used for these analyses ranged from 0.40 μmoles to 2.35 μmoles. Each standard showed less than 0.5‰ variability (IAEA-S-1 < 0.4‰, IAEA-S-2 < 0.2‰, and IAEA-S-3 < 0.2‰). Our values for S-1 and S-2 are in excellent agreement with the consensus values and the values reported by other laboratories using both SF6 and SO2.
Our value for S-3 differs statistically from the Institute for Reference Materials and Measurements (IRMM) value and is slightly lower than the currently accepted consensus value (-32.3‰). Because the technique is based on thermal ionization of AsS+, and As is mononuclidic, corrections for interferences or for scale contraction/expansion are not required. The availability of MC-TIMS instruments in laboratories around the world makes this technique immediately available to a much larger scientific community that requires highly accurate and precise measurements of sulfur.
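The delta notation reduces to a one-line computation. The VCDT ratio is taken from the abstract; the function name is an illustrative choice:

```python
R_VCDT = 0.0441626  # (34S/32S) of Vienna Canyon Diablo Troilite, per the abstract

def delta34S(r_sample):
    """delta-34S in per mil: ((R_sample / R_VCDT) - 1) * 1000."""
    return (r_sample / R_VCDT - 1.0) * 1000.0
```

A sample with exactly the VCDT ratio gives 0‰; a ratio 0.1% higher gives +1‰.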
Low frequency radio synthesis imaging of the galactic center region
NASA Astrophysics Data System (ADS)
Nord, Michael Evans
2005-11-01
The Very Large Array radio interferometer has been equipped with new receivers to allow observations at 330 and 74 MHz, frequencies much lower than were previously possible with this instrument. Though the VLA dishes are not optimal for working at these frequencies, the system is successful and regular observations are now taken at these frequencies. However, new data analysis techniques are required to work at these frequencies. The technique of self-calibration, used to remove small atmospheric effects at higher frequencies, has been adapted to compensate for ionospheric turbulence in much the same way that adaptive optics is used in the optical regime. Faceted imaging techniques are required to compensate for the noncoplanar image distortion that affects the system due to the wide fields of view at these frequencies (~2.3° at 330 MHz and ~11° at 74 MHz). Furthermore, radio frequency interference is a much larger problem at these frequencies than at higher frequencies, and novel approaches to its mitigation are required. These new techniques and this new system allow imaging of the radio sky at sensitivities and resolutions orders of magnitude better than were possible with the low frequency systems of decades past. In this work I discuss the advancements in low frequency data techniques required to make high resolution, high sensitivity, large field of view measurements with the new Very Large Array low frequency system, and then detail the results of turning this new system and these techniques on the center of our Milky Way Galaxy. At 330 MHz I image the Galactic center region with roughly 10″ (arcsecond) resolution and 1.6 mJy beam⁻¹ sensitivity. The results include new Galactic center nonthermal filaments, new pulsar candidates, and the lowest frequency detection to date of the radio source associated with our Galaxy's central massive black hole. At 74 MHz I image a region of the sky roughly 40° x 6° with ~10′ (arcminute) resolution.
I use the high opacity of H II regions at 74 MHz to extract three-dimensional data on the distribution of Galactic cosmic ray emissivity, a measurement possible only at low radio frequencies.
Picosecond time-resolved photoluminescence using picosecond excitation correlation spectroscopy
NASA Astrophysics Data System (ADS)
Johnson, M. B.; McGill, T. C.; Hunter, A. T.
1988-03-01
We present a study of the temporal decay of photoluminescence (PL) as detected by picosecond excitation correlation spectroscopy (PECS). We analyze the correlation signal obtained from two simple models: one in which radiative recombination dominates, the other in which trapping processes dominate. It is found that radiative recombination alone does not lead to a correlation signal; parallel trapping-type processes are found to be required to produce a signal. To illustrate this technique, we examine the temporal decay of the PL signal for In-alloyed, semi-insulating GaAs substrates. We find that the PL signal indicates a carrier lifetime of roughly 100 ps for excitation densities of 1×10¹⁶-5×10¹⁷ cm⁻³. PECS is shown to be an easy technique for measuring the ultrafast temporal behavior of PL processes because it requires no ultrafast photon detection. It is particularly well suited to measuring carrier lifetimes.
Laminography using resonant neutron attenuation for detection of drugs and explosives
NASA Astrophysics Data System (ADS)
Loveman, R. A.; Feinstein, R. L.; Bendahan, J.; Gozani, T.; Shea, P.
1997-02-01
Resonant neutron attenuation has been shown to be usable for assaying the elements which constitute explosives, cocaine, and heroin. By careful analysis of attenuation measurements, the presence or absence of explosives can be determined. Simple two dimensional radiographic techniques only give results for areal density and consequently will be limited in their effectiveness. Classical tomographic techniques are both computationally very intensive and place strict requirements on the quality and amount of data acquired. These requirements and computations take time and are likely to be very difficult to perform in real time. Simulation studies described in this article have shown that laminographic image reconstruction can be used effectively with resonant neutron attenuation measurements to interrogate luggage for explosives or drugs. The system design described in this article is capable of pseudo-three dimensional image reconstruction of all of the elemental densities pertinent to explosive and drug detection.
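Laminographic reconstruction is, at its core, a shift-and-add operation: each projection is shifted so the chosen focal plane registers across views, then the views are averaged, so in-plane features reinforce while out-of-plane features blur away. A 1-D sketch with integer shifts and zero fill at the edges, not the article's actual reconstruction code:

```python
def laminographic_reconstruction(projections, shifts):
    """Shift-and-add laminography (1-D sketch): shift each projection
    by the amount that registers the focal plane, then average.
    `projections` is a list of equal-length lists; `shifts` gives the
    integer shift per projection."""
    n = len(projections[0])
    out = [0.0] * n
    for proj, s in zip(projections, shifts):
        for i in range(n):
            j = i + s
            if 0 <= j < n:  # zero fill outside the projection
                out[i] += proj[j]
    return [v / len(projections) for v in out]
```

Changing the shift pattern selects a different focal plane, which is how a pseudo-three-dimensional stack of slices can be built from the same projection data.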
Tail-Cuff Technique and Its Influence on Central Blood Pressure in the Mouse.
Wilde, Elena; Aubdool, Aisah A; Thakore, Pratish; Baldissera, Lineu; Alawi, Khadija M; Keeble, Julie; Nandi, Manasi; Brain, Susan D
2017-06-27
Reliable measurement of blood pressure in conscious mice is essential in cardiovascular research. Telemetry, the "gold-standard" technique, is invasive and expensive, and therefore tail-cuff, a noninvasive alternative, is widely used. However, tail-cuff requires handling and restraint during measurement, which may cause stress affecting blood pressure and undermining reliability of the results. C57Bl/6J mice were implanted with radio-telemetry probes to investigate the effects of the steps of the tail-cuff technique on central blood pressure, heart rate, and temperature. This included comparison of handling techniques, operator's sex, habituation, and influence of hypertension induced by angiotensin II. Direct comparisons of measurements obtained by telemetry and tail-cuff were made in the same mouse. The results revealed significant increases in central blood pressure, heart rate, and core body temperature from baseline following handling interventions, without significant difference among the different handling techniques, habituation, or sex of the investigator. Restraint induced the largest and most sustained increase in cardiovascular parameters and temperature. The tail-cuff readings significantly underestimated those from simultaneous telemetry recordings; however, "nonsimultaneous" telemetry readings, obtained in undisturbed mice, were similar to tail-cuff readings obtained in undisturbed mice on the same day. This study reveals that the tail-cuff technique underestimates the central blood pressure changes that occur simultaneously during the restraint and measurement phases. However, the measurements between the 2 techniques are similar when tail-cuff readings are compared with telemetry readings in undisturbed mice. The differences between simultaneous recordings by the 2 techniques should be recognized by researchers. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
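A minimal way to quantify the underestimation reported above is the mean paired difference between the two techniques (the bias term of a Bland-Altman comparison). The data values here are hypothetical:

```python
def paired_bias(method_a, method_b):
    """Mean paired difference (a - b); a negative value means method A
    reads lower than method B, as tail-cuff did versus simultaneous
    telemetry."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    return sum(diffs) / len(diffs)
```

Computed once for simultaneous readings and once for nonsimultaneous readings in undisturbed mice, the two biases summarize the study's central finding.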
Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald
2016-01-01
Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 mm³) nor model-based (26.87 ± 2.99 mm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 mm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.
NASA Technical Reports Server (NTRS)
Beattie, J. R.
1983-01-01
An investigation of short term measurement techniques for predicting the wearout of ion thrusters resulting from sputter erosion damage is described. The previously established laminar thin film technique was used to provide high-precision erosion rate data. However, the erosion rates obtained using this technique are generally substantially higher than those obtained during long term endurance tests (by virtue of the as-deposited nature of the thin films), so that the results must be interpreted in a relative sense. Absolute measurements can be performed using a new masked substrate arrangement which was developed during this study. This new technique provides a means for estimating the lifetimes of critical discharge chamber components based on direct measurements of sputter erosion depths obtained during short duration (10 hour) tests. The method enables the effects on lifetime of thruster design and operating parameters to be inferred without the investment of the time and capital required to conduct long term (1000 hour) endurance tests. Results obtained using the direct measurement technique are shown to agree with sputter erosion depths calculated for the plasma conditions of the test and also with life test results. The direct measurement approach is shown to be applicable to both mercury and argon discharge plasma environments and should be useful in estimating the lifetimes of inert gas and extended performance mercury ion thrusters presently under development.
Smith, Winchell
1971-01-01
Current-meter measurements of high accuracy will be required for calibration of an acoustic flow-metering system proposed for installation in the Sacramento River at Chipps Island in California. This report presents an analysis of the problem of making continuous accurate current-meter measurements in this channel where the flow regime is changing constantly in response to tidal action. Gaging-system requirements are delineated, and a brief description is given of the several applicable techniques that have been developed by others. None of these techniques provides the accuracies required for the flowmeter calibration. A new system is described--one which has been assembled and tested in prototype and which will provide the matrix of data needed for accurate continuous current-meter measurements. Analysis of a large quantity of data on the velocity distribution in the channel of the Sacramento River at Chipps Island shows that adequate definition of the velocity can be made during the dominant flow periods--that is, at times other than slack-water periods--by use of current meters suspended at elevations 0.2 and 0.8 of the depth below the water surface. However, additional velocity surveys will be necessary to determine whether or not small systematic corrections need be applied during periods of rapidly changing flow. In the proposed system all gaged parameters, including velocities, depths, position in the stream, and related times, are monitored continuously as a boat moves across the river on the selected cross section. Data are recorded photographically and transferred later onto punchcards for computer processing. Computer programs have been written to permit computation of instantaneous discharges at any selected time interval throughout the period of the current meter measurement program. It is anticipated that current-meter traverses will be made at intervals of about one-half hour over periods of several days. 
Capability of performance for protracted periods was, consequently, one of the important elements in system design. Analysis of error sources in the proposed system indicates that errors in individual computed discharges can be kept smaller than 1.5 percent if the expected precision in all measured parameters is maintained.
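The 0.2/0.8-depth sampling described above is the standard two-point method; combined with a midsection-style summation over subsections it yields the discharge. A minimal sketch, assuming velocities in m/s and depths and widths in m:

```python
def two_point_mean_velocity(v02, v08):
    """Two-point method: the mean velocity in a vertical is the average
    of readings at 0.2 and 0.8 of the depth below the water surface."""
    return 0.5 * (v02 + v08)

def discharge(subsections):
    """Total discharge Q = sum of (mean velocity * depth * width) over
    the subsections of the measured cross section; units are m^3/s
    under the assumptions above."""
    return sum(v * d * w for v, d, w in subsections)
```

In the proposed system each half-hourly traverse would supply one such set of subsection values, giving an instantaneous discharge per traverse.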
Beaufrère, Hugues; Pariaut, Romain; Rodriguez, Daniel; Nevarez, Javier G; Tully, Thomas N
2012-10-01
To assess the agreement and reliability of cardiac measurements obtained with 3 echocardiographic techniques in anesthetized red-tailed hawks (Buteo jamaicensis). 10 red-tailed hawks. Transcoelomic, contrast transcoelomic, and transesophageal echocardiographic evaluations of the hawks were performed, and cineloops of imaging planes were recorded. Three observers performed echocardiographic measurements of cardiac variables 3 times on 3 days. The order in which hawks were assessed and echocardiographic techniques were used was randomized. Results were analyzed with linear mixed modeling, agreement was assessed with intraclass correlation coefficients, and variation was estimated with coefficients of variation. Significant differences were evident among the 3 echocardiographic methods for most measurements, and the agreement among findings was generally low. Interobserver agreement was generally low to medium. Intraobserver agreement was generally medium to high. Overall, better agreement was achieved for the left ventricular measurements and for the transesophageal approach than for other measurements and techniques. Echocardiographic measurements in hawks were not reliable, except when the left ventricle was measured by the same observer. Furthermore, cardiac morphometric measurements may not be clinically important. When measurements are required, one needs to consider that follow-up measurements should be performed by the same echocardiographer and should show at least a 20% difference from initial measurements to be confident that any difference is genuine.
Noncontact Temperature Measurement
NASA Technical Reports Server (NTRS)
Lee, Mark C. (Editor)
1988-01-01
Noncontact temperature measurement has been identified as one of the eight advanced technology development (ATD) areas to support the effort of the Microgravity Science and Applications Division in developing six Space Station flight experiment facilities. This two-day workshop was an opportunity for all six disciplines to present their requirements on noncontact temperature measurement and to discuss state-of-the-art developments. Multi-color pyrometry, laser pyrometry and radiometric imaging techniques are addressed.
Flash X-ray with image enhancement applied to combustion events
NASA Astrophysics Data System (ADS)
White, K. J.; McCoy, D. G.
1983-10-01
Flow visualization of interior ballistic processes by use of X-rays has placed more stringent requirements on flash X-ray techniques. The problem of improving radiographic contrast of propellants in X-ray transparent chambers was studied by devising techniques for evaluating, measuring, and reducing the effects of scattering from both the test object and structures in the test area. X-ray film and processing are reviewed, and techniques for evaluating and calibrating these are outlined. Finally, after the X-ray techniques were optimized, the application of image enhancement processing, which can improve image quality, is described. This technique was applied to X-ray studies of the combustion of very high burning rate (VHBR) propellants and stick propellant charges.
Methods of blood flow measurement in the arterial circulatory system.
Tabrizchi, R; Pugsley, M K
2000-01-01
The most commonly employed techniques for the in vivo measurement of arterial blood flow to individual organs involve the use of flow probes or sensors. Commercially available systems for the measurement of in vivo blood flow can be divided into two categories: ultrasonic and electromagnetic. Two types of ultrasonic probes are used. The first type measures blood flow-mediated Doppler shifts in a vessel (Doppler flowmetry). The second type measures the "transit time" required by an emitted ultrasound wave to traverse the vessel (transit-time volume flow sensors). Measurement of blood flow in any vessel requires that the flow probe or sensor be highly accurate and exhibit signal linearity over the flow range in the vessel of interest. Moreover, additional desirable features include compact design, size, and weight. Another important feature for flow probes is good biocompatibility; it is imperative for the sensor to behave in an inert manner towards the biological system. A sensitive and reliable method to assess blood flow in individual organs, other than by the use of probes/sensors, is the reference sample method, which utilizes hematogenously delivered microspheres. This method has been used to a large extent to assess regional blood flow in the entire body. The purpose of measuring blood flow is to determine the amount of blood delivered to a given region per unit time (milliliters per minute), and it is desirable to achieve this goal by noninvasive methodologies. This, however, is not always possible. This review offers an overview of some of the techniques available for the assessment of regional blood flow in the arterial circulatory system and discusses the advantages and disadvantages of these common techniques.
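For the Doppler flowmetry probes described above, velocity follows from the measured frequency shift via the standard Doppler equation. A sketch under common assumptions (1540 m/s speed of sound in tissue, a known beam-to-vessel angle); parameter names are illustrative:

```python
import math

def doppler_velocity(delta_f_hz, f0_hz, c_m_s=1540.0, theta_deg=60.0):
    """Blood velocity (m/s) from the Doppler shift:
        v = c * delta_f / (2 * f0 * cos(theta))
    where f0 is the transmitted frequency and theta the beam-to-vessel
    angle. c defaults to 1540 m/s, a commonly assumed tissue value."""
    return c_m_s * delta_f_hz / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))
```

Note the cos(theta) term: angle error is a dominant uncertainty in Doppler flowmetry, which is one reason transit-time sensors are attractive for absolute volume flow.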
Data selection techniques in the interpretation of MAGSAT data over Australia
NASA Technical Reports Server (NTRS)
Johnson, B. D.; Dampney, C. N. G.
1983-01-01
The MAGSAT data require critical selection in order to produce a self-consistent data set suitable for map construction and subsequent interpretation. Interactive data selection techniques are described which involve the use of a special-purpose profile-oriented data base and a colour graphics display. The careful application of these data selection techniques permits validation of every data value and ensures that the best possible self-consistent data set is being used to construct the maps of the magnetic field measured at satellite altitudes over Australia.
Non-dynamic decimeter tracking of earth satellites using the Global Positioning System
NASA Technical Reports Server (NTRS)
Yunck, T. P.; Wu, S. C.
1986-01-01
A technique is described for employing the Global Positioning System (GPS) to determine the position of a low earth orbiter with decimeter accuracy without the need for user dynamic models. A differential observing strategy is used requiring a GPS receiver on the user vehicle and a network of six ground receivers. The technique uses the continuous record of position change obtained from GPS carrier phase to smooth position measurements made with pseudo-range. The result is a computationally efficient technique that can deliver decimeter accuracy down to the lowest altitude orbits.
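The carrier-phase smoothing idea above can be sketched as a Hatch-style filter: propagate the previous smoothed range by the carrier-phase change, then blend in the new pseudorange measurement. This is an illustrative sketch, not the paper's algorithm; a real implementation must also handle cycle slips and ionospheric divergence:

```python
def hatch_filter(pseudoranges, carrier_phases, window=100):
    """Carrier-smoothed pseudorange. At each epoch the last smoothed
    value is propagated by the carrier-phase delta (very precise change
    information), then averaged with the noisy raw pseudorange using a
    growing weight capped at `window` epochs."""
    smoothed = [pseudoranges[0]]
    n = 1
    for k in range(1, len(pseudoranges)):
        n = min(n + 1, window)
        predicted = smoothed[-1] + (carrier_phases[k] - carrier_phases[k - 1])
        smoothed.append(predicted + (pseudoranges[k] - predicted) / n)
    return smoothed
```

When the carrier phase tracks the true range change, pseudorange noise is averaged down without the filter lagging behind real motion, which is the property that makes the technique useful for low-altitude orbiters without dynamic models.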
Koch, Christian
2010-05-01
A technique for the calibration of photodiodes in ultrasonic measurement systems using standard, cost-effective optical and electronic components is presented. A heterodyne system was realized using two commercially available distributed feedback lasers, and the required frequency stability and resolution were ensured by a difference-frequency servo control scheme. The frequency-sensitive element generating the error signal for the servo loop comprised a delay-line discriminator constructed from electronic elements. Measurements were carried out at up to 450 MHz, and the uncertainties of about 5% (k = 2) can be further reduced by improved radio frequency power measurement without losing the feature of using only simple elements. The technique, initially dedicated to determining the frequency response of photodetectors used in ultrasonic applications, can be transferred to other fields of optical measurement.
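The frequency-sensitive behavior of a delay-line discriminator can be modeled very simply: mixing a tone with a copy of itself delayed by tau yields a DC term proportional to cos(2·pi·f·tau), and the slope near a zero crossing supplies the servo error signal. An idealized model for intuition, not the paper's actual circuit:

```python
import math

def discriminator_output(f_hz, tau_s):
    """Idealized delay-line discriminator: normalized DC output of a
    mixer fed with a tone and its tau-delayed copy, proportional to
    cos(2*pi*f*tau). Zero crossings occur at f = (2k+1)/(4*tau)."""
    return math.cos(2.0 * math.pi * f_hz * tau_s)
```

Locking the servo to a zero crossing holds the difference frequency of the two lasers at a value set purely by the delay line, which is how standard electronic parts provide the required frequency stability.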
NASA Technical Reports Server (NTRS)
Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.
1984-01-01
A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Center for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.
Biosensor-based microRNA detection: techniques, design, performance, and challenges.
Johnson, Blake N; Mutharasan, Raj
2014-04-07
The current state of biosensor-based techniques for amplification-free microRNA (miRNA) detection is critically reviewed. Comparison with non-sensor and amplification-based molecular techniques (MTs), such as polymerase-based methods, is made in terms of transduction mechanism, associated protocol, and sensitivity. Challenges associated with miRNA hybridization thermodynamics which affect assay selectivity and amplification bias are briefly discussed. Electrochemical, electromechanical, and optical classes of miRNA biosensors are reviewed in terms of transduction mechanism, limit of detection (LOD), time-to-results (TTR), multiplexing potential, and measurement robustness. Current trends suggest that biosensor-based techniques (BTs) for miRNA assay will complement MTs due to the advantages of amplification-free detection, LOD being femtomolar (fM)-attomolar (aM), short TTR, multiplexing capability, and minimal sample preparation requirement. Areas of future importance in miRNA BT development are presented which include focus on achieving high measurement confidence and multiplexing capabilities.
From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities
NASA Astrophysics Data System (ADS)
Kunjwal, Ravi; Spekkens, Robert W.
2018-05-01
The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It significantly extends previous techniques that worked only for logical proofs, which are based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment, to the case of statistical proofs, which are based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics.
NASA Technical Reports Server (NTRS)
Zell, P. T.; Hoffmann, J.; Sandlin, D. R.
1985-01-01
A study was performed in order to develop the criteria for the selection of flow direction indicators for use in the Integrated Systems Tests (ISTs) of the 40 by 80/80 by 120 Foot Wind Tunnel System. The problems, requirements, and limitations of flow direction measurement in the wind tunnel were investigated. The locations and types of flow direction measurements planned in the facility were discussed. A review of current methods of flow direction measurement was made and the most suitable technique for each location was chosen. A flow direction vane that employs a Hall Effect Transducer was then developed and evaluated for application during the ISTs.
Measurement Techniques for Hypervelocity Impact Test Fragments
NASA Technical Reports Server (NTRS)
Hill, Nicole E.
2008-01-01
The ability to classify the size and shape of individual orbital debris fragments provides a better understanding of the orbital debris environment as a whole. The characterization of breakup fragmentation debris has gradually evolved from a simplistic, spherical assumption towards that of describing debris in terms of size, material, and shape parameters. One of the goals of the NASA Orbital Debris Program Office is to develop high-accuracy techniques to measure these parameters and apply them to orbital debris observations. Measurement of the physical characteristics of debris resulting from ground-based, hypervelocity impact testing provides insight into the shapes and sizes of debris produced from potential impacts in orbit. Current techniques for measuring these ground-test fragments require determination of dimensions based upon visual judgment. This leads to reduced accuracy and provides little or no repeatability for the measurements. With the common goal of mitigating these error sources, allaying any misunderstandings, and moving forward in fragment shape determination, the NASA Orbital Debris Program Office recently began using a computerized measurement system. The goal of using these new techniques is to improve knowledge of the relation between commonly used dimensions and overall shape. The immediate objective is to scan a single fragment, measure its size and shape properties, and import the fragment into a program that renders a 3D model that adequately demonstrates how the object could appear in orbit. This information would then be used to aid optical methods in orbital debris shape determination. This paper provides a description of the measurement techniques used in this initiative and shows results of this work. The tradeoffs of the computerized methods are discussed, as well as the means of repeatability in the measurements of these fragments.
This paper serves as a general description of methods for the measurement and shape analysis of orbital debris.
[Current radionuclear methods in the diagnosis of regional myocardial circulation disorders].
Felix, R; Winkler, C
1977-01-29
Among nuclear medical diagnostic procedures, a distinction can be made between non-invasive and invasive methods. The non-invasive methods serve either to image the still-viable myocardium ("cold spot" technique) or to directly visualize recently infarcted myocardial tissue ("hot spot" technique). These methods have the advantage of simple handling and good reproducibility; side effects and risks are thus far unknown. Improved spatial resolution should be aimed at in the future and would greatly increase diagnostic and topographic certainty. The invasive procedures always require catheterization of the coronary arteries, which is why they can be performed only together with coronary arteriography. The xenon "wash-out" technique permits, with some restrictions, quantitative measurement of the regional flow rate. The "inflow" technique permits determination of the perfusion distribution. The possibilities of the "double-radionuclide" scintigram are discussed. For measurement of activity distribution, stationary detectors are generally preferred. For the time-activity curves of the xenon "wash-out" technique, single detectors offer certain advantages.
Measurement of bronchial blood flow in the sheep by video dilution technique.
Link, D P; Parsons, G H; Lantz, B M; Gunther, R A; Green, J F; Cross, C E
1985-01-01
Bronchial blood flow was determined in five adult anaesthetised sheep by the video dilution technique. This is a new fluoroscopic technique for measuring blood flow that requires only arterial catheterisation. Catheters were placed into the broncho-oesophageal artery and ascending aorta from the femoral arteries for contrast injections and subsequent videotape recording. The technique yields bronchial blood flow as a percentage of cardiac output. The average bronchial artery blood flow was 0.6% (SD 0.20%) of cardiac output. In one sheep histamine (90 micrograms) injected directly into the bronchial artery increased bronchial blood flow by a factor of 6, and histamine (90 micrograms) plus methacholine (4.5 micrograms) augmented flow by a factor of 7.5 while leaving cardiac output unchanged. This study confirms the high degree of reactivity of the bronchial circulation and demonstrates the feasibility of using the video dilution technique to investigate the determinants of total bronchial artery blood flow in a stable animal model avoiding thoracotomy. PMID:3883564
Diffraction based overlay re-assessed
NASA Astrophysics Data System (ADS)
Leray, Philippe; Laidler, David; D'havé, Koen; Cheng, Shaunee
2011-03-01
In recent years, numerous authors have reported the advantages of Diffraction Based Overlay (DBO) over Image Based Overlay (IBO), mainly by comparison of metrology figures of merit such as TIS and TMU. Some have even gone as far as to say that DBO is the only viable overlay metrology technique for advanced technology nodes (22 nm and beyond). Typically the only reported drawback of DBO is the size of the required targets. This severely limits its effective use when all critical layers of a product, including double-patterned layers, need to be measured and in-die overlay measurements are required. In this paper we ask whether target size is the only limitation to the adoption of DBO for overlay characterization and control, or whether there are other metrics that need to be considered, such as overlay accuracy with respect to scanner baseline or on-product process overlay control. In this work, we critically re-assess the strengths and weaknesses of DBO for the applications of scanner baseline and on-product process layer overlay control. A comprehensive comparison is made to IBO. For on-product process layer control we compare the performance on critical process layers: Gate, Contact, and Metal. In particular, we focus on the response of the scanner to the corrections determined by each metrology technique for each process layer, as a measure of accuracy. Our results show that characterizing an overlay metrology technique suitable for use in advanced technology nodes requires much more than just evaluating the conventional metrology metrics of TIS and TMU.
Flynn, Richard A; Shao, Bing; Chachisvilis, Mirianas; Ozkan, Mihrimah; Esener, Sadik C
2006-01-15
We propose and demonstrate a novel approach to measure the size and refractive index of microparticles based on two-beam optical trapping, in which forward-scattered light is detected to give information about the particle. The counter-propagating optical trap measurement (COTM) system exploits the capability of optical traps to measure pico-Newton forces for characterizing microparticle refractive index and size. Unlike refractometry, the current best technique for microparticle refractive index measurement, which is a bulk technique requiring changes to the fluid composition of the sample, our optical trap technique works with any transparent fluid and enables single-particle analysis without the use of biological markers. A ray-optics model is used to explore the physical operation of the COTM system, predict system performance, and aid system design. Experiments demonstrate a refractive index measurement accuracy of Δn = 0.013 and a size measurement accuracy of 3% of diameter with 2% standard deviation. Present performance is instrumentation-limited, and a potential improvement by more than two orders of magnitude can be expected in the future. With further development in parallelism and miniaturization, the system offers advantages for cell manipulation and bioanalysis compatible with lab-on-a-chip systems.
Detection of microbial concentration in ice-cream using the impedance technique.
Grossi, M; Lanzoni, M; Pompei, A; Lazzarini, R; Matteuzzi, D; Riccò, B
2008-06-15
The detection of microbial concentration, essential for safe and high-quality food products, is traditionally performed with the plate count technique, which is reliable but slow and not easily automated, as required for direct use in industrial machines. To this purpose, the method based on impedance measurements represents an attractive alternative, since it can produce results in about 10 h, instead of the 24-48 h needed by standard plate counts, and can easily be realized in automatic form. In this paper such a method has been experimentally studied in the case of ice-cream products. In particular, all main ice-cream compositions of real interest have been considered and no nutrient medium was used to dilute the samples. A measurement set-up was realized using benchtop instruments for impedance measurements on samples whose bacterial concentration was independently measured by means of standard plate counts. The obtained results clearly indicate that impedance measurement represents a feasible and reliable technique to detect total microbial concentration in ice-cream, suitable for implementation as an embedded system in industrial machines.
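Impedance methods of this kind typically infer concentration from a detection time: the moment the impedance signal begins to change measurably as growing bacteria alter the medium's conductivity. A minimal sketch of the usual calibration step (the function names and the linear model are illustrative assumptions, not taken from the paper) regresses log10 of the initial count against detection time:

```python
def fit_detection_time_calibration(log_counts, detection_times_h):
    """Least-squares fit of log10(CFU/mL) = a + b * detection_time (hours)."""
    n = len(log_counts)
    mean_t = sum(detection_times_h) / n
    mean_c = sum(log_counts) / n
    cov = sum((t - mean_t) * (c - mean_c)
              for t, c in zip(detection_times_h, log_counts))
    var = sum((t - mean_t) ** 2 for t in detection_times_h)
    b = cov / var
    a = mean_c - b * mean_t
    return a, b

def estimate_log_count(a, b, detection_time_h):
    """Predict log10 concentration of an unknown sample from its detection time."""
    return a + b * detection_time_h
```

Once the calibration line is established against plate counts, an unknown sample's concentration follows from its detection time alone, which is what makes the method attractive for embedded use.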
NASA Astrophysics Data System (ADS)
Bell, S. A.; Miao, P.; Carroll, P. A.
2018-04-01
Evolved vapor coulometry is a measurement technique that selectively detects water and is used to measure the water content of materials. The basis of the measurement is the quantitative electrolysis of evaporated water entrained in a carrier gas stream. Although the measurement rests on a fundamental principle (Faraday's law, which directly relates electrolysis current to the amount of substance electrolyzed), in practice it requires calibration. Commonly, reference materials of known water content are used, but the variety of these is limited, and they are not always available for suitable values or materials, with SI traceability, or with well-characterized uncertainty. In this paper, we report the development of an alternative calibration approach that uses as a reference the water content of humid gas of defined dew point, traceable to the SI via national humidity standards. The increased information available through this new type of calibration reveals a variation in instrument performance across its range that is not visible using the conventional approach. The significance of this is discussed along with details of the calibration technique, example results, and an uncertainty evaluation.
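The Faraday's-law principle underlying the method can be sketched numerically. Assuming two electrons are consumed per water molecule electrolyzed (the standard assumption for coulometric water sensors; the function name is illustrative):

```python
FARADAY = 96485.332   # C/mol, Faraday constant
M_WATER = 18.015      # g/mol, molar mass of water
N_ELECTRONS = 2       # electrons transferred per water molecule electrolyzed

def water_mass_ug(current_a, duration_s):
    """Micrograms of water inferred from integrated electrolysis current.

    charge = I * t, moles = charge / (n * F), mass = moles * M (Faraday's law).
    """
    charge_c = current_a * duration_s
    moles = charge_c / (N_ELECTRONS * FARADAY)
    return moles * M_WATER * 1e6
```

For example, a steady 1 mA electrolysis current sustained for 100 s corresponds to roughly 9.3 micrograms of water; in a real instrument the current is integrated over the whole evolution transient, which is why calibration against a reference is still needed in practice.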
NASA Astrophysics Data System (ADS)
Chen, Du-Xing; Pardo, Enric; Zhu, Yong-Hong; Xiang, Li-Xiong; Ding, Jia-Quan
2018-03-01
A technique is proposed for demagnetizing correction of the measured magnetization curve and hysteresis loop, i.e., the M*(Ha) curve, of a ferromagnetic cylinder into the true M(H) curve of the material, where Ha is the uniform applied field provided by a long solenoid and M* is the magnetization measured by a fluxmeter with the measuring coil surrounding the cylinder midplane. Unlike ordinary demagnetizing correction using a fixed demagnetizing factor, this technique uses an (Ha, M*)-dependent fluxmetric demagnetizing factor Nf(γ, χd), where γ is the ratio of cylinder length to diameter, χd is the differential susceptibility on the corrected M(H) curve, and Nf(γ, χd) is approximated by the accurately calculated Nf(γ, χ) of paramagnetic cylinders with the same γ and χ = χd. The validity of the technique is studied by comparing results for several samples of different lengths cut from the same cylinder. Such a demagnetizing correction is unambiguous, but its success requires very high accuracy in the Nf determination and in the M*(Ha) measurements.
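A minimal numerical sketch of such a correction, assuming a caller-supplied lookup Nf(χd) (in the paper Nf also depends on γ and is computed accurately for paramagnetic cylinders; here it is a hypothetical function argument): each point is corrected as H = Ha - Nf·M*, and the correction is iterated because χd must be evaluated on the corrected curve itself.

```python
def correct_demagnetization(ha_vals, m_vals, nf_of_chi, n_iter=50):
    """Iterative demagnetizing correction H = Ha - Nf(chi_d) * M*.

    Nf depends on the differential susceptibility chi_d = dM/dH of the
    *corrected* curve, so chi_d is re-estimated by finite differences on
    each iteration until the corrected field values settle.
    """
    # Initial guess with an arbitrary susceptibility argument
    h = [ha - nf_of_chi(1.0) * m for ha, m in zip(ha_vals, m_vals)]
    for _ in range(n_iter):
        chi = []
        for i in range(len(h)):
            j0, j1 = max(i - 1, 0), min(i + 1, len(h) - 1)
            dh = h[j1] - h[j0]
            chi.append((m_vals[j1] - m_vals[j0]) / dh if dh != 0 else 0.0)
        h = [ha - nf_of_chi(c) * m
             for ha, m, c in zip(ha_vals, m_vals, chi)]
    return h
```

With a constant Nf the iteration reduces to the ordinary fixed-factor correction in a single step; the χd dependence is what distinguishes the proposed technique.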
Yilmaz, Tuba; Kılıç, Mahmut Alp; Erdoğan, Melike; Çayören, Mehmet; Tunaoğlu, Doruk; Kurtoğlu, İsmail; Yaslan, Yusuf; Çayören, Hüseyin; Arkan, Akif Enes; Teksöz, Serkan; Cancan, Gülden; Kepil, Nuray; Erdamar, Sibel; Özcan, Murat; Akduman, İbrahim; Kalkan, Tunaya
2016-06-20
In the past decade, extensive research on the dielectric properties of biological tissues has characterized the dielectric property discrepancy between malignant and healthy tissues. This discrepancy enabled the development of microwave therapeutic and diagnostic technologies. Traditionally, dielectric property measurements of biological tissues are performed with the well-known contact probe (open-ended coaxial probe) technique. However, the technique suffers from limited accuracy and low loss resolution for permittivity and conductivity measurements, respectively. Therefore, despite the inherent dielectric property discrepancy, a rigorous measurement routine with open-ended coaxial probes is required for accurate differentiation of malignant and healthy tissues. In this paper, we propose to eliminate the need for multiple open-ended coaxial probe measurements for malignant and healthy tissue differentiation by applying a support vector machine (SVM) classification algorithm to the dielectric measurement data. To do so, in vivo malignant and healthy rat liver tissue dielectric property measurements are first collected with the open-ended coaxial probe technique between 500 MHz and 6 GHz. Cole-Cole functions are fitted to the measured dielectric properties and the measurement data are verified against the literature. Malignant tissue classification is realized by applying SVM to the open-ended coaxial probe measurements, achieving accuracy (F1 score) as high as 99.2%.
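The classification step can be illustrated with a miniature linear SVM trained by Pegasos-style sub-gradient descent on synthetic two-feature samples (e.g., permittivity and conductivity at one frequency). This is only a sketch under invented data; the paper's actual feature set, kernel choice, and tuning are not reproduced here.

```python
def train_linear_svm(xs, ys, lam=0.01, epochs=500):
    """Tiny linear SVM via Pegasos-style sub-gradient descent.

    xs: feature vectors (e.g. [permittivity, conductivity] per sample),
    ys: labels in {-1, +1} (e.g. healthy = -1, malignant = +1).
    """
    dim = len(xs[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            w = [wi * (1.0 - eta * lam) for wi in w]  # regularization shrink
            if margin < 1.0:  # hinge-loss sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

In practice one would use an established library and cross-validate, but the sketch shows the core idea: a separating hyperplane in dielectric-property feature space replaces repeated probe measurements.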
Process tool monitoring and matching using interferometry technique
NASA Astrophysics Data System (ADS)
Anberg, Doug; Owen, David M.; Mileham, Jeffrey; Lee, Byoung-Ho; Bouche, Eric
2016-03-01
The semiconductor industry makes dramatic device technology changes over short time periods. As the semiconductor industry advances toward the 10 nm device node, more precise management and control of processing tools has become a significant manufacturing challenge. Some processes require multiple tool sets, and some tools have multiple chambers for mass production. Tool and chamber matching has become a critical consideration for meeting today's manufacturing requirements. Additionally, process tool and chamber conditions have to be monitored to ensure uniform process performance across the tool and chamber fleet. There are many parameters for managing and monitoring tools and chambers. Particle defect monitoring is a well-known and established example, where defect inspection tools can directly detect particles on the wafer surface. However, leading-edge processes are driving the need to also monitor invisible defects, e.g., stress and contamination, because some device failures cannot be directly correlated with traditional visualized defect maps or other known sources. Some failure maps show the same signatures as stress or contamination maps, which implies a correlation with device performance or yield. In this paper we present process tool monitoring and matching using an interferometry technique. There are many types of interferometry techniques used for various process monitoring applications. We use a Coherent Gradient Sensing (CGS) interferometer, which is self-referencing and enables high-throughput measurements. Using this technique, we can quickly measure the topography of an entire wafer surface and obtain stress and displacement data from the topography measurement. For improved tool and chamber matching and reduced device failure, wafer stress measurements can be implemented as a regular tool or chamber monitoring test for either unpatterned or patterned wafers, serving as a good criterion for improved process stability.
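The conversion from measured wafer curvature to film stress is commonly done with the Stoney approximation (not necessarily the exact model used by the authors). A sketch, with nominal (100)-silicon elastic constants as placeholder defaults:

```python
def film_stress_pa(curvature_per_m, substrate_thickness_m, film_thickness_m,
                   youngs_modulus_pa=130.0e9, poisson=0.28):
    """Stoney approximation: film stress from substrate curvature.

    sigma_f = [E_s / (1 - nu_s)] * h_s^2 * kappa / (6 * h_f)

    The default elastic constants are nominal values for (100) silicon,
    included only as illustrative assumptions.
    """
    biaxial_modulus = youngs_modulus_pa / (1.0 - poisson)
    return (biaxial_modulus * substrate_thickness_m ** 2 * curvature_per_m
            / (6.0 * film_thickness_m))
```

Because curvature enters linearly, a full-wafer topography map from the interferometer can be converted point by point into a stress map, which is what makes stress usable as a routine tool- and chamber-matching monitor.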
DOE Office of Scientific and Technical Information (OSTI.GOV)
Testa, M; Paganetti, H; Lu, H-M
2014-06-01
Purpose: To use proton radiography (i) for in-vivo range verification of the brain fields of medulloblastoma patients, in order to reduce the exit dose to the cranial skin and thus the risk of permanent alopecia; and (ii) for patient-specific optimization of the calibration from CT Hounsfield units to proton relative stopping power, in order to minimize uncertainties in proton range. Methods: We developed and tested a prototype proton radiography system based on a single-plane scintillation screen coupled with a fast CCD camera (1 ms sampling rate, 0.29 x 0.29 mm² pixel size, 30 x 30 cm² field of view). The method is based on the principle that, for passively scattered beams, the radiological depth of any point in the plateau of a spread-out Bragg peak (SOBP) can be inferred from the time pattern of the dose rate measurements. We performed detector characterization measurements using complex-shape homogeneous phantoms and an Alderson phantom. Results: Detector characterization tests confirmed the robustness of the technique. The results of the phantom measurements are encouraging in terms of the achievable accuracy of the water-equivalent thickness. A technique to minimize the degradation of spatial resolution due to multiple Coulomb scattering is discussed. Our novel radiographic technique is rapid (100 ms) and simultaneous over the whole field. The dose required to produce one radiograph, with the current settings, is approximately 3 cGy. Conclusion: The results obtained with this simple and innovative radiography method are promising and motivate further development of the technique. The system requires only a single-plane 2D dosimeter, and it uses the clinical beam for a fraction of a second with low dose to the patient.
LISA Technology Development at GSFC
NASA Technical Reports Server (NTRS)
Thorpe, James Ira; McWilliams, S.; Baker, J.
2008-01-01
The prime focus of LISA technology development efforts at NASA/GSFC has been in LISA interferometry, specifically in the area of laser frequency noise mitigation. Laser frequency noise is addressed through a combination of stabilization and common-mode rejection. Current plans call for two stages of stabilization, pre-stabilization to a local frequency reference and further stabilization using the constellation as a frequency reference. In order for these techniques to be used simultaneously, the pre-stabilization step must provide an adjustable frequency offset. Here, we report on a modification to the standard modulation/demodulation techniques used to stabilize to optical cavities that generates a frequency-tunable reference from a fixed-length cavity. This technique requires no modifications to the cavity itself and only minor modifications to the components. The measured noise performance and dynamic range of the laboratory prototype meets the LISA requirements.
DOT National Transportation Integrated Search
2015-02-01
Evaluation of the actual performance (quality) of pavements requires in situ nondestructive testing (NDT) techniques that can accurately measure the most critical, objective, and sensitive properties of pavement systems.
Researching the electrical properties of single A3B5 nanowires
NASA Astrophysics Data System (ADS)
Vasiliev, A. A.; Mozharov, A. M.; Komissarenko, F. E.; Cirlin, G. E.; Bouravlev, D. A.; Mukhin, I. S.
2017-11-01
We investigate the electrical characteristics of GaN, GaAs, and GaP nanowires (NWs) grown by MOCVD and MBE. We developed a measurement technique that allows the required properties of the structures to be determined.
Clark, Andrew C; Kontoudakis, Nikolaos; Barril, Celia; Schmidtke, Leigh M; Scollary, Geoffrey R
2016-07-01
The presence of copper in wine is known to impact the reductive, oxidative, and colloidal stability of wine, and techniques enabling measurement of the different forms of copper in wine are of particular interest for understanding these spoilage processes. Electrochemical stripping techniques developed to date require significant pretreatment of the wine, potentially disturbing the copper binding equilibria. A thin mercury film on a screen-printed carbon electrode was utilised in a flow system for the direct analysis of labile copper in red and white wine by constant current stripping potentiometry with medium exchange. Under the optimised conditions, including an enrichment time of 500 s and a constant current of 1.0 μA, the response was linear from 0.015 to 0.200 mg/L. The analysis of 52 red and white wines showed that this technique generally provided lower labile copper concentrations than reported for batch measurement by related techniques. Studies in a model system and in finished wines showed that copper sulfide was not measured as labile copper, and that loss of hydrogen sulfide via volatilisation induced an increase in labile copper within the model wine system.
NASA Astrophysics Data System (ADS)
Sirorattanakul, Krittanon; Shen, Chong; Ou-Yang, Daniel
Diffusivity governs the dynamics of interacting particles suspended in a solvent. At high particle concentration, the interactions between particles become non-negligible, causing the self and collective diffusivities to diverge from each other and become concentration-dependent. Conventional methods for measuring this dependency, such as forced Rayleigh scattering, fluorescence correlation spectroscopy (FCS), and dynamic light scattering (DLS), require the preparation of multiple samples. We present a new technique to measure this dependency using only a single sample. Dielectrophoresis (DEP) is used to create a concentration gradient in the solution. Across this concentration distribution, we use FCS to measure the concentration-dependent self diffusivity. Then, we switch off DEP to allow the particles to diffuse back to equilibrium. We obtain the time series of the concentration distribution from fluorescence microscopy and use it to determine the concentration-dependent collective diffusivity. We compare the experimental results with computer simulations to verify the validity of this technique. Time and spatial resolution limits of FCS and imaging are also analyzed to estimate the limitations of the proposed technique. NSF DMR-0923299, Lehigh College of Arts and Sciences Undergraduate Research Grant, Lehigh Department of Physics, Emulsion Polymers Institute.
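The back-diffusion step can be sketched in its simplest limit: for an initially Gaussian concentration profile relaxing by free diffusion, the squared width grows as sigma^2(t) = sigma0^2 + 2*D*t, so a collective diffusivity follows from a linear fit of sigma^2 versus time. This idealized constant-D sketch (function names invented) ignores the concentration dependence, which in the actual analysis requires fitting the full diffusion equation:

```python
def collective_diffusivity(times_s, widths_sq_m2):
    """Diffusivity from the relaxation of a Gaussian concentration profile.

    After the DEP field is switched off, sigma^2(t) = sigma0^2 + 2*D*t for
    free diffusion, so D is half the slope of a least-squares line through
    (t, sigma^2) points extracted from the fluorescence images.
    """
    n = len(times_s)
    mt = sum(times_s) / n
    mw = sum(widths_sq_m2) / n
    slope = (sum((t - mt) * (w - mw) for t, w in zip(times_s, widths_sq_m2))
             / sum((t - mt) ** 2 for t in times_s))
    return slope / 2.0
```

A concentration-dependent D would show up as curvature in the sigma^2(t) plot, which is one way the single-sample measurement reveals the dependency the abstract describes.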
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Scott A; Catalfamo, Simone; Brake, Matthew R. W.
2017-01-01
In the study of the dynamics of nonlinear systems, experimental measurements often convolute the response of the nonlinearity of interest and the effects of the experimental setup. To reduce the influence of the experimental setup on the deduction of the parameters of the nonlinearity, the response of a mechanical joint is investigated under various experimental setups. These experiments first focus on quantifying how support structures and measurement techniques affect the natural frequency and damping of a linear system. The results indicate that support structures created from bungees have negligible influence on the system in terms of frequency and damping ratio variations. The study then focuses on the effects of the excitation technique on the response of a linear system. The findings suggest that thinner stingers should not be used, because under high force requirements the stinger bending modes are excited, adding unwanted torsional coupling. The optimal configuration for testing the linear system is then applied to a nonlinear system in order to assess the robustness of the test configuration. Finally, recommendations are made for conducting experiments on nonlinear systems using conventional/linear testing techniques.
Advanced study of global oceanographic requirements for EOS A/B: Appendix volume
NASA Technical Reports Server (NTRS)
1972-01-01
Tables and graphs are presented for a review of oceanographic studies using satellite-borne instruments. The topics considered include sensor requirements, error analysis for wind determination from glitter pattern measurements, coverage frequency plots, ground station rise and set times, a technique for reduction and analysis of ocean spectral data, rationale for the selection of a 2 PM descending orbit, and a priority analysis.
NASA Technical Reports Server (NTRS)
Evans, Keith D.; Demoz, Belay B.; Cadirola, Martin P.; Melfi, S. H.; Whiteman, David N.; Schwemmer, Geary K.; Starr, David O'C.; Schmidlin, F. J.; Feltz, Wayne
2000-01-01
The NASA/Goddard Space Flight Center Scanning Raman Lidar has made measurements of water vapor and aerosols for almost ten years. Calibration of the water vapor data has typically been performed by comparison with another water vapor sensor such as radiosondes. We present a new method for water vapor calibration that requires only low clouds and surface pressure and temperature measurements. A sensitivity study was performed, and the cloud-base algorithm agrees with the radiosonde calibration to within 10-15%. Knowledge of the true atmospheric lapse rate is required to obtain more accurate cloud-base temperatures. Analysis of water vapor and aerosol measurements made in the vicinity of Hurricane Bonnie is discussed.
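The idea behind the cloud-base calibration point can be sketched: air at cloud base is saturated, so its water vapor mixing ratio follows from the cloud-base temperature and pressure, which can be estimated from surface measurements and an assumed lapse rate. The sketch below uses Bolton's saturation vapor pressure formula and an exponential pressure profile; all parameter values and function names are illustrative assumptions, not the paper's algorithm:

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    """Bolton (1980) approximation for saturation vapor pressure over water."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def cloud_base_mixing_ratio_gkg(surface_t_c, surface_p_hpa, cloud_base_km,
                                lapse_rate_c_per_km=6.5, scale_height_km=8.0):
    """Saturation mixing ratio (g/kg) at cloud base.

    Assumes a linear temperature lapse rate and an exponential pressure
    profile, consistent with the abstract's note that the lapse rate must
    be known to get accurate cloud-base temperatures.
    """
    t_cb = surface_t_c - lapse_rate_c_per_km * cloud_base_km
    p_cb = surface_p_hpa * math.exp(-cloud_base_km / scale_height_km)
    e_s = saturation_vapor_pressure_hpa(t_cb)
    return 1000.0 * 0.622 * e_s / (p_cb - e_s)
```

The lidar's uncalibrated water vapor profile at cloud base is then scaled to match this saturation value, replacing the radiosonde as the calibration reference.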
Instrumentation Working Group Summary
NASA Technical Reports Server (NTRS)
Zaller, Michelle; Miake-Lye, Richard
1999-01-01
The Instrumentation Working Group compiled a summary of measurement techniques applicable to gas turbine engine aerosol precursors and particulates. An assessment was made of the limits, accuracy, applicability, and technology readiness of the various techniques. Despite advances made in emissions characterization of aircraft engines, uncertainties still exist in the mechanisms by which aerosols and particulates are produced in the near-field engine exhaust. To adequately assess current understanding of the formation of sulfuric acid aerosols in the exhaust plumes of gas turbine engines, measurements are required to determine the degree and importance of sulfur oxidation in the turbine and at the engine exit. Ideally, concentrations of all sulfur species would be acquired, with emphasis on SO2 and SO3. Numerous options exist for extractive and non-extractive measurement of SO2 at the engine exit, most of which are well developed. SO2 measurements should be performed first to place an upper bound on the percentage of SO2 oxidation. If extractive and non-extractive techniques indicate that a large amount of the fuel sulfur is not detected as SO2, then efforts are needed to improve techniques for SO3 measurements. Additional work will be required to account for the fuel sulfur in the engine exhaust. Chemical Ionization Mass Spectrometry (CI-MS) measurements need to be pursued, although a careful assessment needs to be made of the sampling line impact on the extracted sample composition. Efforts should also be placed on implementing non-intrusive techniques and extending their capabilities by maximizing exhaust coverage for line-of-sight measurements, as well as development of 2-D techniques, where feasible. Recommendations were made to continue engine exit and combustor measurements of particulates. Particulate measurements should include particle size distribution, mass fraction, hydration properties, and volatile fraction. 
However, methods to ensure that unaltered samples are obtained need to be developed. Particulate speciation was also assigned a high priority for quantifying the fractions of carbon soot, PAH, refractory materials, metals, sulfates, and nitrates. High priority was also placed on performing a comparison of particle sizing instruments. Concern was expressed by the workshop attendees who routinely make particulate measurements about the variation in number density measured during in-flight tests by different instruments. In some cases, measurements performed by different groups of researchers during the same flight tests showed an order of magnitude variation. Second priority was assigned to measuring concentrations of odd hydrogen and oxidizing species. Since OH, HO2, H2O2, and O are extremely reactive, non-extractive measurements are recommended. A combination of absorption and fluorescence is anticipated to be effective for OH measurements in the combustor and at the engine exit. Extractive measurements of HO2 have been made in the stratosphere, where the ambient level of OH is relatively low. Use of techniques that convert HO2 to OH for combustor and engine exit measurements needs to be evaluated, since the ratio of HO2/OH may be 1% or less at both the combustor and engine exit. CI-MS might be a viable option for H2O2, subject to sampling line conversion issues. However, H2O2 is a low priority oxidizing species in the combustor and at the engine exit. Two candidates for atomic oxygen measurements are Resonance Enhanced Multi-Photon Ionization (REMPI) and Laser-Induced Fluorescence (LIF). Particulate measurement by simultaneous extractive and non-extractive techniques was given equal priority to the oxidizer measurements. Concern was expressed over the ability of typical ground test sampling lines to deliver an unaltered sample to a remotely located instrument. 
It was suggested that the sampling probe and line losses be checked by attempting measurements using an optical or non-extractive technique immediately upstream of the sampling probe. This is a possible application for Laser-Induced Incandescence (LII) as a check on the volume fraction of soot. Optical measurements of size distribution are not well developed for ultrafine particles less than about 20 nm in diameter, so a non-extractive technique for particulate size distribution cannot be recommended without further development. Carbon dioxide measurements need to be made to complement other extractive measurement techniques; CO2 measurements enable conversion of other species concentrations to emission indices. Carbon monoxide, which acts as a sink for oxidizing species, should be measured using non-extractive techniques. CO can be rapidly converted to CO2 in extractive probes, and a comparison between extractive and non-extractive measurements should be performed. Development of non-extractive techniques would help to assess the degree of CO conversion, and might be needed to improve the concentration measurement accuracy. Measurements of NO(x) will continue to be critical due to the role of NO and NO2 in atmospheric chemistry and their influence on atmospheric ozone. Time-resolved measurements of temperature, velocity, and species concentrations were included on the list of desired measurements. Thermocouples are typically adequate for engine exit measurements. PIV and LDV are well established for obtaining velocity profiles. The techniques, listed in the accompanying table, are divided into extractive and non-extractive techniques. Efforts were made to include a measurement uncertainty for each technique, and an assessment of technology readiness was included.
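The CO2-referenced conversion mentioned above is commonly written EI_X = (ΔX/ΔCO2) × (M_X/M_CO2) × EI_CO2, where EI_CO2 for kerosene combustion is roughly 3160 g per kg of fuel. A sketch using nominal values (not taken from the working group's tables):

```python
def emission_index_g_per_kg(delta_x_ppm, delta_co2_ppm, molar_mass_x,
                            ei_co2=3160.0, molar_mass_co2=44.01):
    """Emission index of species X, referenced to the CO2 enhancement.

    delta_x_ppm / delta_co2_ppm are plume mole-fraction enhancements above
    background; EI_CO2 ~ 3160 g/kg is a nominal value for kerosene fuel.
    """
    return (delta_x_ppm / delta_co2_ppm) * (molar_mass_x / molar_mass_co2) * ei_co2
```

For example, a 1 ppm NO2 enhancement (molar mass 46.006 g/mol) alongside a 100 ppm CO2 enhancement corresponds to an emission index of about 33 g of NO2 per kg of fuel, which illustrates why a simultaneous CO2 measurement is needed to make the other species measurements quantitative.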
Explosive component acceptance tester using laser interferometer technology
NASA Technical Reports Server (NTRS)
Wickstrom, Richard D.; Tarbell, William W.
1993-01-01
Acceptance testing of explosive components requires a reliable and simple-to-use testing method that can discern less-than-optimal performance. For hot-wire detonators, traditional techniques use dent blocks or photographic diagnostic methods. More complicated approaches are avoided because of their inherent problems with setup and maintenance. A recently developed tester is based on using a laser interferometer to measure the velocity of flying plates accelerated by explosively actuated detonators. Unlike ordinary interferometers that monitor displacement of the test article, this device measures velocity directly and is commonly used with non-specular surfaces. Most often referred to as the VISAR technique (Velocity Interferometer System for Any Reflecting Surface), it has become the most widely accepted choice for accurate measurement of velocity in the range greater than 1 mm/μs. Traditional VISAR devices require extensive setup and adjustment and are therefore unacceptable in a production-testing environment. This paper describes a new VISAR approach which requires virtually no adjustments, yet provides data with accuracy comparable to the more complicated systems. The device, termed the Fixed-Cavity VISAR, is currently being developed to serve as a product verification tool for hot-wire detonators and slappers. An extensive data acquisition and analysis computer code was also created to automate the manipulation of raw data into final results.
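The core VISAR relation converts an accumulated fringe count into velocity through a velocity-per-fringe constant, VPF = λ/(2τ) in the simplest case (neglecting the window and dispersion correction terms), where τ is the interferometer delay. A sketch with illustrative parameter values:

```python
def visar_velocity_m_per_s(fringe_count, wavelength_m=532e-9, delay_s=1e-9):
    """Velocity from an accumulated VISAR fringe count.

    VPF = lambda / (2 * tau), neglecting the dispersion correction factor;
    the default wavelength and interferometer delay are illustrative only.
    """
    vpf = wavelength_m / (2.0 * delay_s)  # velocity per fringe, m/s
    return fringe_count * vpf
```

With a 532 nm laser and a 1 ns cavity delay, each fringe corresponds to 266 m/s, so a flyer that accumulates a few fringes is already in the km/s regime the abstract mentions.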
Fibre Optic Sensors for Selected Wastewater Characteristics
Chong, Su Sin; Abdul Aziz, A. R.; Harun, Sulaiman W.
2013-01-01
Demand for online and real-time measurement techniques to meet environmental regulation and treatment compliance is increasing. However, the conventional techniques, which involve scheduled sampling and chemical analysis, can be expensive and time consuming. Therefore, cheaper and faster alternatives to monitor wastewater characteristics are required. This paper reviews existing conventional techniques as well as optical and fibre optic sensors (FOS) for determining selected wastewater characteristics: colour, Chemical Oxygen Demand (COD), and Biological Oxygen Demand (BOD). The review confirms that with appropriate configuration, calibration, and fibre features, these parameters can be determined with accuracy comparable to conventional methods. With more research in this area, the potential for using FOS for online and real-time measurement of more wastewater parameters for various types of industrial effluent is promising. PMID:23881131
Mattson, Eric C.; Aboualizadeh, Ebrahim; Barabas, Marie E.; Stucky, Cheryl L.; Hirschmugl, Carol J.
2013-01-01
Infrared (IR) spectromicroscopy, or chemical imaging, is an evolving technique that is poised to make significant contributions in the fields of biology and medicine. Recent developments in sources, detectors, measurement techniques, and specimen holders have now made diffraction-limited Fourier transform infrared (FTIR) imaging of cellular chemistry in living cells a reality. The availability of bright, broadband IR sources and large-area, pixelated detectors facilitates live cell imaging, which requires rapid measurements using non-destructive probes. In this work, we review advances in the field of FTIR spectromicroscopy that have contributed to live-cell two- and three-dimensional IR imaging, and discuss several key examples that highlight the utility of this technique for studying the structure and chemistry of living cells. PMID:24256815
Flicker Detection, Measurement and Means of Mitigation: A Review
NASA Astrophysics Data System (ADS)
Virulkar, V. B.; Aware, M. V.
2014-04-01
The voltage fluctuations caused by rapid industrial load changes have been a major concern for supply utilities, regulatory agencies and customers. This paper gives a general review of how voltage flicker is examined and assessed, the methods used to measure flicker caused by rapidly changing loads, and the means for its mitigation. It discusses the effects of utility conditions, compensator response time and compensator capacity on flicker mitigation. A comparison between conventional mitigation techniques and state-of-the-art mitigation techniques is carried out. In many cases the state-of-the-art solution is shown to provide higher performance than conventional mitigation techniques. However, the choice of the most suitable solution depends on the characteristics of the supply at the point of connection, the requirements of the load, and economics.
Drouin, Olivier; Johnson, Jo-Ann; Chaemsaithong, Piya; Metcalfe, Amy; Huber, Janie; Schwarzenberger, Jill; Winters, Erin; Stavness, Lesley; Tse, Ada W T; Lu, Jing; Lim, Wan Teng; Leung, Tak Yeung; Bujold, Emmanuel; Sahota, Daljit; Poon, Liona C
2017-10-04
The objectives of this study were to 1) define the protocol for the first-trimester assessment of the uterine artery pulsatility index (UtA-PI) using the new transverse technique, 2) evaluate UtA-PI measured by the transverse approach versus that obtained by the conventional sagittal approach, and 3) determine if accelerated onsite training (in both methods) of inexperienced sonographers can achieve reproducible UtA-PI measurements compared to those of an experienced sonographer. The study consists of 2 parts conducted in 2 centers (Part 1, Calgary, Canada and Part 2, Hong Kong). Part 1: Prospective observational study of women with singleton pregnancies between 11-13+6 weeks' gestation. UtA-PI measurements were performed using the 2 techniques (4 sonographers trained in both methods, 10 cases each), and the measurement indices (PI), time required and subjective difficulty in obtaining satisfactory measurements were compared. One-sample t-tests and Wilcoxon signed rank tests were used as appropriate. Bland-Altman difference plots were used to assess measurement agreement, and intra-class correlation (ICC) was used to evaluate measurement reliability. A target plot was used to assess measures of central tendency and dispersion. Part 2: One experienced and three inexperienced sonographers prospectively measured the UtA-PI at 11-13+6 weeks' gestation in two groups of women (42 and 35, respectively) with singleton pregnancies using both approaches. Inexperienced sonographers underwent accelerated on-site training by the experienced sonographer. Measurement approach and sonographer order were randomized. ICC, Bland-Altman and Passing-Bablok analyses were performed to assess measurement agreement, reliability and the effect of accelerated training. Part 1: We observed no difference in the mean time to acquire the measurements (sagittal: 118 seconds vs transverse: 106 seconds, p=0.38).
The 4 sonographers reported that the transverse technique was subjectively easier to perform (p=0.04). The bias (95% LOA) and the ICC between sagittal and transverse measurements were -0.05 (-0.48 to 0.37) and 0.94, respectively, for the mean UtA-PI. Measurements obtained using the transverse technique, after correcting for gestation, were significantly closer to the expected distribution than those from the sagittal technique. Part 2: There were no significant differences in the median UtA-PI measurements between approaches for either experienced or inexperienced sonographers (p>0.05 for all sonographers). Mean UtA-PI measurement reliability between approaches was high for the experienced (ICC=0.92) and inexperienced sonographers (ICC>0.81). UtA-PI measurement approaches did not deviate from linearity, while biases ranged from -0.10 to 0.07. Median time required was similar (sagittal vs. transverse: 56.11 sec vs. 49.29 sec; p=0.054). This novel transverse approach for measuring UtA-PI in the first trimester appears comparable to the sagittal approach and can be used in first-trimester preeclampsia screening. Accelerated onsite training can help improve UtA-PI measurement reliability and could facilitate the broad implementation of first-trimester preeclampsia screening. This article is protected by copyright. All rights reserved.
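The agreement statistics quoted above (bias with 95% limits of agreement, LOA) follow the standard Bland-Altman construction: mean of the paired differences, plus and minus 1.96 standard deviations. A minimal sketch, using illustrative paired readings rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                                 # per-subject differences
    bias = diff.mean()                           # systematic offset
    sd = diff.std(ddof=1)                        # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# Illustrative paired UtA-PI readings (hypothetical, not study data)
sagittal   = [1.60, 1.85, 2.10, 1.42, 1.75]
transverse = [1.55, 1.90, 2.05, 1.48, 1.70]
bias, (lo, hi) = bland_altman(sagittal, transverse)
```

In a full analysis the differences would also be plotted against the pairwise means to check that the bias is constant over the measurement range.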
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into the marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was also tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33% and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
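The classic rigid-body baseline against which the interval deformation technique is compared estimates segment pose by least-squares fitting of the marker cluster to its reference geometry. A minimal sketch of that classical step (the SVD-based Kabsch solution, with hypothetical marker coordinates), not the authors' deformation model:

```python
import numpy as np

def rigid_pose(ref, obs):
    """Least-squares rotation R and translation t mapping reference marker
    positions to observed ones (rigid-body assumption; SVD/Kabsch solution)."""
    ref, obs = np.asarray(ref, float), np.asarray(obs, float)
    cr, co = ref.mean(axis=0), obs.mean(axis=0)   # cluster centroids
    H = (ref - cr).T @ (obs - co)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cr
    return R, t

# Hypothetical 4-marker cluster, rotated 90 degrees about z and translated
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
obs = ref @ Rz.T + np.array([5.0, 0.0, 0.0])
R, t = rigid_pose(ref, obs)   # recovers Rz and [5, 0, 0]
```

The interval deformation approach generalizes this by letting the cluster shape itself vary over the observation interval instead of holding it fixed.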
Nonlinear ultrasonics for material state awareness
NASA Astrophysics Data System (ADS)
Jacobs, L. J.
2014-02-01
Predictive health monitoring of structural components will require the development of advanced sensing techniques capable of providing quantitative information on the damage state of structural materials. By focusing on nonlinear acoustic techniques, it is possible to measure absolute, strength based material parameters that can then be coupled with uncertainty models to enable accurate and quantitative life prediction. Starting at the material level, this review will present current research that involves a combination of sensing techniques and physics-based models to characterize damage in metallic materials. In metals, these nonlinear ultrasonic measurements can sense material state, before the formation of micro- and macro-cracks. Typically, cracks of a measurable size appear quite late in a component's total life, while the material's integrity in terms of toughness and strength gradually decreases due to the microplasticity (dislocations) and associated change in the material's microstructure. This review focuses on second harmonic generation techniques. Since these nonlinear acoustic techniques are acoustic wave based, component interrogation can be performed with bulk, surface and guided waves using the same underlying material physics; these nonlinear ultrasonic techniques provide results which are independent of the wave type used. Recent physics-based models consider the evolution of damage due to dislocations, slip bands, interstitials, and precipitates in the lattice structure, which can lead to localized damage.
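In second harmonic generation measurements, the evolving material nonlinearity is typically tracked through a relative parameter proportional to A2/A1^2, where A1 and A2 are the fundamental and second-harmonic amplitudes extracted from the received signal's spectrum (the absolute parameter also involves wavenumber and propagation distance, omitted here). A minimal sketch with a synthetic waveform, as an assumption-laden illustration rather than this review's procedure:

```python
import numpy as np

def relative_beta(signal, fs, f0):
    """Relative acoustic nonlinearity parameter ~ A2 / A1**2 from an FFT."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2 / n        # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, 1 / fs)
    a1 = spec[np.argmin(np.abs(freqs - f0))]          # fundamental amplitude
    a2 = spec[np.argmin(np.abs(freqs - 2 * f0))]      # second-harmonic amplitude
    return a2 / a1**2

# Synthetic received signal: 5 MHz fundamental plus a weak second harmonic
fs, f0, n = 100e6, 5e6, 2000
t = np.arange(n) / fs
sig = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
beta_rel = relative_beta(sig, fs, f0)
```

In practice the measurement is repeated over a range of input amplitudes, and the slope of A2 versus A1^2 gives a leakage-robust estimate.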
Optical radiation measurements and instrumentation.
Andersen, F A; Landry, R J
1981-07-01
Accurate measurement of optical radiation is required when sources of optical radiation are used in biological research. Such measurement of broad-band noncoherent optical radiation usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, quantities, units, measurement systems and instruments, and uncertainties associated with optical radiation measurements are reviewed in this paper. A conventional technique for evaluating the potential hazards associated with broad-band sources of optical radiation and a spectroradiometer developed to measure spectral quantities are described. A new prototype ultraviolet radiation hazard monitor, which has recently been developed, is also presented. This new instrument utilizes a spectrograph and a spectral-weighting mechanical mask and provides a direct reading of the effective irradiance for wavelengths less than 315 nm.
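The effective irradiance read out by such an instrument is the measured spectral irradiance weighted by a hazard action spectrum and summed over wavelength, E_eff = sum of E(lambda) * S(lambda) * d(lambda). A minimal sketch of that weighting with a hypothetical action spectrum and spectrum values (not the instrument's actual weighting table):

```python
def effective_irradiance(wavelengths_nm, spectral_irradiance, weight):
    """E_eff = integral of E(lambda) * S(lambda) d(lambda), trapezoidal sum."""
    e_eff = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dlam = wavelengths_nm[i + 1] - wavelengths_nm[i]
        mean = (spectral_irradiance[i] * weight[i]
                + spectral_irradiance[i + 1] * weight[i + 1]) / 2
        e_eff += mean * dlam
    return e_eff

# Hypothetical 5 nm sampled spectrum (W m^-2 nm^-1) and weighting S(lambda)
lam = [290, 295, 300, 305, 310, 315]
e_lam = [0.001, 0.002, 0.004, 0.008, 0.012, 0.015]
s_lam = [0.64, 1.00, 0.30, 0.06, 0.015, 0.003]
e_eff_total = effective_irradiance(lam, e_lam, s_lam)   # W m^-2, effective
```

A real hazard evaluation would use the standardized actinic action spectrum values at each sampled wavelength in place of these placeholders.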
High-Temperature Surface Thermometry Technique based on Upconversion Nano-Phosphors
NASA Astrophysics Data System (ADS)
Combs, C.; Clemens, N.; Guo, X.; Song, H.; Zhao, H.; Li, K. K.; Zou, Y. K.; Jiang, H.
2011-11-01
Downconversion thermographic phosphors have been extensively used for high-temperature surface thermometry applications (e.g., aerothermodynamics, turbine blades) where temperature-sensitive paint is not viable. In downconversion techniques the phosphorescence is at longer wavelengths than the excitation source. We are developing a new upconversion thermographic phosphor technique that employs rare-earth-doped ceramics whose phosphorescence exhibits a strong temperature dependence. In the upconversion technique the phosphor is excited with near-IR light and emission is at visible wavelengths; thus, it does not require expensive UV windows and does not suffer from interference from background fluorescence. In this work the upconversion phosphors have been characterized in terms of their intensity, lifetimes and spectral content over a temperature range of 300 K to 1500 K. The technique has been evaluated for 2D surface temperature measurements using both the total integrated intensity and the ratio of emission in different visible color bands. The results indicate that upconversion phosphor thermometry is a promising technique for making non-contact, high-temperature surface measurements with good accuracy. Work supported by NASA under contract NNX11CG89P.
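Two-color ratio thermometry of the kind described commonly rests on the Boltzmann population of thermally coupled emitting levels: the band ratio varies as R(T) = C * exp(-dE / kT), so temperature follows from an inverted calibration. A minimal sketch with assumed calibration constants (C and dE below are hypothetical, not values from this work):

```python
import math

K_B = 8.617e-5        # Boltzmann constant, eV/K
DELTA_E = 0.087       # assumed energy gap between coupled levels, eV
C = 3.2               # assumed pre-exponential calibration constant

def band_ratio(temperature_k):
    """Forward model: ratio of the two emission bands at temperature T."""
    return C * math.exp(-DELTA_E / (K_B * temperature_k))

def temperature_from_ratio(r):
    """Invert R = C * exp(-dE / kT) for T."""
    return -DELTA_E / (K_B * math.log(r / C))

t_true = 900.0
r = band_ratio(t_true)
t_est = temperature_from_ratio(r)   # recovers 900 K
```

The ratio form is attractive for 2D imaging because it largely cancels spatial variations in excitation intensity and phosphor coating thickness.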
Electron Beam Instrumentation Techniques Using Coherent Radiation
NASA Astrophysics Data System (ADS)
Wang, D. X.
1997-05-01
In recent years, there has been increasing interest in short electron bunches for different applications such as short-wavelength FELs, linear colliders, advanced accelerators such as laser or plasma wakefield accelerators, and Compton backscattering X-ray sources. A short bunch length is needed to meet various requirements such as high peak current, low momentum spread, high luminosity, a small ratio of bunch length to plasma wavelength, or accurate timing. Meanwhile, much progress has been made on photoinjectors and on different magnetic and RF bunching schemes to produce very short bunches. Measurement of those short bunches becomes essential to develop, characterize, and operate such demanding machines. Conventionally, the duration of short electron bunches is measured with transverse RF deflecting cavities or a streak camera. With such devices it becomes very challenging to measure bunch lengths down to a few hundred femtoseconds. Many frequency-domain techniques have recently been developed, based on the relation between the bunch profile and the coherent radiation spectrum. These techniques provide excellent performance for short bunches. In this paper, coherent radiation and its applications to bunch length measurement are discussed. A strategy for bunch length control at Jefferson Lab is presented, which includes a noninvasive coherent synchrotron radiation (CSR) monitor, a zero-phasing technique used to calibrate the CSR detector, and phase transfer measurement used to correct RF phase drifts.
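The frequency-domain techniques mentioned above rest on the fact that the coherently radiated power scales as N^2 |F(w)|^2, where F(w) is the Fourier transform of the normalized longitudinal bunch profile; for a Gaussian bunch |F(w)| = exp(-w^2 sigma_t^2 / 2), so the rms bunch duration can be inferred from the roll-off of the coherent spectrum. A minimal sketch of that inversion under the Gaussian assumption, not a reproduction of the Jefferson Lab CSR-monitor analysis:

```python
import math

def gaussian_form_factor(freq_hz, sigma_t):
    """|F| for a Gaussian bunch of rms duration sigma_t (seconds)."""
    w = 2 * math.pi * freq_hz
    return math.exp(-(w * sigma_t) ** 2 / 2)

def sigma_from_form_factor(freq_hz, f_abs):
    """Invert |F| = exp(-(w * sigma)^2 / 2) at a single frequency."""
    w = 2 * math.pi * freq_hz
    return math.sqrt(-2 * math.log(f_abs)) / w

sigma_true = 300e-15                                  # 300 fs rms bunch
f_meas = gaussian_form_factor(0.5e12, sigma_true)     # form factor at 0.5 THz
sigma_est = sigma_from_form_factor(0.5e12, f_meas)    # recovers 300 fs
```

Real measurements fit the form factor over many frequencies, since a single-point inversion is only exact when the Gaussian model holds.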
NASA Astrophysics Data System (ADS)
Royo, Santiago; Arranz, Maria J.; Arasa, Josep; Cattoen, Michel; Bosch, Thierry
2005-02-01
The present work describes a measurement technique intended to enhance the characterization of the photometric emissions of automotive headlamps, with potential application to any light source, automotive or otherwise. A CCD array with a precisely characterized optical system is used to sample the luminance field of the headlamp just a few centimetres in front of it, by combining deflectometric techniques (yielding the direction of the light beams) and photometric techniques (yielding the energy travelling in each direction). The CCD array scans the measurement plane using a purpose-built mechanical unit and electronics, and image-processing techniques are then used to obtain the photometric behaviour of the headlamp in any given plane, in particular in the planes and positions required by current standards, but also on the road, on traffic signs, etc. An overview of the construction of the system, the measurement principle, and the main calibrations performed on the unit is presented. First results for relative measurements are presented and compared with reference data both from a photometric tunnel and from a plane placed 5 m from the source. Preliminary results for the absolute photometric calibration of the system are also presented for different illumination beams of different headlamps (driving and passing beam).
Videogrammetric Model Deformation Measurement System User's Manual
NASA Technical Reports Server (NTRS)
Dismond, Harriett R.
2002-01-01
The purpose of this manual is to provide the user of the NASA VMD system, running the MDef software, Version 1.10, all information required to operate the system. The NASA Videogrammetric Model Deformation system consists of an automated videogrammetric technique used to measure the change in wing twist and bending under aerodynamic load in a wind tunnel. The basic instrumentation consists of a single CCD video camera and a frame grabber interfaced to a computer. The technique is based upon a single view photogrammetric determination of two-dimensional coordinates of wing targets with fixed (and known) third dimensional coordinate, namely the span-wise location. The major consideration in the development of the measurement system was that productivity must not be appreciably reduced.
Discussion of flight experiments with an entry research vehicle
NASA Technical Reports Server (NTRS)
Potter, J. L.
1985-01-01
The focus of interest is the maneuvering flight of advanced entry vehicles operating at altitudes above 50 km and at velocities of 5 to 8 km/s. Information resulting in more accurate aerodynamic analysis is sought, and measurement techniques that appear to be applicable are identified. Measurements discussed include: shock layer or boundary layer profiles of velocity, temperature, species mass fractions, and other gas properties associated with aerodynamic heating; surface energy transfer processes; nonequilibrium flow processes and pressure distribution; separated, vortical leeside flow of nonequilibrium fluid; boundary layer transition on highly swept configurations; and shock and surface slip and gas/surface interaction. Further study should focus on evolving measurement techniques, installation requirements, and identification of the portions of flights where successful results seem probable.
[Remote sensing of atmospheric trace gas by airborne passive FTIR].
Gao, Min-quang; Liu, Wen-qing; Zhang, Tian-shu; Liu, Jian-guo; Lu, Yi-huai; Wang, Ya-ping; Xu, Liang; Zhu, Jun; Chen, Jun
2006-12-01
The present article describes the details of airborne measurements for remote sensing of atmospheric trace gases over various surface backgrounds with airborne passive FTIR. The passive down-viewing remote sensing technique used in the experiment is discussed. The method of acquiring atmospheric trace gas infrared characteristic spectra against complicated backgrounds and the algorithm for concentration retrieval are discussed. The concentrations of CO and N2O in the boundary-layer atmosphere of the experimental region below 1000 m are analyzed quantitatively. This measurement technique and data analysis method, which do not require a previously measured background spectrum, allow fast and mobile remote detection and identification of atmospheric trace gases over large areas, and can also be used for emergency monitoring of accidental pollution releases.
Computational multiheterodyne spectroscopy
Burghoff, David; Yang, Yang; Hu, Qing
2016-01-01
Dual-comb spectroscopy allows for high-resolution spectra to be measured over broad bandwidths, but an essential requirement for coherent integration is the availability of a phase reference. Usually, this means that the combs’ phase and timing errors must be measured and either minimized by stabilization or removed by correction, limiting the technique’s applicability. We demonstrate that it is possible to extract the phase and timing signals of a multiheterodyne spectrum completely computationally, without any extra measurements or optical elements. These techniques are viable even when the relative linewidth exceeds the repetition rate difference and can tremendously simplify any dual-comb system. By reconceptualizing frequency combs in terms of the temporal structure of their phase noise, not their frequency stability, we can greatly expand the scope of multiheterodyne techniques. PMID:27847870
Follett, R K; Delettrez, J A; Edgell, D H; Henchen, R J; Katz, J; Myatt, J F; Froula, D H
2016-11-01
Collective Thomson scattering is a technique for measuring the plasma conditions in laser-plasma experiments. Simultaneous measurements of ion-acoustic and electron plasma-wave spectra were obtained using a 263.25-nm Thomson-scattering probe beam. A fully reflective collection system was used to record light scattered from electron plasma waves at electron densities greater than 10 21 cm -3 , which produced scattering peaks near 200 nm. An accurate analysis of the experimental Thomson-scattering spectra required accounting for plasma gradients, instrument sensitivity, optical effects, and background radiation. Practical techniques for including these effects when fitting Thomson-scattering spectra are presented and applied to the measured spectra to show the improvements in plasma characterization.
Li, Ling; Willard, Belinda; Rachdaoui, Nadia; Kirwan, John P.; Sadygov, Rovshan G.; Stanley, William C.; Previs, Stephen; McCullough, Arthur J.; Kasumov, Takhar
2012-01-01
Understanding the pathologies related to the regulation of protein metabolism requires methods for studying the kinetics of individual proteins. We developed a 2H2O metabolic labeling technique and software for protein kinetic studies in free living organisms. This approach for proteome dynamic studies requires the measurement of total body water enrichments by GC-MS, isotopic distribution of the tryptic peptide by LC-MS/MS, and estimation of the asymptotical number of deuterium incorporated into a peptide by software. We applied this technique to measure the synthesis rates of several plasma lipoproteins and acute phase response proteins in rats. Samples were collected at different time points, and proteins were separated by a gradient gel electrophoresis. 2H labeling of tryptic peptides was analyzed by ion trap tandem mass spectrometry (LTQ MS/MS) for measurement of the fractional synthesis rates of plasma proteins. The high sensitivity of LTQ MS in zoom scan mode in combination with 2H label amplification in proteolytic peptides allows detection of the changes in plasma protein synthesis related to animal nutritional status. Our results demonstrate that fasting has divergent effects on the rate of synthesis of plasma proteins, increasing synthesis of ApoB 100 but decreasing formation of albumin and fibrinogen. We conclude that this technique can effectively measure the synthesis of plasma proteins and can be used to study the regulation of protein homeostasis under physiological and pathological conditions. PMID:22393261
A nonlinear OPC technique for laser beam control in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Markov, V.; Khizhnyak, A.; Sprangle, P.; Ting, A.; DeSandre, L.; Hafizi, B.
2013-05-01
A viable beam control technique is critical for effective laser beam transmission through turbulent atmosphere. Most established approaches require information on the impact of perturbations on the wavefront of the propagated waves. Such information can be acquired by measuring the characteristics of the target-scattered light arriving from a small, preferably diffraction-limited, beacon. This paper discusses an innovative beam control approach that can support formation of a tight laser beacon in deep turbulence conditions. The technique employs Brillouin-enhanced four-wave mixing (BEFWM) to generate a localized beacon spot on a remote image-resolved target. Formation of the tight beacon does not require a wavefront sensor, AO system, or predictive feedback algorithm. Unlike conventional adaptive optics methods, which allow wavefront conjugation, the proposed total field conjugation technique is critical for beam control in the presence of strong turbulence and can be achieved by using this nonlinear BEFWM technique. The phase information retrieved from the established beacon beam can then be used in conjunction with an AO system to propagate laser beams in deep turbulence.
A convenient technique for polarimetric calibration of single-antenna radar systems
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Ulaby, Fawwaz T.
1990-01-01
A practical technique for calibrating single-antenna polarimetric radar systems is introduced. This technique requires only a single calibration target such as a conducting sphere or a trihedral corner reflector to calibrate the radar system, both in amplitude and phase, for all linear polarization configurations. By using a metal sphere, which is orientation independent, error in calibration measurement is minimized while simultaneously calibrating the crosspolarization channels. The antenna system and two orthogonal channels (in free space) are modeled as a four-port passive network. Upon using the reciprocity relations for the passive network and assuming the crosscoupling terms of the antenna to be equal, the crosstalk factors of the antenna system and the transmit and receive channel imbalances can be obtained from measurement of the backscatter from a metal sphere. For an X-band radar system with crosspolarization isolation of 25 dB, comparison of values measured for a sphere and a cylinder with theoretical values shows agreement within 0.4 dB in magnitude and 5 deg in phase. An effective polarization isolation of 50 dB is achieved using this calibration technique.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is used to generate odometry measurements. The visual odometry measurements are intended to be used as control inputs, or as measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors, to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
Sturtevant, Blake T; Pantea, Cristian; Sinha, Dipen N
2016-10-01
A simple and inexpensive approach to acquiring signals in the megahertz frequency range using a smartphone is described. The approach is general, applicable to electromagnetic as well as acoustic measurements, and makes available to undergraduate teaching laboratories experiments that are traditionally inaccessible due to the expensive equipment that is required. This paper focuses on megahertz-range ultrasonic resonance spectra in liquids and solids, although there is virtually no upper limit on the frequencies measurable with this technique. Acoustic resonance measurements in water and Fluorinert in a one-dimensional (1D) resonant cavity were conducted and used to calculate the sound speed. The technique is shown to have a precision and accuracy in liquid sound speed significantly better than one percent. Measurements of 3D resonances in an isotropic solid sphere were also made and used to determine the bulk and shear moduli of the sample. The elastic moduli determined from the solid resonance measurements agreed with those determined using a research-grade vector network analyzer to better than 0.5%. The apparatus and measurement technique described can thus provide research-grade measurements using standard laboratory equipment at a cost two to three orders of magnitude less than that of the traditional measurement equipment.
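For a 1D resonant cavity of length L, consecutive acoustic resonances are spaced by df = c / 2L, so the sound speed follows directly from the measured peak spacing. A minimal sketch of that reduction (the cavity length and peak frequencies below are illustrative, not the paper's data):

```python
def sound_speed_from_resonances(peak_freqs_hz, cavity_length_m):
    """c = 2 * L * (mean spacing of consecutive resonance peaks)."""
    spacings = [b - a for a, b in zip(peak_freqs_hz, peak_freqs_hz[1:])]
    mean_df = sum(spacings) / len(spacings)
    return 2 * cavity_length_m * mean_df

# Illustrative peaks for a 10 mm water cavity (spacing ~74.1 kHz -> ~1482 m/s)
peaks = [1.0372e6, 1.1113e6, 1.1854e6, 1.2595e6]
c = sound_speed_from_resonances(peaks, 0.010)
```

Averaging over many spacings, as done here, suppresses the uncertainty in locating any individual peak.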
Centralized light-source optical access network based on polarization multiplexing.
Grassi, Fulvio; Mora, José; Ortega, Beatriz; Capmany, José
2010-03-01
This paper presents and demonstrates a centralized light-source optical access network based on an optical polarization multiplexing technique. By using two optical sources in the Central Node emitting orthogonally polarized light for downstream and upstream operations, the Remote Node is kept source-free. EVM values below telecommunication standard requirements were measured experimentally when bidirectional digital signals were transmitted over 10 km of SMF employing a subcarrier multiplexing technique in the electrical domain.
Orthognathic model surgery with LEGO key-spacer.
Tsang, Alfred Chee-Ching; Lee, Alfred Siu Hong; Li, Wai Keung
2013-12-01
A new technique of model surgery using LEGO plates as key-spacers is described. This technique requires less time to set up compared with the conventional plaster model method. It also retains the preoperative setup with the same set of models. Movement of the segments can be measured and examined in detail with LEGO key-spacers. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Evaluation of response variables in computer-simulated virtual cataract surgery
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif
2006-02-01
We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision of the estimation of response variables identified for measuring the performance of VR phaco surgery. We identified 31 response variables measuring the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, 8 medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals, and for iterations of sessions within individuals, were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variability for sample-size requirements were determined. Generally, there was more variability across iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting differences between independent groups, or change within a group, for the sculpting phase than for the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for the evaluation of VR phaco surgery.
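Sample-size requirements of the kind derived here follow from the estimated variance components: averaging m sessions per subject gives a subject-mean variance of sigma_between^2 + sigma_within^2 / m, and the classical normal-approximation size per group for detecting a difference delta is n = 2 (z_alpha + z_beta)^2 (sigma_between^2 + sigma_within^2 / m) / delta^2. A minimal sketch with hypothetical variance components (not the study's estimates):

```python
import math

def group_size(sigma_between_sq, sigma_within_sq, m_sessions, delta,
               z_alpha=1.96, z_beta=0.84):
    """Normal-approximation n per group (80% power, two-sided alpha=0.05 by
    default) for comparing subject means averaged over m sessions."""
    var_mean = sigma_between_sq + sigma_within_sq / m_sessions
    n = 2 * (z_alpha + z_beta) ** 2 * var_mean / delta ** 2
    return math.ceil(n)

# Hypothetical components: more session-to-session scatter inflates n
n_sculpt = group_size(1.0, 4.0, 3, 1.5)   # noisier phase
n_evac   = group_size(1.0, 1.0, 3, 1.5)   # steadier phase
```

This reproduces the abstract's qualitative finding: the phase with larger within-subject variability demands the larger sample.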
NASA Technical Reports Server (NTRS)
Lesco, D. J.; Weikle, D. H.
1980-01-01
Topics related to wideband electric power measurement, namely electronic wattmeter calibration and specification, are discussed. Tested calibration techniques are described in detail. Analytical methods used to determine the bandwidth requirements of instrumentation for switching circuit waveforms are presented and illustrated with examples from electric-vehicle-type applications. Analog multiplier wattmeters, digital wattmeters and calculating digital oscilloscopes are compared. The instrumentation characteristics that are critical to accurate wideband power measurement are described.
Parks, Donovan H; Beiko, Robert G
2013-01-01
High-throughput sequencing techniques have made large-scale spatial and temporal surveys of microbial communities routine. Gaining insight into microbial diversity requires methods for effectively analyzing and visualizing these extensive data sets. Phylogenetic β-diversity measures address this challenge by allowing the relationship between large numbers of environmental samples to be explored using standard multivariate analysis techniques. Despite the success and widespread use of phylogenetic β-diversity measures, an extensive comparative analysis of these measures has not been performed. Here, we compare 39 measures of phylogenetic β diversity in order to establish the relative similarity of these measures along with key properties and performance characteristics. While many measures are highly correlated, those commonly used within microbial ecology were found to be distinct from those popular within classical ecology, and from the recently recommended Gower and Canberra measures. Many of the measures are surprisingly robust to different rootings of the gene tree, the choice of similarity threshold used to define operational taxonomic units, and the presence of outlying basal lineages. Measures differ considerably in their sensitivity to rare organisms, and the effectiveness of measures can vary substantially under alternative models of differentiation. Consequently, the depth of sequencing required to reveal underlying patterns of relationships between environmental samples depends on the selected measure. Our results demonstrate that using complementary measures of phylogenetic β diversity can further our understanding of how communities are phylogenetically differentiated. Open-source software implementing the phylogenetic β-diversity measures evaluated in this manuscript is available at http://kiwi.cs.dal.ca/Software/ExpressBetaDiversity.
Flexible Micropost Arrays for Shear Stress Measurement
NASA Technical Reports Server (NTRS)
Wohl, Christopher J.; Palmieri, Frank L.; Hopkins, John W.; Jackson, Allen M.; Connell, John W.; Lin, Yi; Cisotto, Alexxandra A.
2015-01-01
Increased fuel costs, heightened environmental protection requirements, and noise abatement continue to place drag reduction at the forefront of aerospace research priorities. Unfortunately, shortfalls still exist in the fundamental understanding of boundary-layer airflow over aerodynamic surfaces, especially regarding drag arising from skin friction. For example, there is insufficient availability of instrumentation to adequately characterize complex flows with strong pressure gradients, heat transfer, wall mass flux, three-dimensionality, separation, shock waves, and transient phenomena. One example is the acoustic liner efficacy on aircraft engine nacelle walls. Active measurement of shear stress in boundary layer airflow would enable a better understanding of how aircraft structure and flight dynamics affect skin friction. Current shear stress measurement techniques suffer from reliability, complexity, and airflow disruption, thereby compromising resultant shear stress data. The state-of-the-art for shear stress sensing uses indirect or direct measurement techniques. Indirect measurements (e.g., hot-wire, heat flux gages, oil interferometry, laser Doppler anemometry, small scale pressure drag surfaces, i.e., fences) require intricate knowledge of the studied flow, restrictive instrument arrangements, large surface areas, flow disruption, or seeding material; with smaller, higher bandwidth probes under development. Direct measurements involve strain displacement of a sensor element and require no prior knowledge of the flow. Unfortunately, conventional "floating" recessed components for direct measurements are mm to cm in size. Whispering gallery mode devices and Fiber Bragg Gratings are examples of recent additions to this type of sensor with much smaller (µm) sensor components. Direct detection techniques are often single point measurements and difficult to calibrate and implement in wind tunnel experiments.
In addition, the wiring, packaging, and installation of delicate micro-electromechanical devices impede the use of most direct shear sensors. Similarly, the cavity required for sensing element displacement is sensitive to particulate obstruction. This work was focused on developing a shear stress sensor for use in subsonic wind tunnel test facilities applicable to an array of test configurations. The non-displacement shear sensors described here have minimal packaging requirements resulting in minimal or no disturbance of boundary layer flow. Compared to previous concepts, device installation could be simple with reduced cost and down-time. The novelty lies in the creation of low profile (nanoscale to 100 µm) micropost arrays that stay within the viscous sub-layer of the airflow. Aerodynamic forces, which are related to the surface shear stress, cause post deflection and optical property changes. Ultimately, a reliable, accurate shear stress sensor that does not disrupt the airflow has the potential to provide high value data for flow physics researchers, aerodynamicists, and aircraft manufacturers leading to greater flight efficiency arising from more in-depth knowledge on how aircraft design impacts near surface properties.
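The link between wall shear and micropost deflection can be sketched with a simple Euler-Bernoulli cantilever model. This is only a rough illustration, not the program's actual optical readout: the point-load approximation, post dimensions, and modulus below are all assumed for the example.

```python
import math

def post_tip_deflection(tau_w, d, L, E):
    """Estimate tip deflection of a cylindrical micropost in the viscous
    sublayer, crudely treating the aerodynamic load as a point force at
    the tip (all inputs illustrative assumptions).

    tau_w : wall shear stress (Pa)
    d     : post diameter (m)
    L     : post height (m)
    E     : elastic modulus of the post material (Pa)
    """
    F = tau_w * d * L            # crude load: shear stress times a d x L area
    I = math.pi * d**4 / 64.0    # second moment of area, circular cross-section
    return F * L**3 / (3.0 * E * I)   # cantilever tip deflection

# Illustrative numbers: 1 Pa wall shear on a 10 um x 100 um polymer post (E ~ 3 GPa)
delta = post_tip_deflection(tau_w=1.0, d=10e-6, L=100e-6, E=3e9)
print(f"tip deflection ~ {delta*1e9:.2f} nm")
```

Sub-nanometer deflections at these scales suggest why an optical (rather than mechanical) readout of the post arrays is attractive.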
Investigating the use of multi-point coupling for single-sensor bearing estimation in one direction
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Phoenix, Austin A.; Tarazaga, Pablo A.
2018-04-01
Bearing estimation of radially propagating symmetric waves in solid structures typically requires a minimum of two sensors. This research investigates the use of multi-point coupling to provide directional inference from a single sensor, using a beam as the test specimen. This allows the number of sensors required for localization to be reduced. A finite-element model of a beam is constructed with a symmetrically placed bipod that has asymmetric joint-stiffness properties. Impulse loading is applied at different points along the beam, and measurements are taken from the apex of the bipod. A technique is developed to determine the direction-of-arrival of the propagating wave. The accuracy of this technique with the bipod is compared against results gathered without the bipod, measuring instead from an asymmetric location along the beam. The results show 92% accuracy when the bipod is used, compared to 75% when measuring without the bipod from an asymmetric location. A geometry investigation finds the best accuracy when one leg of the bipod has a low stiffness and a large diameter relative to the other leg.
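The core idea can be caricatured in a few lines: if the asymmetric legs "color" arrivals from each side with different spectral content, a single apex sensor can classify direction from a band-energy ratio. The resonance frequencies and noise level below are invented for illustration; the actual study uses a finite-element model, not this toy signal model.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                      # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)

def apex_response(direction):
    """Toy single-sensor response at the bipod apex: the asymmetric joint
    stiffness is modeled as a different dominant resonance for waves
    arriving through each leg (frequencies are illustrative)."""
    f0 = 400.0 if direction == "left" else 900.0
    return np.exp(-60 * t) * np.sin(2 * np.pi * f0 * t) \
        + 0.05 * rng.standard_normal(t.size)

def estimate_direction(x):
    """Classify direction-of-arrival from the spectral band-energy ratio."""
    X = np.abs(np.fft.rfft(x))**2
    f = np.fft.rfftfreq(x.size, 1 / fs)
    low = X[(f > 200) & (f < 600)].sum()
    high = X[(f > 700) & (f < 1100)].sum()
    return "left" if low > high else "right"

hits = sum(estimate_direction(apex_response(d)) == d
           for d in ["left", "right"] * 50)
print(f"{hits}/100 correct")
```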
NASA Astrophysics Data System (ADS)
Uchida, Takeyoshi; Kikuchi, Tsuneo
2013-07-01
Ultrasonic power is one of the key quantities closely related to the safety of medical ultrasonic equipment. An ultrasonic power standard is required for the establishment of safety. Generally, an ultrasonic power standard below approximately 20 W is established by the radiation force balance (RFB) method, the most accurate measurement technique. However, the RFB is not suitable for high ultrasonic power because of thermal damage to the absorbing target. Consequently, an alternative method to the RFB is required. We have been developing a measurement technique for high ultrasonic power by the calorimetric method. In this study, we examined the effect of heat generation of an ultrasound transducer on ultrasonic power measured by the calorimetric method. As a result, an excessively high ultrasonic power was measured owing to the effect of heat generation from internal loss in the transducer. A reference ultrasound transducer with low heat generation is required for a high ultrasonic power standard established by the calorimetric method.
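The calorimetric principle, and the transducer self-heating bias described above, can be sketched in a few lines. The liquid mass, heating rate, and 2 W self-heating figure are illustrative assumptions, not values from the study.

```python
def ultrasonic_power_calorimetric(mass_kg, c_p, dT_dt, transducer_heat_w=0.0):
    """Calorimetric estimate of ultrasonic power: the rate of temperature
    rise of the absorbing liquid times its heat capacity, minus any heat
    generated inside the transducer itself (the bias discussed above)."""
    return mass_kg * c_p * dT_dt - transducer_heat_w

# Illustrative: 0.5 kg of water (c_p ~ 4186 J/(kg K)) warming at 0.01 K/s
raw = ultrasonic_power_calorimetric(0.5, 4186.0, 0.01)
corrected = ultrasonic_power_calorimetric(0.5, 4186.0, 0.01, transducer_heat_w=2.0)
print(raw, corrected)   # the uncorrected value overestimates the true power
```

A low-heat-generation reference transducer makes the correction term small and well characterized, which is the motivation stated in the abstract.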
The measurement of the transmission loss of single leaf walls and panels by an impulse method
NASA Astrophysics Data System (ADS)
Balilah, Y. A.; Gibbs, B. M.
1988-06-01
The standard methods of measurement and rating of sound insulation of panels and walls are generally time-consuming and require expensive and often bulky equipment. In addition, the methods establish only that there has been failure to comply with insulation requirements without indicating the mode of failure. An impulse technique is proposed for the measurement of walls and partitions in situ. The method requires the digital capture of a short duration signal generated by a loudspeaker, and the isolation of the direct component from other reflected and scattered components by time-of-flight methods and windowing. The signal, when transferred from the time to frequency domain by means of fast Fourier transforms, can yield the sound insulation of a partition expressed as a transfer function. Experimental problems in the use of this technique, including those resulting from sphericity of the incident wave front and concentric bending excitation of the partition, are identified and methods proposed for their elimination. Most of the results presented are of single leaf panels subjected to sound at normal incidence, although some measurements were undertaken at oblique incidence. The range of surface densities considered was 7-500 kg/m², the highest value corresponding to a brick and plaster wall of thickness 285 mm. Measurement is compared with theoretical prediction, at one-third octave intervals in a frequency range of 100-5000 Hz, or as a continuous function of frequency with a typical resolution of 12.5 Hz. The dynamic range of the measurement equipment sets an upper limit to the measurable transmission loss. For the equipment eventually employed this was represented by a random incidence value of 50 dB.
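The time-of-flight gating and FFT steps can be sketched with simulated signals. The pulse shapes, delays, and attenuation below are all invented for illustration; a real measurement would form the ratio of gated incident and transmitted microphone spectra.

```python
import numpy as np

fs = 48_000                       # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(1)

def pulse(delay, amp):
    """Band-limited test pulse (sinc with ~1 kHz bandwidth)."""
    return amp * np.sinc(2000.0 * (t - delay))

# Toy microphone signals: each contains a direct pulse plus a later room
# reflection; gating by time-of-flight keeps only the direct component.
incident    = pulse(0.005, 1.00) + pulse(0.040, 0.50) \
    + 0.01 * rng.standard_normal(t.size)
transmitted = pulse(0.005, 0.03) + pulse(0.045, 0.02) \
    + 0.001 * rng.standard_normal(t.size)

gate = (t > 0.002) & (t < 0.020)          # window around the direct pulse only
H_i = np.fft.rfft(incident * gate)
H_t = np.fft.rfft(transmitted * gate)
f = np.fft.rfftfreq(t.size, 1 / fs)

band = (f > 100) & (f < 1000)             # evaluate inside the pulse bandwidth
tl = 20 * np.log10(np.abs(H_i[band]) / np.abs(H_t[band]))
print(f"mean transmission loss in band: {tl.mean():.1f} dB")
```

Because the reflections arrive outside the gate, they are excluded from the transfer function, which is exactly what the in-situ impulse method exploits.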
Ultrasonic techniques for measuring physical properties of fluids in harsh environments
NASA Astrophysics Data System (ADS)
Pantea, Cristian
Ultrasonic-based measurement techniques, either in the time domain or in the frequency domain, include a wide range of experimental methods for investigating physical properties of materials. This discussion is specifically focused on ultrasonic methods and instrumentation development for the determination of liquid properties at conditions typically found in subsurface environments (in the U.S., more than 80% of total energy needs are provided by subsurface energy sources). Such sensors require materials that can withstand harsh conditions of high pressure, high temperature and corrosiveness. These include the piezoelectric material, electrically conductive adhesives, sensor housings/enclosures, and the signal carrying cables, to name a few. A complete sensor package was developed for operation at high temperatures and pressures characteristic to geothermal/oil-industry reservoirs. This package is designed to provide real-time, simultaneous measurements of multiple physical parameters, such as temperature, pressure, salinity and sound speed. The basic principle for this sensor's operation is an ultrasonic frequency domain technique, combined with transducer resonance tracking. This multipurpose acoustic sensor can be used at depths of several thousand meters, temperatures up to 250 °C, and in a very corrosive environment. In the context of high precision measurement of sound speed, the determination of acoustic nonlinearity of liquids will also be discussed, using two different approaches: (i) the thermodynamic method, in which precise and accurate frequency domain sound speed measurements are performed at high pressure and high temperature, and (ii) a modified finite amplitude method, requiring time domain measurements of the second harmonic at room temperature. Efforts toward the development of an acoustic source of collimated low-frequency (10-150 kHz) beam, with applications in imaging, will also be presented.
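One common frequency-domain route to sound speed, consistent with the resonance-tracking approach mentioned above, uses the fact that a fixed acoustic path of length L produces resonances spaced by Δf = c/(2L). The cavity length, liquid sound speed, and line widths below are illustrative assumptions, not the sensor's actual parameters.

```python
import numpy as np

L = 0.02                       # acoustic path length of the cavity, m (assumed)
c_true = 1500.0                # sound speed of the liquid, m/s (assumed)

# Simulated frequency sweep: Lorentzian resonance peaks every c/(2L)
f = np.linspace(0.2e6, 2.0e6, 20000)
df_true = c_true / (2 * L)
response = sum(1.0 / (1 + ((f - n * df_true) / 2e3)**2) for n in range(1, 60))

# Pick local maxima and estimate sound speed from the mean peak spacing
interior = response[1:-1]
peaks = f[1:-1][(interior > response[:-2]) & (interior > response[2:])]
df_est = np.diff(peaks).mean()
c_est = 2 * L * df_est
print(f"estimated sound speed: {c_est:.0f} m/s")
```

Tracking how the peak spacing drifts with temperature and pressure is what allows one sensor package to report several physical parameters at once.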
Rogge, Matthew D; Leckey, Cara A C
2013-09-01
Delaminations in composite laminates resulting from impact events may be accompanied by minimal indication of damage at the surface. As such, inspections are required to ensure defects are within allowable limits. Conventional ultrasonic scanning techniques have been shown to effectively characterize the size and depth of delaminations but require physical contact with the structure and considerable setup time. Alternatively, a non-contact scanning laser vibrometer may be used to measure guided wave propagation in the laminate structure generated by permanently bonded transducers. A local Fourier domain analysis method is presented for processing guided wavefield data to estimate spatially dependent wavenumber values, which can be used to determine delamination depth. The technique is applied to simulated wavefields and results are analyzed to determine limitations of the technique with regards to determining defect size and depth. Based on simulation results, guidelines for application of the technique are developed. Finally, experimental wavefield data is obtained in quasi-isotropic carbon fiber reinforced polymer (CFRP) laminates with impact damage. The recorded wavefields are analyzed and wavenumber is measured to an accuracy of up to 8.5% in the region of shallow delaminations. These results show the promise of local wavenumber domain analysis to characterize the depth of delamination damage in composite laminates. The technique can find application in automated vehicle health assurance systems with potential for high detection rates and greatly reduced operator effort and setup time. Published by Elsevier B.V.
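The local Fourier-domain idea can be illustrated in one dimension: window the wavefield, take a spatial FFT in each window, and keep the dominant wavenumber, which shifts where the effective plate thickness changes. The wavenumber values, delamination location, and window sizes below are invented for the sketch; the paper works with full 2-D scanned wavefields.

```python
import numpy as np

# Toy 1-D wavefield: wavenumber doubles over a "delaminated" region, since a
# shallower effective plate shortens the guided wavelength (values assumed)
x = np.linspace(0, 1.0, 2000)                           # scan position, m
k_map = np.where((x > 0.4) & (x < 0.6), 400.0, 200.0)   # rad/m
phase = np.cumsum(k_map) * (x[1] - x[0])
field = np.sin(phase)

def local_wavenumber(u, dx, win=256):
    """Local Fourier-domain estimate: short windowed FFTs along the scan,
    keeping the dominant spatial frequency in each window."""
    ks = []
    for start in range(0, u.size - win, win // 4):
        seg = u[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg))
        k = 2 * np.pi * np.fft.rfftfreq(win, dx)[np.argmax(spec[1:]) + 1]
        ks.append(k)
    return np.array(ks)

k_est = local_wavenumber(field, x[1] - x[0])
print(k_est.min(), k_est.max())   # roughly the two imposed wavenumbers
```

The window length sets the trade-off noted in the abstract: shorter windows localize small delaminations better but coarsen the wavenumber (and hence depth) resolution.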
Automated quantification of the synchrogram by recurrence plot analysis.
Nguyen, Chinh Duc; Wilson, Stephen James; Crozier, Stuart
2012-04-01
Recently, the concept of phase synchronization of two weakly coupled oscillators has raised great research interest and has been applied to characterize synchronization phenomena in physiological data. Phase synchronization of cardiorespiratory coupling is often studied by synchrogram analysis, a graphical tool investigating the relationship between the instantaneous phases of two signals. Although several techniques have been proposed to automatically quantify the synchrogram, most of them require a preselection of a phase-locking ratio by trial and error. One technique does not require this information; however, it is based on the power spectrum of the phase distribution in the synchrogram, which is vulnerable to noise. This study aims to introduce a new technique to automatically quantify the synchrogram by studying its dynamic structure. Our technique exploits recurrence plot analysis, which is a well-established tool for characterizing recurring patterns and nonstationarities in experimental data. We applied our technique to detect synchronization in simulated and measured infants' cardiorespiratory data. Our results suggest that the proposed technique is able to systematically detect synchronization in noisy and chaotic data without preselecting the phase-locking ratio. By embedding phase information of the synchrogram into phase space, the phase-locking ratio is automatically unveiled as the number of attractors.
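A minimal recurrence-based statistic illustrates why this works: during a phase-locked epoch the synchrogram phases revisit a few discrete bands, so recurrences are frequent, while drifting phases recur rarely. The recurrence-rate measure and the 3:1-locked toy data below are simplified stand-ins for the paper's full recurrence plot analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy synchrogram phases: a 3:1 phase-locked segment (three recurring phase
# bands, values assumed) followed by an unsynchronized phase drift
locked = np.tile([0.5, 2.5, 4.5], 60) + 0.05 * rng.standard_normal(180)
drift = np.mod(np.linspace(0, 40, 180), 2 * np.pi)
phases = np.concatenate([locked, drift])

def recurrence_rate(x, eps=0.2, window=60):
    """Fraction of recurrent pairs |x_i - x_j| < eps in a sliding window;
    high values indicate the recurring structure of a phase-locked epoch."""
    rates = []
    for s in range(0, x.size - window):
        seg = x[s:s + window]
        d = np.abs(seg[:, None] - seg[None, :])
        rates.append((d < eps).mean())
    return np.array(rates)

rr = recurrence_rate(phases)
print(rr[:10].mean(), rr[-10:].mean())   # locked epoch recurs far more often
```

No phase-locking ratio is supplied anywhere; the recurring band structure (and hence the ratio) emerges from the data itself, which is the abstract's central claim.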
Deairing Techniques for Double-Ended Centrifugal Total Artificial Heart Implantation.
Karimov, Jamshid H; Horvath, David J; Byram, Nicole; Sunagawa, Gengo; Grady, Patrick; Sinkewich, Martin; Moazami, Nader; Sale, Shiva; Golding, Leonard A R; Fukamachi, Kiyotaka
2017-06-01
The unique device architecture of the Cleveland Clinic continuous-flow total artificial heart (CFTAH) requires dedicated and specific air-removal techniques during device implantation in vivo. These procedures comprise special surgical techniques and intraoperative manipulations, as well as engineering design changes and optimizations to the device itself. The current study evaluated the optimal air-removal techniques during the Cleveland Clinic double-ended centrifugal CFTAH in vivo implants (n = 17). Techniques and pump design iterations consisted of developing a priming method for the device and the use of built-in deairing ports in the early cases (n = 5). In the remaining cases (n = 12), deairing ports were not used. Dedicated air-removal ports were not considered an essential design requirement, and such ports may represent an additional risk for pump thrombosis. Careful passive deairing was found to be an effective measure with a centrifugal pump of this design. In this report, the techniques and design changes that were made during this CFTAH development program to enable effective residual air removal and prevention of air embolism during in vivo device implantation are explained. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Ogunlade, Olumide; Chen, Yifan; Kosmas, Panagiotis
2010-01-01
Measurements of the complex permittivity of various concentrations of microbubbles in ethylene glycol liquid phantom have been carried out. A cavity perturbation technique using custom rectangular waveguide cavities, which are sensitive to small changes in the permittivity of the perturber, has been employed. Three different frequencies within the ultra-wideband (UWB) frequency spectrum have been used for the experiments. The results show that the concentration of the air filled microbubbles required to achieve a dielectric contrast as little as 2% exceeds the recommended dosage used in clinical ultrasound applications, by more than two orders of magnitude.
A technique for plasma velocity-space cross-correlation
NASA Astrophysics Data System (ADS)
Mattingly, Sean; Skiff, Fred
2018-05-01
An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.
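The correlation-matrix-plus-SVD pipeline can be sketched with synthetic data: build the velocity-space cross-correlation matrix from time series of the distribution function at each velocity channel, then let the singular value decomposition pull out the mode structures. The two mode shapes and noise level below are invented; real data would be complex-valued, making the matrix Hermitian rather than merely symmetric.

```python
import numpy as np

rng = np.random.default_rng(3)
n_v, n_t = 32, 5000             # velocity channels, time samples (assumed)
v = np.linspace(-3, 3, n_v)

# Toy fluctuating distribution function: two independent "modes", each with a
# fixed structure in velocity space and a random time history, plus noise
mode1 = np.exp(-(v - 1)**2)      # illustrative eigenmode shapes
mode2 = v * np.exp(-v**2)
f_t = (np.outer(rng.standard_normal(n_t), mode1)
       + np.outer(rng.standard_normal(n_t), mode2)
       + 0.1 * rng.standard_normal((n_t, n_v)))

# Velocity-space cross-correlation matrix (symmetric/Hermitian by construction)
C = f_t.T @ f_t / n_t

# Singular value decomposition extracts the dominant fluctuation structures
U, s, _ = np.linalg.svd(C)
print(s[:4])   # a few singular values stand far above the noise floor
```

The columns of `U` associated with the large singular values recover the imposed mode shapes (up to sign and mixing), mirroring how the experiment obtains eigenmode structure on the ion distribution function.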
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also considered. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
Mortimer, Duncan; Segal, Leonie
2008-01-01
Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.
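Of the four techniques listed, transfer-to-utility regression is the simplest to sketch: fit a regression of observed preference-based utilities on a descriptive health-status score, then use the fitted coefficients as the conversion algorithm. The score range, true coefficients, and noise below are simulated, not taken from any real instrument.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated dataset: a descriptive health-status score (0-100) for patients
# who also completed a preference-based instrument yielding weights in [0, 1]
score = rng.uniform(20, 95, 300)
utility = np.clip(0.2 + 0.008 * score + 0.05 * rng.standard_normal(300), 0, 1)

# Transfer-to-utility regression: OLS fit of utility on the descriptive score
X = np.column_stack([np.ones_like(score), score])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)

def predict_qaly_weight(s):
    """Conversion algorithm: map a descriptive score to a QALY-weight."""
    return float(np.clip(beta[0] + beta[1] * s, 0.0, 1.0))

print(predict_qaly_weight(50))
```

The review's caveat applies directly here: the fitted mapping is only as good as the coverage of the descriptive measure and the patient group it was estimated in.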
A comparison of measured and theoretical predictions for STS ascent and entry sonic booms
NASA Technical Reports Server (NTRS)
Garcia, F., Jr.; Jones, J. H.; Henderson, H. R.
1983-01-01
Sonic boom measurements have been obtained during the flights of STS-1 through 5. During STS-1, 2, and 4, entry sonic boom measurements were obtained, and ascent measurements were made on STS-5. The objectives of this measurement program were (1) to define the sonic boom characteristics of the Space Transportation System (STS), (2) to provide a realistic assessment of the validity of existing theoretical prediction techniques, and (3) to establish a level of confidence for predicting future STS configuration sonic boom environments. Detailed evaluation and reporting of the results of this program are in progress. This paper addresses only the significant results, mainly those data obtained during the entry of STS-1 at Edwards Air Force Base (EAFB) and the ascent of STS-5 from Kennedy Space Center (KSC). The theoretical prediction technique employed in this analysis is the so-called Thomas program, a semi-empirical method that requires definitions of the near-field signatures, detailed trajectory characteristics, and the prevailing meteorological characteristics as inputs. This analytical procedure then extrapolates the near-field signatures from the flight altitude to an altitude consistent with each measurement location.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
The net upward longwave surface radiation is exceedingly difficult to measure from space. A hybrid method using General Circulation Model (GCM) simulations and satellite data from the Earth Radiation Budget Experiment (ERBE) and the International Satellite Cloud Climatology Project (ISCCP) was used to produce global maps of this quantity over oceanic areas. An advantage of this technique is that no independent knowledge or assumptions regarding cloud cover for a particular month are required. The only information required is a relationship between the cloud radiation forcing (CRF) at the top of the atmosphere and that at the surface, which is obtained from the GCM simulation. A flow diagram of the technique and results are given.
NASA Astrophysics Data System (ADS)
McCarren, Dustin; Vandervort, Robert; Soderholm, Mark; Carr, Jerry, Jr.; Galante, Matthew; Magee, Richard; Scime, Earl
2013-10-01
Cavity ring-down spectroscopy (CRDS) is a proven, ultra-sensitive, cavity enhanced absorption spectroscopy technique. When combined with a continuous-wave (CW) diode laser that has a sufficiently narrow line width, the Doppler-broadened absorption line, i.e., the ion velocity distribution function (IVDF), can be measured. Measurements of IVDFs can be made using established techniques, such as laser induced fluorescence (LIF). However, LIF suffers from the requirement that the initial state of the LIF sequence have a substantial density. This usually limits LIF to ions and atoms with large metastable state densities for the given plasma conditions. CW-CRDS is considerably more sensitive than LIF and can potentially be applied to much lower density populations of ion and atom states. In this work we present ongoing measurements with the CW-CRDS diagnostic and discuss the technical challenges of using CW-CRDS to make measurements in a helicon plasma.
Precision manufacturing for clinical-quality regenerative medicines.
Williams, David J; Thomas, Robert J; Hourd, Paul C; Chandra, Amit; Ratcliffe, Elizabeth; Liu, Yang; Rayment, Erin A; Archer, J Richard
2012-08-28
Innovations in engineering applied to healthcare make a significant difference to people's lives. Market growth is guaranteed by demographics. Regulation and requirements for good manufacturing practice-extreme levels of repeatability and reliability-demand high-precision process and measurement solutions. Emerging technologies using living biological materials add complexity. This paper presents some results of work demonstrating the precision automated manufacture of living materials, particularly the expansion of populations of human stem cells for therapeutic use as regenerative medicines. The paper also describes quality engineering techniques for precision process design and improvement, and identifies the requirements for manufacturing technology and measurement systems evolution for such therapies.
Arm Locking for the Laser Interferometer Space Antenna
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Thorpe, J. I.; Livas, J.
2009-01-01
The Laser Interferometer Space Antenna (LISA) mission is a planned gravitational wave detector consisting of three spacecraft in heliocentric orbit. Laser interferometry is used to measure distance fluctuations between test masses aboard each spacecraft to the picometer level over a 5 million kilometer separation. Laser frequency fluctuations must be suppressed in order to meet the measurement requirements. Arm-locking, a technique that uses the constellation of spacecraft as a frequency reference, is a proposed method for stabilizing the laser frequency. We consider the problem of arm-locking using classical optimal control theory and find that our designs satisfy the LISA requirements.
Use of graph theory measures to identify errors in record linkage.
Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B
2014-07-01
Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
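One plausible pair of graph measures for this task (not necessarily the ones used in the study) is sketched below: treat linked record pairs as edges, take groups as connected components, and flag sparse groups, since long chains with few cross-links often indicate linkage errors. The record IDs and links are invented toy data.

```python
from collections import defaultdict, deque

# Toy linked dataset: each pair is a link between two record IDs
links = [(1, 2), (2, 3), (4, 5),            # two small, well-behaved groups
         (6, 7), (7, 8), (8, 9), (9, 10),   # a long chain: a common error sign
         (6, 10)]

adj = defaultdict(set)
for a, b in links:
    adj[a].add(b)
    adj[b].add(a)

def connected_components(adj):
    """Groups in record linkage are the connected components of the link graph."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def density(comp):
    """Link density: actual links / possible links; sparse groups are suspect."""
    n = len(comp)
    m = sum(1 for a, b in links if a in comp and b in comp)
    return m / (n * (n - 1) / 2) if n > 1 else 1.0

for comp in connected_components(adj):
    print(sorted(comp), f"density={density(comp):.2f}")
```

Groups whose density falls below a chosen cutoff would be queued for clerical review, replacing some of the manual inspection the abstract describes.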
Safina, Gulnara
2012-01-27
Carbohydrates (glycans) and their conjugates with proteins and lipids contribute significantly to many biological processes. That makes these compounds important targets to be detected, monitored and identified. The identification of the carbohydrate content in their conjugates with proteins and lipids (glycoforms) is often a challenging task. Most of the conventional instrumental analytical techniques are time-consuming and require tedious sample pretreatment and the use of various labeling agents. Surface plasmon resonance (SPR) has been intensively developed during the last two decades and has received increasing attention for different applications, from the real-time monitoring of affinity bindings to biosensors. SPR does not require any labels and is capable of direct measurement of biospecific interactions occurring on the sensing surface. This review provides a critical comparison of modern analytical instrumental techniques with SPR in terms of their analytical capabilities to detect carbohydrates and their conjugates with proteins and lipids, and to study carbohydrate-specific bindings. A few selected examples of the SPR approaches developed during 2004-2011 for the biosensing of glycoforms and for glycan-protein affinity studies are comprehensively discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. The study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment with the use of conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and measurements of marginal gaps were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences (SPSS). Evidence of marginal gap was then evaluated by t-test. Analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons between the three subgroups. Measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with conventional as well as accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.
Eccentricity Fluctuations Make Flow Measurable in High Multiplicity p-p Collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casalderrey-Solana, Jorge; Wiedemann, Urs Achim
2010-03-12
Elliptic flow is a hallmark of collectivity in hadronic collisions. Its measurement relies on analysis techniques which require high event multiplicity and so far can only be applied to heavy ion collisions. Here, we delineate the conditions under which elliptic flow becomes measurable in samples of high-multiplicity (dN_ch/dy ≥ 50) p-p collisions, which will soon be collected at the LHC. We observe that fluctuations in the p-p interaction region can result in a sizable spatial eccentricity even for the most central p-p collisions. Under relatively mild assumptions on the nature of such fluctuations and on the eccentricity scaling of elliptic flow, we find that the resulting elliptic flow signal in high-multiplicity p-p collisions at the LHC becomes measurable with standard techniques.
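The fluctuation mechanism is easy to demonstrate numerically with the standard participant-eccentricity definition: sample a handful of participant positions from a circularly symmetric profile and the event-by-event eccentricity is still sizable. The Gaussian profile and participant counts below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

def participant_eccentricity(x, y):
    """Spatial eccentricity of an interaction region from the transverse
    positions (x, y) of its participants (standard participant definition)."""
    x = x - x.mean()
    y = y - y.mean()
    sx2, sy2, sxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return np.sqrt((sy2 - sx2)**2 + 4 * sxy**2) / (sx2 + sy2)

# With few participants (as in p-p), fluctuations alone produce a sizable
# eccentricity even though the average profile is circularly symmetric
mean_eps = {}
for n in (5, 20, 1000):
    mean_eps[n] = np.mean([participant_eccentricity(rng.standard_normal(n),
                                                    rng.standard_normal(n))
                           for _ in range(2000)])
    print(n, round(mean_eps[n], 3))
```

The mean eccentricity shrinks roughly like 1/sqrt(n), which is why the effect matters for p-p but averages away in large heavy-ion systems.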
Program for an improved hypersonic temperature-sensing probe
NASA Technical Reports Server (NTRS)
Reilly, Richard J.
1993-01-01
Under a NASA Dryden-sponsored contract in the mid 1960s, temperatures of up to 2200 °C were successfully measured using a fluid oscillator. The current program, although limited in scope, explores the problem areas which must be solved if this technique is to be extended to 10,000 R. The potential for measuring extremely high temperatures using fluid oscillator techniques stems from the fact that the measuring element is the fluid itself. The containing structure of the oscillator need not be brought to equilibrium temperature with the fluid for temperature measurement, provided that a suitable calibration can be arranged. This program concentrated on review of high-temperature material developments since the original program was completed. Other areas of limited study included related pressure instrumentation requirements, dissociation, rarefied gas effects, and analysis of sensor time response.
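The principle behind fluidic-oscillator thermometry can be sketched from ideal-gas acoustics: the oscillation frequency scales with the sound speed of the sensed gas, so f is proportional to sqrt(T) and a single calibration point suffices in the ideal case. The calibration numbers below are assumptions; at the extreme temperatures discussed above, dissociation and real-gas effects would break this simple scaling.

```python
import math

def gas_sound_speed(T, gamma=1.4, R=287.0):
    """Ideal-gas sound speed, m/s (air-like gamma and R assumed)."""
    return math.sqrt(gamma * R * T)

def oscillator_temperature(f_meas, f_ref, T_ref):
    """Fluidic-oscillator thermometry: frequency scales with sound speed,
    so f ~ sqrt(T) and T = T_ref * (f / f_ref)**2."""
    return T_ref * (f_meas / f_ref)**2

# Calibrate at room temperature, then infer a hot-gas temperature
T_ref, f_ref = 300.0, 8000.0          # illustrative calibration point
f_hot = f_ref * gas_sound_speed(2500.0) / gas_sound_speed(T_ref)
print(round(oscillator_temperature(f_hot, f_ref, T_ref), 1))  # → 2500.0
```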
On Measuring Cosmic Ray Energy Spectra with the Rapidity Distributions
NASA Technical Reports Server (NTRS)
Bashindzhagyan, G.; Adams, J.; Chilingarian, A.; Drury, L.; Egorov, N.; Golubkov, S.; Korotkova, N.; Panasyuk, M.; Podorozhnyi, D.; Procqureur, J.
2000-01-01
An important goal of cosmic ray research is to measure the elemental energy spectra of galactic cosmic rays up to 10^16 eV. This goal cannot be achieved with an ionization calorimeter because the required instrument is too massive for space flight. An alternate method will be presented. This method is based on measuring the primary particle energy by determining the angular distribution of secondaries produced in a target layer. The proposed technique can be used over a wide range of energies (10^11-10^16 eV) and gives an energy resolution of 60% or better. Based on this technique, a conceptual design for a new instrument (KLEM) will be presented. Due to its light weight, this instrument can have a large aperture enabling the direct measurement of cosmic rays to 10^16 eV.
NASA Astrophysics Data System (ADS)
Cordes, A.; Pollig, D.; Leonhardt, S.
2010-04-01
For monitoring the health status of individuals, proper monitoring of ventilation is desirable. A continuous measurement technique is an advantage for many patients since it allows personal home care scenarios. As an example, monitoring of elderly people at home could enable them to live in their familiar environment on their own with the safety of continuous monitoring. This requires a measurement technique that does not restrict mobility. Since it is possible to monitor ventilation with magnetic impedance measurements without conductive contact, this technique is well suited for the mentioned scenario. Integrated in a chair, a person's health state could be monitored in many situations, e.g. during meals, while watching TV or reading a book. In this paper, we compare different positions of coil arrays for a magnetic impedance measurement system integrated in a chair in order to monitor ventilation continuously. To limit the costs and technical complexity of the magnetic impedance measurement system, we focus on coil configurations with one RF channel. To limit the needed space and thickness of the array in the backrest, planar gradiometer coil setups are investigated. All measurements are performed with a newly developed portable magnetic impedance measurement system and a standard office chair.
NASA Astrophysics Data System (ADS)
Peyton, David; Kinoshita, Hiroyuki; Lo, G. Q.; Kwong, Dim-Lee
1991-04-01
Rapid Thermal Processing (RTP) is becoming a popular approach for future ULSI manufacturing due to its unique low thermal budget and process flexibility. Furthermore, when RTP is combined with Chemical Vapor Deposition (CVD), the so-called RTP-CVD technology, it can be used to deposit ultrathin films with extremely sharp interfaces and excellent material qualities. One major consequence of this type of processing, however, is the need for extremely tight control of wafer temperature, both to obtain reproducible results for process control and to minimize slip and warpage arising from nonuniformities in temperature. Specifically, temperature measurement systems suitable for RTP must have both high precision, within 1-2 degrees, and a short response time, outputting an accurate reading on the order of milliseconds for closed-loop control. Any such in-situ measurement technique must be non-contact, since thermocouples cannot meet the response time requirements and have problems with conductive heat flow in the wafer. To date, optical pyrometry has been the most widely used technique for RTP systems, although a number of other techniques are being considered and researched. This article examines several such techniques from a systems perspective: optical pyrometry, both conventional and a new approach using ellipsometric techniques for concurrent emissivity measurement; Raman scattering; infrared laser thermometry; optical diffraction thermometry; and photoacoustic thermometry. Each approach is evaluated in terms of its actual or estimated manufacturing cost, remote sensing capability, precision, repeatability, dependence on processing history, and range.
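Why concurrent emissivity measurement matters for pyrometry can be shown with a single-wavelength Planck inversion: an error in the assumed emissivity maps directly into a temperature error. The wavelength, wafer temperature, and emissivity value below are illustrative assumptions, and the radiance is normalized (constant prefactors dropped consistently on both sides).

```python
import math

C2 = 1.4388e-2          # second radiation constant, m*K

def planck_radiance(T, lam, eps=1.0):
    """Spectral radiance at wavelength lam (m), up to a constant factor."""
    return eps / (math.exp(C2 / (lam * T)) - 1.0)

def pyrometer_temperature(L, lam, eps=1.0):
    """Invert the normalized Planck form for temperature, given emissivity."""
    return C2 / (lam * math.log(eps / L + 1.0))

lam = 0.9e-6            # pyrometer wavelength, m (assumed)
T_true = 1300.0         # wafer temperature, K (assumed)
L = planck_radiance(T_true, lam, eps=0.7)   # non-blackbody wafer (illustrative)

T_wrong = pyrometer_temperature(L, lam, eps=1.0)  # blackbody assumption: biased low
T_right = pyrometer_temperature(L, lam, eps=0.7)  # correct emissivity recovers T
print(round(T_wrong, 1), round(T_right, 1))
```

A few tens of kelvins of bias from a 30% emissivity error dwarfs the 1-2 degree precision target quoted above, which motivates the ellipsometric emissivity-tracking approach the article describes.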
TIGER TM : Intelligent continuous monitoring of gas turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKay, I.; Hibbert, J.; Milne, R.
1998-07-01
The field of condition monitoring has been an area of rapid growth, with many specialized techniques being developed to measure or predict the health of a particular item of plant. Much of the most recent work has gone into the diagnosis of problems on rotating machines through the application of vibration analysis techniques. These techniques, though useful, have a number of limiting factors, such as the need to install specialized sensors and measurement equipment, or the limited scope of the type of data measured. It was recognized in 1992 that the surveillance and condition monitoring procedures available for critical plant, such as gas turbines, were not as comprehensive as they might be and that a novel approach was required to give the operator the necessary holistic view of the health of the plant. This would naturally provide an assessment of the maintenance practices required to yield the highest possible availability without the need to install extensive new instrumentation. From this objective, the TIGER system was designed; it uses available data from the gas turbine control system, or additionally the plant DCS, to measure the behavior of the gas turbine and its associated subsystems. These measured parameters are then compared with an internal model of the turbine system and used to diagnose incorrect responses, and therefore the item that is at fault, allowing the operator to quickly restart the turbine after a trip or perform condition-based maintenance at the next scheduled outage. This philosophy has been built into the TIGER system, and the purpose of this paper is to illustrate its functionality and some of the innovative techniques used in the diagnosis of real gas turbine problems. This is achieved by discussing three case studies in which TIGER was integral in returning the plant to operation more quickly than could normally be expected.
NASA Technical Reports Server (NTRS)
Fladeland; Yates, Emma Louise; Bui, Thaopaul Van; Dean-Day, Jonathan; Kolyer, Richard
2011-01-01
The Eddy-Covariance Method for quantifying surface-atmosphere fluxes is a foundational technique for measuring net ecosystem exchange and validating regional-to-global carbon cycle models. While towers and ships are the most frequent platforms for measuring surface-atmosphere exchange, experiments using aircraft for flux measurements have contributed to several large-scale studies, including BOREAS, SMACEX, and RECAB, by providing local-to-regional coverage beyond towers. The low-altitude flight requirements make airborne flux measurements particularly dangerous for piloted aircraft and thus well suited to unmanned aircraft.
NASA Astrophysics Data System (ADS)
Monchau, Jean-Pierre; Hameury, Jacques; Ausset, Patrick; Hay, Bruno; Ibos, Laurent; Candau, Yves
2018-05-01
Accurate knowledge of infrared emissivity is important in applications such as surface temperature measurement by infrared thermography or thermal balances for building walls. A comparison of total hemispherical emissivity measurements was performed by two laboratories: the Laboratoire National de Métrologie et d'Essais (LNE) and the Centre d'Études et de Recherche en Thermique, Environnement et Systèmes (CERTES). Both laboratories performed emissivity measurements on four samples, chosen to cover a large range of emissivity values and angular reflectance behaviors. The samples were polished aluminum (highly specular, low emissivity), bulk PVC (slightly specular, high emissivity), sandblasted aluminum (diffuse surface, medium emissivity), and aluminum paint (slightly specular surface, medium emissivity). Results obtained using five measurement techniques were compared. LNE used a calorimetric method for direct total hemispherical emissivity measurement [1], an absolute reflectometric measurement method [2], and a relative reflectometric measurement method. CERTES used two total hemispherical directional reflectometric measurement methods [3, 4]. For indirect techniques by reflectance measurement, the total hemispherical emissivity values were calculated from directional hemispherical reflectance measurement results using spectral integration when required and directional-to-hemispherical extrapolation. Results were compared, taking into account measurement uncertainties; an added uncertainty was introduced to account for heterogeneity over the surfaces of the samples and between samples. All techniques gave large relative uncertainties for a low-emissivity and very specular material (polished aluminum), and results were quite scattered. All the indirect techniques by reflectance measurement gave results within ±0.01 for a high-emissivity material.
A commercial aluminum paint appears to be a good candidate for producing samples with a medium level of emissivity (about 0.4) and good uniformity of emissivity values (within ±0.015).
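The indirect techniques above derive total hemispherical emissivity from reflectance via spectral integration. A minimal sketch of that step, assuming a known spectral directional-hemispherical reflectance ρ(λ) and Planck weighting; the wavelength band and grid below are illustrative assumptions, not values from the intercomparison:

```python
import math

def planck(lam_um, temp_k):
    # Blackbody spectral radiance (W m^-2 sr^-1 um^-1), wavelength in micrometres.
    c1 = 1.191042e8   # first radiation constant, W um^4 m^-2 sr^-1
    c2 = 1.438777e4   # second radiation constant, um K
    return c1 / (lam_um ** 5 * (math.exp(c2 / (lam_um * temp_k)) - 1.0))

def total_emissivity(reflectance, temp_k, lam_min=1.0, lam_max=50.0, n=2000):
    # Planck-weighted spectral integration: eps(T) = 1 - <rho(lambda)>_B(T)
    dlam = (lam_max - lam_min) / n
    num = den = 0.0
    for i in range(n):
        lam = lam_min + (i + 0.5) * dlam   # midpoint rule
        b = planck(lam, temp_k)
        num += reflectance(lam) * b * dlam
        den += b * dlam
    return 1.0 - num / den
```

For a gray (wavelength-independent) reflectance the ratio of integrals collapses and ε = 1 − ρ, which makes a simple sanity check on the quadrature.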
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
NASA Astrophysics Data System (ADS)
McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten
2016-05-01
In order to investigate the origins of the Universe, it is necessary to carry out full sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature, however in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance, however owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
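The abstract pairs the mode-matching technique with a genetic algorithm for horn optimisation. A generic elitist GA of the kind described might look like the sketch below; the quadratic cost function stands in for the mode-matching beam-pattern figure of merit, and the population size, mutation rate, and bounds are all assumptions, not the paper's settings:

```python
import random

def genetic_minimize(cost, bounds, pop_size=40, generations=60, p_mut=0.1, seed=1):
    # Plain elitist GA: truncation selection, uniform crossover, Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=cost)[: pop_size // 4]   # keep the best quarter
        pop = [ind[:] for ind in elite]                   # elites survive unchanged
        while len(pop) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < p_mut:                  # mutate, clipped to bounds
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            pop.append(child)
    return min(pop, key=cost)

# Placeholder cost; a real horn design would evaluate the mode-matching model here.
best = genetic_minimize(lambda v: sum(x * x for x in v), [(-1.0, 1.0)] * 3)
```

In the real application each cost evaluation is an electromagnetic simulation, so the population size and generation count trade directly against the "standard desktop computing power" constraint mentioned above.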
2012-01-01
Background: Hyperpolarised helium MRI (He3 MRI) is a new technique that enables imaging of the air distribution within the lungs, allowing accurate determination of the ventilation distribution in vivo. The technique has the disadvantages of requiring an expensive helium isotope, complex apparatus, and moving the patient to a compatible MRI scanner. Electrical impedance tomography (EIT) is a non-invasive bedside technique that allows constant monitoring of lung impedance, which depends on changes in air space capacity in the lung. We used He3 MRI measurements of ventilation distribution as the gold standard for assessment of EIT. Methods: Seven rats were ventilated in supine, prone, left and right lateral positions with 70% helium/30% oxygen for EIT measurements and pure helium for He3 MRI. The same ventilator and settings were used for both measurements. Image dimensions, geometric centre, and global inhomogeneity index were calculated. Results: EIT images were smaller, of lower resolution, and contained less anatomical detail than those from He3 MRI. However, both methods could measure position-induced changes in lung ventilation, as assessed by the geometric centre. The global inhomogeneity indices were comparable between the techniques. Conclusion: EIT is a suitable technique for monitoring ventilation distribution and inhomogeneity, as assessed by comparison with He3 MRI. PMID:22966835
High-precision buffer circuit for suppression of regenerative oscillation
NASA Technical Reports Server (NTRS)
Tripp, John S.; Hare, David A.; Tcheng, Ping
1995-01-01
Precision analog signal conditioning electronics have been developed for wind tunnel model attitude inertial sensors. This application requires low-noise, stable, microvolt-level DC performance and a high-precision buffered output. Capacitive loading of the operational amplifier output stages due to the wind tunnel analog signal distribution facilities caused regenerative oscillation and consequent rectification bias errors. Oscillation suppression techniques commonly used in audio applications were inadequate to maintain the performance requirements for the measurement of attitude for wind tunnel models. Feedback control theory is applied to develop a suppression technique based on a known compensation (snubber) circuit, which provides superior oscillation suppression with high output isolation and preserves the low-noise low-offset performance of the signal conditioning electronics. A practical design technique is developed to select the parameters for the compensation circuit to suppress regenerative oscillation occurring when typical shielded cable loads are driven.
Increasing the speed of tumour diagnosis during surgery with selective scanning Raman microscopy
NASA Astrophysics Data System (ADS)
Kong, Kenny; Rowlands, Christopher J.; Varma, Sandeep; Perkins, William; Leach, Iain H.; Koloydenko, Alexey A.; Pitiot, Alain; Williams, Hywel C.; Notingher, Ioan
2014-09-01
One of the main challenges in cancer surgery is ensuring that all tumour cells are removed during surgery, while sparing as much healthy tissue as possible. Histopathology, the gold-standard technique for cancer diagnosis, is often impractical for intra-operative use because of the time-consuming tissue preparation procedures (sectioning and staining). Raman micro-spectroscopy is a powerful technique that can discriminate between tumours and healthy tissues with high accuracy, based entirely on intrinsic chemical differences. However, raster-scanning Raman micro-spectroscopy is a slow imaging technique that typically requires data acquisition times as long as several days for typical tissue samples obtained during surgery (1 × 1 cm2) - in particular when high signal-to-noise ratio spectra are required to ensure accurate diagnosis. In this paper we present two techniques based on selective sampling Raman micro-spectroscopy that can overcome these limitations. In selective sampling, information regarding the spatial features of the tissue, either measured by an alternative optical technique or estimated in real-time from the Raman spectra, can be used to drastically reduce the number of Raman spectra required for diagnosis. These sampling strategies allowed diagnosis of basal cell carcinoma in skin tissue samples excised during Mohs micrographic surgery faster than frozen section histopathology, and two orders of magnitude faster than previous techniques based on raster-scanning Raman microscopy. Further development of these techniques may help during cancer surgery by providing a fast and objective way for surgeons to ensure the complete removal of tumour cells while sparing as much healthy tissue as possible.
Measuring the free neutron lifetime to <= 0.3s via the beam method
NASA Astrophysics Data System (ADS)
Fomin, Nadia; Mulholland, Jonathan
2015-04-01
Neutron beta decay is an archetype for all semi-leptonic charged-current weak processes. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is needed to predict the primordial helium-4 abundance from the theory of Big Bang Nucleosynthesis. An effort has begun for an in-beam measurement of the neutron lifetime with a projected <=0.3 s uncertainty. This effort is part of a phased campaign of neutron lifetime measurements based at the NIST Center for Neutron Research, using the Sussex-ILL-NIST technique. Recent advances in neutron fluence measurement techniques as well as new large-area silicon detector technology address the two largest sources of uncertainty of in-beam measurements, paving the way for a new measurement. The experimental design and projected uncertainties for the 0.3 s measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.
Application of Modern Design of Experiments to CARS Thermometry in a Model Scramjet Engine
NASA Technical Reports Server (NTRS)
Danehy, P. M.; DeLoach, R.; Cutler, A. D.
2002-01-01
We have applied formal experiment design and analysis to optimize the measurement of temperature in a supersonic combustor at NASA Langley Research Center. We used the coherent anti-Stokes Raman spectroscopy (CARS) technique to map the temperature distribution in the flowfield downstream of a 1160 K, Mach 2 freestream into which supersonic hydrogen fuel is injected at an angle of 30 degrees. CARS thermometry is inherently a single-point measurement technique; it was used to map the flow by translating the measurement volume through the flowfield. The method known as "Modern Design of Experiments" (MDOE) was used to estimate the data volume required, design the test matrix, perform the experiment, and analyze the resulting data. MDOE allowed us to match the volume of data acquired to the precision requirements of the customer. Furthermore, one aspect of MDOE, known as response surface methodology, allowed us to develop precise maps of the flowfield temperature, allowing interpolation between measurement points. An analytic function in two spatial variables was fit to the data from a single measurement plane. Fitting with a Cosine Series Bivariate Function allowed the mean temperature to be mapped with 95% confidence interval half-widths of +/- 30 Kelvin, comfortably meeting the confidence of +/- 50 Kelvin specified prior to performing the experiments. We estimate that applying MDOE to the present experiment saved a factor of 5 in data volume acquired, compared to experiments executed in the traditional manner. Furthermore, the precision requirements could have been met with less than half the data acquired.
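The response-surface step described above (fitting an analytic cosine-series function of two spatial variables by least squares) can be sketched as follows; the basis order, grid, and coefficients are illustrative assumptions, not the values used at Langley:

```python
import math

def cosine_basis(x, y, order):
    # Bivariate cosine series terms cos(i*pi*x)*cos(j*pi*y) on the unit square.
    return [math.cos(i * math.pi * x) * math.cos(j * math.pi * y)
            for i in range(order) for j in range(order)]

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_surface(points, order=3):
    # points: (x, y, T) triples; returns a callable least-squares temperature map.
    k = order * order
    ata = [[0.0] * k for _ in range(k)]
    atb = [0.0] * k
    for x, y, t in points:
        phi = cosine_basis(x, y, order)
        for i in range(k):
            atb[i] += phi[i] * t
            for j in range(k):
                ata[i][j] += phi[i] * phi[j]
    coeffs = solve(ata, atb)
    return lambda x, y: sum(c * p for c, p in zip(coeffs, cosine_basis(x, y, order)))
```

On a midpoint grid the cosine basis is nearly orthogonal, so the normal equations are well conditioned; production code would typically use a QR or SVD solver instead.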
The value of job analysis, job description and performance.
Wolfe, M N; Coggins, S
1997-01-01
All companies, regardless of size, are faced with the same employment concerns. Efficient personnel management requires the use of three human resource techniques--job analysis, job description and performance appraisal. These techniques and tools are not for large practices only. Small groups can obtain the same benefits by employing these performance control measures. Job analysis allows for the development of a compensation system. Job descriptions summarize the most important duties. Performance appraisals help reward outstanding work.
Skylab medical technology utilization
NASA Technical Reports Server (NTRS)
Stonesifer, J. C.
1974-01-01
To perform the extensive medical experimentation on man in a long-term, zero-g environment, new medical measuring and monitoring equipment had to be developed, new techniques in training and operations were required, and new methods of collecting and analyzing the great amounts of medical data were developed. Examples of technology transfers to the public sector resulted from the development of new equipment, methods, techniques, and data. This paper describes several of the examples that stemmed directly from Skylab technology.
High resolution frequency analysis techniques with application to the redshift experiment
NASA Technical Reports Server (NTRS)
Decher, R.; Teuber, D.
1975-01-01
High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment, a resolution of 0.00001 Hz is required to measure a slowly varying, low-frequency signal of approximately 1 Hz. Major building blocks include the fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and an adaptive matched filter technique. Accuracy, resolution, and computational effort of these methods are investigated, including test runs on an IBM 360/65 computer.
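Two of the building blocks named above, discrete Fourier transform evaluation and golden-section search, combine naturally into a frequency estimator: evaluate the DTFT power near the coarse peak and refine the maximum by golden-section search. The sketch below is illustrative only; the sampling rate, record length, and tolerance are assumptions, not the experiment's values:

```python
import cmath
import math

def dtft_power(samples, fs, f):
    # Squared magnitude of the DTFT of the record at analysis frequency f (Hz).
    return abs(sum(x * cmath.exp(-2j * math.pi * f * n / fs)
                   for n, x in enumerate(samples))) ** 2

def golden_section_peak(fn, lo, hi, tol=1e-6):
    # Golden-section search for the maximum of a unimodal function on [lo, hi].
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if fn(c) > fn(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Synthetic record: a tone near 1 Hz sampled at 10 Hz for 400 s (assumed values).
fs, f_true = 10.0, 1.00003
x = [math.cos(2 * math.pi * f_true * n / fs) for n in range(4000)]
# Refine the peak location well below the 1/T Fourier resolution of the record.
f_est = golden_section_peak(lambda f: dtft_power(x, fs, f), 0.999, 1.001)
```

Here a 400 s record (Fourier resolution 2.5 mHz) localizes a clean tone to within about 1e-5 Hz; reaching the 0.00001 Hz figure quoted for the experiment in the presence of noise would require correspondingly longer, stable records.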
Report of secondary flows, boundary layers, turbulence and wave team, report 1
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Fitzjarrald, D.; Doviak, R.; Cliff, W.
1980-01-01
General criteria for a flight test option are that: (1) there be a good opportunity for comparison with other measurement techniques; (2) the flow to be measured is of considerable scientific or practical interest; and (3) the airborne laser Doppler system is well suited to measure the required quantities. The requirement for comparison, i.e., ground truth, is particularly important because this is the first year of operation for the system. It is necessary to demonstrate that the system does actually measure the winds and compare the results with other methods to provide a check on the system error analysis. The uniqueness of the laser Doppler system precludes any direct comparison, but point measurements from tower mounted wind sensors and two dimensional fields obtained from radars with substantially different sampling volumes are quite useful.
40 CFR 141.135 - Treatment technique for control of disinfection byproduct (DBP) precursors.
Code of Federal Regulations, 2010 CFR
2010-07-01
... or enhanced softening to achieve the TOC percent removal levels specified in paragraph (b) of this... requirements in § 141.132(d). (i) The system's source water TOC level, measured according to § 141.131(d)(3... water TOC level, measured according to § 141.131(d)(3), is less than 2.0 mg/L, calculated quarterly as a...
Baker, Kevin Louis
2013-01-08
X-ray phase sensitive wave-front sensor techniques are detailed that are capable of measuring the entire two-dimensional x-ray electric field, both the amplitude and phase, with a single measurement. These Hartmann sensing and 2-D Shear interferometry wave-front sensors do not require a temporally coherent source and are therefore compatible with x-ray tubes and also with laser-produced or x-pinch x-ray sources.
NASA Astrophysics Data System (ADS)
Prykäri, Tuukka; Czajkowski, Jakub; Alarousu, Erkki; Myllylä, Risto
2010-05-01
Optical coherence tomography (OCT), a technique for the noninvasive imaging of turbid media based on low-coherence interferometry, was originally developed for imaging biological tissues, and most of its applications have been in biomedicine. However, from an early stage its vertical resolution was improved to a submicron scale, enabling new possibilities and applications. This article presents possible applications of OCT in the paper industry, where submicron resolution, or at least a resolution close to one micron, is required. This requirement comes from the layered structure of paper products, where layer thickness may vary from single microns to tens of micrometers. This is especially the case with high-quality paper products, where several different coating layers are used to obtain a smooth surface structure and high gloss. In this study, we demonstrate that optical coherence tomography can be used to measure and evaluate the quality of the coating layer of a premium glossy photopaper. In addition, we show that for some paper products it is possible to measure across the entire thickness of a paper sheet. Furthermore, we suggest that in addition to topography and tomography images of objects, information similar to gloss can be obtained by tracking the magnitude of individual interference signals in optical coherence tomography.
Phloem water relations and translocation.
Kaufmann, M R; Kramer, P J
1967-02-01
Satisfactory measurements of phloem water potential of trees can be obtained with the Richards and Ogata psychrometer and the vapor equilibration techniques, although corrections for loss of dry weight and for heating by respiration are required for the vapor equilibrium values. The psychrometer technique is the more satisfactory of the two because it requires less time for equilibration, less tissue, and less handling of tissue. Phloem water potential of a yellow-poplar tree followed a diurnal pattern quite similar to that of leaves, except that the values were higher (less negative) and changed less than in the leaves. The psychrometer technique permits a different approach to the study of translocation in trees. Measurements of water potential of phloem discs followed by freezing of samples and determination of osmotic potential allows estimation of turgor pressure in various parts of trees as the difference between osmotic potential and total water potential. This technique was used in evaluating gradients in water potential, osmotic potential, and turgor pressure in red maple trees. The expected gradients in osmotic potential were observed in the phloem, osmotic potential of the cell sap increasing (sap becoming more dilute) down the trunk. However, values of water potential were such that a gradient in turgor pressure apparently did not exist at a time when rate of translocation was expected to be high. These results do not support the mass flow theory of translocation favored by many workers.
Ding, Shiming; Wang, Yan; Xu, Di; Zhu, Chungang; Zhang, Chaosheng
2013-07-16
We report a highly promising technique for the high-resolution imaging of labile phosphorus (P) in sediments and soils in combination with the diffusive gradients in thin films (DGT). This technique was based on the surface coloration of the Zr-oxide binding gel using the conventional molybdenum blue method following the DGT uptake of P to this gel. The accumulated mass of the P in the gel was then measured according to the grayscale intensity on the gel surface using computer-imaging densitometry. A pretreatment of the gel in hot water (85 °C) for 5 d was required to immobilize the phosphate and the formed blue complex in the gel during the color development. The optimal time required for a complete color development was determined to be 45 min. The appropriate volume of the coloring reagent added was 200 times that of the gel. A calibration equation was established under the optimized conditions, based on which a quantitative measurement of P was obtained when the concentration of P in solutions ranged from 0.04 mg L(-1) to 4.1 mg L(-1) for a 24 h deployment of typical DGT devices at 25 °C. The suitability of the coloration technique was well demonstrated by the observation of small, discrete spots with elevated P concentrations in a sediment profile.
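The workflow above ends with a calibration equation mapping grayscale intensity to accumulated mass, after which the standard DGT equation C = M·Δg/(D·A·t) converts accumulated mass to a time-averaged labile concentration. A hedged sketch of both steps; the calibration points, diffusion coefficient, gel thickness, exposure area, and deployment time below are placeholders, not the paper's values:

```python
def fit_linear(xs, ys):
    # Least-squares calibration line y = a + b*x
    # (here: grayscale intensity -> accumulated mass).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def dgt_concentration(mass_ug, gel_thickness_cm, diff_coeff_cm2_s, area_cm2, time_s):
    # Standard DGT equation: C = M * dg / (D * A * t), here in ug mL^-1.
    return mass_ug * gel_thickness_cm / (diff_coeff_cm2_s * area_cm2 * time_s)
```

In practice the calibration is built from gels exposed to known P solutions, and the per-pixel masses from the densitometry image are pushed through `dgt_concentration` to produce the two-dimensional concentration map.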
Particle Swarm Imaging (PSIM) - Innovative Gamma-Ray Assay - 13497
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parvin, Daniel; Clarke, Sean; Humes, Sarah J.
2013-07-01
Particle Swarm Imaging is an innovative technique used to perform quantitative gamma-ray assay. The innovation overcomes some of the difficulties associated with the accurate measurement and declaration of measurement uncertainties of radionuclide inventories within waste items when the distribution of activity is unknown. Implementation requires minimal equipment, with field measurements and results obtained using only a single electrically cooled HRGS gamma-ray detector. Examples of its application in the field are given in this paper.
Phase-sensitive two-dimensional neutron shearing interferometer and Hartmann sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kevin
2015-12-08
A neutron imaging system detects both the phase shift and absorption of neutrons passing through an object. The neutron imaging system is based on either of two different neutron wavefront sensor techniques: 2-D shearing interferometry and Hartmann wavefront sensing. Both approaches measure an entire two-dimensional neutron complex field, including its amplitude and phase. Each measures the full-field, two-dimensional phase gradients and, concomitantly, the two-dimensional amplitude mapping, requiring only a single measurement.
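A Hartmann sensor of the kind described recovers the wavefront from spot displacements: each transverse displacement divided by the propagation distance gives a local phase gradient, and integrating the gradients reconstructs the phase map. A one-dimensional sketch, with all geometry values assumed rather than taken from the patent:

```python
def phase_gradients(spot_shifts, distance):
    # Local wavefront slope per subaperture:
    # transverse spot displacement / lever-arm distance to the detector.
    return [s / distance for s in spot_shifts]

def integrate_phase(slopes, pitch):
    # Trapezoidal integration of slopes along one axis recovers the phase
    # profile up to an unmeasurable constant (piston), here set to zero.
    phase = [0.0]
    for i in range(1, len(slopes)):
        phase.append(phase[-1] + 0.5 * (slopes[i - 1] + slopes[i]) * pitch)
    return phase
```

The full two-dimensional reconstruction integrates both gradient components (e.g., by a least-squares zonal method), but the single-measurement character claimed above is already visible here: one spot image yields the whole slope field.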
NASA Astrophysics Data System (ADS)
Jaspers, Mariëlle E. H.; Maltha, Ilse; Klaessens, John H. G. M.; de Vet, Henrica C. W.; Verdaasdonk, Rudolf M.; van Zuijlen, Paul P. M.
2016-09-01
Adequate assessment of burn wounds is crucial in the management of burn patients. Thermography, as a noninvasive measurement tool, can be utilized to detect the remaining perfusion over large burn wound areas by measuring temperature, thereby reflecting the healing potential (HP), i.e., the number of days a burn requires to heal. The objective of this study was to evaluate the clinimetric properties (i.e., reliability and validity) of thermography for measuring burn wound HP. To evaluate reliability, two independent observers performed a thermography measurement of 50 burns. The intraclass correlation coefficient (ICC), the standard error of measurement (SEM), and the limits of agreement (LoA) were calculated. To assess validity, temperature differences between burned and nonburned skin (ΔT) were compared to the HP found by laser Doppler imaging (serving as the reference standard). By applying a visual method, one ΔT cutoff point was identified to differentiate between burns requiring conservative versus surgical treatment. The ICC was 0.99, expressing an excellent correlation between two measurements. The SEM was calculated at 0.22°C, the LoA at -0.58°C and 0.64°C. The ΔT cutoff point was -0.07°C (sensitivity 80%, specificity 80%). These results show that thermography is a reliable and valid technique in the assessment of burn wound HP.
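The clinimetric statistics named above (ICC, SEM, limits of agreement) can be computed from the two observers' paired measurements. This sketch uses the common conventions SEM = SD of differences / √2 and LoA = mean difference ± 1.96 SD, with the two-way random-effects single-measure ICC(2,1) formula; the sample data in the usage below are synthetic, not the study's:

```python
import math

def agreement_stats(obs1, obs2):
    # Bland-Altman limits of agreement and SEM from paired measurements.
    n = len(obs1)
    diffs = [a - b for a, b in zip(obs1, obs2)]
    md = sum(diffs) / n
    sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))
    sem = sd / math.sqrt(2)
    return sem, (md - 1.96 * sd, md + 1.96 * sd)

def icc_2_1(obs1, obs2):
    # Two-way random-effects, single-measure ICC(2,1) for two raters.
    n, k = len(obs1), 2
    rows = list(zip(obs1, obs2))
    grand = sum(obs1 + obs2) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(obs1) / n, sum(obs2) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((x - grand) ** 2 for r in rows for x in r)
    mse = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With identical observers the ICC is 1 and the LoA collapse to zero; a constant offset between observers leaves the SEM at zero but is penalized by the MSC term in the ICC.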
Biomedical and Human Factors Requirements for a Manned Earth Orbiting Station
NASA Technical Reports Server (NTRS)
Helvey, W.; Martell, C.; Peters, J.; Rosenthal, G.; Benjamin, F.; Albright, G.
1964-01-01
The primary objective of this study is to determine which biomedical and human factors measurements must be made aboard a space station to assure adequate evaluation of the astronaut's health and performance during prolonged space flights. The study has employed, where possible, a medical and engineering systems analysis to define the pertinent life sciences and space station design parameters and their influence on a measurement program. The major areas requiring evaluation in meeting the study objectives include a definition of the space environment, man's response to the environment, selection of measurement and data management techniques, experimental program, space station design requirements, and a trade-off analysis with final recommendations. The space environment factors that are believed to have a significant effect on man were evaluated. This includes those factors characteristic of the space environment (e.g., weightlessness, radiation) as well as those created within the space station (e.g., toxic contaminants, capsule atmosphere). After establishing the general features of the environment, an appraisal was made of the anticipated response of the astronaut to each of these factors. For thoroughness, the major organ systems and functions of the body were delineated, and a determination was made of their anticipated response to each of the environmental categories. A judgment was then made on the medical significance or importance of each response, which enabled a determination of which physiological and psychological effects should be monitored. Concurrently, an extensive list of measurement techniques and methods of data management was evaluated for applicability to the space station program. The various space station configurations and design parameters were defined in terms of the biomedical and human factors requirements to provide the measurements program.
Research designs for experimental programs covering various station configurations, mission durations, and crew sizes were prepared, and, finally, a trade-off analysis of the critical variables in station planning was completed, with recommendations to enhance confidence in the measurement program.
Manufacturing implant supported auricular prostheses by rapid prototyping techniques.
Karatas, Meltem Ozdemir; Cifter, Ebru Demet; Ozenen, Didem Ozdemir; Balik, Ali; Tuncer, Erman Bulent
2011-08-01
Maxillofacial prostheses are usually fabricated on models obtained following impression procedures. Disadvantages of the conventional impression techniques used in the production of facial prostheses include deformation of soft tissues by the impression material and disturbance of the patient. Additionally, production of a prosthesis by conventional methods takes longer. Recently, rapid prototyping techniques have been developed for extraoral prostheses in order to reduce these disadvantages of conventional methods. Rapid prototyping has the potential to simplify the procedure and decrease the laboratory work required. It eliminates the need for impression procedures and the preparation of a wax model by the prosthodontist. In the near future this technology will become a standard for fabricating maxillofacial prostheses.
Chadwick, R G; McCabe, J F; Walls, A W; Mitchell, H L; Storer, R
1991-02-01
This paper describes monitoring the wear of restorations borne by partial dentures over a 12-month period using a novel photogrammetric technique and modified United States Public Health Service (USPHS) criteria. The performance of Class II restorations of Dispersalloy was compared with that of similar restorations of either KetacFil or Occlusin. The photogrammetric technique highlighted differences in performance not detected by the modified USPHS criteria. It is concluded that the photogrammetric technique should prove valuable in the in vivo assessment of the performance of restorative materials, but that further refinement of the method is required, particularly with regard to the orientation of replicas for sequential measurements.
Optimizing Lidar Scanning Strategies for Wind Energy Measurements (Invited)
NASA Astrophysics Data System (ADS)
Newman, J. F.; Bonin, T. A.; Klein, P.; Wharton, S.; Chilson, P. B.
2013-12-01
Environmental concerns and rising fossil fuel prices have prompted rapid development in the renewable energy sector. Wind energy, in particular, has become increasingly popular in the United States. However, the intermittency of available wind energy makes it difficult to integrate wind energy into the power grid. Thus, the expansion and successful implementation of wind energy require accurate wind resource assessments and wind power forecasts. The actual power produced by a turbine is affected by the wind speeds and turbulence levels experienced across the turbine rotor disk. Because of the range of measurement heights required for wind power estimation, remote sensing devices (e.g., lidar) are ideally suited for these purposes. However, the volume averaging inherent in remote sensing technology produces turbulence estimates that differ from those of a sonic anemometer mounted on a standard meteorological tower. In addition, most lidars intended for wind energy purposes use a standard Doppler beam-swinging or velocity-azimuth display technique to estimate the three-dimensional wind vector. These scanning strategies are ideal for measuring mean wind speeds but are likely inadequate for measuring turbulence. In order to examine the impact of different lidar scanning strategies on turbulence measurements, a WindCube lidar, a scanning Halo lidar, and a scanning Galion lidar were deployed at the Southern Great Plains Atmospheric Radiation Measurement (ARM) site in Summer 2013. Existing instrumentation at the ARM site, including a 60-m meteorological tower and an additional scanning Halo lidar, was used in conjunction with the deployed lidars to evaluate several user-defined scanning strategies. For part of the experiment, all three scanning lidars were pointed at approximately the same point in space, and a tri-Doppler analysis was completed to calculate the three-dimensional wind vector every second.
In another part of the experiment, one of the scanning lidars ran a Doppler beam-swinging technique identical to that used by the WindCube lidar while another scanning lidar used a novel six-beam technique that has been presented in the literature as a better alternative for measuring turbulence. In this presentation, turbulence measurements from these techniques are compared to turbulence measured by the WindCube lidar and sonic anemometers on the 60-m meteorological tower. In addition, recommendations are made for lidar measurement campaigns for wind energy applications.
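The Doppler beam-swinging retrieval mentioned above can be illustrated with a minimal sketch. This is not the WindCube's proprietary processing; it is the textbook four-beam-plus-vertical geometry, with an assumed off-zenith tilt angle and the usual assumption of horizontally homogeneous flow across the scan volume (the very assumption that degrades turbulence estimates):

```python
import numpy as np

def dbs_wind(vr_n, vr_e, vr_s, vr_w, vr_z, tilt_deg=28.0):
    """Retrieve (u, v, w) from Doppler beam-swinging radial velocities.

    vr_* are radial velocities (positive away from the lidar) for beams
    pointed north, east, south, and west, tilted tilt_deg off zenith,
    plus a vertical beam vr_z. Assumes the wind field is horizontally
    homogeneous across the scan volume.
    """
    t = np.radians(tilt_deg)
    u = (vr_e - vr_w) / (2.0 * np.sin(t))   # east-west component
    v = (vr_n - vr_s) / (2.0 * np.sin(t))   # north-south component
    w = vr_z                                # vertical beam measures w directly
    return u, v, w

# Synthetic check: project a known wind vector onto the five beams.
u0, v0, w0 = 8.0, -3.0, 0.5
t = np.radians(28.0)
vr_n = v0 * np.sin(t) + w0 * np.cos(t)
vr_s = -v0 * np.sin(t) + w0 * np.cos(t)
vr_e = u0 * np.sin(t) + w0 * np.cos(t)
vr_w = -u0 * np.sin(t) + w0 * np.cos(t)
u, v, w = dbs_wind(vr_n, vr_e, vr_s, vr_w, w0)
```

Because the four tilted beams sample different volumes of air, instantaneous inhomogeneity violates the retrieval assumption, which is why alternatives such as the six-beam technique are of interest for turbulence.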
Experimental measurement of structural power flow on an aircraft fuselage
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1989-01-01
An experimental technique is used to measure the structural power flow through an aircraft fuselage excited near the wing attachment location. Because of the large number of measurements required to analyze the whole fuselage, a balance must be achieved between the number of measurement transducers, the mounting of those transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, the structural intensity vectors at locations distributed throughout the fuselage are measured. To minimize the errors associated with the four-transducer technique, the measurement positions are selected away from bulkheads and stiffeners. Because four separate transducers are used, each with its own drive and conditioning amplifiers, phase errors are introduced that can be much greater than the phase differences being measured. To minimize these phase errors, two sets of measurements are taken at each position, with the orientation of the transducers rotated by 180 deg, and the two sets are averaged. Results are presented and discussed.
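The rotate-and-average step above can be illustrated with a toy model. This is not the paper's measurement chain; it is a simplified sketch in which structural intensity between two closely spaced transducers is taken as proportional to the sine of their phase difference (the imaginary part of the cross-spectrum), and the channel mismatch is a fixed phase bias. Rotating the transducer pair 180 deg flips the sign of the true phase difference but not of the bias, so averaging the sign-corrected estimates cancels the bias to first order:

```python
import numpy as np

def intensity(phase):
    # Simplified model: intensity proportional to Im of the cross-spectrum,
    # i.e. to sin(phase difference between the two channels).
    return np.sin(phase)

dphi = 0.02   # true structural phase difference (rad) -- small
eps = 0.10    # fixed instrumentation phase mismatch (rad) -- much larger

i_raw = intensity(dphi + eps)     # single-orientation estimate, badly biased
i_a = intensity(dphi + eps)       # orientation A
i_b = -intensity(-dphi + eps)     # rotated 180 deg; sign flipped back
i_avg = 0.5 * (i_a + i_b)         # averaged estimate

# Identity: i_avg = sin(dphi) * cos(eps), so the bias drops from
# first order in eps to second order (cos(eps) ~ 1 - eps^2/2).
```

The residual error of `i_avg` is of order eps squared, which is why the averaging makes small true phase differences measurable despite a much larger channel mismatch.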
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca; Palmer, Kevin; Deutsch, Clayton V.
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
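The "complex nonlinear averaging" issue can be made concrete with a short sketch. A common device (not necessarily the one used in this paper) is a power-law average, in which an exponent fitted to data controls how point-scale values combine at block scale; the exponent and sample values below are purely illustrative:

```python
import numpy as np

def power_average(values, omega, weights=None):
    """Power-law average for upscaling nonlinearly averaging variables:
    z_bar = ( sum_i w_i * z_i**omega )**(1/omega).
    omega = 1 recovers the ordinary linear (arithmetic) average.
    """
    z = np.asarray(values, dtype=float)
    w = (np.full(z.shape, 1.0 / z.size) if weights is None
         else np.asarray(weights, dtype=float))
    return float(np.sum(w * z**omega) ** (1.0 / omega))

# Hypothetical comminution-index samples within one block (kWh/t):
samples = [11.0, 13.5, 12.2, 15.8]
linear = power_average(samples, 1.0)     # arithmetic mean
nonlin = power_average(samples, -0.5)    # illustrative fitted exponent
```

For spread-out positive data, the power mean increases with the exponent, so a fitted exponent below 1 gives a block value below the arithmetic mean, which is exactly why linearly averaging such parameters misstates block-scale behavior.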
A simplified gross thrust computing technique for an afterburning turbofan engine
NASA Technical Reports Server (NTRS)
Hamer, M. J.; Kurtenbach, F. J.
1978-01-01
A simplified gross thrust computing technique extended to the F100-PW-100 afterburning turbofan engine is described. The technique uses measured total and static pressures in the engine tailpipe and ambient static pressure to compute gross thrust. Empirically evaluated calibration factors account for three-dimensional effects, the effects of friction and mass transfer, and the effects of simplifying assumptions made in solving the equations. Instrumentation requirements and the sensitivity of computed thrust to transducer errors are presented. Results of NASA altitude facility tests on F100 engines (computed thrust versus measured thrust) are presented, and calibration factors obtained on one engine are shown to be applicable to a second engine by comparison of the computed gross thrust. It is concluded that this thrust method is potentially suitable for flight test application and engine maintenance on production engines with a minimum amount of instrumentation.
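The pressure-based idea can be sketched in one dimension. This is not the F100 calibration of the paper; it is the standard textbook form, with an assumed exhaust specific-heat ratio and a hypothetical lumped calibration factor `k_cal` standing in for the empirical corrections the abstract describes:

```python
import math

GAMMA = 1.33  # assumed ratio of specific heats for hot exhaust gas

def gross_thrust(pt, ps, p_amb, area, k_cal=1.0):
    """One-dimensional gross-thrust sketch from tailpipe total pressure pt,
    static pressure ps, ambient pressure p_amb (all Pa), and nozzle
    area (m^2). Exit Mach number comes from the isentropic relation
      M^2 = (2/(gamma-1)) * ((pt/ps)**((gamma-1)/gamma) - 1),
    and gross thrust is momentum flux plus pressure thrust:
      Fg = k_cal * (gamma * ps * M^2 * A + (ps - p_amb) * A).
    k_cal lumps the empirical corrections (friction, mass transfer,
    three-dimensional effects).
    """
    m2 = (2.0 / (GAMMA - 1.0)) * ((pt / ps) ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    momentum = GAMMA * ps * m2 * area       # rho*V^2*A = gamma*ps*M^2*A
    pressure = (ps - p_amb) * area          # pressure thrust term
    return k_cal * (momentum + pressure)
```

With `pt == ps` (no flow) and matched ambient pressure the sketch returns zero thrust, as it should; the real method's accuracy lives entirely in the empirically evaluated calibration factors.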
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters, and multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer/Kalman filter identification methods, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals; the residual tuning technique uses this information to estimate corrections to the tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimating process noise, and equations for estimating the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
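A minimal scalar sketch shows the flavor of residual-based tuning; these are not Jazwinski's sequential equations, just the underlying innovation statistic. For a scalar filter with H = 1, the predicted residual variance is E[nu^2] = P + R, so each squared residual minus the predicted state variance is a noisy sample of the measurement noise R, which can be averaged online while the filter runs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk truth; the filter does not know R_true.
q_true, r_true, n = 0.01, 4.0, 5000
x = np.cumsum(rng.normal(0.0, np.sqrt(q_true), n))
z = x + rng.normal(0.0, np.sqrt(r_true), n)

x_hat, p = 0.0, 1.0
r_est, count = 1.0, 0          # deliberately mistuned initial R
for zk in z:
    p += q_true                          # time update (Q assumed known here)
    nu = zk - x_hat                      # measurement residual (innovation)
    # E[nu^2] = p + R for H = 1, so (nu^2 - p) is a noisy sample of R;
    # keep a running mean of those samples.
    count += 1
    r_est += ((nu**2 - p) - r_est) / count
    r = max(r_est, 1e-6)                 # floor to keep the gain sane
    k = p / (p + r)                      # gain using the adapted R
    x_hat += k * nu                      # measurement update
    p *= (1.0 - k)
```

A mistuned R leaves serial correlation in the residuals; as `r_est` converges toward the true value, the innovation sequence whitens, which is the diagnostic the residual tuning method exploits.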
Real-time Measurement of Epithelial Barrier Permeability in Human Intestinal Organoids.
Hill, David R; Huang, Sha; Tsai, Yu-Hwai; Spence, Jason R; Young, Vincent B
2017-12-18
Advances in the 3D culture of intestinal tissues obtained through biopsy or generated from pluripotent stem cells via directed differentiation have resulted in sophisticated in vitro models of the intestinal mucosa. Leveraging these emerging model systems will require adaptation of tools and techniques developed for 2D culture systems and animal models. Here, we describe a technique for measuring epithelial barrier permeability in human intestinal organoids in real time. This is accomplished by microinjection of fluorescently labeled dextran and imaging on an inverted microscope fitted with epifluorescent filters. Real-time measurement of barrier permeability in intestinal organoids facilitates the generation of high-resolution temporal data in human intestinal epithelial tissue, although the technique can also be applied to fixed-timepoint imaging approaches. The protocol is readily adaptable to measuring epithelial barrier permeability following exposure to pharmacologic agents, bacterial products or toxins, or live microorganisms. With minor modifications, it can also serve as a general primer on microinjection of intestinal organoids, and users may choose to supplement it with additional or alternative downstream applications following microinjection.
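The downstream quantification implied above can be sketched simply. This is not the protocol's prescribed analysis; it assumes the image analysis has already been reduced to mean luminal fluorescence per timepoint and that dextran loss is approximately first order, so a linear fit to log-intensity yields an apparent leak rate (function and variable names are hypothetical):

```python
import numpy as np

def leak_rate(times_min, lumen_intensity):
    """Estimate an apparent leak rate (1/min) from mean luminal
    fluorescence over time, assuming first-order loss:
        I(t) = I0 * exp(-k * t).
    A linear least-squares fit to log-intensity gives the slope -k.
    """
    t = np.asarray(times_min, dtype=float)
    logi = np.log(np.asarray(lumen_intensity, dtype=float))
    slope, _intercept = np.polyfit(t, logi, 1)
    return -slope

# Synthetic organoid time course: 2% fluorescence loss per minute.
t = np.arange(0.0, 60.0, 5.0)
intensity = 1000.0 * np.exp(-0.02 * t)
k = leak_rate(t, intensity)
```

Comparing apparent leak rates between untreated organoids and those exposed to a toxin or live microorganism is one straightforward way to turn the time-lapse images into a quantitative permeability readout.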