Science.gov

Sample records for advanced estimation techniques

  1. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.

  2. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.
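    The Kalman filter/smoother step described in element (1) can be sketched in miniature. The following illustrative Python implements a scalar Kalman filter followed by a Rauch-Tung-Striebel backward smoothing pass; the state model and all parameters here are hypothetical, not the rotorcraft model of the paper:

```python
def kalman_rts_smoother(zs, a, q, r, x0, p0):
    """Filter then RTS-smooth a scalar state x_k = a*x_{k-1} + w_k,
    observed as z_k = x_k + v_k, with process/measurement variances q, r."""
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:
        # Predict
        xp, pp = a * x, a * a * p + q
        # Update with measurement z (Kalman gain k)
        k = pp / (pp + r)
        x, p = xp + k * (z - xp), (1 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    # Backward Rauch-Tung-Striebel pass over the filtered estimates
    xs_s = xs[:]
    for i in range(len(zs) - 2, -1, -1):
        c = ps[i] * a / pps[i + 1]
        xs_s[i] = xs[i] + c * (xs_s[i + 1] - xps[i + 1])
    return xs_s
```

    Because the smoother uses future as well as past measurements at every time step, its error is lower than that of the raw measurements or the forward filter alone.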

  3. Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles

    NASA Astrophysics Data System (ADS)

    Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.

    2015-04-01

    The knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for water resources planning and risk assessment. Streamflow characteristics are usually estimated starting from measurements of river discharges at stream gauging stations. However, the lack of observations at the site of interest, as well as measurement inaccuracies, inevitably leads to the need for predictive models. Regional analysis is a classical approach to estimating river flow characteristics at sites where little or no data exist. Specific techniques are needed to regionalize the hydrological variables over the considered area. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area, and the nested nature of catchments. The continuous processes in space defined for the point variables are represented by a variogram. In Top-kriging, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of the Tuscany Region, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also gives an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open-source statistical environment R, and are compared through different error measures. Top-kriging appears to perform better in nested and larger-scale catchments, but not for headwater catchments or where there is high variability among neighbouring catchments.
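    Top-kriging generalizes ordinary kriging from point supports to nested catchment areas. As an illustration of the underlying point-kriging machinery only (not the rtop implementation, and with an invented exponential variogram), one might write:

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=50.0):
    """Exponential variogram model gamma(h); parameters are illustrative."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy_obs, z_obs, xy_new):
    """Ordinary kriging prediction at point xy_new from point observations."""
    n = len(z_obs)
    # Pairwise distances between observation points
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    A[-1, -1] = 0.0
    g0 = exp_variogram(np.linalg.norm(xy_obs - xy_new, axis=1))
    b = np.append(g0, 1.0)
    w = np.linalg.solve(A, b)[:n]     # kriging weights
    return float(w @ z_obs)
```

    In Top-kriging the point-to-point variogram values are replaced by variogram values averaged over pairs of catchment areas, which is what lets nested catchments inform one another.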

  4. On Advanced Estimation Techniques for Exoplanet Detection and Characterization Using Ground-based Coronagraphs

    PubMed Central

    Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry

    2015-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012. PMID:26347393
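    The ROC evaluation proposed above can be illustrated with a minimal empirical ROC/AUC computation over detector scores from planet-present and planet-absent trials (a pure-Python sketch; the variable names are ours, not the paper's):

```python
def roc_auc(scores_pos, scores_neg):
    """Empirical ROC points and AUC from detector scores on positive
    (planet-present) and negative (planet-absent) trials."""
    thresholds = sorted(set(scores_pos) | set(scores_neg), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        pts.append((fpr, tpr))
    pts.append((1.0, 1.0))
    # Trapezoidal area under the ROC curve
    auc = sum((x2 - x1) * (y1 + y2) / 2.0
              for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return pts, auc
```

    An ideal-observer or Hotelling-observer score would simply replace `scores_pos`/`scores_neg` here; LROC additionally credits a detection only when the planet is also correctly localized.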

  5. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter; Frazin, Richard

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.

  6. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-Based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Frazin, Richard; Barrett, Harrison; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gladysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jerome; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Perrin, Marshall; Poyneer, Lisa; Pueyo, Laurent; Savransky, Dmitry; Soummer, Remi

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We provide a formal comparison of techniques through a blind data challenge and evaluate performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.

  7. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal. PMID:20136233

  8. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M.

    1993-12-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.
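    The SPC element of the QML approach can be illustrated with a toy Shewhart individuals chart: establish mean ± 3σ control limits from baseline wafer-level data, then flag later lots that fall outside them. The monitored parameter (for example, a radiation-response-relevant transistor parameter measured at 10-keV x-ray wafer-level test) and all numbers below are hypothetical:

```python
def control_limits(samples):
    """Shewhart individuals chart: mean +/- 3 sigma limits from baseline data."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(baseline, new_points):
    """Return the points in new_points that violate the baseline control limits."""
    lcl, ucl = control_limits(baseline)
    return [x for x in new_points if not (lcl <= x <= ucl)]
```

    A lot flagged by the chart would trigger investigation rather than automatic rejection; the point of SPC in QML is to catch process drift before expensive end-of-line IC testing would.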

  9. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  10. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S.; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M. )

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  11. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  12. Advanced radiographic imaging techniques.

    NASA Technical Reports Server (NTRS)

    Beal, J. B.; Brown, R. L.

    1973-01-01

    Examination of the nature and operational constraints of conventional X-radiographic and neutron imaging methods, providing a foundation for a discussion of advanced radiographic imaging systems. Two types of solid-state image amplifiers designed to image X rays are described. Operational theory, panel construction, and performance characteristics are discussed. A closed-circuit television system for imaging neutrons is then described and the system design, operational theory, and performance characteristics are outlined. Emphasis is placed on a description of the advantages of these imaging systems over conventional methods.

  13. Advanced Coating Removal Techniques

    NASA Technical Reports Server (NTRS)

    Seibert, Jon

    2006-01-01

    An important step in the repair and protection against corrosion damage is the safe removal of the oxidation and protective coatings without further damaging the integrity of the substrate. Two such methods that are proving to be safe and effective in this task are liquid nitrogen and laser removal operations. Laser technology used for the removal of protective coatings is currently being researched and implemented in various areas of the aerospace industry. Delivering thousands of focused energy pulses, the laser ablates the coating surface by heating and dissolving the material applied to the substrate. The metal substrate will reflect the laser and redirect the energy to any remaining protective coating, thus preventing any collateral damage the substrate may suffer throughout the process. Liquid nitrogen jets are comparable to blasting with an ultra-high-pressure water jet but without the residual liquid that requires collection and removal. As the liquid nitrogen reaches the surface it is transformed into gaseous nitrogen and reenters the atmosphere without any contamination to surrounding hardware. These innovative technologies simplify corrosion repair by eliminating hazardous chemicals and repetitive manual labor from the coating removal process. One very significant advantage is the reduction of particulate contamination exposure to personnel. With the removal of coatings adjacent to sensitive flight hardware, a benefit of each technique for the space program is that no contamination such as beads, water, or sanding residue is left behind when the job is finished. One primary concern is the safe removal of coatings from thin aluminum honeycomb face sheet. NASA recently conducted thermal testing on liquid nitrogen systems and found that no damage occurred on 1/6" aluminum substrates. Wright Patterson Air Force Base in conjunction with Boeing and NASA is currently testing the laser removal technique for process qualification. Other applications of liquid

  14. Advanced Wavefront Control Techniques

    SciTech Connect

    Olivier, S S; Brase, J M; Avicola, K; Thompson, C A; Kartz, M W; Winters, S; Hartley, R; Wihelmsen, J; Dowla, F V; Carrano, C J; Bauman, B J; Pennington, D M; Lande, D; Sawvel, R M; Silva, D A; Cooke, J B; Brown, C G

    2001-02-21

    In this project, work was performed in four areas: (1) advanced modeling tools for deformable mirrors; (2) low-order wavefront correctors with Alvarez lenses; (3) a direct phase-measuring heterodyne wavefront sensor; and (4) high-spatial-frequency wavefront control using spatial light modulators.

  15. Advances in Procedural Techniques - Antegrade

    PubMed Central

    Wilson, William; Spratt, James C.

    2014-01-01

    There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use and procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  16. Advanced Spectroscopy Technique for Biomedicine

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan

    This chapter presents an overview of the applications of optical spectroscopy in biomedicine. We focus on the optical design aspects of advanced biomedical spectroscopy systems, Raman spectroscopy system in particular. Detailed components and system integration are provided. As examples, two real-time in vivo Raman spectroscopy systems, one for skin cancer detection and the other for endoscopic lung cancer detection, and an in vivo confocal Raman spectroscopy system for skin assessment are presented. The applications of Raman spectroscopy in cancer diagnosis of the skin, lung, colon, oral cavity, gastrointestinal tract, breast, and cervix are summarized.

  17. Stitching Techniques Advance Optics Manufacturing

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an R&D 100 award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.

  18. Advanced measurement techniques, part 1

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.; Carraway, Debra L.; Manuel, Gregory S.; Croom, Cynthia C.

    1987-01-01

    In modern laminar flow flight and wind tunnel research, it is important to understand the specific cause(s) of laminar to turbulent boundary layer transition. Such information is crucial to the exploration of the limits of practical application of laminar flow for drag reduction on aircraft. The process of transition involves both the possible modes of disturbance growth, and the environmental conditioning of the instabilities by freestream or surface conditions. The possible modes of disturbance growth include viscous, inviscid, and modes which may bypass these natural ones. Theory provides information on the possible modes of disturbance amplification, but experimentation must be relied upon to determine which of those modes actually dominates the transition process in a given environment. The results to date of research on advanced devices and methods used for the study of transition phenomena in the subsonic and transonic flight and wind tunnel environments are presented.

  19. Nuclear material investigations by advanced analytical techniques

    NASA Astrophysics Data System (ADS)

    Degueldre, C.; Kuri, G.; Martin, M.; Froideval, A.; Cammelli, S.; Orlov, A.; Bertsch, J.; Pouchon, M. A.

    2010-10-01

    Advanced analytical techniques have been used to characterize nuclear materials at the Paul Scherrer Institute during the last decade. The analysed materials ranged from reactor pressure vessel (RPV) steels, Zircaloy claddings to fuel samples. The processes studied included copper cluster build up in RPV steels, corrosion, mechanical and irradiation damage behaviour of PWR and BWR cladding materials as well as fuel defect development. The used advanced techniques included muon spin resonance spectroscopy for zirconium alloy defect characterization while fuel element materials were analysed by techniques derived from neutron and X-ray scattering and absorption spectroscopy.

  20. Simulations of motor unit number estimation techniques

    NASA Astrophysics Data System (ADS)

    Major, Lora A.; Jones, Kelvin E.

    2005-06-01

    Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
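    A toy version of the amplitude-based MUNE idea, estimating the unit count as the maximal compound muscle action potential (CMAP) amplitude divided by the mean amplitude of a sample of single motor unit potentials (SMUPs), can be simulated as follows. The amplitude distribution and sample size are invented for illustration and are far simpler than the neuromuscular model described in the paper:

```python
import random

def simulate_mune(n_units, seed=0):
    """Toy MUNE simulation: true count is n_units; the estimate is
    max CMAP amplitude / mean amplitude of a sampled subset of SMUPs."""
    rnd = random.Random(seed)
    smups = [rnd.uniform(0.5, 1.5) for _ in range(n_units)]  # unit amplitudes, mV
    cmap_max = sum(smups)                        # supramaximal compound response
    sample = rnd.sample(smups, min(10, n_units)) # units sampled, e.g. by incremental stimulation
    mean_smup = sum(sample) / len(sample)
    return cmap_max / mean_smup
```

    The gap between the estimate and `n_units` shows the bias introduced when the sampled SMUPs are not representative, which is exactly the kind of assumption violation such simulations are built to quantify.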

  1. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  2. Hybrid mesh generation using advancing reduction technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...

  3. Recent advancement of turbulent flow measurement techniques

    NASA Technical Reports Server (NTRS)

    Battle, T.; Wang, P.; Cheng, D. Y.

    1974-01-01

    Advancements of the fluctuating density gradient cross beam laser Schlieren technique, the fluctuating line-reversal temperature measurement, and the development of the two-dimensional drag-sensing probe into a three-dimensional drag-sensing probe are discussed. The three-dimensionality of the instantaneous momentum vector can shed some light on the nature of turbulence, especially with swirling flow. All three measured fluctuating quantities (density, temperature, and momentum) can provide valuable information for theoreticians.

  4. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
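    A common cost estimating relationship (CER) form for the weight-cost link examined here is a power law, cost = a * weight^b, fitted by least squares in log space. The following minimal sketch is illustrative only, not the authors' model, and the data it would be fed are hypothetical:

```python
import math

def fit_cer(weights, costs):
    """Fit cost = a * weight^b by ordinary least squares on log-log data."""
    lx = [math.log(w) for w in weights]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression is the exponent b
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

def predict_cost(a, b, weight):
    """Evaluate the fitted CER at a new weight."""
    return a * weight ** b
```

    In practice the residual scatter of historical programs about the fitted curve, not just the curve itself, is what the analyst reports, since it bounds how much confidence a conceptual-phase estimate deserves.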

  5. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  6. Advanced Tools and Techniques for Formal Techniques in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    2005-01-01

    This is the final technical report for grant number NAG-1-02101. The title of this grant was "Advanced Tools and Techniques for Formal Techniques In Aerospace Systems". The principal investigator on this grant was Dr. John C. Knight of the Computer Science Department, University of Virginia, Charlottesville, Virginia 22904-4740. This report summarizes activities under the grant during the period 7/01/2002 to 9/30/2004. This report is organized as follows. In section 2, the technical background of the grant is summarized. Section 3 lists accomplishments and section 4 lists students funded under the grant. In section 5, we present a list of presentations given at various academic and research institutions about the research conducted. Finally, a list of publications generated under this grant is included in section 6.

  7. Advanced flow MRI: emerging techniques and applications.

    PubMed

    Markl, M; Schnell, S; Wu, C; Bollache, E; Jarvis, K; Barker, A J; Robinson, J D; Rigsby, C K

    2016-08-01

    Magnetic resonance imaging (MRI) techniques provide non-invasive and non-ionising methods for the highly accurate anatomical depiction of the heart and vessels throughout the cardiac cycle. In addition, the intrinsic sensitivity of MRI to motion offers the unique ability to acquire spatially registered blood flow simultaneously with the morphological data, within a single measurement. In clinical routine, flow MRI is typically accomplished using methods that resolve two spatial dimensions in individual planes and encode the time-resolved velocity in one principal direction, typically oriented perpendicular to the two-dimensional (2D) section. This review describes recently developed advanced MRI flow techniques, which allow for more comprehensive evaluation of blood flow characteristics, such as real-time flow imaging, 2D multiple-venc phase contrast MRI, four-dimensional (4D) flow MRI, quantification of complex haemodynamic properties, and highly accelerated flow imaging. Emerging techniques and novel applications are explored. In addition, applications of these new techniques for the improved evaluation of cardiovascular (aorta, pulmonary arteries, congenital heart disease, atrial fibrillation, coronary arteries) as well as cerebrovascular disease (intra-cranial arteries and veins) are presented. PMID:26944696
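    In phase-contrast MRI the recorded phase is proportional to velocity and is only unambiguous up to the chosen velocity-encoding limit (venc); velocities beyond it wrap around and alias, which is what multiple-venc acquisitions are designed to resolve. A minimal sketch of this encoding (all values hypothetical):

```python
import math

def pc_mri_velocity(phase_rad, venc):
    """Decode velocity from phase: v = venc * phi / pi, phi in (-pi, pi]."""
    return venc * phase_rad / math.pi

def measured_phase(true_v, venc):
    """Phase actually recorded: proportional to velocity, wrapped into (-pi, pi]."""
    phi = math.pi * true_v / venc
    return math.atan2(math.sin(phi), math.cos(phi))  # wrap to principal value
```

    With venc = 150 cm/s, a true velocity of 120 cm/s is decoded correctly, while 180 cm/s aliases to an apparent -120 cm/s; a dual-venc acquisition disambiguates such cases by combining a high-venc and a low-venc measurement.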

  8. Advanced Bode Plot Techniques for Ultrasonic Transducers

    NASA Astrophysics Data System (ADS)

    DeAngelis, D. A.; Schulze, G. W.

The Bode plot, displayed as either impedance or admittance versus frequency, is the most basic test used by ultrasonic transducer designers. With simplicity and ease of use, Bode plots are ideal for baseline comparisons such as the spacing of parasitic modes or impedance, but quite often the subtleties that manifest as poor process control are hard to interpret or are nonexistent. In-process testing of transducers is time-consuming for quantifying statistical aberrations, and assessments made indirectly via the workpiece are difficult. This research investigates the use of advanced Bode plot techniques to compare ultrasonic transducers with known "good" and known "bad" process performance, with the goal of a priori process assessment. These advanced techniques expand from the basic constant-voltage frequency sweep to include constant-current and constant-velocity sweeps interrogated locally on the transducer or tool; they also include upward and downward directional frequency sweeps to quantify hysteresis effects like jumping and dropping phenomena. The investigation focuses solely on the common PZT8 piezoelectric material used with welding transducers for semiconductor wire bonding. Several metrics are investigated, such as impedance, displacement/current gain, velocity/current gain, displacement/voltage gain and velocity/voltage gain. The experimental and theoretical research methods include Bode plots, admittance loops, laser vibrometry and coupled-field finite element analysis.
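Impedance/admittance Bode plots of a piezoelectric transducer are commonly modeled with the Butterworth-Van Dyke equivalent circuit: a static capacitance in parallel with a series R-L-C motional arm. A minimal sketch of such a constant-voltage admittance sweep (the component values are illustrative placeholders, not measured PZT8 parameters):

```python
import numpy as np

def bvd_admittance(f, C0, R1, L1, C1):
    """Complex admittance of the Butterworth-Van Dyke equivalent circuit:
    static capacitance C0 in parallel with a series R1-L1-C1 motional arm."""
    w = 2 * np.pi * f
    z_motional = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
    return 1j * w * C0 + 1.0 / z_motional

# Hypothetical component values chosen to put resonance near 100 kHz
C0, R1, L1, C1 = 4e-9, 20.0, 0.1, 25e-12
f = np.linspace(95e3, 115e3, 20001)        # 1 Hz frequency grid
Y = bvd_admittance(f, C0, R1, L1, C1)

# The admittance magnitude peaks near the series (motional) resonance
f_peak = f[np.argmax(np.abs(Y))]
f_series = 1.0 / (2 * np.pi * np.sqrt(L1 * C1))
```

Plotting 20*log10(abs(Y)) against f gives the admittance Bode magnitude curve; sweeping constant current or velocity instead amounts to interrogating other transfer ratios of the same model.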

  9. Impacts of advanced manufacturing technology on parametric estimating

    NASA Astrophysics Data System (ADS)

    Hough, Paul G.

    1989-12-01

The introduction of advanced manufacturing technology in the aerospace industry poses serious challenges for government cost analysts. Traditionally, the analysts have relied on parametric estimating techniques for both planning and budgeting. Despite its problems, this approach has proven to be a remarkably useful and robust tool for estimating new weapon system costs. However, rapid improvements in both product and process technology could exacerbate current difficulties and diminish the utility of the parametric approach. This paper reviews some weaknesses associated with parametrics, examines how specific aspects of the factory of the future may further affect parametric estimating, and suggests avenues of research for their resolution. This paper is an extended version of "Cost Estimating for the Factory of the Future." Parametric estimating is a method by which aggregated costs are derived as a function of high-level product characteristics or parameters. The resulting equations are known as cost estimating relationships (CERs). Such equations are particularly useful when detailed technical specifications are not available.
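A CER of the kind described is often a power law, cost = a * weight^b, fitted by ordinary least squares in log-log space. A minimal sketch with entirely hypothetical weight/cost data (not drawn from any real program):

```python
import numpy as np

# Hypothetical data: weight in kg, cost in $M (illustration only)
weight = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
cost = np.array([12.0, 20.0, 33.0, 55.0, 90.0])

# Fit cost = a * weight**b by least squares in log-log space
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

def cer(w):
    """Cost estimating relationship derived from the fit."""
    return a * w**b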

  10. Recent advances in DNA sequencing techniques

    NASA Astrophysics Data System (ADS)

    Singh, Rama Shankar

    2013-06-01

Successful mapping of the draft human genome in 2001 and the more recent mapping of the human microbiome genome in 2012 have relied heavily on the parallel processing of second-generation/Next Generation Sequencing (NGS) DNA machines, at a cost of several million dollars and long computer processing times. These have been mainly biochemical approaches. Here a systems analysis approach is used to review these techniques by identifying the requirements, specifications, test methods, error estimates, repeatability, reliability and trends in cost reduction. The first-generation, NGS and third-generation Single Molecule Real Time (SMRT) detection sequencing methods are reviewed. Based on National Human Genome Research Institute (NHGRI) data, the achieved cost reductions of 1.5 times per year from Sep. 2001 to July 2007, 7 times per year from Oct. 2007 to Apr. 2010, and 2.5 times per year from July 2010 to Jan. 2012 are discussed.
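The quoted cost-reduction rates are annualized geometric factors. A small sketch of how such a per-year factor can be computed between two dates (the dollar figures below are made up for illustration, not NHGRI data):

```python
from datetime import date

def annual_reduction_factor(cost_start, cost_end, d_start, d_end):
    """Geometric-mean cost reduction factor per year between two dates."""
    years = (d_end - d_start).days / 365.25
    return (cost_start / cost_end) ** (1.0 / years)

# Hypothetical example: a cost falling from $100M to $10M over this
# period corresponds to roughly a 1.5x reduction per year
factor = annual_reduction_factor(100e6, 10e6,
                                 date(2001, 9, 1), date(2007, 7, 1))
```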

  11. Advanced Techniques for Power System Identification from Measured Data

    SciTech Connect

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    2008-11-25

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) Block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. 
Subspace-based methods have been used to improve on previous results from block processing.
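A minimal sketch of the kind of mode frequency and damping ratio estimation described, using a second-order linear-prediction (Prony-type) fit to ringdown data. The 0.4 Hz mode, 5% damping, and sample rate are synthetic stand-ins, not field PMU data:

```python
import numpy as np

def mode_from_ringdown(x, dt):
    """Estimate the frequency (Hz) and damping ratio of a single dominant
    mode from ringdown data via 2nd-order linear prediction."""
    # Solve x[n] = a1*x[n-1] + a2*x[n-2] in the least-squares sense
    A = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
    z = np.roots([1.0, -a1, -a2])           # discrete characteristic roots
    s = np.log(z[np.argmax(z.imag)]) / dt   # continuous-time pole
    freq = s.imag / (2 * np.pi)
    zeta = -s.real / abs(s)
    return freq, zeta

# Synthetic 0.4 Hz inter-area-style mode with 5% damping, sampled at 10 Hz
dt, f0, zeta0 = 0.1, 0.4, 0.05
wn = 2 * np.pi * f0 / np.sqrt(1 - zeta0**2)   # undamped natural frequency
t = np.arange(0.0, 60.0, dt)
x = np.exp(-zeta0 * wn * t) * np.cos(2 * np.pi * f0 * t)

freq, zeta = mode_from_ringdown(x, dt)
```

With measured (noisy, ambient) data the same idea is applied with higher model orders and the block or adaptive processing the report describes; this sketch shows only the noiseless single-mode case.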

  12. Advances in procedural techniques--antegrade.

    PubMed

    Wilson, William; Spratt, James C

    2014-05-01

There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, in which there ideally exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use and procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry is achieved in the most reproducible and reliable fashion with the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase the uptake and acceptance of CTO PCI as a whole. PMID:24694104

  13. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  14. Removing baseline flame's spectrum by using advanced recovering spectrum techniques.

    PubMed

    Arias, Luis; Sbarbaro, Daniel; Torres, Sergio

    2012-09-01

In this paper, a novel automated algorithm to estimate and remove the continuous baseline from measured flame spectra is proposed. The algorithm estimates the continuous background based on prior information obtained from a learning database of continuous flame spectra. The discontinuous flame emission is then calculated by subtracting the estimated continuous baseline from the measured spectrum. The key assumption underlying the learning database is that continuous flame emission predominates in the sooty regions, in the absence of discontinuous radiation. The proposed algorithm was tested using natural gas and bio-oil flame spectra at different combustion conditions, and the goodness-of-fit coefficient (GFC) quality metric was used to quantify the performance of the estimation process. Additionally, the commonly used first derivative method (FDM) for baseline removal was applied to the same test spectra in order to compare and evaluate the proposed technique. The achieved results show that the proposed method is a very attractive tool for designing advanced combustion monitoring strategies for discontinuous emissions. PMID:22945158
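The goodness-of-fit coefficient (GFC) used as the quality metric is the absolute normalized inner product of two spectra (1.0 for identical shapes). A sketch with toy spectra, where a synthetic Gaussian continuum and emission line stand in for a measured flame spectrum and a learned baseline estimate:

```python
import numpy as np

def gfc(reference, recovered):
    """Goodness-of-fit coefficient: |<r, s>| / (||r|| * ||s||)."""
    num = abs(np.dot(reference, recovered))
    den = np.linalg.norm(reference) * np.linalg.norm(recovered)
    return num / den

# Toy spectrum: smooth continuum (baseline) plus a narrow emission line
wl = np.linspace(400.0, 800.0, 401)                       # wavelength, nm
baseline = 1e3 * np.exp(-((wl - 700.0) / 250.0) ** 2)     # continuous part
line = 5e2 * np.exp(-((wl - 589.0) / 2.0) ** 2)           # discontinuous part
spectrum = baseline + line

# Recover the discontinuous emission by subtracting an estimated baseline;
# the 2% error models an imperfect learned estimate
estimated_baseline = 0.98 * baseline
recovered = spectrum - estimated_baseline
score = gfc(line, recovered)
```

A perfect baseline estimate gives GFC = 1; the 2% baseline error above still yields a score above 0.9, showing how the metric quantifies recovery quality.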

  15. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, such as aerodynamic coefficients, can easily be incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer significantly improves.

  16. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that sound decisions can be made. Because of the long lead times required to develop space hardware, cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  17. Recent Advances in Beam Diagnostic Techniques

    NASA Astrophysics Data System (ADS)

    Fiorito, R. B.

    2002-12-01

    We describe recent advances in diagnostics of the transverse phase space of charged particle beams. The emphasis of this paper is on the utilization of beam-based optical radiation for the precise measurement of the spatial distribution, divergence and emittance of relativistic charged particle beams. The properties and uses of incoherent as well as coherent optical transition, diffraction and synchrotron radiation for beam diagnosis are discussed.

  18. Space Shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.

  19. Advances in laparoscopic urologic surgery techniques

    PubMed Central

    Abdul-Muhsin, Haidar M.; Humphreys, Mitchell R.

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  20. Advances in laparoscopic urologic surgery techniques.

    PubMed

    Abdul-Muhsin, Haidar M; Humphreys, Mitchell R

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  1. Two biased estimation techniques in linear regression: Application to aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav

    1988-01-01

Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit its damaging effects are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
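Principal components regression, one of the two biased estimators discussed, can be sketched as follows. The nearly collinear regressors are synthetic stand-ins for correlated flight-test signals, not the paper's data:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal components regression: regress y on the leading principal
    components of X, then map the coefficients back to the original basis."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    # SVD gives the principal directions of the centered regressor matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                  # retained directions
    scores = Xc @ V                          # component scores
    gamma = np.linalg.lstsq(scores, yc, rcond=None)[0]
    beta = V @ gamma                         # coefficients in original basis
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

# Nearly collinear regressors: x2 is almost identical to x1, so ordinary
# least squares coefficient estimates would have huge variance
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 1e-3 * rng.normal(size=200)
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 2.0 * x2 + 0.1 * rng.normal(size=200)

beta, b0 = pcr_fit(X, y, n_components=1)
```

Dropping the near-null principal direction trades a small bias for a large variance reduction, which is the essence of the biased-estimation approach.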

  2. Flashlamp failure modes and lifetime estimation techniques

    NASA Astrophysics Data System (ADS)

    Tucker, Ryand J. F.; Cochran, Nicholas; Morelli, Gregg L.

    2013-03-01

Solid state pulsed laser systems are of interest for industrial applications. Flashlamps are an effective method for pumping solid state pulsed laser systems. Flashlamp lifetime is hard to quantify beyond the specification provided by the manufacturer and is of concern for applications that are not used or tested on a frequent basis. Flashlamp lifetime can be shortened by three main failure modes: manufacturing quality escapes, shipping and handling damage, and shelf life. Manufacturing and shipping failure modes are the focus of this research. These failure modes are hard to detect, beyond the obvious non-functioning flashlamp, without testing to failure, which is not a feasible option. A method is proposed that can estimate the lifetime of a flashlamp, as well as other key characteristics, while a relatively low number of shots is taken with the flashlamp. Fill pressure and fill gas will be determined by monitoring the input voltage, current, and output spectrum in comparison with the arc length, bore diameter, wall thickness and electrode configuration. Flashlamp lifetime estimates will be determined by monitoring the current, wavelength shift, and output intensity. Experimental results will be discussed, focusing on the characteristics and lifetime estimates of flashlamps.

  3. Cost estimate guidelines for advanced nuclear power technologies

    SciTech Connect

    Hudson, C.R. II

    1986-07-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies.

  4. Cost estimate guidelines for advanced nuclear power technologies

    SciTech Connect

    Delene, J.G.; Hudson, C.R. II.

    1990-03-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies. 10 refs., 8 figs., 32 tabs.

  5. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

Measurement and monitoring of complex and dynamic agricultural land systems is essential given increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the growing emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil, as the prototype in the development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype, which has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  6. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient.

    PubMed

    Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M

    2011-06-01

Microvascular permeability to water is characterized by the microvascular filtration coefficient (K(f)). Conventional gravimetric techniques to estimate K(f) rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. The two techniques yield considerably different estimates, and neither accounts for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate K(f) estimation techniques by 1) comparing conventional techniques to a novel technique that includes the effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce K(f) from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to K(f) and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of K(f) in all organs, is not confounded by interstitial storage and lymphatic return, and corroborates the estimate from the transient technique. PMID:21346245

7. Advanced crew procedures development techniques: Procedures generation program requirements document

    NASA Technical Reports Server (NTRS)

    Arbet, J. D.; Benbow, R. L.; Hawk, M. L.

    1974-01-01

    The Procedures Generation Program (PGP) is described as an automated crew procedures generation and performance monitoring system. Computer software requirements to be implemented in PGP for the Advanced Crew Procedures Development Techniques are outlined.

8. Feedback Techniques and Ecloud Instabilities - Design Estimates

    SciTech Connect

Fox, J.D.; Mastorides, T.; Ndabashimiye, G.; Rivetta, C.; Van Winkle, D.; Byrd, J.; Vay, J-L; Hofle, W.; Rumolo, G.; De Maria, R.; /Brookhaven

    2009-05-18

    The SPS at high intensities exhibits transverse single-bunch instabilities with signatures consistent with an Ecloud driven instability. While the SPS has a coupled-bunch transverse feedback system, control of Ecloud-driven motion requires a much wider control bandwidth capable of sensing and controlling motion within each bunched beam. This paper draws beam dynamics data from the measurements and simulations of this SPS instability, and estimates system requirements for a feedback system with 2-4 GS/sec. sampling rates to damp Ecloud-driven transverse motion in the SPS at intensities desired for high-current LHC operation.

  9. Advanced airfoil design empirically based transonic aircraft drag buildup technique

    NASA Technical Reports Server (NTRS)

    Morrison, W. D., Jr.

    1976-01-01

To systematically investigate the potential of advanced airfoils in advanced preliminary design studies, empirical relationships were derived, based on available wind tunnel test data, through which total drag is determined while recognizing all major aircraft geometric variables. This technique assumes a single design lift coefficient and Mach number for each aircraft. Using this technique, drag polars are derived for all Mach numbers up to M(design) + 0.05 and for lift coefficients from CL(design) - 0.40 to CL(design) + 0.20.

  10. Advanced Optical Imaging Techniques for Neurodevelopment

    PubMed Central

    Wu, Yicong; Christensen, Ryan; Colón-Ramos, Daniel; Shroff, Hari

    2013-01-01

    Over the past decade, developmental neuroscience has been transformed by the widespread application of confocal and two-photon fluorescence microscopy. Even greater progress is imminent, as recent innovations in microscopy now enable imaging with increased depth, speed, and spatial resolution; reduced phototoxicity; and in some cases without external fluorescent probes. We discuss these new techniques and emphasize their dramatic impact on neurobiology, including the ability to image neurons at depths exceeding 1 mm, to observe neurodevelopment noninvasively throughout embryogenesis, and to visualize neuronal processes or structures that were previously too small or too difficult to target with conventional microscopy. PMID:23831260

  11. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  12. Parameter estimation techniques for LTP system identification

    NASA Astrophysics Data System (ADS)

    Nofrarias Serra, Miquel

LISA Pathfinder (LPF) is the precursor mission of LISA (Laser Interferometer Space Antenna) and the first step towards gravitational wave detection in space. The main instrument onboard the mission is the LTP (LISA Technology Package), whose scientific goal is to test LISA's drag-free control loop by reaching a differential acceleration noise level between two masses in geodesic motion of 3 × 10^-14 m s^-2/√Hz in the milliHertz band. The mission is challenging not only in terms of technology readiness but also in terms of data analysis. As with any gravitational wave detector, attaining the instrument performance goals will require an extensive noise hunting campaign to measure all contributions with high accuracy. But, unlike on-ground experiments, LTP characterisation will only be possible by setting parameters via telecommands and getting a selected amount of information through the available telemetry downlink. These two conditions, high accuracy and high reliability, are the main restrictions that the LTP data analysis must overcome. A dedicated object-oriented Matlab Toolbox (LTPDA) has been set up by the LTP analysis team for this purpose. Among the different toolbox methods, an essential part for the mission is the set of parameter estimation tools that will be used for system identification during operations: Linear Least Squares, Non-linear Least Squares and Markov Chain Monte Carlo methods have been implemented as LTPDA methods. The data analysis team has been testing those methods in a series of mock data exercises with the following objectives: to cross-check parameter estimation methods and compare the achievable accuracy for each of them, and to develop the best strategies to describe the physics underlying a complex controlled experiment such as the LTP. In this contribution we describe how these methods were tested with simulated LTP-like data to recover the parameters of the model, and we report on the latest results of these mock data exercises.
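Linear-least-squares system identification from simulated data, as the mock data exercises describe, can be sketched generically. This is a minimal first-order example with made-up parameters, not LTP dynamics or LTPDA code:

```python
import numpy as np

# Identify the parameters of a first-order discrete-time system
# y[n] = a*y[n-1] + b*u[n] + noise from simulated input/output data
rng = np.random.default_rng(1)
a_true, b_true = 0.95, 0.5
u = rng.normal(size=2000)              # injected stimulus signal
y = np.zeros_like(u)
for n in range(1, len(u)):
    y[n] = a_true * y[n - 1] + b_true * u[n] + 1e-3 * rng.normal()

# Stack regressors [y[n-1], u[n]] and solve by linear least squares
Phi = np.column_stack([y[:-1], u[1:]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_est, b_est = theta
```

Comparing the recovered (a_est, b_est) against the injected truth is exactly the cross-check logic of a mock data exercise, here in miniature.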

  13. Validation of a technique for estimating outgoing longwave radiation from HIRS radiance observations

    NASA Technical Reports Server (NTRS)

    Ellingson, Robert G.; Lee, Hai-Tien; Yanuk, David; Gruber, Arnold

    1994-01-01

Simultaneous observations by the Earth Radiation Budget Experiment (ERBE) scanning radiometer and the High-Resolution Infrared Sounder (HIRS) on board the NOAA-9 spacecraft have been used to validate a multispectral technique for estimating the outgoing longwave radiation (OLR) from the earth-atmosphere system. Results from approximately 100 000 collocated observations show that the HIRS technique provides instantaneous OLR estimates that agree with the ERBE observations just as well as different ERBE scanners agree with each other--about 5 W m^-2 rms. Although there are differences between the HIRS and ERBE estimates that depend upon the scene type and time of day, the HIRS technique explained more than 99% of the variance of the ERBE observations for both day and night observations. The results suggest that the HIRS OLR technique is a suitable replacement for the Advanced Very High Resolution Radiometer technique now used by the National Oceanic and Atmospheric Administration for operational estimates of the OLR.

  14. Techniques for estimating Space Station aerodynamic characteristics

    NASA Technical Reports Server (NTRS)

    Thomas, Richard E.

    1993-01-01

    A method was devised and calculations were performed to determine the effects of reflected molecules on the aerodynamic force and moment coefficients for a body in free molecule flow. A procedure was developed for determining the velocity and temperature distributions of molecules reflected from a surface of arbitrary momentum and energy accommodation. A system of equations, based on momentum and energy balances for the surface, incident, and reflected molecules, was solved by a numerical optimization technique. The minimization of a 'cost' function, developed from the set of equations, resulted in the determination of the defining properties of the flow reflected from the arbitrary surface. The properties used to define both the incident and reflected flows were: average temperature of the molecules in the flow, angle of the flow with respect to a vector normal to the surface, and the molecular speed ratio. The properties of the reflected flow were used to calculate the contribution of multiply reflected molecules to the force and moments on a test body in the flow. The test configuration consisted of two flat plates joined along one edge at a right angle to each other. When force and moment coefficients of this 90 deg concave wedge were compared to results that did not include multiple reflections, it was found that multiple reflections could nearly double lift and drag coefficients, with nearly a 50 percent increase in pitching moment for cases with specular or nearly specular accommodation. The cases of diffuse or nearly diffuse accommodation often had minor reductions in axial and normal forces when multiple reflections were included. There were several cases of intermediate accommodation where the addition of multiple reflection effects more than tripled the lift coefficient over the convex technique.

  15. COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES

    SciTech Connect

    K. MUSKE; J. HOWSE

    2000-09-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
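    A minimal sketch of the recursive least squares idea at the heart of the comparison, reduced to a single scalar parameter estimated from noisy detector-style readings. The measurement model, forgetting factor, and numbers are illustrative, not taken from the paper.

```python
def rls_scalar(measurements, lam=0.99):
    """Recursive least squares for y_k = theta + noise, with forgetting
    factor lam; returns the final estimate of theta."""
    theta = 0.0   # parameter estimate
    p = 1e6       # estimate covariance (large = uninformative prior)
    for y in measurements:
        k = p / (lam + p)          # gain (measurement matrix H = 1)
        theta += k * (y - theta)   # innovation update
        p = (1.0 - k) * p / lam    # covariance update
    return theta
```

    The forgetting factor makes the estimator track slow drift, which matters when the source (and hence the measurement geometry) is moving.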

  16. Diagnostics of nonlocal plasmas: advanced techniques

    NASA Astrophysics Data System (ADS)

    Mustafaev, Alexander; Grabovskiy, Artiom; Strakhova, Anastasiya; Soukhomlinov, Vladimir

    2014-10-01

    This talk generalizes our recent results, obtained in different directions of plasma diagnostics. First, the method of the flat single-sided probe, based on expansion of the electron velocity distribution function (EVDF) in a series of Legendre polynomials. It will be demonstrated that a flat probe, oriented at different angles with respect to the discharge axis, allows the full EVDF to be determined in nonlocal plasmas. It is also shown that a cylindrical probe cannot determine the full EVDF. We propose to solve this problem by combining the kinetic Boltzmann equation with experimental probe data. Second, magnetic diagnostics. This method is implemented in a Knudsen diode with surface ionization of atoms (KDSI) and is based on measurements of the magnetic characteristics of the KDSI in the presence of a transverse magnetic field. Using magnetic diagnostics we can investigate a wide range of plasma processes: from electron scattering cross sections to plasma-surface interactions. Third, a noncontact diagnostic method for direct measurement of the EVDF in remote plasma objects by combining the flat single-sided probe technique with the magnetic polarization Hanley method.
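    The Legendre-polynomial expansion underlying the flat-probe method can be illustrated in one dimension: for f(mu) = sum_l f_l P_l(mu) with mu = cos(theta) in [-1, 1], orthogonality gives f_l = (2l+1)/2 * integral of f*P_l over [-1, 1]. A sketch with a made-up angular distribution (not real probe data):

```python
def legendre(l, mu):
    # P_0..P_3 are enough for this sketch.
    return {0: 1.0,
            1: mu,
            2: 0.5 * (3 * mu ** 2 - 1),
            3: 0.5 * (5 * mu ** 3 - 3 * mu)}[l]

def legendre_coeffs(f, lmax=3, n=2000):
    """Coefficients f_l of f(mu) = sum_l f_l P_l(mu) on [-1, 1], from
    orthogonality: f_l = (2l+1)/2 * integral f P_l dmu (midpoint rule)."""
    h = 2.0 / n
    mus = [-1.0 + (i + 0.5) * h for i in range(n)]
    return [(2 * l + 1) / 2.0
            * sum(f(mu) * legendre(l, mu) for mu in mus) * h
            for l in range(lmax + 1)]
```

    In practice the probe supplies f only at a few orientation angles, so the coefficients would come from a least-squares fit rather than full quadrature.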

  17. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  18. Innovative Tools Advance Revolutionary Weld Technique

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The iconic, orange external tank of the space shuttle launch system not only contains the fuel used by the shuttle's main engines during liftoff but also forms the shuttle's backbone, supporting the space shuttle orbiter and solid rocket boosters. Given the tank's structural importance and the extreme forces (7.8 million pounds of thrust load) and temperatures it encounters during launch, the welds used to construct the tank must be highly reliable. Variable polarity plasma arc welding, developed for manufacturing the external tank and later employed for building the International Space Station, was until 1994 the best process for joining the aluminum alloys used during construction. That year, Marshall Space Flight Center engineers began experimenting with a relatively new welding technique called friction stir welding (FSW), developed in 1991 by The Welding Institute of Cambridge, England. FSW differs from traditional fusion welding in that it is a solid-state welding technique, using frictional heat and motion to join structural components without actually melting any of the material. The weld is created by a shouldered pin tool that is plunged into the seam of the materials to be joined. The tool traverses the line while rotating at high speeds, generating friction that heats and softens but does not melt the metal. (The heat produced approaches about 80 percent of the metal's melting temperature.) The pin tool's rotation crushes and stirs the plasticized metal, extruding it along the seam as the tool moves forward. The material cools and consolidates, resulting in a weld with mechanical properties superior to those of fusion welds. The innovative FSW technology promises a number of attractive benefits. Because the welded materials are not melted, many of the undesirable effects associated with fusion welding (porosity, cracking, shrinkage, and distortion of the weld) are minimized or avoided. The process is more energy efficient, safe

  19. Positional estimation techniques for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Aggarwal, J. K.

    1990-01-01

    Techniques for positional estimation of a mobile robot navigating in an outdoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and to be given a 3-D description (world model) of the environment. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

  20. Advances in gamma titanium aluminides and their manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Kothari, Kunal; Radhakrishnan, Ramachandran; Wereley, Norman M.

    2012-11-01

    Gamma titanium aluminides display attractive properties for high-temperature applications. For over a decade in the 1990s, the attractive properties of titanium aluminides were outweighed by difficulties encountered in processing and machining at room temperature. But advances in manufacturing technologies, a deeper understanding of titanium aluminide microstructure and deformation mechanisms, and advances in micro-alloying have led to the production of gamma titanium aluminide sheets. An in-depth review of key advances in gamma titanium aluminides is presented, including microstructure, deformation mechanisms, and alloy development. Traditional manufacturing techniques such as ingot metallurgy and investment casting are reviewed, and advances via powder-metallurgy-based manufacturing techniques are discussed. Finally, manufacturing challenges facing gamma titanium aluminides, as well as avenues to overcome them, are discussed.

  1. The application of advanced analytical techniques to direct coal liquefaction

    SciTech Connect

    Brandes, S.D.; Winschel, R.A.; Burke, F.P.; Robbins, G.A.

    1991-12-31

    Consol is coordinating a program designed to bridge the gap between the advanced, modern techniques of the analytical chemist and the application of those techniques by the direct coal liquefaction process developer, and to advance our knowledge of the process chemistry of direct coal liquefaction. The program is designed to provide well-documented samples to researchers who are utilizing techniques potentially useful for the analysis of coal-derived samples. The choice of samples and techniques was based on an extensive survey made by Consol of the present status of analytical methodology associated with direct coal liquefaction technology. Sources of information included process developers and analytical chemists. A number of broadly characterizable needs were identified in the survey. These categories include a need for: a better understanding of the nature of the high-molecular-weight, non-distillable residual materials (both soluble and insoluble) in the process streams; improved techniques for molecular characterization, heteroatom and hydrogen speciation, and knowledge of the hydrocarbon structural changes across coal liquefaction systems; better methods for sample separation; application of advanced data analysis methods; the use of more advanced predictive models; on-line analytical techniques; and better methods for catalyst monitoring.

  2. Advanced liner-cooling techniques for gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Norgren, C. T.; Riddlebaugh, S. M.

    1985-01-01

    Component research for advanced small gas turbine engines is currently underway at the NASA Lewis Research Center. As part of this program, a basic reverse-flow combustor geometry was maintained while different advanced liner wall-cooling techniques were investigated. Performance and liner-cooling effectiveness of the experimental combustor configuration featuring counter-flow film-cooled panels are presented and compared with those of two previously reported combustors featuring splash film-cooled liner walls and transpiration-cooled liner walls (Lamilloy).

  3. [Advanced online search techniques and dedicated search engines for physicians].

    PubMed

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines. PMID:18357673

  4. 75 FR 44015 - Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... COMMISSION Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing... importation of certain semiconductor products made by advanced lithography techniques and products containing... certain semiconductor products made by advanced lithography techniques or products containing same...

  5. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories, subjective and analytical, depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
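    The three performance measures named above are straightforward to compute from paired daily records; a sketch with made-up discharge values:

```python
from statistics import mean, stdev

def performance_measures(baseline, estimated):
    """Compare an estimated daily-discharge record with its baseline:
    returns (average estimated discharge over the period, mean daily
    error, standard deviation of daily errors), errors taken as
    estimated minus baseline."""
    errors = [e - b for e, b in zip(estimated, baseline)]
    return mean(estimated), mean(errors), stdev(errors)
```

    The mean error captures bias, while the standard deviation of the errors captures day-to-day scatter; the two can rank techniques differently.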

  6. A comparison of sampling techniques to estimate number of wetlands

    USGS Publications Warehouse

    Johnson, R.R.; Higgins, K.F.; Naugle, D.E.; Jenks, J.A.

    1999-01-01

    The Service uses annual estimates of the number of ponded wetlands to estimate duck production and establish duck hunting regulations. Sampling techniques that minimize bias may provide more reliable estimates of annual duck production. Using a wetland geographic information system (GIS), we estimated the number of wetlands using standard counting protocol with belt transects and samples of square plots. Estimates were compared to the known number of wetlands in the GIS to determine bias. Bias in transect-derived estimates ranged from +67-87% of the known number of wetlands, compared to bias of +3-6% in estimates from samples of 10.24-km2 plots. We recommend using samples of 10.24-km2 plots stratified by wetland density to decrease bias.

  7. Performance and Weight Estimates for an Advanced Open Rotor Engine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Tong, Michael T.

    2012-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions in the environmental impact of future-generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.

  8. Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…

  9. Comparative evaluation of workload estimation techniques in piloting tasks

    NASA Technical Reports Server (NTRS)

    Wierwille, W. W.

    1983-01-01

    Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, covering psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected. Techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading. High sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.

  10. Cost estimate guidelines for advanced nuclear power technologies

    SciTech Connect

    Delene, J.G.; Hudson, C.R. II

    1993-05-01

    Several advanced power plant concepts are currently under development. These include the Modular High Temperature Gas-Cooled Reactor, the Advanced Liquid Metal Reactor, and the Advanced Light Water Reactors. One measure of the attractiveness of a new concept is its cost. Invariably, the cost of a new type of power plant will be compared with other alternative forms of electrical generation. This report provides a common starting point, whereby the cost estimates for the various power plants to be considered are developed with common assumptions and ground rules. Comparisons can then be made on a consistent basis. This is the second update of these cost estimate guidelines. Changes have been made to make the guidelines more current (January 1, 1992) and in response to suggestions made as a result of the use of the previous report. The principal changes are that the reference site has been changed from a generic Northeast (Middletown) site to a more central site (EPRI's East/West Central site) and that reference bulk commodity prices and labor productivity rates have been added. This report is designed to provide a framework for the preparation and reporting of costs. The cost estimates will consist of the overnight construction cost, the total plant capital cost, the operation and maintenance costs, the fuel costs, decommissioning costs, and the power production or busbar generation cost.

  11. Technique for estimating depth of 100-year floods in Tennessee

    USGS Publications Warehouse

    Gamble, Charles R.; Lewis, James G.

    1977-01-01

    Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.
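    Regional depth-frequency equations of this kind are typically log-linear (power-law) in the basin characteristics. A sketch of fitting such a relation for a single hypothetical predictor, drainage area A, with synthetic noise-free data; the coefficients are invented, not Tennessee's:

```python
import math

def fit_power_law(areas, depths):
    """Fit D = a * A**b by least squares in log space
    (log D = log a + b log A); returns (a, b)."""
    xs = [math.log(A) for A in areas]
    ys = [math.log(D) for D in depths]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b
```

    With several basin characteristics, the same idea becomes a multiple regression in log space, which is the usual form of such USGS regional equations.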

  12. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain the congestion level among the ONUs to be registered, and this information may be exploited to change the size of the quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results. PMID:27367066
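    The quiet-window/collision trade-off can be illustrated with a standard random-access model in which each of n unregistered ONUs picks one of w response slots uniformly at random. This generic sketch is not the paper's CET scheme; it only shows why an estimate of congestion (n) is useful for sizing the window:

```python
def success_probability(n, w):
    """Probability that a given ONU registers without collision when
    n ONUs each pick one of w slots uniformly at random."""
    return (1.0 - 1.0 / w) ** (n - 1)

def min_window(n, target=0.9):
    """Smallest window size w whose per-ONU success probability meets
    the target; grows roughly linearly in n."""
    w = 1
    while success_probability(n, w) < target:
        w += 1
    return w
```

    Once the OLT has an estimate of n, it can pick the smallest window that meets a target success ratio instead of a fixed, worst-case window.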

  13. Advanced Packaging Materials and Techniques for High Power TR Module: Standard Flight vs. Advanced Packaging

    NASA Technical Reports Server (NTRS)

    Hoffman, James Patrick; Del Castillo, Linda; Miller, Jennifer; Jenabi, Masud; Hunter, Donald; Birur, Gajanana

    2011-01-01

    The higher output power densities required of modern radar architectures, such as the proposed DESDynI [Deformation, Ecosystem Structure, and Dynamics of Ice] SAR [Synthetic Aperture Radar] Instrument (or DSI), require increasingly dense high-power electronics. Enabling these higher power densities while maintaining or even improving hardware reliability requires integrating advanced thermal packaging technologies into radar transmit/receive (TR) modules. New materials and techniques have been studied and compared to standard technologies.

  14. Estimating self-clutter of the multiple-pulse technique

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.

    2015-07-01

    Autocorrelation function (ACF) estimates from voltage data measured by high-frequency ionospheric radar systems that utilize the multiple-pulse technique of Farley (1972) are susceptible to interference from self-clutter. Self-clutter is caused by simultaneous returns from multiple transmitted pulses echoing from unwanted, or ambiguous, ranges. Without accurate estimates of self-clutter it is impossible to account for all the uncertainty in estimates of the radar ACF. Voltage- and power-based self-clutter estimators are presented and evaluated using a modified version of the radar data simulator of Ribeiro et al. (2013a) and data from the Super Dual Auroral Radar Network (SuperDARN). It is shown that self-clutter caused by ambiguous ranges filled with ground scatter can be accurately estimated using a voltage-based self-clutter estimator, but that for self-clutter of ionospheric origin a maximal estimator must be used. Two maximal self-clutter estimators are discussed and verified using the radar data simulator. A discussion of the application of the self-clutter estimator to ACFs obtained with the Saskatoon SuperDARN radar is also presented.

  15. Advancing Techniques of Radiation Therapy for Rectal Cancer.

    PubMed

    Patel, Sagar A; Wo, Jennifer Y; Hong, Theodore S

    2016-07-01

    Since the advent of radiation therapy for rectal cancer, there has been continual investigation of advancing technologies and techniques that allow for improved dose conformality to target structures while limiting irradiation of surrounding normal tissue. For locally advanced disease, intensity modulated and proton beam radiation therapy both provide more highly conformal treatment volumes that reduce dose to organs at risk, though the clinical benefit in terms of toxicity reduction is unclear. For early stage disease, endorectal contact therapy and high-dose rate brachytherapy may be a definitive treatment option for patients who are poor operative candidates or those with low-lying tumors that desire sphincter-preservation. Finally, there has been growing evidence that supports stereotactic body radiotherapy as a safe and effective salvage treatment for the minority of patients that locally recur following trimodality therapy for locally advanced disease. This review addresses these topics that remain areas of active clinical investigation. PMID:27238474

  16. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM), includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, the ATAM for DC/AC converters augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.
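    For contrast, the conventional state-space-averaged model that the ATAM improves upon can be sketched for an ideal buck converter: the switched network is replaced by its duty-cycle-weighted average, and the averaged ODEs are integrated directly. All component values here are hypothetical:

```python
def buck_ssa(vin=12.0, duty=0.5, L=100e-6, C=100e-6, R=10.0,
             dt=1e-6, t_end=0.05):
    """Integrate the state-space-averaged buck converter model
        L di/dt = duty*vin - v,   C dv/dt = i - v/R
    with forward Euler; returns the final (inductor current,
    output voltage). In steady state v -> duty*vin."""
    i, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        di = (duty * vin - v) / L
        dv = (i - v / R) / C
        i += di * dt
        v += dv * dt
    return i, v
```

    The averaged model captures the slow envelope but, as the thesis notes for SSA, discards switching-frequency detail; that loss is what a finer-grained averaging model addresses.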

  17. Technology development of fabrication techniques for advanced solar dynamic concentrators

    NASA Technical Reports Server (NTRS)

    Richter, Scott W.

    1991-01-01

    The objective of the advanced concentrator program is to develop the technology that will lead to lightweight, highly reflective, accurate, scaleable, and long lived space solar dynamic concentrators. The advanced concentrator program encompasses new and innovative concepts, fabrication techniques, materials selection, and simulated space environmental testing. Fabrication techniques include methods of fabricating the substrates and coating substrate surfaces to produce high quality optical surfaces, acceptable for further coating with vapor deposited optical films. The selected materials to obtain a high quality optical surface include microsheet glass and Eccocoat EP-3 epoxy, with DC-93-500 selected as a candidate silicone adhesive and levelizing layer. The following procedures are defined: cutting, cleaning, forming, and bonding microsheet glass. Procedures are also defined for surface cleaning, and EP-3 epoxy application. The results and analyses from atomic oxygen and thermal cycling tests are used to determine the effects of orbital conditions in a space environment.


  19. Advanced techniques for monitoring human tolerance to positive Gz accelerations

    NASA Technical Reports Server (NTRS)

    Pelligra, R.; Sandler, H.; Rositano, S.; Skrettingland, K.; Mancini, R.

    1973-01-01

    Tolerance to positive g accelerations was measured in ten normal male subjects using both standard and advanced techniques. In addition to routine electrocardiogram, heart rate, respiratory rate, and infrared television, monitoring techniques during acceleration exposure included measurement of peripheral vision loss, noninvasive temporal, brachial, and/or radial arterial blood flow, and automatic measurement of indirect systolic and diastolic blood pressure at 60-sec intervals. Although brachial and radial arterial flow measurements reflected significant cardiovascular changes during and after acceleration, they were inconsistent indices of the onset of grayout or blackout. Temporal arterial blood flow, however, showed a high correlation with subjective peripheral light loss.

  20. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots, and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis-Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
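    The AABB test at the core of such an LRF simulator is commonly the slab method: intersect the ray's parameter intervals against each axis-aligned slab and keep the overlap. A minimal sketch (the box and rays below are made up):

```python
def ray_aabb(origin, direction, box_min, box_max):
    """Slab method: return the distance along the ray to the nearest
    hit with the axis-aligned box, or None if the ray misses."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return None          # parallel to the slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return None          # slab intervals do not overlap
    if t_far < 0.0:
        return None                  # box entirely behind the ray
    return t_near if t_near >= 0.0 else t_far
```

    A simulated LRF casts one such ray per angular step of the scan and reports the smallest hit distance over all scene boxes.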

  1. Data Compression Techniques for Advanced Space Transportation Systems

    NASA Technical Reports Server (NTRS)

    Bradley, William G.

    1998-01-01

    Advanced space transportation systems, including vehicle state-of-health systems, will produce large amounts of data which must be stored on board the vehicle and/or transmitted to the ground and stored. The cost of storage or transmission of the data could be reduced if the number of bits required to represent the data is reduced by the use of data compression techniques. Most of the work done in this study was rather generic and could apply to many data compression systems, but the first application area to be considered was launch vehicle state-of-health telemetry systems. Both lossless and lossy compression techniques were considered in this study.
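    A minimal lossless example in the spirit of the study, run-length encoding of a repetitive telemetry byte stream (the data is invented; real telemetry compressors are far more sophisticated):

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode as (count, value) byte pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes((j - i, data[i]))
        i = j
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Invert rle_encode: expand each (count, value) pair."""
    out = bytearray()
    for k in range(0, len(encoded), 2):
        out += bytes([encoded[k + 1]]) * encoded[k]
    return bytes(out)
```

    Run-length coding compresses only when long runs dominate; on incompressible data it can expand, which is why practical systems measure the ratio before committing to an encoding.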

  2. The Advanced Space Plant Culture Device with Live Imaging Technique

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Zhang, Tao; Tong, Guanghui

    Live imaging techniques, including color and fluorescent imaging, are very important and useful for space life science. The Advanced Space Plant Culture Device (ASPCD) with live imaging capability, developed for Chinese spacecraft, is introduced in this paper. The ASPCD has two plant experimental chambers. Three cameras (two color cameras and one fluorescent camera) are installed in the two chambers. The fluorescent camera can observe flowering genes labeled with GFP. Lighting, nutrient supply, temperature control, and water recycling are all independent in each chamber. The ASPCD will be applied to investigate the growth and development of higher plants under microgravity conditions on board Chinese spacecraft.

  3. Three-dimensional hybrid grid generation using advancing front techniques

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Noack, Ralph W.

    1995-01-01

    A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple-block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which will significantly reduce the generation time. Secondly, grid points near the body (presumably with high aspect ratio) will still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.

  4. Estimation of sugarcane sucrose and biomass with remote sensing techniques

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Remote sensing techniques were used to predict sucrose levels (TRS) and gross cane yield in field-grown sugarcane. To estimate sucrose levels, leaves were collected from plant-cane and first-ratoon sugarcane plants from the variety maturity studies conducted at the USDA-ARS-SRRC, Sugarcane Research...

  5. A nonparametric clustering technique which estimates the number of clusters

    NASA Technical Reports Server (NTRS)

    Ramey, D. B.

    1983-01-01

    In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.
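    The flavor of such a technique can be sketched in one dimension, with Sarle's bimodality coefficient standing in for the multivariate bimodality test of the abstract (which this is not): recursively split a sample while the test suggests more than one mode, so K emerges from the recursion rather than being supplied.

```python
def is_bimodal(xs, threshold=0.555):
    """Sarle's bimodality coefficient: BC = (skew^2 + 1) / kurtosis.
    BC above ~5/9 hints at bimodality (uniform data sits right at 5/9)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    if m2 == 0.0:
        return False                      # constant sample: unimodal
    skew = sum((x - m) ** 3 for x in xs) / n / m2 ** 1.5
    kurt = sum((x - m) ** 4 for x in xs) / n / m2 ** 2
    return (skew ** 2 + 1.0) / kurt > threshold

def split_two_means(xs):
    """One-dimensional 2-means split, seeded at the extremes."""
    c1, c2 = min(xs), max(xs)
    a, b = xs, []
    for _ in range(100):
        a = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        b = [x for x in xs if abs(x - c1) > abs(x - c2)]
        if not a or not b:
            break
        c1n, c2n = sum(a) / len(a), sum(b) / len(b)
        if (c1n, c2n) == (c1, c2):
            break
        c1, c2 = c1n, c2n
    return a, b

def cluster(xs, min_size=6):
    """Recursively split while the bimodality test fires; the number of
    clusters K is estimated automatically by the recursion."""
    if len(xs) < min_size or not is_bimodal(xs):
        return [xs]
    a, b = split_two_means(xs)
    if not a or not b:
        return [xs]
    return cluster(a, min_size) + cluster(b, min_size)
```

    The bimodality coefficient is a crude surrogate; Ramey's multivariate test is the point of the paper, and this sketch only mirrors the recursive structure.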

  6. Tracking closely spaced multiple sources via spectral-estimation techniques

    NASA Astrophysics Data System (ADS)

    Gabriel, W. F.

    1982-06-01

    Modern spectral-estimation techniques have achieved a level of performance that attracts interest in application areas such as the tracking of multiple spatial sources. In addition to the original 'superresolution' capability, these techniques offer an apparent 'absence of sidelobes' characteristic and some reasonable solutions to the difficult radar coherent-source problem that involves a phase-dependent SNR (signal-to-noise ratio) penalty. This report reviews the situation briefly and discusses a few of the techniques that have been found useful, including natural or synthetic doppler shifts, non-Toeplitz forward-backward subaperture-shift processing, and recent eigenvalue/eigenvector analysis algorithms. The techniques are applied to multiple-source situations that include mixtures of coherent and noncoherent sources of unequal strengths, with either an 8- or a 12-element linear-array sampling aperture. The first test case involves the estimation of six sources, two of which are 95% correlated. The second test case involves a tracking-simulation display example of four moving sources: three are -10 dB coherent sources 95% correlated, and the other is a strong 20-dB noncoherent source. These test cases demonstrate the remarkable improvements obtained with the recent estimation techniques, and they point to the possibilities for real-world applications.

  7. Satellite tracking by combined optimal estimation and control techniques.

    NASA Technical Reports Server (NTRS)

    Dressler, R. M.; Tabak, D.

    1971-01-01

    Combined optimal estimation and control techniques are applied for the first time to satellite tracking systems. Both radio antenna and optical tracking systems of NASA are considered. The optimal estimation is accomplished using an extended Kalman filter resulting in an estimated state of the satellite and of the tracking system. This estimated state constitutes an input to the optimal controller. The optimal controller treats a linearized system with a quadratic performance index. The maximum principle is applied and a steady-state approximation to the resulting Riccati equation is obtained. A computer program, RATS, implementing this algorithm is described. A feasibility study of real-time implementation, tracking simulations, and parameter sensitivity studies are also reported.
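
    The estimate-then-control loop can be sketched with a linear Kalman filter standing in for the extended filter, and the steady-state Riccati approximation implemented by iterating the discrete Riccati equation to convergence; the 1-D tracking model, weights, and variable names are assumptions, not the RATS program's actual dynamics.

```python
import numpy as np

# Hypothetical position/velocity tracking model.
dt = 0.1
A = np.array([[1, dt], [0, 1]])      # state transition
B = np.array([[0], [dt]])            # control input
H = np.array([[1, 0]])               # position-only measurement
R = np.array([[0.01]])               # measurement noise covariance

def steady_state_lqr(A, B, Qc, Rc, iters=500):
    """Iterate the discrete-time Riccati equation to its steady state,
    then return the constant feedback gain K (u = -K x)."""
    P = Qc.copy()
    for _ in range(iters):
        K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
        P = Qc + A.T @ P @ (A - B @ K)
    return K

K = steady_state_lqr(A, B, np.diag([1.0, 0.1]), np.array([[0.01]]))

# One combined estimate-and-control step:
x_hat = np.array([[0.5], [0.0]])     # current state estimate
P_est = np.eye(2)
z = np.array([[0.52]])               # new measurement
S = H @ P_est @ H.T + R              # innovation covariance
Kf = P_est @ H.T @ np.linalg.inv(S)  # Kalman gain
x_hat = x_hat + Kf @ (z - H @ x_hat) # measurement update
u = -K @ x_hat                       # optimal control from the estimated state
print(u.shape)  # (1, 1)
```

The constant gain `K` is the steady-state approximation to the Riccati solution mentioned in the abstract; in the paper the filter is extended (nonlinear) and the state includes tracking-system dynamics.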

  8. NIMO's advanced state estimator copes with NUGs and open access

    SciTech Connect

    Rutz, W.L.

    1994-12-01

    Nonutility generators (NUGs) are placing increasing wheeling demands on the transmission networks of electric utilities and, with the advent of "open access," utilities also face increasing competition for their own electricity customers. Niagara Mohawk Power Corp. (NIMO) has found surprising new ways to cope with and even profit from these circumstances by exploiting an advanced suite of network security applications, which have been in continuous use since 1991. The network package adds advanced state estimation, load flow, and contingency analysis functions to NIMO's energy management system (EMS). According to the utility's managers, the network security functions have had important tangible benefits. These include the ability to: maximize the use of the transmission network; increase reliability by accurately predicting contingencies; determine when expensive reserve units can be safely shut down; and improve the accuracy of loss calculations, thereby permitting full recovery of wheeling losses.

  9. Full Endoscopic Spinal Surgery Techniques: Advancements, Indications, and Outcomes

    PubMed Central

    Yue, James J.; Long, William

    2015-01-01

    Advancements in both surgical instrumentation and full endoscopic spine techniques have resulted in positive clinical outcomes in the treatment of cervical, thoracic, and lumbar spine pathologies. Endoscopic techniques impart minimal approach-related disruption of non-pathologic spinal anatomy and function while concurrently maximizing functional visualization and correction of pathological tissues. An advanced understanding of the applicable functional neuroanatomy, in particular the neuroforamen, is essential for successful outcomes. Additionally, an understanding of the varying types of disc prolapse pathology in relation to the neuroforamen will result in more optimal surgical outcomes. Indications for lumbar endoscopic spine surgery include disc herniations, spinal stenosis, infections, medial branch rhizotomy, and interbody fusion. Limitations are based on both non-spine and spine-related findings. A high-riding iliac wing, a more posteriorly located retroperitoneal cavity, and an overly distal or proximally migrated herniated disc are all relative contraindications to lumbar endoscopic spinal surgery techniques. Modifications in scope size and visual field-of-view angulation have enabled both anterior and posterior cervical decompression. Endoscopic burrs, electrocautery, and focused laser technology allow for the least invasive spinal surgical techniques in all age groups and across varying body habitus. Complications include, among others, dural tears, dysesthesia, nerve injury, and infection. PMID:26114086

  10. Sensitivity analysis and performance estimation of refractivity from clutter techniques

    NASA Astrophysics Data System (ADS)

    Yardim, Caglar; Gerstoft, Peter; Hodgkiss, William S.

    2009-02-01

    Refractivity from clutter (RFC) refers to techniques that estimate the atmospheric refractivity profile from radar clutter returns. An RFC algorithm works by finding the environment whose simulated clutter pattern matches the radar-measured one. This paper introduces a procedure to compute RFC estimator performance. It addresses the major factors, such as the radar parameters, the sea surface characteristics, and the environment (region, time of day, season), that affect estimator performance, and formalizes an error metric combining all of these. This is important for applications such as calculating the optimal radar parameters, selecting the best RFC inversion algorithm under a set of conditions, and creating a regional performance map of an RFC system. The performance metric is used to compute the RFC performance of a non-Bayesian evaporation duct estimator. A Bayesian estimator that incorporates meteorological statistics in the inversion is introduced and compared to the non-Bayesian estimator. The performance metric is used to determine the optimal radar parameters of the evaporation duct estimator for six scenarios. An evaporation duct inversion performance map for an S-band radar is created for the larger Mediterranean/Arabian Sea region.

  11. Estimation of base station position using timing advance measurements

    NASA Astrophysics Data System (ADS)

    Raitoharju, Matti; Ali-Löytty, Simo; Wirola, Lauri

    2011-10-01

    Timing Advance (TA) is used in TDMA (Time Division Multiple Access) systems, such as GSM and LTE, to synchronize the mobile phone to the cellular BS (Base Station). Mobile phone positioning can use TA measurements if BS positions are known, but in many cases BS positions are not in the public domain. In this work we study how to use a set of TA measurements taken by mobile phones at known positions to estimate the position of a BS. This paper describes two methods, a GMF (Gaussian Mixture Filter) and a PMF (Point Mass Filter), for estimating the BS position. Positioning performance is evaluated using simulated and real measurements. In suburban field tests, TA measurements suffice to determine BS position with an error comparable to the TA granularity (550 m). The GMF computes the BS position much faster than the PMF and is only slightly less accurate.
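
    A point-mass filter for this problem can be sketched on a position grid; the likelihood model below (a hard annulus per TA value, softened so the filter never collapses) and all parameter values are assumptions for illustration.

```python
import numpy as np

GRANULARITY = 550.0  # metres per TA step (GSM)

def pmf_bs_estimate(phone_xy, ta_values, grid_extent=5000.0, step=50.0):
    """Point-mass filter sketch: weight a grid of candidate BS positions by
    how well each explains the quantized TA ranges, then return the mean."""
    xs = np.arange(-grid_extent, grid_extent, step)
    X, Y = np.meshgrid(xs, xs)
    w = np.ones_like(X)
    for (px, py), ta in zip(phone_xy, ta_values):
        d = np.hypot(X - px, Y - py)
        # TA = n means the true range lies in [n, n+1) granularity bins
        in_ring = (d >= ta * GRANULARITY) & (d < (ta + 1) * GRANULARITY)
        w *= np.where(in_ring, 1.0, 1e-6)   # soft zero keeps the filter alive
    w /= w.sum()
    return np.array([np.sum(w * X), np.sum(w * Y)])

# Simulated TA measurements from phones scattered around a BS at the origin
rng = np.random.default_rng(1)
phones = rng.uniform(-3000, 3000, size=(20, 2))
tas = (np.hypot(phones[:, 0], phones[:, 1]) // GRANULARITY).astype(int)
est = pmf_bs_estimate(phones, tas)
print(est)  # position error comparable to the TA granularity
```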

  12. Technique for estimating depths of 100-year floods in Pennsylvania

    USGS Publications Warehouse

    Flippo, Herbert N., Jr.

    1990-01-01

    Techniques are developed for estimating 100-year flood depths in natural channels of unregulated Pennsylvania streams that drain less than 2,200 square miles. Equations and graphs are presented relating the depth of the 100-year flood above median stage to drainage area in five defined hydrologic areas in the State. Another graph defines the relation between drainage area and the median depth of flow over the low point of riffles. Thus, 100-year depths on riffles can be estimated by summing depth values derived from two simple relations.
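
    The summation of the two relations can be sketched as follows; the power-law forms and every coefficient here are purely hypothetical placeholders for the report's graphs and regional equations.

```python
def median_riffle_depth(area_mi2, a=0.5, b=0.35):
    # Hypothetical power law: median depth of flow over the riffle low point
    return a * area_mi2 ** b

def depth_above_median(area_mi2, c=1.8, d=0.30):
    # Hypothetical regional relation: 100-year flood depth above median stage
    return c * area_mi2 ** d

def flood_depth_100yr(area_mi2):
    # The report's riffle estimate is the sum of the two simple relations
    return median_riffle_depth(area_mi2) + depth_above_median(area_mi2)

print(round(flood_depth_100yr(100.0), 2))  # depth in feet for a 100 sq-mi basin
```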

  13. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.

    2002-01-01

    The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall will be presented. The technique is validated using available data sets and compared to other global rainfall products such as Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  14. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming

    2003-01-01

    This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  15. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  16. A low tritium hydride bed inventory estimation technique

    SciTech Connect

    Klein, J.E.; Shanahan, K.L.; Baker, R.A.; Foster, P.J.

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low-concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing-gas calorimetric inventory measurement method. Low tritium beds contain less than the detection limit of the IBA (In-Bed Accountability) technique used for tritium inventory. This paper describes two techniques for estimating the tritium content and uncertainty of low tritium content beds for use in the facility's physical inventory (PI). PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach (mid-point approximation method, MPA) assumes the bed is half full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.
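
    The mid-point approximation can be sketched in a few lines; the capacity value, tritium mole fraction, and the use of the half-range as the quoted uncertainty are illustrative assumptions, not the paper's actual accounting rules.

```python
def mpa_inventory(bed_capacity_g, tritium_fraction):
    """Mid-point approximation (MPA) sketch: with no flow calorimetry
    available, assume the bed holds half its hydride capacity, scaled by
    the measured tritium mole fraction of the gas; the half-range of the
    0-to-capacity interval serves as a conservative uncertainty."""
    estimate = 0.5 * bed_capacity_g * tritium_fraction
    uncertainty = 0.5 * bed_capacity_g * tritium_fraction
    return estimate, uncertainty

est, unc = mpa_inventory(bed_capacity_g=10.0, tritium_fraction=0.04)
print(est, unc)
```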

  17. Techniques for estimating flood hydrographs for ungaged urban watersheds

    SciTech Connect

    Stricker, V.A.; Sauer, V.B.

    1982-04-01

    The Clark Method, modified slightly, was used to develop a synthetic dimensionless hydrograph that can be used to estimate flood hydrographs for ungaged urban watersheds. Application of the technique results in a typical (average) flood hydrograph for a given peak discharge. Input necessary to apply the technique is an estimate of basin lagtime and the recurrence-interval peak discharge. Equations for this purpose were obtained from a recent nationwide study on flood frequency in urban watersheds. A regression equation was developed which relates flood volumes to drainage area size, basin lagtime, and peak discharge. This equation is useful where storage of floodwater may be a part of design or flood prevention.
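
    Scaling a dimensionless hydrograph by the estimated peak discharge and basin lagtime can be sketched as follows; the tabulated ordinates here are invented placeholders for the report's Clark-based shape.

```python
import numpy as np

# Hypothetical dimensionless ordinates (t/lagtime, Q/Qpeak); the actual
# USGS report tabulates the Clark-derived shape.
T_RATIO = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 2.5])
Q_RATIO = np.array([0.0, 0.12, 0.45, 0.85, 1.0, 0.55, 0.22, 0.05])

def synthetic_hydrograph(q_peak_cfs, lagtime_hr, n=100):
    """Scale the dimensionless hydrograph by an estimated peak discharge
    and basin lagtime to get a typical flood hydrograph for the site."""
    t = np.linspace(0, T_RATIO[-1] * lagtime_hr, n)
    q = q_peak_cfs * np.interp(t / lagtime_hr, T_RATIO, Q_RATIO)
    return t, q

t, q = synthetic_hydrograph(q_peak_cfs=5000.0, lagtime_hr=3.0)
print(q.max())  # peaks near the design peak discharge, at t near the lagtime
```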

  18. Advanced aeroservoelastic stabilization techniques for hypersonic flight vehicles

    NASA Technical Reports Server (NTRS)

    Chan, Samuel Y.; Cheng, Peter Y.; Myers, Thomas T.; Klyde, David H.; Magdaleno, Raymond E.; Mcruer, Duane T.

    1992-01-01

    Advanced high-performance vehicles that are statically unstable, including Single-Stage-To-Orbit (SSTO) hypersonic flight vehicles, require higher-bandwidth flight control systems to compensate for the instability, resulting in interactions between the flight control system, the engine/propulsion dynamics, and the low-frequency structural modes. Military specifications, such as MIL-F-9490D and MIL-F-87242, tend to limit treatment of structural modes to conventional gain stabilization techniques. The conventional gain stabilization techniques, however, introduce low-frequency effective time delays which can be troublesome from a flying qualities standpoint. These time delays can be alleviated by appropriate blending of gain and phase stabilization techniques (referred to as Hybrid Phase Stabilization or HPS) for the low-frequency structural modes. The potential of using HPS for compensating structural mode interaction was previously explored. It was shown that effective time delay was significantly reduced with the use of HPS; however, the HPS design was seen to have greater residual response than a conventional gain-stabilized design. Additional work performed to advance and refine the HPS design procedure, to further develop residual response metrics as a basis for alternative structural stability specifications, and to develop strategies for validating HPS design and specification concepts in manned simulation is presented. Stabilization design sensitivity to structural uncertainties and aircraft-centered requirements are also assessed.

  19. Evaluation of a technique for satellite-derived area estimation of forest fires

    NASA Technical Reports Server (NTRS)

    Cahoon, Donald R., Jr.; Stocks, Brian J.; Levine, Joel S.; Cofer, Wesley R., III; Chung, Charles C.

    1992-01-01

    The advanced very high resolution radiometer (AVHRR) has been found useful for the location and monitoring of both smoke and fires because of its daily observations, the large geographical coverage of the imagery, and the spectral characteristics and spatial resolution of the instrument. This paper discusses the application of AVHRR data to assess the geographical extent of burning. Methods have been developed to estimate the extent of burning by analyzing the surface area affected by fire in AVHRR imagery. Characteristics of the AVHRR instrument, its orbit, field of view, and archived data sets are discussed relative to the unique surface area of each pixel. The errors associated with this surface-area estimation technique are determined using AVHRR-derived area estimates of target regions of known size. The technique is used to evaluate the area burned during the Yellowstone fires of 1988.

  20. Testing aspects of advanced coherent electron cooling technique

    SciTech Connect

    Litvinenko, V.; Jing, Y.; Pinayev, I.; Wang, G.; Samulyak, R.; Ratner, D.

    2015-05-03

    An advanced version of Coherent-electron Cooling (CeC) based on the micro-bunching instability was proposed. This approach promises a significant increase in the bandwidth of the CeC system and, therefore, a significant shortening of cooling time in high-energy hadron colliders. In this paper we present our plans for simulating and testing the key aspects of this proposed technique using the set-up of the coherent-electron-cooling proof-of-principle experiment at BNL.

  1. [The role of electronic techniques for advanced neuroelectrophysiology].

    PubMed

    Wang, Min; Zhang, Lijun; Cao, Maoyong

    2008-12-01

    Rapid developments in the fields of electroscience, computer science, and biomedical engineering are propelling electrophysiological techniques forward. Recent technological advances have made it possible to simultaneously record the activity of large numbers of neurons in awake and behaving animals using implanted extracellular electrodes. Several laboratories use chronically implanted electrode arrays in freely moving animals because they allow stable recordings of discriminated single neurons and/or field potentials from up to hundreds of electrodes over long time periods. In this review, we focus on the new technologies for neuroelectrophysiology. PMID:19166233

  2. Age estimation based on Kvaal's technique using digital panoramic radiographs

    PubMed Central

    Mittal, Samta; Nagendrareddy, Suma Gundareddy; Sharma, Manisha Lakhanpal; Agnihotri, Poornapragna; Chaudhary, Sunil; Dhillon, Manu

    2016-01-01

    Introduction: Age estimation is important for administrative and ethical reasons and also because of legal consequences. Dental pulp undergoes regression in size with increasing age due to secondary dentin deposition and can be used as a parameter of age estimation even beyond 25 years of age. Kvaal et al. developed a method for chronological age estimation based on pulp size using periapical dental radiographs. There is a need for testing this method of age estimation in the Indian population using simple tools like digital imaging on living individuals, without requiring extraction of teeth. Aims and Objectives: Estimation of the chronological age of subjects by Kvaal's method using digital panoramic radiographs, and testing of the validity of the regression equations as given by Kvaal et al. Materials and Methods: The study sample included a total of 152 subjects in the age group of 14-60 years. Measurements were performed on standardized digital panoramic radiographs based on Kvaal's method. Different regression formulae were derived and the age was assessed. The assessed age was then correlated with the actual age of the patient using Student's t-test. Results: No significant difference between the mean of the chronological age and the estimated age was observed. However, the mean ages estimated by using the regression equations given previously by Kvaal et al. significantly underestimated the chronological age in the present study sample. Conclusion: The results of the study support the feasibility of this technique when regression equations are calculated on digital panoramic radiographs. However, they negate the applicability of the same regression equations as given by Kvaal et al. to the study population.

  3. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; Marconcini, Mattia; Tilton, James C.; Trianni, Giovanna

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  4. Surgical techniques for advanced stage pelvic organ prolapse.

    PubMed

    Brown, Douglas N; Strauchon, Christopher; Gonzalez, Hector; Gruber, Daniel

    2016-02-01

    Pelvic organ prolapse is an extremely common condition, with approximately 12% of women requiring surgical correction over their lifetime. This manuscript reviews the most recent literature regarding the comparative efficacy of various surgical repair techniques in the treatment of advanced stage pelvic organ prolapse. Uterosacral ligament suspension has similar anatomic and subjective outcomes when compared to sacrospinous ligament fixation at 12 months and is considered to be equally effective. The use of transvaginal mesh has been shown to be superior to native tissue vaginal repairs with respect to anatomic outcomes, but at the cost of a higher complication rate. Minimally invasive sacrocolpopexy appears to be equivalent to abdominal sacrocolpopexy (ASC). Robot-assisted sacrocolpopexy (RSC) and laparoscopic sacrocolpopexy (LSC) appear to be as effective as abdominal sacrocolpopexy; however, prospective studies comparing the long-term outcomes of ASC, LSC, and RSC in relation to health care costs are paramount in the near future. Surgical correction of advanced pelvic organ prolapse can be accomplished via a variety of proven techniques. Selection of the correct surgical approach is a complex decision process and involves a multitude of factors. When deciding on the most suitable surgical intervention, the chosen route must be individualized for each patient, taking into account the specific risks and benefits of each procedure. PMID:26448444

  5. Advanced IMCW Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel; Lin, Bing; Nehrir, Amin; Harrison, Fenton; Obland, Michael

    2015-04-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation.
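
    The "excellent autocorrelation with finite bandwidth" property of a BPSK pseudonoise code can be checked numerically; the register length and tap choice below are common textbook values for a maximal-length sequence, not necessarily those flown in the experiments.

```python
import numpy as np

def mls(register_len=7, taps=(7, 6)):
    """Maximal-length sequence from a Fibonacci linear-feedback shift
    register (taps 7 and 6 give period 2^7 - 1 = 127)."""
    state = [1] * register_len
    seq = []
    for _ in range(2**register_len - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

code = 1 - 2 * mls()          # map bits {0,1} -> BPSK chips {+1,-1}
# Periodic autocorrelation via the FFT (Wiener-Khinchin)
acf = np.fft.ifft(np.abs(np.fft.fft(code))**2).real
peak, sidelobes = acf[0], np.abs(acf[1:])
print(round(peak), round(sidelobes.max(), 6))  # peak N, unit-magnitude sidelobes
```

The N-to-1 peak-to-sidelobe ratio is what lets a BPSK-coded return from the surface be separated from intermediate cloud returns in the range profile.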

  6. Estimation of Fluorescence Lifetimes Via Rotational Invariance Techniques.

    PubMed

    Yu, Hongqi; Saleeb, Rebecca; Dalgarno, Paul; Day-Uei Li, David

    2016-06-01

    Estimation of signal parameters via rotational invariance techniques is a classical algorithm widely used in array signal processing for direction-of-arrival estimation of emitters. Inspired by this method, a new signal model and a new fluorescence lifetime estimation via rotational invariance technique (FLERIT) were developed for multiexponential fluorescence lifetime imaging (FLIM) experiments. The FLERIT only requires a few time bins of a histogram generated by a time-correlated single-photon counting FLIM system, greatly reducing the data throughput from the imager to the signal processing units. As a noniterative method, the FLERIT does not require initial conditions, prior information, nor model selection, which are usually required by widely used traditional fitting methods, including nonlinear least-squares methods and maximum-likelihood methods. Moreover, its simplicity means it is suitable for implementation in embedded systems for real-time applications. FLERIT was tested on synthesized and experimental fluorescent cell data, showing its potential for wide application in FLIM data analysis. PMID:26571506
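
    The rotational-invariance idea can be illustrated for the mono-exponential case; the function name and bin values below are assumptions, and the actual FLERIT extends this shift-invariance principle to multiexponential decays.

```python
import numpy as np

def lifetime_rotational_invariance(counts, dt):
    """Mono-exponential sketch of the shift-invariance idea behind
    FLERIT/ESPRIT: a pure decay satisfies y[k+1] = r * y[k] with
    r = exp(-dt/tau), so a least-squares ratio of the histogram against
    its shifted copy recovers tau without any iterative fitting."""
    y = np.asarray(counts, dtype=float)
    r = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])  # LS estimate of r
    return -dt / np.log(r)

dt, tau_true = 0.1, 2.5                                 # bin width, lifetime (ns)
bins = 1000.0 * np.exp(-np.arange(8) * dt / tau_true)   # a few bins suffice
print(round(lifetime_rotational_invariance(bins, dt), 3))  # 2.5
```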

  7. Real time estimation of ship motions using Kalman filtering techniques

    NASA Technical Reports Server (NTRS)

    Triantafyllou, M. S.; Bodson, M.; Athans, M.

    1983-01-01

    The estimation of the heave, pitch, roll, sway, and yaw motions of a DD-963 destroyer is studied, using Kalman filtering techniques, for application in VTOL aircraft landing. The governing equations are obtained from hydrodynamic considerations in the form of linear differential equations with frequency dependent coefficients. In addition, nonminimum phase characteristics are obtained due to the spatial integration of the water wave forces. The resulting transfer matrix function is irrational and nonminimum phase. The conditions for a finite-dimensional approximation are considered and the impact of the various parameters is assessed. A detailed numerical application for a DD-963 destroyer is presented and simulations of the estimations obtained from Kalman filters are discussed.

  8. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
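
    The key property, that modulating functions vanishing at the interval endpoints eliminate unknown initial conditions, can be sketched for a first-order model; the model, input, and choice of sine modulating functions are illustrative assumptions rather than the paper's aircraft equations.

```python
import numpy as np

# Estimate a in  y' + a*y = u  from sampled data, without knowing y(0).
# For phi with phi(0) = phi(T) = 0, integration by parts removes y' and the
# boundary terms:   a * int(phi*y) = int(phi*u) + int(phi'*y)   for every phi.

def integ(f, t):
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2)  # trapezoid rule

T, n = 4.0, 4001
t = np.linspace(0, T, n)
a_true, y0 = 2.0, 1.7                      # y0 never enters the estimate
u = np.ones_like(t)                        # unit-step input
y = (1 - np.exp(-a_true * t)) / a_true + y0 * np.exp(-a_true * t)

lhs, rhs = [], []
for m in range(1, 6):                      # five Fourier-type modulating functions
    phi = np.sin(np.pi * m * t / T)        # vanishes at both endpoints
    dphi = (np.pi * m / T) * np.cos(np.pi * m * t / T)
    lhs.append(integ(phi * y, t))
    rhs.append(integ(phi * u, t) + integ(dphi * y, t))
a_hat = np.linalg.lstsq(np.array(lhs)[:, None], np.array(rhs), rcond=None)[0][0]
print(round(a_hat, 2))  # recovers a_true despite the unknown initial condition
```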

  9. Multichannel SAR Interferometry via Classical and Bayesian Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Budillon, Alessandra; Ferraiuolo, Giancarlo; Pascazio, Vito; Schirinzi, Gilda

    2005-12-01

    Some multichannel synthetic aperture radar interferometric configurations are analyzed. Both across-track and along-track interferometric systems are considered, which allow recovery of the height profile of the ground or of moving-target radial velocities, respectively. The joint use of multichannel configurations, which can be either multifrequency or multi-baseline, and of classical or Bayesian statistical estimation techniques yields very accurate solutions and overcomes the limitations due to the presence of ambiguous solutions, intrinsic in single-channel configurations. The improved performance of the multichannel-based methods with respect to the corresponding single-channel ones has been tested with numerical experiments on simulated data.

  10. Tools and techniques for estimating high intensity RF effects

    NASA Technical Reports Server (NTRS)

    Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.

    1992-01-01

    Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, used to predict coupling, is described. This code can quickly generate a mesh model to represent the test object. Some recent applications, as well as the advantages and limitations of using such a code, are described. Facilities and techniques for making low-power coupling measurements and for making direct-injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented, along with a method for extrapolating these low-power test results to high-power, full-system effects.

  11. Estimation and filtering techniques for high-accuracy GPS applications

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1989-01-01

    Techniques for determination of very precise orbits for satellites of the Global Positioning System (GPS) are currently being studied and demonstrated. These techniques can be used to make cm-accurate measurements of station locations relative to the geocenter, monitor earth orientation over timescales of hours, and provide tropospheric and clock delay calibrations during observations made with deep space radio antennas at sites where the GPS receivers have been collocated. For high-earth orbiters, meter-level knowledge of position will be available from GPS, while at low altitudes, sub-decimeter accuracy will be possible. Estimation of satellite orbits and other parameters such as ground station positions is carried out with a multi-satellite batch sequential pseudo-epoch state process noise filter. Both square-root information filtering (SRIF) and UD-factorized covariance filtering formulations are implemented in the software.

  12. Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters

    SciTech Connect

    Iliescu, Bogdan; Haskal, Ziv J.

    2012-08-15

    Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters that can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or the IVC during retrieval. This review describes the complex techniques for filter retrieval, including the use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches, and illustrates their specific application.

  13. Precise estimation of tropospheric path delays with GPS techniques

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1990-01-01

    Tropospheric path delays are a major source of error in deep space tracking. However, the tropospheric-induced delay at tracking sites can be calibrated using measurements of Global Positioning System (GPS) satellites. A series of experiments has demonstrated the high sensitivity of GPS to tropospheric delays. A variety of tests and comparisons indicates that current accuracy of the GPS zenith tropospheric delay estimates is better than 1-cm root-mean-square over many hours, sampled continuously at intervals of six minutes. These results are consistent with expectations from covariance analyses. The covariance analyses also indicate that by the mid-1990s, when the GPS constellation is complete and the Deep Space Network is equipped with advanced GPS receivers, zenith tropospheric delay accuracy with GPS will improve further to 0.5 cm or better.

  14. Estimation of Insulator Contaminations by Means of Remote Sensing Technique

    NASA Astrophysics Data System (ADS)

    Han, Ge; Gong, Wei; Cui, Xiaohui; Zhang, Miao; Chen, Jun

    2016-06-01

    The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method for insulator contamination (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. To fill that gap, we propose a novel evaluation framework for IC based on remote sensing and data mining. A variety of satellite-derived products, such as aerosol optical depth (AOD), digital elevation model (DEM), land use and land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere and meteorological data). Rough set theory was utilized to minimize the input set under the prerequisite that the resultant set is equivalent to the full set in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources and precipitation are the top 3 decisive factors for estimating insulator contamination. On that basis, different classification algorithms, such as Mahalanobis minimum distance, support vector machine (SVM) and maximum likelihood, were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms. An overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first attempt to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contamination.
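    One of the compared classifiers, Mahalanobis minimum distance, is simple enough to sketch. The snippet below is a hypothetical illustration on synthetic features standing in for the retained factors (AOD, pollution-source strength, precipitation), not the study's data or code:

```python
import numpy as np

# Minimal Mahalanobis minimum-distance classifier: fit per-class means and a
# pooled covariance, then assign each sample to the class whose mean is
# nearest in squared Mahalanobis distance.
rng = np.random.default_rng(0)

def fit(X, y):
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    cov_inv = np.linalg.inv(np.cov(X.T))      # pooled covariance over all samples
    return means, cov_inv

def predict(X, means, cov_inv):
    labels = sorted(means)
    # squared Mahalanobis distance of every sample to every class mean
    d = np.stack([np.einsum('ij,jk,ik->i', X - means[c], cov_inv, X - means[c])
                  for c in labels], axis=1)
    return np.array(labels)[d.argmin(axis=1)]

# two synthetic severity levels over (AOD, source strength, precipitation)
X0 = rng.normal([0.2, 1.0, 80.0], [0.05, 0.3, 15.0], size=(100, 3))
X1 = rng.normal([0.6, 3.0, 30.0], [0.05, 0.3, 15.0], size=(100, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
means, cov_inv = fit(X, y)
acc = (predict(X, means, cov_inv) == y).mean()
print(acc)   # well above the study's ~70% on this easy synthetic data
```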

  15. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    PubMed

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. PMID:26725505
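    As a minimal illustration of the Kalman filter recursion benchmarked here (a toy sketch, not the paper's reduced-order turbine models), the effective wind speed can be modelled as a random walk observed in noise:

```python
import numpy as np

# Scalar linear Kalman filter: wind speed modelled as a random walk,
# observed through a noisy measurement. This only illustrates the
# predict/update recursion the enhanced KF variants build on.
rng = np.random.default_rng(1)
n = 500
true_v = 8.0 + np.cumsum(rng.normal(0, 0.02, n))   # slowly drifting wind speed [m/s]
z = true_v + rng.normal(0, 0.5, n)                 # noisy measurements

Q, R = 0.02**2, 0.5**2                             # process / measurement noise variances
v_hat, P = z[0], 1.0
est = np.empty(n)
for k in range(n):
    P += Q                               # predict (random walk: state unchanged)
    K = P / (P + R)                      # Kalman gain
    v_hat += K * (z[k] - v_hat)          # update with the innovation
    P *= (1 - K)
    est[k] = v_hat

rmse_raw = np.sqrt(np.mean((z - true_v) ** 2))
rmse_kf = np.sqrt(np.mean((est - true_v) ** 2))
print(rmse_kf < rmse_raw)   # the filtered estimate beats the raw measurement
```

    The Takagi-Sugeno observer in the paper replaces this single linear model with a blend of local linear models, which is what buys robustness to model-structure uncertainty.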

  16. COAL AND CHAR STUDIES BY ADVANCED EMR TECHNIQUES

    SciTech Connect

    R. Linn Belford; Robert B. Clarkson; Mark J. Nilges; Boris M. Odintsov; Alex I. Smirnov

    2001-04-30

    Advanced electron magnetic resonance (EMR) as well as nuclear magnetic resonance (NMR) methods have been used to examine properties of coals, chars, and molecular species related to constituents of coal. During the span of this grant, progress was made on the construction and application to coals and chars of two high-frequency EMR systems particularly appropriate for such studies (48 GHz and 95 GHz electron magnetic resonance spectrometers); on new low-frequency dynamic nuclear polarization (DNP) experiments to examine the interaction between water and the surfaces of suspended char particulates in slurries; and on a variety of proton nuclear magnetic resonance (NMR) techniques to measure characteristics of the water directly in contact with the surfaces and pore spaces of carbonaceous particulates.

  17. Techniques for developing approximate optimal advanced launch system guidance

    NASA Technical Reports Server (NTRS)

    Feeley, Timothy S.; Speyer, Jason L.

    1991-01-01

    An extension to the authors' previous technique used to develop a real-time guidance scheme for the Advanced Launch System is presented. The approach is to construct an optimal guidance law based upon an asymptotic expansion associated with small physical parameters, epsilon. The trajectory of a rocket modeled as a point mass is considered with the flight restricted to an equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of this problem can be separated into primary effects due to thrust and gravitational forces, and perturbation effects which include the aerodynamic forces and the remaining inertial forces. An analytic solution to the reduced-order problem represented by the primary dynamics is possible. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form.
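    Schematically, the expansion described above takes the following form (notation hypothetical: f_0 collects the primary thrust/gravity dynamics, εf_1 the aerodynamic and remaining inertial perturbations; this is a sketch of the general structure, not the authors' exact equations):

```latex
\begin{align*}
  0 &= \min_{u}\Big[ L(x,u) + V_x\,\big(f_0(x,u) + \varepsilon f_1(x,u)\big) \Big],
      \qquad V = V^{(0)} + \varepsilon V^{(1)} + \varepsilon^2 V^{(2)} + \cdots \\
  \varepsilon^0:&\quad 0 = \min_{u}\big[ L + V^{(0)}_x f_0 \big]
      \quad \text{(primary dynamics; solvable in closed form)} \\
  \varepsilon^1:&\quad 0 = V^{(1)}_x f_0 + V^{(0)}_x f_1
      \quad \text{(correction evaluated along the zeroth-order solution)}
\end{align*}
```

    Collecting powers of ε turns the single intractable Hamilton-Jacobi-Bellman equation into a sequence of simpler problems, the first of which admits the analytic solution mentioned above.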

  18. Advanced Fibre Bragg Grating and Microfibre Bragg Grating Fabrication Techniques

    NASA Astrophysics Data System (ADS)

    Chung, Kit Man

    Fibre Bragg gratings (FBGs) have become a very important technology for communication systems and fibre optic sensing. Typically, FBGs are less than 10 mm long and are fabricated using fused silica uniform phase masks, which become more expensive for longer lengths or non-uniform pitches. Generally, interfering UV laser beams are employed to make long or complex FBGs, and this technique introduces critical precision and control issues. In this work, we demonstrate an advanced FBG fabrication system that enables the writing of long and complex gratings in optical fibres with virtually any apodisation profile, local phase and Bragg wavelength, using a novel optical design in which the incident angles of two UV beams onto an optical fibre can be adjusted simultaneously by moving just one optical component, instead of the two optics employed in earlier configurations, to vary the grating pitch. The key advantage of the grating fabrication system is that complex gratings can be fabricated by controlling the linear movements of two translation stages. In addition to this advanced grating fabrication technique, we also focus on the inscription of FBGs written in optical fibres with a cladding diameter of several tens of microns. Fabrication of microfibres was investigated using a sophisticated tapering method. We also propose a simple but practical technique to filter out the higher-order modes reflected from an FBG written in a microfibre via a linear taper region, while the fundamental mode re-couples to the core. By using this technique, reflection from the microfibre Bragg grating (MFBG) can be effectively single mode, simplifying the demultiplexing and demodulation processes. The MFBG exhibits high sensitivity to contact force, and an MFBG-based force sensor was also constructed and tested to investigate its suitability for use as an invasive surgery device. Performance of the contact force sensor packaged in a conforming elastomer material compares favourably to one

  19. Advanced imaging techniques for the detection of breast cancer.

    PubMed

    Jochelson, Maxine

    2012-01-01

    Mammography is the only breast imaging examination that has been shown to reduce breast cancer mortality. Population-based sensitivity is 75% to 80%, but sensitivity in high-risk women with dense breasts is only in the range of 50%. Breast ultrasound and contrast-enhanced breast magnetic resonance imaging (MRI) have become additional standard modalities used in the diagnosis of breast cancer. In high-risk women, ultrasound is known to detect approximately four additional cancers per 1,000 women. MRI is exquisitely sensitive for the detection of breast cancer. In high-risk women, it finds an additional four to five cancers per 100 women. However, both ultrasound and MRI are also known to lead to a large number of additional benign biopsies and short-term follow-up examinations. Many breast imaging tools are being refined or newly developed to improve on our current ability to diagnose early-stage breast cancer. These can be divided into two groups. The first group comprises advances in current techniques, including digital breast tomosynthesis, contrast-enhanced mammography, and ultrasound with elastography or microbubbles. The other group includes new breast imaging platforms such as breast computed tomography (CT) scanning and radionuclide breast imaging. These are exciting advances. However, in this era of cost and radiation containment, it is imperative to evaluate all of them objectively to see which will provide clinically relevant additional information. PMID:24451711

  20. Investigation of Models and Estimation Techniques for GPS Attitude Determination

    NASA Technical Reports Server (NTRS)

    Garrick, J.

    1996-01-01

    Much work has been done in the Flight Dynamics Analysis Branch (FDAB) in developing algorithms to meet the new and growing field of attitude determination using the Global Positioning System (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that will produce the adaptations needed to allow the Flight Dynamics Support System (FDSS) to incorporate GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the measurement data needed to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current modeling capabilities of the simulator and of the algorithms for the adaptation of GPS measurement data, and presents results from each of the estimation techniques. Future analysis efforts to evaluate the simulator and models against in-flight GPS measurement data are also outlined.

  1. Estimation of alpine skier posture using machine learning techniques.

    PubMed

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  2. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    PubMed Central

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  3. Advances in the Rising Bubble Technique for discharge measurement

    NASA Astrophysics Data System (ADS)

    Hilgersom, Koen; Luxemburg, Willem; Willemsen, Geert; Bussmann, Luuk

    2014-05-01

    Already in the 19th century, d'Auria described a discharge measurement technique that applies floats to find the depth-integrated velocity (d'Auria, 1882). The basis of this technique was that the horizontal distance the float travels on its way to the surface is the image of the velocity profile integrated over depth. Viol and Semenov (1964) improved this method by using air bubbles as floats, but distances were still measured manually until Sargent (1981) introduced a technique that could derive the distances from two photographs taken simultaneously from each side of the river bank. Recently, modern image processing techniques proved to further improve the applicability of the method (Hilgersom and Luxemburg, 2012). In the 2012 article, controlling and determining the rising velocity of an air bubble still appeared to be a major challenge for the application of this method. Since then, laboratory experiments with different nozzle and tube sizes have led to advances in our self-made equipment, enabling us to produce individual air bubbles with a more constant rising velocity. We also introduced an underwater camera to determine the rising velocity on site, as it depends on water temperature and contamination and is therefore site-specific. Camera measurements of the rising velocity proved successful in laboratory and field settings, although some improvements to the setup are necessary to capture the air bubbles at depths where little daylight penetrates. References: D'Auria, L.: Velocity of streams; A new method to determine correctly the mean velocity of any perpendicular in rivers and canals, (The) American Engineers, 3, 1882. Hilgersom, K.P. and Luxemburg, W.M.J.: Technical Note: How image processing facilitates the rising bubble technique for discharge measurement, Hydrology and Earth System Sciences, 16(2), 345-356, 2012. Sargent, D.: Development of a viable method of stream flow measurement using the integrating float technique, Proceedings of
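    The principle behind the rising bubble technique can be stated compactly: a bubble rising at constant speed v_b drifts downstream by L = (1/v_b)∫u(z)dz, so the depth-integrated velocity at a vertical is simply v_b·L, and integrating v_b·L across the river width gives the discharge. A small sketch with hypothetical numbers:

```python
import numpy as np

# Rising-bubble discharge sketch: surfacing offsets L(y) observed across the
# width are converted to depth-integrated velocities q(y) = v_b * L(y),
# then integrated over the width to obtain the discharge Q.
v_b = 0.25                                  # bubble rise velocity [m/s]
y = np.linspace(0.0, 20.0, 41)              # verticals across the width [m]
L = 1.8 * np.sin(np.pi * y / 20.0)          # observed surfacing offsets [m]
q = v_b * L                                 # depth-integrated velocity [m^2/s]
Q = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(y))   # trapezoidal integration
print(Q)                                    # discharge in m^3/s
```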

  4. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations of aerodynamic coefficients, inertia, center of gravity, atmospheric winds, etc., from reference values. Propulsion parameter state elements have been included not as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  5. Using support vector machines in the multivariate state estimation technique

    SciTech Connect

    Zavaljevski, N.; Gross, K.C.

    1999-07-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications.

  6. Carrier Estimation Using Classic Spectral Estimation Techniques for the Proposed Demand Assignment Multiple Access Service

    NASA Technical Reports Server (NTRS)

    Scaife, Bradley James

    1999-01-01

    In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
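    The averaged-DFT-plus-peak-detection idea can be sketched as follows. This is a simplified illustration with a noisy complex tone and hypothetical numbers; the thesis's spectral matched filtering, BPSK spreading, and DSP56303 implementation are omitted:

```python
import numpy as np

# Carrier estimation by Bartlett-style magnitude averaging: split the record
# into segments, average the FFT magnitudes to suppress noise, then take the
# peak bin as the carrier frequency estimate.
fs, f_c = 1.0e4, 1234.0                 # sample rate and true carrier [Hz]
n_seg, seg_len = 32, 1024
rng = np.random.default_rng(2)
t = np.arange(n_seg * seg_len) / fs
x = np.exp(2j * np.pi * f_c * t) + rng.normal(0, 2.0, t.size)  # noisy tone

spec = np.zeros(seg_len)
for s in x.reshape(n_seg, seg_len):     # average magnitudes over segments
    spec += np.abs(np.fft.fft(s))
f_hat = np.fft.fftfreq(seg_len, 1 / fs)[spec.argmax()]
print(f_hat)                            # within one FFT bin (~9.8 Hz) of f_c
```

    Averaging magnitudes rather than lengthening a single FFT trades frequency resolution for a more stable spectrum, which is what makes peak detection reliable at low SNR.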

  7. Robust quantitative parameter estimation by advanced CMP measurements for vadose zone hydrological studies

    NASA Astrophysics Data System (ADS)

    Koyama, C.; Wang, H.; Khuut, T.; Kawai, T.; Sato, M.

    2015-12-01

    Soil moisture plays a crucial role in understanding processes in vadose zone hydrology. In the last two decades, ground penetrating radar (GPR) has been widely discussed as a nondestructive measurement technique for soil moisture. In particular the common mid-point (CMP) technique, which has been used in both seismic and GPR surveys to investigate vertical velocity profiles, has very high potential for quantitative observations from the root zone to the groundwater aquifer. However, its use is still rather limited, and algorithms for robust quantitative parameter estimation are lacking. In this study we develop an advanced processing scheme for operational soil moisture retrieval at various depths. Using improved signal processing together with a combined semblance and non-normalized cross-correlation sum stacking approach and the Dix formula, the interval velocities for multiple soil layers are obtained from the RMS velocities, allowing more accurate estimation of the permittivity at the reflecting point. A water-saturated layer, such as a groundwater aquifer, can be easily identified by its RMS velocity owing to the high contrast with the unsaturated zone. Using a new semi-automated measurement technique, the acquisition time for a full CMP gather with 1 cm intervals along a 10 m profile can be reduced significantly, to under 2 minutes. The method is tested and validated under laboratory conditions in a sand pit as well as on agricultural fields and beach sand in the Sendai city area. Comparison between CMP estimates and TDR measurements yields very good agreement, with an RMSE of 1.5 vol.-%. The accuracy of depth estimation is validated with errors smaller than 2%. Finally, we demonstrate application of the method at a test site in semi-arid Mongolia, namely the Orkhon River catchment in Bulgan, using commercial 100 MHz and 500 MHz RAMAC GPR antennas. The results demonstrate the suitability of the proposed method for
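    The Dix step referred to in this abstract converts RMS (stacking) velocities into interval velocities, from which layer permittivities follow under a low-loss assumption, eps_r ≈ (c/v)². A small sketch with hypothetical values, not the paper's data:

```python
import numpy as np

# Dix conversion: interval velocity between reflectors n-1 and n is
#   v_int^2 = (v_rms_n^2 * t_n - v_rms_{n-1}^2 * t_{n-1}) / (t_n - t_{n-1})
# with t the two-way travel times. Permittivity then follows from eps_r = (c/v)^2.
c = 0.2998                                    # speed of light [m/ns]
t = np.array([10.0, 25.0, 60.0])              # two-way times [ns]
v_rms = np.array([0.12, 0.10, 0.07])          # stacking velocities [m/ns]

t0 = np.concatenate(([0.0], t[:-1]))          # previous-reflector times
v0 = np.concatenate(([v_rms[0]], v_rms[:-1])) # previous-reflector velocities
v_int = np.sqrt((v_rms**2 * t - v0**2 * t0) / (t - t0))   # Dix formula
eps_r = (c / v_int) ** 2
print(v_int, eps_r)   # velocity drops and permittivity rises toward the wet layer
```

    The sharp drop in interval velocity (and the corresponding jump in permittivity) in the deepest layer is the signature by which a saturated zone is identified.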

  8. Advanced fabrication techniques for hydrogen-cooled engine structures

    NASA Technical Reports Server (NTRS)

    Buchmann, O. A.; Arefian, V. V.; Warren, H. A.; Vuigner, A. A.; Pohlman, M. J.

    1985-01-01

    Described is a program for development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19000 cycles for the channels and 16000 cycles for pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.

  9. Soil Moisture Estimation under Vegetation Applying Polarimetric Decomposition Techniques

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Schön, H.; Hajnsek, I.; Papathanassiou, K. P.

    2009-04-01

    Polarimetric decomposition techniques and inversion algorithms are developed and applied to the OPAQUE data set acquired in spring 2007 to investigate their potential and limitations for soil moisture estimation. A three-component model-based decomposition is used together with an eigenvalue decomposition in a combined approach to invert for soil moisture over bare and vegetated soils at L-band. The applied approach indicates a feasible capability to invert soil moisture after decomposing volume and ground scattering components over agricultural land surfaces, but there are still deficiencies in modeling the volume disturbance. The results show a root mean square error below 8.5 vol.-% for the winter crop fields (winter wheat, winter triticale and winter barley) and below 11.5 vol.-% for the summer crop field (summer barley), whereas all fields have a distinct volume layer of 55-85 cm height.

  10. Tools and techniques for estimating high intensity RF effects

    SciTech Connect

    Zacharias, R.; Pennock, S.; Poggio, A.; Ray, S.

    1991-07-01

    With the ever-increasing dependence of modern aircraft on sophisticated avionics and electronic controls, the need to assure aircraft survivability when exposed to High Intensity RF (HIRF) signals has become of great interest. Advisory regulation is currently being proposed which would require testing and/or analysis to assure RF hardness of installed flight-critical and flight-essential equipment. While full-aircraft, full-threat testing may be the most thorough way to assure survivability, it is not generally practical in terms of cost. Various combinations of limited full-aircraft testing, box-level testing, modeling, and analysis are also being considered as methods to achieve compliance. Modeling, analysis, and low-power measurements may hold the key to making full-system survivability estimates at reasonable cost. In this paper we describe some of the tools and techniques we use for estimating and measuring coupling and component disturbance. A finite-difference time-domain modeling code, TSAR, used to predict coupling is described. This code can quickly generate a mesh model to represent the test object. Some recent applications, as well as the advantages and limitations of using such a code, are described. We also describe some of the facilities and techniques we have developed for making low-power coupling measurements and for making direct-injection test measurements of device disturbance. Some scaling laws for coupling and device effects are presented. A method to extrapolate these low-power test results to high-power full-system effects is presented.

  11. Automatic parameter estimation for atmospheric turbulence mitigation techniques

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Kelmelis, Eric

    2015-05-01

    Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm's underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques to greatly reduce (and often eliminate) the need for operator interaction. Automations were made in the areas of turbulence strength estimation (Fried's parameter), as well as the determination of optimal local averaging windows to balance turbulence mitigation and the preservation of dynamic scene content (non-turbulent motions). These modifications deliver a level of enhancement quality that approaches that of manual interaction, without the need for operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.

  12. Advances in Poly(4-aminodiphenylaniline) Nanofibers Preparation by Electrospinning Technique.

    PubMed

    Della Pina, C; Busacca, C; Frontera, P; Antonucci, P L; Scarpino, L A; Sironi, A; Falletta, E

    2016-05-01

    Polyaniline (PANI) nanofibers are drawing a great deal of interest from academia and industry due to their multiple applications, especially in the biomedical field. PANI nanofibers were successfully electrospun for the first time by MacDiarmid and co-workers at the beginning of the millennium, and since then many efforts have been addressed to improving their quality. However, traditional PANI prepared from aniline monomer shows some drawbacks, such as the presence of toxic (i.e., benzidine) and inorganic (salts and metals) co-products, which complicate polymer post-treatment, and low solubility in common organic solvents, which makes its processing by the electrospinning technique difficult. Some industrial sectors, such as the medical and biomedical ones, need to employ materials free from toxic and polluting species. In this regard, the oxidative polymerization of N-(4-aminophenyl)aniline (the aniline dimer) to produce poly(4-aminodiphenylaniline), P4ADA, a kind of PANI, represents an innovative alternative to the traditional synthesis, because the obtained polymer is free from carcinogenic and/or polluting co-products and, moreover, is more soluble than traditional PANI. This latter feature can be exploited to obtain P4ADA nanofibers by the electrospinning technique. In this paper we report the advances obtained in P4ADA nanofiber electrospinning. A comparison among polyethylene oxide (PEO), polymethyl methacrylate (PMMA) and polystyrene (PS) as the second polymer used to facilitate the electrospinning process is shown. To increase the conductivity of P4ADA nanofibers, two strategies were adopted and compared: selectively removing the insulating binder from the electrospun nanofibers by a rinsing treatment, or optimizing the minimum amount of binder necessary for the electrospinning process. Moreover, the effect of the PEO/P4ADA weight ratio on fiber morphology and conductivity was highlighted. PMID:27483933

  13. A review of hemorheology: Measuring techniques and recent advances

    NASA Astrophysics Data System (ADS)

    Sousa, Patrícia C.; Pinho, Fernando T.; Alves, Manuel A.; Oliveira, Mónica S. N.

    2016-02-01

    Significant progress has been made over the years on the topic of hemorheology, not only in terms of the development of more accurate and sophisticated techniques, but also in terms of understanding the phenomena associated with blood components, their interactions and impact upon blood properties. The rheological properties of blood are strongly dependent on the interactions and mechanical properties of red blood cells, and a variation of these properties can bring further insight into the human health state and can be an important parameter in clinical diagnosis. In this article, we provide both a reference for hemorheological research and a resource regarding the fundamental concepts in hemorheology. This review is aimed at those starting in the field of hemodynamics, where blood rheology plays a significant role, but also at those in search of the most up-to-date findings (both qualitative and quantitative) in hemorheological measurements and novel techniques used in this context, including technical advances under more extreme conditions such as in large amplitude oscillatory shear flow or under extensional flow, which impose large deformations comparable to those found in the microcirculatory system and in diseased vessels. Given the impressive rate of increase in the available knowledge on blood flow, this review is also intended to identify areas where current knowledge is still incomplete, and which have the potential for new, exciting and useful research. We also discuss the most important parameters that can lead to an alteration of blood rheology, and which as a consequence can have a significant impact on the normal physiological behavior of blood.

  14. Nanocrystalline materials: recent advances in crystallographic characterization techniques

    PubMed Central

    Ringe, Emilie

    2014-01-01

    Most properties of nanocrystalline materials are shape-dependent, providing their exquisite tunability in optical, mechanical, electronic and catalytic properties. An example of the former is localized surface plasmon resonance (LSPR), the coherent oscillation of conduction electrons in metals that can be excited by the electric field of light; this resonance frequency is highly dependent on both the size and shape of a nanocrystal. An example of the latter is the marked difference in catalytic activity observed for different Pd nanoparticles. Such examples highlight the importance of particle shape in nanocrystalline materials and their practical applications. However, one may ask ‘how are nanoshapes created?’, ‘how does the shape relate to the atomic packing and crystallography of the material?’, ‘how can we control and characterize the external shape and crystal structure of such small nanocrystals?’. This feature article aims to give the reader an overview of important techniques, concepts and recent advances related to these questions. Nucleation, growth and how seed crystallography influences the final synthesis product are discussed, followed by shape prediction models based on seed crystallography and thermodynamic or kinetic parameters. The crystallographic implications of epitaxy and orientation in multilayered, core-shell nanoparticles are overviewed, and, finally, the development and implications of novel, spatially resolved analysis tools are discussed. PMID:25485133

  15. Pediatric Cardiopulmonary Resuscitation: Advances in Science, Techniques, and Outcomes

    PubMed Central

    Topjian, Alexis A.; Berg, Robert A.; Nadkarni, Vinay M.

    2009-01-01

    More than 25% of children survive to hospital discharge after in-hospital cardiac arrests, and 5% to 10% survive after out-of-hospital cardiac arrests. This review of pediatric cardiopulmonary resuscitation addresses the epidemiology of pediatric cardiac arrests, mechanisms of coronary blood flow during cardiopulmonary resuscitation, the 4 phases of cardiac arrest resuscitation, appropriate interventions during each phase, special resuscitation circumstances, extracorporeal membrane oxygenation cardiopulmonary resuscitation, and quality of cardiopulmonary resuscitation. The key elements of pathophysiology that impact and match the timing, intensity, duration, and variability of the hypoxic-ischemic insult to evidence-based interventions are reviewed. Exciting discoveries in basic and applied-science laboratories are now relevant for specific subpopulations of pediatric cardiac arrest victims and circumstances (eg, ventricular fibrillation, neonates, congenital heart disease, extracorporeal cardiopulmonary resuscitation). Improving the quality of interventions is increasingly recognized as a key factor for improving outcomes. Evolving training strategies include simulation training, just-in-time and just-in-place training, and crisis-team training. The difficult issue of when to discontinue resuscitative efforts is addressed. Outcomes from pediatric cardiac arrests are improving. Advances in resuscitation science and state-of-the-art implementation techniques provide the opportunity for further improvement in outcomes among children after cardiac arrest. PMID:18977991

  16. Development of advanced strain diagnostic techniques for reactor environments.

    SciTech Connect

    Fleming, Darryn D.; Holschuh, Thomas Vernon; Miller, Timothy J.; Hall, Aaron Christopher; Urrea, David Anthony; Parma, Edward J.

    2013-02-01

    The following research is operated as a Laboratory Directed Research and Development (LDRD) initiative at Sandia National Laboratories. The long-term goals of the program include sophisticated diagnostics of advanced fuels testing for nuclear reactors for the Department of Energy (DOE) Gen IV program, with the future capability to provide real-time measurement of strain in fuel rod cladding during operation in situ at any research or power reactor in the United States. By quantifying the stress and strain in fuel rods, it is possible to significantly improve fuel rod design, and consequently, to improve the performance and lifetime of the cladding. During the past year of this program, two sets of experiments were performed: small-scale tests to ensure reliability of the gages, and reactor pulse experiments involving the most viable samples in the Annular Core Research Reactor (ACRR), located onsite at Sandia. Strain measurement techniques that can provide useful data in the extreme environment of a nuclear reactor core are needed to characterize nuclear fuel rods. This report documents the progression of solutions to this issue that were explored for feasibility in FY12 at Sandia National Laboratories, Albuquerque, NM.

  17. Hybrid inverse lithography techniques for advanced hierarchical memories

    NASA Astrophysics Data System (ADS)

    Xiao, Guangming; Hooker, Kevin; Irby, Dave; Zhang, Yunqiang; Ward, Brian; Cecil, Tom; Hall, Brett; Lee, Mindy; Kim, Dave; Lucas, Kevin

    2014-03-01

    Traditional segment-based model-based OPC methods have been the mainstream mask layout optimization techniques in volume production for memory and embedded memory devices for many device generations. These techniques have been continually optimized over time to meet the ever increasing difficulties of memory and memory periphery patterning. There are a range of difficult issues for patterning embedded memories successfully. These difficulties include the need for a very high level of symmetry and consistency (both within memory cells themselves and between cells) due to circuit effects such as noise margin requirements in SRAMs. Memory cells and access structures consume a large percentage of area in embedded devices so there is a very high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control and process window requirements. Additionally, the range of interactions between mask synthesis corrections of neighboring areas can extend well beyond the size of the memory cell, making it difficult to fully take advantage of the inherent designed cell hierarchy in mask pattern optimization. This is especially true for non-traditional (i.e., less dependent on geometric rule) OPC/RET methods such as inverse lithography techniques (ILT) which inherently have more model-based decisions in their optimizations. New inverse methods such as model-based SRAF placement and ILT are, however, well known to have considerable benefits in finding flexible mask pattern solutions to improve process window, improve 2D CD control, and improve resolution in ultra-dense memory patterns. They also are known to reduce recipe complexity and provide native MRC compliant mask pattern solutions. Unfortunately, ILT is also known to be several times slower than traditional OPC methods due to the increased computational lithographic optimizations it performs. 
In this paper, we describe and present results for a methodology to

  18. Revision of the Fully technique for estimating statures.

    PubMed

    Raxter, Michelle H; Auerbach, Benjamin M; Ruff, Christopher B

    2006-07-01

    The "anatomical" method of Fully (1956 Ann. Legale Med. 35:266-273) for reconstructing stature, involving the addition of skeletal elements from the calcaneus to the skull, has been increasingly used in anthropological and forensic contexts, but has undergone little systematic testing on samples other than the original sample used to develop the technique. The original description by Fully of the method also does not provide completely explicit directions for taking all of the necessary measurements. This study tested the accuracy and applicability of his method, and clarified measurement procedures. The study sample consisted of 119 adult black and white males and females of known cadaveric statures from the Terry Collection. Cadaveric statures were adjusted to living statures, following the recommendations of Trotter and Gleser (1952 Am. J. Phys. Anthropol. 10:469-514). We obtained the best results using maximum vertebral body heights (anterior to the pedicles) and measurement of the articulated talus and calcaneus height in anatomical position. Statures derived using the original Fully technique are strongly correlated with living statures in our sample (r = 0.96), but underestimate living stature by an average of about 2.4 cm. Anatomical considerations also suggest that the correction factors applied by Fully to convert summed skeletal height to living stature are too small. New formulae are derived to calculate living stature from skeletal height. There is no effect of sex or ancestry on stature prediction. Resulting stature estimates are accurate to within 4.5 cm in 95% of the individuals in our sample, with no directional bias. PMID:16425177
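The anatomical method summarized above reduces to a simple computation: sum the measured heights of the skeletal elements from calcaneus to skull, then add a soft-tissue correction to obtain living stature. A minimal sketch, with hypothetical element values and correction factor (not the revised coefficients derived in the paper):

```python
# Illustrative sketch of the Fully "anatomical" stature method: living
# stature = summed skeletal element heights + soft-tissue correction.
# All numeric values below are hypothetical placeholders.

def skeletal_height(elements_cm):
    """Sum measured element heights (skull, vertebral column, sacrum,
    femur, tibia, articulated talus+calcaneus), in cm."""
    return sum(elements_cm.values())

def living_stature(skeletal_height_cm, soft_tissue_correction_cm=11.0):
    """Anatomical stature estimate = skeletal height + correction."""
    return skeletal_height_cm + soft_tissue_correction_cm

elements = {
    "skull": 13.0, "vertebral_column": 46.5, "sacrum_s1": 3.0,
    "femur": 44.0, "tibia": 36.5, "talus_calcaneus": 7.0,
}
skh = skeletal_height(elements)
stature = living_stature(skh)
```

The paper's finding that Fully's original corrections are too small corresponds to raising the correction term in a sketch like this.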

  19. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment. PMID:27441427

  20. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  1. Weldability and joining techniques for advanced fossil energy system alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Liu, W.; Yang, D.; Zhou, G.; Morrison, M.

    1998-05-01

    The efforts represent the concerns for the basic understanding of the weldability and fabricability of the advanced high temperature alloys needed to effect increases in the efficiency of the next generation of Fossil Energy Power Plants. The effort was divided into three tasks: the first dealt with the welding and fabrication behavior of 310HCbN (HR3C); the second detailed studies aimed at understanding the weldability of a newly developed 310TaN high temperature stainless steel (a modification of 310 stainless); and the third addressed the cladding of austenitic tubing with iron aluminide using the GTAW process. Task 1 consisted of microstructural studies on 310HCbN and the development of a Tube Weldability test which has applications to production welding techniques as well as laboratory weldability assessments. In addition, an evaluation of ex-service 310HCbN which showed fireside erosion and cracking at the attachment weld locations was conducted. Task 2 addressed the behavior of the newly developed 310TaN modification of standard 310 stainless steel and showed that the weldability was excellent and that the sensitization potential was minimal for normal welding and fabrication conditions. The microstructural evolution during elevated temperature testing was characterized, and the second-phase particles evolved upon aging were identified. Task 3 details the investigation undertaken to clad 310HCbN tubing with iron aluminide and the welding conditions developed to provide a crack-free cladding. The work showed that both a preheat and a post-heat were necessary for crack-free deposits; the effect of a third element on the cracking potential was defined, together with the effect of the aluminum level for optimum weldability.

  2. Implementation of MASW and waveform inversion techniques for new seismic hazard estimation technique

    NASA Astrophysics Data System (ADS)

    Abd el-aal, Abd el-aziz; Kamal, Heba

    2016-04-01

    In this contribution, an integrated multi-channel analysis of surface waves (MASW) technique is applied to explore the geotechnical parameters of subsurface layers at the Zafarana Wind Farm site. The study area includes many active fault systems along the Gulf of Suez that cause many moderate and large earthquakes. Overall, the seismic activity of the area has recently become better understood following the use of a waveform inversion method and software to develop accurate focal mechanism solutions for recently recorded earthquakes around the study area. These earthquakes resulted in major stress-drops in the Eastern Desert and the Gulf of Suez area. These findings have helped to reshape the understanding of the seismotectonic environment of the Gulf of Suez area, which is a perplexing tectonic domain. Based on the newly collected information and data, this study uses a new extended stochastic technique to re-examine the seismic hazard for the Gulf of Suez region, particularly the wind turbine tower sites at the Zafarana Wind Farm and its vicinity. The essential aim of the extended stochastic technique is to obtain and simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is to define the seismic sources that most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, site amplification is incorporated in calculating the peak ground acceleration (PGA) at each site of interest. Key words: MASW, waveform inversion, extended stochastic technique, Zafarana Wind Farm
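The last two steps of the procedure described above (empirical attenuation, then site amplification) can be sketched as follows. Every coefficient here is a hypothetical placeholder, not the attenuation relation actually used for the Zafarana study:

```python
import math

# Sketch: a generic empirical attenuation relationship gives bedrock peak
# ground acceleration (PGA) from magnitude M and distance R, and a site
# amplification factor scales it. All coefficients are hypothetical.

def bedrock_pga(magnitude, distance_km, a=-3.5, b=0.85, c=1.2, d=10.0):
    # ln(PGA) = a + b*M - c*ln(R + d): grows with magnitude, decays with distance
    return math.exp(a + b * magnitude - c * math.log(distance_km + d))

def site_pga(magnitude, distance_km, amplification=1.8):
    # site response scales the bedrock motion
    return amplification * bedrock_pga(magnitude, distance_km)
```

The placeholder form reproduces the qualitative behavior only: PGA increases with magnitude and attenuates with distance, and the site factor amplifies the bedrock value.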

  3. Investigation of joining techniques for advanced austenitic alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Kikuchi, Y.; Shi, C.; Gill, T.P.S.

    1991-05-01

    Modified Alloys 316 and 800H, designed for high temperature service, have been developed at Oak Ridge National Laboratory. Assessment of the weldability of the advanced austenitic alloys has been conducted at the University of Tennessee. Four aspects of weldability of the advanced austenitic alloys were included in the investigation.

  4. Basic parameter estimation of binary neutron star systems by the Advanced LIGO/Virgo network

    SciTech Connect

    Rodriguez, Carl L.; Farr, Benjamin; Raymond, Vivien; Farr, Will M.; Littenberg, Tyson B.; Fazi, Diego; Kalogera, Vicky

    2014-04-01

    Within the next five years, it is expected that the Advanced LIGO/Virgo network will have reached a sensitivity sufficient to enable the routine detection of gravitational waves. Beyond the initial detection, the scientific promise of these instruments relies on the effectiveness of our physical parameter estimation capabilities. A major part of this effort has been toward the detection and characterization of gravitational waves from compact binary coalescence, e.g., the coalescence of binary neutron stars. While several previous studies have investigated the accuracy of parameter estimation with advanced detectors, the majority have relied on approximation techniques such as the Fisher Matrix which are insensitive to the non-Gaussian nature of the gravitational wave posterior distribution function. Here we report average statistical uncertainties that will be achievable for strong detection candidates (S/N = 20) over a comprehensive sample of source parameters. We use the Markov Chain Monte Carlo based parameter estimation software developed by the LIGO/Virgo Collaboration with the goal of updating the previously quoted Fisher Matrix bounds. We find the recovery of the individual masses to be fractionally within 9% (15%) at the 68% (95%) credible intervals for equal-mass systems, and within 1.9% (3.7%) for unequal-mass systems. We also find that the Advanced LIGO/Virgo network will constrain the locations of binary neutron star mergers to a median uncertainty of 5.1 deg² (13.5 deg²) on the sky. This region is improved to 2.3 deg² (6 deg²) with the addition of the proposed LIGO India detector to the network. We also report the average uncertainties on the luminosity distances and orbital inclinations of strong detections that can be achieved by different network configurations.

  5. Estimation of ambient BVOC emissions using remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Nichol, Janet; Wong, Man Sing

    2011-06-01

    The contribution of Biogenic Volatile Organic Compounds (BVOCs) to local air quality modelling is often ignored due to the difficulty of obtaining accurate spatial estimates of emissions. Yet their role in the formation of secondary aerosols and photochemical smog is thought to be significant, especially in hot tropical cities such as Hong Kong, which are situated downwind from dense forests. This paper evaluates Guenther et al.'s [Guenther, A., Hewitt, C.N., Erickson, D., Fall, R., Geron, C., Graedel, T.E., Harley, P., Klinger, L., Lerdau, M., McKay, W.A., Pierce, T., Scholes, B., Steinbrecher, R., Tallamraju, R., Taylor, J., Zimmerman, P., 1995. A global model of natural volatile organic compound emissions. Journal of Geophysical Research 100, 8873-8892] global model of BVOC emissions, for application at a spatially detailed level to Hong Kong's tropical forested landscape using high resolution remote sensing and ground data. The emission estimates are based on a landscape approach which assigns emission rates directly to ecosystem types not to individual species, since unlike in temperate regions where one or two single species may dominate over large regions, Hong Kong's vegetation is extremely diverse with up to 300 different species in one hectare. The resulting BVOC emission maps are suitable for direct input to regional and local air quality models giving 10 m raster output on an hourly basis over the whole of the Hong Kong territory, an area of 1100 km². Due to the spatially detailed mapping of isoprene emissions over the study area, it was possible to validate the model output using field data collected at a precise time and place by replicating those conditions in the model. The field measurement of emissions used for validating the model was based on a canister sampling technique, undertaken under different climatic conditions for Hong Kong's main ecosystem types in both urban and rural areas. The model-derived BVOC flux distributions appeared to be
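The Guenther et al. (1995) model evaluated here scales a plant- or ecosystem-specific emission capacity by light and temperature activity factors. A sketch of the isoprene factors using the constants of the published formulation; treat this as illustrative, not a validated implementation:

```python
import math

# Sketch of the Guenther et al. (1995) isoprene emission model:
# emission = emission_capacity * foliar_density * gamma, gamma = C_L * C_T.

ALPHA, C_L1 = 0.0027, 1.066        # empirical light-response constants
C_T1, C_T2 = 95000.0, 230000.0     # J/mol
T_S, T_M, R = 303.0, 314.0, 8.314  # standard temp (K), optimum (K), gas constant

def canopy_light_factor(par):
    """C_L for photosynthetically active radiation (umol m-2 s-1)."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_factor(t_leaf):
    """C_T for leaf temperature in kelvin."""
    num = math.exp(C_T1 * (t_leaf - T_S) / (R * T_S * t_leaf))
    den = 1.0 + math.exp(C_T2 * (t_leaf - T_M) / (R * T_S * t_leaf))
    return num / den

def isoprene_emission(capacity_ug_g_h, foliar_density_g_m2, par, t_leaf):
    return (capacity_ug_g_h * foliar_density_g_m2 *
            canopy_light_factor(par) * temperature_factor(t_leaf))
```

In the landscape approach described above, the capacity and foliar density would be assigned per ecosystem type rather than per species.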

  6. Modified Multilook Cross Correlation technique for Doppler centroid estimation in SAR image signal processing

    NASA Astrophysics Data System (ADS)

    Bee Cheng, Sew

    Synthetic Aperture Radar (SAR) is one of the widely used remote sensing sensors, producing high resolution images by means of advanced signal processing techniques. SAR can operate in all weather conditions and cover wide areas. To produce a high-quality image, accurate parameters such as the Doppler centroid are required for precise SAR signal processing. In the azimuth matched filtering of SAR signal processing, the Doppler centroid is an important azimuth parameter that helps to focus the image pixels. The Doppler centroid has often been overlooked during SAR signal processing, because its estimation involves complicated calculation and increases the computational load. Researchers therefore often apply only an approximate Doppler value, which is imprecise and causes defocusing in the generated SAR image. In this study, several conventional Doppler centroid estimation algorithms are reviewed and implemented in Matlab to extract the Doppler parameter from received SAR data, namely the Spectrum Fit Algorithm, the Wavelength Diversity Algorithm (WDA), the Multilook Cross Correlation Algorithm (MLCC), and the Multilook Beat Frequency Algorithm (MLBF). Two sets of SAR data are employed to evaluate the performance of each estimator, i.e. simulated point-target data and RADARSAT-1 Vancouver scene raw data. These experiments indicate the accuracy of the estimated results together with the computational time consumed. The point target is simulated to generate ideal-case SAR data with pre-defined SAR system parameters.
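The correlation-based estimators reviewed above (MLCC and MLBF build on this idea) exploit the fact that the average cross-correlation between adjacent azimuth samples has a phase angle proportional to the Doppler centroid. A minimal single-look sketch on a synthetic azimuth line:

```python
import cmath
import math

# Sketch of a basic correlation-based Doppler centroid estimator: the
# phase of the average cross-correlation at lag one, scaled by PRF/(2*pi),
# gives the centroid (valid while |f_dc| < PRF/2).

def doppler_centroid(azimuth_samples, prf):
    acc = sum(s0.conjugate() * s1
              for s0, s1 in zip(azimuth_samples, azimuth_samples[1:]))
    return prf * cmath.phase(acc) / (2.0 * math.pi)

# Synthetic azimuth line with a known 300 Hz centroid, PRF = 1700 Hz.
prf, f_dc = 1700.0, 300.0
signal = [cmath.exp(2j * math.pi * f_dc * n / prf) for n in range(256)]
estimate = doppler_centroid(signal, prf)
```

Real SAR data adds noise and multiple range looks, which is what the multilook algorithms average over; the PRF and centroid values here are assumptions for illustration.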

  7. Technique for estimating depth of floods in Tennessee

    USGS Publications Warehouse

    Gamble, C.R.

    1983-01-01

    Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
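The depth equations described above are regional power functions of drainage basin size, with interpolation between the 10-year and 100-year depths for intermediate recurrence intervals. A sketch with hypothetical coefficients (not the published Tennessee values), using log-linear interpolation in recurrence interval:

```python
import math

# Sketch: regional flood-depth equation as a power function of drainage
# area, plus log-linear interpolation between the 10-yr and 100-yr depths.
# Coefficients a and b are hypothetical placeholders.

def flood_depth(drainage_area_sqmi, a, b):
    return a * drainage_area_sqmi ** b

def interpolate_depth(d10, d100, recurrence_yr):
    # log10(10) = 1 maps to d10; log10(100) = 2 maps to d100
    frac = math.log10(recurrence_yr) - 1.0
    return d10 + frac * (d100 - d10)
```

The interpolation form is one plausible reading of "a method for estimating depth of floods having recurrence intervals between 10 and 100 years"; the report itself should be consulted for the actual method.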

  8. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model through probabilistic-based estimates, and how risk is addressed: cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  9. Hybrid estimation technique for predicting butene concentration in polyethylene reactor

    NASA Astrophysics Data System (ADS)

    Mohd Ali, Jarinah; Hussain, M. A.

    2016-03-01

    A component of artificial intelligence (AI), fuzzy logic, is combined with the conventional sliding mode observer (SMO) to establish a hybrid estimator that predicts the butene concentration in the polyethylene production reactor. Butene (co-monomer) concentration is a significant parameter in the polymerization process, since it affects the molecular weight distribution of the polymer produced. The hybrid estimator offers a straightforward formulation of the SMO and its combination with the fuzzy logic rules. The error resulting from the SMO estimation is manipulated using the fuzzy rules to enhance performance, thus improving the convergence rate. The hybrid scheme is able to estimate the butene concentration satisfactorily despite the presence of noise in the process.
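A toy illustration of the hybrid idea follows, with a scalar system and coarse threshold rules standing in for a full fuzzy inference system. This is a schematic of the concept only, not the authors' reactor model:

```python
# Schematic of the hybrid idea: a sliding mode observer (SMO) corrects a
# prediction with a switching term, and a coarse fuzzy-style rule scales
# the observer gain from the innovation magnitude.

def fuzzy_gain(abs_error, base_gain=0.5):
    if abs_error > 1.0:   # "large error" rule: act aggressively
        return 4.0 * base_gain
    if abs_error > 0.1:   # "medium error" rule
        return 2.0 * base_gain
    return base_gain      # "small error" rule: limit chattering

def smo_estimate(measurements, x0=0.0, dt=0.1):
    x_hat, history = x0, []
    for y in measurements:
        err = y - x_hat
        sign = (err > 0) - (err < 0)
        x_hat += dt * fuzzy_gain(abs(err)) * sign  # switching correction
        history.append(x_hat)
    return history

# Track a constant true concentration of 2.0 from a zero initial guess.
est = smo_estimate([2.0] * 200)
```

The gain scheduling is the fuzzy contribution: large innovations speed convergence, small ones shrink the switching step to reduce chattering around the sliding surface.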

  10. Recent advances in sample preparation techniques for effective bioanalytical methods.

    PubMed

    Kole, Prashant Laxman; Venkatesh, Gantala; Kotecha, Jignesh; Sheshala, Ravi

    2011-01-01

    This paper reviews recent developments in bioanalytical sample preparation techniques and gives an update on basic principles, theory, applications and possibilities for automation, together with a comparative discussion of the advantages and limitations of each technique. Conventional liquid-liquid extraction (LLE), protein precipitation (PP) and solid-phase extraction (SPE) techniques are now considered methods of the past. The last decade has witnessed a rapid development of novel sample preparation techniques in bioanalysis. Developments in SPE techniques, such as selective sorbents, and in the overall approach to SPE, such as hybrid SPE and molecularly imprinted polymer SPE, are addressed. Considerable literature has been published in the area of solid-phase micro-extraction and its different versions, e.g. stir bar sorptive extraction, and their application in the development of selective and sensitive bioanalytical methods. Techniques such as dispersive solid-phase extraction, disposable pipette extraction and micro-extraction by packed sorbent offer a variety of extraction phases and provide unique advantages to bioanalytical methods. On-line SPE utilizing column-switching techniques is rapidly gaining acceptance in bioanalytical applications. PP sample preparation techniques such as PP filter plates/tubes offer many advantages, like the removal of phospholipids and proteins in plasma/serum. Newer approaches to conventional LLE techniques (salting-out LLE) are also covered in this review article. PMID:21154887

  11. Techniques for estimating flood-frequency discharges for streams in Iowa

    USGS Publications Warehouse

    Eash, David A.

    2001-01-01

    Techniques for estimating flood-frequency discharges for streams in Iowa are presented for determining (1) regional regression estimates for ungaged sites on ungaged streams; (2) weighted estimates for gaged sites; and (3) weighted estimates for ungaged sites on gaged streams. The technique for determining regional regression estimates for ungaged sites on ungaged streams requires determining which of four possible examples applies to the location of the stream site and its basin. Illustrations for determining which example applies to an ungaged stream site and for applying both the one-variable and multi-variable regression equations are provided for the estimation techniques.
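For gaged sites, the standard weighting combines the station (gage) estimate and the regression estimate in proportion to the gaged record length and the equivalent years of record credited to the regression equation. A sketch with illustrative numbers, not values taken from the Iowa report:

```python
# Sketch of record-length weighting for a gaged site: the gage estimate
# (from n_years of record) and the regression estimate (credited with
# eq_years of equivalent record) are averaged with those weights.

def weighted_discharge(q_gage, n_years, q_regression, eq_years):
    return (q_gage * n_years + q_regression * eq_years) / (n_years + eq_years)

# Illustrative numbers: 30 years of record vs. 10 equivalent years.
q_w = weighted_discharge(q_gage=5000.0, n_years=30,
                         q_regression=4000.0, eq_years=10)
```

With these assumed inputs the weighted estimate sits three-quarters of the way toward the gage value, reflecting the longer station record.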

  12. Advances in Focal Plane Wavefront Estimation for Directly Imaging Exoplanets

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Groff, Tyler Dean

    2015-01-01

    To image cold exoplanets directly in visible light, an instrument on a telescope needs to suppress starlight by about 9 orders of magnitude at small separations from the star. A coronagraph changes the point spread function to create regions of high contrast where exoplanets or disks can be seen. Aberrations on the optics degrade the contrast by several orders of magnitude, so all high-contrast imaging systems incorporate one or more deformable mirrors (DMs) to recover regions of high contrast. With a coronagraphic instrument planned for the WFIRST-AFTA space telescope, there is a pressing need for faster, more robust estimation and control schemes for the DMs. Non-common path aberrations limit conventional phase conjugation schemes to medium star-to-planet contrast ratios of about 1e-6. High-contrast imaging requires estimation and control of both phase and amplitude in the same beam path as the science camera. Field estimation is a challenge since only intensity is measured; the most common approach, including that planned for WFIRST-AFTA, is to use DMs to create diversity, via pairs of small probe shapes, thereby allowing disambiguation of the electric field. Most implementations of DM Diversity require at least five images per electric field estimate and require narrowband measurements. This paper describes our new estimation algorithms that improve the speed (by using fewer images) and bandwidth of focal plane wavefront estimation. For narrowband estimation, we are testing nonlinear, recursive algorithms such as an iterative extended Kalman filter (IEKF) to use three images each iteration and build better, more robust estimates. We are also exploring the use of broadband estimation without the need for narrowband sub-filters and measurements. Here we present simulations of these algorithms with realistic noise and small signals to show how they might perform for WFIRST-AFTA. Once validated in simulations, we will test these algorithms experimentally in
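Per pixel, the pair-probe (DM diversity) estimation described above reduces to a small linear system: each probe pair yields a difference image proportional to the real part of the field times the probe's conjugate, so two probes with known complex amplitudes determine the field. A sketch with idealized probe amplitudes (real values here are assumptions for illustration):

```python
# Sketch of pair-probe field estimation at one focal-plane pixel: each
# positive/negative probe pair gives dI_k = 4 * Re(E * conj(p_k)), so two
# probes form a 2x2 linear system for Re(E) and Im(E).

def estimate_field(dI1, dI2, p1, p2):
    # [[4*Re p1, 4*Im p1], [4*Re p2, 4*Im p2]] @ [Re E, Im E] = [dI1, dI2]
    a, b = 4.0 * p1.real, 4.0 * p1.imag
    c, d = 4.0 * p2.real, 4.0 * p2.imag
    det = a * d - b * c
    re_e = (d * dI1 - b * dI2) / det
    im_e = (a * dI2 - c * dI1) / det
    return complex(re_e, im_e)

# Forward-simulate the two difference images for a known field E.
E, p1, p2 = 0.3 - 0.4j, 1.0 + 0.0j, 0.0 + 1.0j
dI1 = 4.0 * (E * p1.conjugate()).real
dI2 = 4.0 * (E * p2.conjugate()).real
E_hat = estimate_field(dI1, dI2, p1, p2)
```

Real systems need more image pairs and noise handling, which is exactly what the recursive IEKF approach described in the abstract is meant to reduce.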

  13. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
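    The bound-narrowing idea can be illustrated with a toy sampler. This is not the authors' LOBARE implementation: it simply draws uniform samples inside the current "parent" bounds, keeps the fittest fraction, and shrinks the bounds to enclose them, iterating until the sampling space contracts around a good parameter set. All settings below are illustrative.

```python
import random

# Toy illustration of iteratively narrowing parameter bounds around the
# fittest samples (in the spirit of, but not identical to, LOBARE).

def shrink_bounds(objective, lo, hi, n_samples=200, keep=0.2, iters=8, seed=1):
    rng = random.Random(seed)
    for _ in range(iters):
        pts = [[rng.uniform(l, h) for l, h in zip(lo, hi)]
               for _ in range(n_samples)]
        pts.sort(key=objective)                       # lower objective = fitter
        elite = pts[: max(2, int(keep * n_samples))]
        # Update the "parent" bounds to enclose the elite set per dimension.
        lo = [min(p[d] for p in elite) for d in range(len(lo))]
        hi = [max(p[d] for p in elite) for d in range(len(hi))]
    return lo, hi

# Example: home in on the minimum of a quadratic with optimum at (2, -1).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
lo, hi = shrink_bounds(f, [-10.0, -10.0], [10.0, 10.0])
```

    After a few iterations the bounds are tight around the optimum, which is the convergence behavior the abstract describes.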

  14. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate temperature at unallocated meteorological stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable for estimating temperature in January and December, while RBF with a multiquadric model is suitable for the remaining months.
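    A minimal sketch of the IDW half of this comparison, with leave-one-out RMSE as the accuracy measure: the station coordinates and temperatures below are made-up illustrative data, not the Malaysian records.

```python
import math

# Inverse Distance Weighted (IDW) interpolation with leave-one-out RMSE.

def idw(x, y, stations, power=2.0):
    """Estimate the value at (x, y) from (sx, sy, value) station tuples."""
    num = den = 0.0
    for sx, sy, t in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return t                      # exact hit on a station
        w = d ** -power
        num += w * t
        den += w
    return num / den

def loo_rmse(stations, power=2.0):
    """Leave-one-out cross-validated RMSE of the interpolator."""
    errs = []
    for i, (sx, sy, t) in enumerate(stations):
        rest = stations[:i] + stations[i + 1:]
        errs.append(idw(sx, sy, rest, power) - t)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical station grid with monthly mean temperatures (deg C).
stations = [(0, 0, 27.1), (1, 0, 27.8), (0, 1, 26.9),
            (1, 1, 27.5), (0.5, 0.5, 27.3)]
rmse = loo_rmse(stations)
```

    An RBF variant would replace the inverse-distance weights with a kernel such as the thin plate spline φ(r) = r² ln r and solve a linear system for the kernel coefficients; comparing the two RMSE values mirrors the evaluation in the paper.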

  15. IMPROVED TECHNIQUE FOR ESTIMATING MEAN DEPTHS OF LAKES

    EPA Science Inventory

    The authors describe a technique for determining mean lake depth utilizing a systematically aligned dot grid. This technique is, on the average, 55% faster than the traditional planimeter methods, depending on the type of planimeter and the size and complexity of the lake. No det...

  16. Recent advances in microscopic techniques for visualizing leukocytes in vivo

    PubMed Central

    Jain, Rohit; Tikoo, Shweta; Weninger, Wolfgang

    2016-01-01

    Leukocytes are inherently motile and interactive cells. Recent advances in intravital microscopy approaches have enabled a new vista of their behavior within intact tissues in real time. This brief review summarizes the developments enabling the tracking of immune responses in vivo. PMID:27239292

  17. Recent advances in microscopic techniques for visualizing leukocytes in vivo.

    PubMed

    Jain, Rohit; Tikoo, Shweta; Weninger, Wolfgang

    2016-01-01

    Leukocytes are inherently motile and interactive cells. Recent advances in intravital microscopy approaches have enabled a new vista of their behavior within intact tissues in real time. This brief review summarizes the developments enabling the tracking of immune responses in vivo. PMID:27239292

  18. Bricklaying Curriculum: Advanced Bricklaying Techniques. Instructional Materials. Revised.

    ERIC Educational Resources Information Center

    Turcotte, Raymond J.; Hendrix, Laborn J.

    This curriculum guide is designed to assist bricklaying instructors in providing performance-based instruction in advanced bricklaying. Included in the first section of the guide are units on customized or architectural masonry units; glass block; sills, lintels, and copings; and control (expansion) joints. The next two units deal with cut,…

  19. Advanced NDE techniques for quantitative characterization of aircraft

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.; Winfree, William P.

    1990-01-01

    Recent advances in nondestructive evaluation (NDE) at NASA Langley Research Center and their applications that have resulted in quantitative assessment of material properties based on thermal and ultrasonic measurements are reviewed. Specific applications include ultrasonic determination of bolt tension, ultrasonic and thermal characterization of bonded layered structures, characterization of composite materials, and disbonds in aircraft skins.

  20. Weight estimation techniques for composite airplanes in general aviation industry

    NASA Technical Reports Server (NTRS)

    Paramasivam, T.; Horn, W. J.; Ritter, J.

    1986-01-01

    Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.
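    Weight equations of this kind are typically power laws fitted by regression. The following sketch, with hypothetical data (not the twelve-airplane database used in the paper), shows how a coefficient and exponent of W = a·Sᵇ can be recovered by ordinary least squares on log-transformed data.

```python
import math

# Fit a power-law weight equation W = a * S**b by linear regression on logs:
# ln W = ln a + b * ln S.

def fit_power_law(xs, ws):
    lx = [math.log(x) for x in xs]
    lw = [math.log(w) for w in ws]
    n = len(xs)
    mx, mw = sum(lx) / n, sum(lw) / n
    b = (sum((u - mx) * (v - mw) for u, v in zip(lx, lw))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(mw - b * mx)
    return a, b

# Synthetic data generated exactly from W = 2.5 * S**1.2, so the fit is exact.
sizes = [50.0, 100.0, 150.0, 200.0]
weights = [2.5 * s ** 1.2 for s in sizes]
a, b = fit_power_law(sizes, weights)
```

    With real data the residuals would not vanish, and material properties would enter as additional regressors, as the abstract describes.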

  1. Backscattered Electron Microscopy as an Advanced Technique in Petrography.

    ERIC Educational Resources Information Center

    Krinsley, David Henry; Manley, Curtis Robert

    1989-01-01

    Three uses of this method with sandstone, desert varnish, and granite weathering are described. Background information on this technique is provided. Advantages of this type of microscopy are stressed. (CW)

  2. A Secure Test Technique for Pipelined Advanced Encryption Standard

    NASA Astrophysics Data System (ADS)

    Shi, Youhua; Togawa, Nozomu; Yanagisawa, Masao; Ohtsuki, Tatsuo

    In this paper, we present a Design-for-Secure-Test (DFST) technique for pipelined AES that guarantees both security and test quality during testing. Unlike previous works, the proposed method keeps all secrets inside the circuit while providing high test quality and fault diagnosis capability. Furthermore, the proposed DFST technique can significantly reduce test application time, test data volume, and test generation effort as additional benefits.

  3. Coal and Coal Constituent Studies by Advanced EMR Techniques.

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.; Ceroke, P.J.

    1997-09-30

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, progress was made on a high frequency EMR system particularly appropriate for such studies and on low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles.

  4. Coal and char studies by advanced EMR techniques

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.M.

    1998-09-30

    Advanced magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, further progress was made on proton NMR and low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles. Effects of char particle size on water nuclear spin relaxation, T2, were measured.

  5. COAL AND COAL CONSTITUENT STUDIES BY ADVANCED EMR TECHNIQUES

    SciTech Connect

    R. Linn Belford; Robert B. Clarkson

    1997-03-28

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, progress was made on setting up a separate high frequency EMR system particularly appropriate for such studies and exploring the use of low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles.

  6. Coal and char studies by advanced EMR techniques

    SciTech Connect

    Belford, R.L.; Clarkson, R.B.; Odintsov, B.M.

    1999-03-31

    Advanced magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. During this grant period, further progress was made on proton NMR and low-frequency dynamic nuclear polarization (DNP) to examine the interaction between fluids such as water and the surface of suspended char particles. Effects of char particle size and type on water nuclear spin relaxation, T2, were measured and modeled.

  7. Estimation of the elastic Earth parameters from the SLR technique

    NASA Astrophysics Data System (ADS)

    Rutkowska, Milena

    The global elastic parameters (Love and Shida numbers) associated with the tide variations for satellite and stations are estimated from Satellite Laser Ranging (SLR) data. The study is based on satellite observations taken by the global network of ground stations during the period from January 1, 2005 until January 1, 2007, for monthly orbital arcs of the Lageos 1 satellite. The observation equations contain unknowns for the orbital arcs, some constants, and the elastic Earth parameters which describe tide variations. The adjusted values are discussed and compared with geophysical estimates of Love numbers. All computations were performed employing the NASA software GEODYN II (Eddy et al. 1990).

  8. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
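    A simplified sketch of the ensemble statistics described above (not Levison's exact procedure): for a sum-of-sines experiment, each trial yields a complex describing function at an input frequency, i.e. the ratio of the output and input Fourier coefficients; gain and phase are then averaged across trials and a standard error of the mean is computed. The operator model and noise level below are assumed for illustration.

```python
import cmath
import math
import random

def fourier_coeff(signal, freq_hz, dt):
    """Fourier coefficient of a sampled signal at freq_hz (whole cycles)."""
    n = len(signal)
    return (2.0 / n) * sum(s * cmath.exp(-2j * math.pi * freq_hz * k * dt)
                           for k, s in enumerate(signal))

def describing_function(u, y, freq_hz, dt):
    """Complex gain from input u to output y at one forcing frequency."""
    return fourier_coeff(y, freq_hz, dt) / fourier_coeff(u, freq_hz, dt)

rng = random.Random(0)
f, dt, n = 0.5, 0.01, 2000                 # 10 whole cycles of a 0.5 Hz sine
true_gain, true_phase = 2.0, -0.6          # assumed operator gain/phase (rad)
gains, phases = [], []
for _ in range(8):                         # 8 experimental trials
    u = [math.sin(2 * math.pi * f * k * dt) for k in range(n)]
    y = [true_gain * math.sin(2 * math.pi * f * k * dt + true_phase)
         + rng.gauss(0.0, 0.05) for k in range(n)]
    Y = describing_function(u, y, f, dt)
    gains.append(abs(Y))
    phases.append(cmath.phase(Y))

mean_gain = sum(gains) / len(gains)
mean_phase = sum(phases) / len(phases)
se_gain = math.sqrt(sum((g - mean_gain) ** 2 for g in gains)
                    / (len(gains) - 1) / len(gains))
```

    The standard error quantifies trial-to-trial variability of the gain estimate, which is the reliability question the abstract addresses.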

  9. DEVELOPING SEASONAL AMMONIA EMISSION ESTIMATES WITH AN INVERSE MODELING TECHNIQUE

    EPA Science Inventory

    Significant uncertainty exists in magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural non-point sources. We sus...

  10. Estimating Returns to Education Using Different Natural Experiment Techniques

    ERIC Educational Resources Information Center

    Leigh, Andrew; Ryan, Chris

    2008-01-01

    How much do returns to education differ across different natural experiment methods? To test this, we estimate the rate of return to schooling in Australia using two different instruments for schooling: month of birth and changes in compulsory schooling laws. With annual pre-tax income as our measure of income, we find that the naive ordinary…

  11. Metamodels for Ozone: Comparison of Three Estimation Techniques

    EPA Science Inventory

    A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techn...

  12. The estimation technique of the airframe design for manufacturability

    NASA Astrophysics Data System (ADS)

    Govorkov, A.; Zhilyaev, A.

    2016-04-01

    This paper discusses a method for the quantitative estimation of the design for manufacturability of airframe parts. The method combines individual indicators, each considered with a weighting factor. The authors introduce an algorithm for assessing the design for manufacturability of parts based on their 3D models.
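    The weighted combination of indicators can be sketched as a normalized weighted sum. The indicator names, values, and weights below are hypothetical, chosen only to show the arithmetic.

```python
# Composite manufacturability index: indicators K_i in [0, 1] combined
# with weighting factors w_i as sum(w_i * K_i) / sum(w_i).

def manufacturability_index(indicators, weights):
    assert len(indicators) == len(weights)
    return sum(k * w for k, w in zip(indicators, weights)) / sum(weights)

# Made-up indicator values for one airframe part (labels are illustrative).
indicators = {"material_use": 0.82, "setup_count": 0.60, "tooling": 0.75}
weights = {"material_use": 0.5, "setup_count": 0.3, "tooling": 0.2}
keys = list(indicators)
score = manufacturability_index([indicators[k] for k in keys],
                                [weights[k] for k in keys])
```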

  13. Nondestructive Evaluation of Thick Concrete Using Advanced Signal Processing Techniques

    SciTech Connect

    Clayton, Dwight A; Barker, Alan M; Santos-Villalobos, Hector J; Albright, Austin P; Hoegh, Kyle; Khazanovich, Lev

    2015-09-01

    The purpose of the U.S. Department of Energy Office of Nuclear Energy’s Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years [1]. Since many important safety structures in an NPP are constructed of concrete, inspection techniques must be developed and tested to evaluate the internal condition. In-service containment structures generally do not allow for the destructive measures necessary to validate the accuracy of these inspection techniques. This creates a need for comparative testing of the various nondestructive evaluation (NDE) measurement techniques on concrete specimens with known material properties, voids, internal microstructure flaws, and reinforcement locations.

  14. Advanced implementations of the iterative multi region technique

    NASA Astrophysics Data System (ADS)

    Kaburcuk, Fatih

    The integration of the finite-difference time-domain (FDTD) method into the iterative multi-region (IMR) technique, an iterative approach used to solve large-scale electromagnetic scattering and radiation problems, is presented in this dissertation. The idea of the IMR technique is to divide a large problem domain into smaller subregions, solve each subregion separately, and combine the solutions of subregions after introducing the effect of interaction to obtain solutions at multiple frequencies for the large domain. Solution of the subregions using the frequency domain solvers has been the preferred approach as such solutions using time domain solvers require computationally expensive bookkeeping of time signals between subregions. In this contribution we present an algorithm that makes it feasible to use the FDTD method, a time domain numerical technique, in the IMR technique to obtain solutions at a pre-specified number of frequencies in a single simulation. As a result, a considerable reduction in memory storage requirements and computation time is achieved. A hybrid method integrated into the IMR technique is also presented in this work. This hybrid method combines the desirable features of the method of moments (MoM) and the FDTD method to solve large-scale radiation problems more efficiently. The idea of this hybrid method based on the IMR technique is to divide an original problem domain into unconnected subregions and use the more appropriate method in each domain. The most prominent feature of this proposed method is to obtain solutions at multiple frequencies in a single IMR simulation by constructing time-limited waveforms. The performance of the proposed method is investigated numerically using different configurations composed of two, three, and four objects.

  15. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. PMID:25693632
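    The dependence of the estimate on captures per individual can be illustrated with a simple moment-based estimator (deliberately simpler than program capwire's likelihood models): under equal catchability, t total captures from a population of N animals yield an expected number of distinct individuals E[k] = N·(1 − (1 − 1/N)ᵗ), which can be inverted for N by bisection. The sample counts below are invented.

```python
# Moment-based abundance estimate from counts of distinct genotypes.
# E[k] = N * (1 - (1 - 1/N)**t) is monotone increasing in N, so bisect.

def estimate_abundance(k_distinct, t_captures, n_max=10_000):
    def expected_distinct(n):
        return n * (1.0 - (1.0 - 1.0 / n) ** t_captures)
    lo, hi = float(k_distinct), float(n_max)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if expected_distinct(mid) < k_distinct:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: 100 genetic samples yielding 29 distinct individuals, i.e.
# roughly 3.4 captures/individual as in the simulations described above.
n_hat = estimate_abundance(29, 100)
```

    More captures per individual shrink the uncertainty of this inversion, which is why the authors' simulations tie the target CV to captures/individual.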

  16. Application of advanced coating techniques to rocket engine components

    NASA Technical Reports Server (NTRS)

    Verma, S. K.

    1988-01-01

    The materials problem in the space shuttle main engine (SSME) is reviewed. Potential coatings and the method of their application for improved life of SSME components are discussed. A number of advanced coatings for turbine blade components and disks are being developed and tested in a multispecimen thermal fatigue fluidized bed facility at IIT Research Institute. This facility is capable of producing severe strains of the degree present in blades and disk components of the SSME. The potential coating systems and current efforts at IITRI being taken for life extension of the SSME components are summarized.

  17. Transcranial Doppler: Techniques and advanced applications: Part 2

    PubMed Central

    Sharma, Arvind K.; Bathala, Lokesh; Batra, Amit; Mehndiratta, Man Mohan; Sharma, Vijay K.

    2016-01-01

    Transcranial Doppler (TCD) is the only diagnostic tool that can provide continuous information about cerebral hemodynamics in real time and over extended periods. In the previous paper (Part 1), we have already presented the basic ultrasound physics pertaining to TCD, insonation methods, and various flow patterns. This article describes various advanced applications of TCD such as detection of right-to-left shunt, emboli monitoring, vasomotor reactivity (VMR), monitoring of vasospasm in subarachnoid hemorrhage (SAH), monitoring of intracranial pressure, its role in stroke prevention in sickle cell disease, and as a supplementary test for confirmation of brain death. PMID:27011639

  18. Transcranial Doppler: Techniques and advanced applications: Part 2.

    PubMed

    Sharma, Arvind K; Bathala, Lokesh; Batra, Amit; Mehndiratta, Man Mohan; Sharma, Vijay K

    2016-01-01

    Transcranial Doppler (TCD) is the only diagnostic tool that can provide continuous information about cerebral hemodynamics in real time and over extended periods. In the previous paper (Part 1), we have already presented the basic ultrasound physics pertaining to TCD, insonation methods, and various flow patterns. This article describes various advanced applications of TCD such as detection of right-to-left shunt, emboli monitoring, vasomotor reactivity (VMR), monitoring of vasospasm in subarachnoid hemorrhage (SAH), monitoring of intracranial pressure, its role in stroke prevention in sickle cell disease, and as a supplementary test for confirmation of brain death. PMID:27011639

  19. Advances, shortcomings, and recommendations for wind chill estimation.

    PubMed

    Shitzer, Avraham; Tikuisis, Peter

    2012-05-01

    This article briefly discusses the advances made and the remaining shortcomings in the "new" wind chill charts adopted in the US and Canada in 2001. A number of refinements are proposed, including the use of whole body models in the computations, verification of heat exchange coefficients by human experiments, reconsideration of "calm" wind conditions, reconsideration of frostbite threshold levels, and the inclusion of cold-related pain and numbness in the charts. A dynamic numerical model is applied to compare the effects of wind speeds, on the one hand, and air temperatures, on the other, on the steady-state exposed facial and bare finger temperatures. An apparent asymmetry is demonstrated, favoring the effects of wind speeds over those of air temperatures for an identical final facial temperature. This asymmetry is reversed, however, when SI unit changes in these quantities are considered. PMID:20852897

  20. Advances, shortcomings, and recommendations for wind chill estimation

    NASA Astrophysics Data System (ADS)

    Shitzer, Avraham; Tikuisis, Peter

    2012-05-01

    This article briefly discusses the advances made and the remaining shortcomings in the "new" wind chill charts adopted in the US and Canada in 2001. A number of refinements are proposed, including the use of whole body models in the computations, verification of heat exchange coefficients by human experiments, reconsideration of "calm" wind conditions, reconsideration of frostbite threshold levels, and the inclusion of cold-related pain and numbness in the charts. A dynamic numerical model is applied to compare the effects of wind speeds, on the one hand, and air temperatures, on the other, on the steady-state exposed facial and bare finger temperatures. An apparent asymmetry is demonstrated, favoring the effects of wind speeds over those of air temperatures for an identical final facial temperature. This asymmetry is reversed, however, when SI unit changes in these quantities are considered.
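    For reference, the 2001 charts discussed in these two records are generated by the JAG/TI regression formula adopted by the US and Canadian weather services; the sketch below evaluates it directly (air temperature in °C, 10 m wind speed in km/h, valid roughly for T ≤ 10 °C and V > 4.8 km/h).

```python
# Wind Chill Temperature (2001 JAG/TI formula, metric form):
#   WCT = 13.12 + 0.6215*T - 11.37*V**0.16 + 0.3965*T*V**0.16

def wind_chill_c(t_air_c, v_kmh):
    v16 = v_kmh ** 0.16
    return 13.12 + 0.6215 * t_air_c - 11.37 * v16 + 0.3965 * t_air_c * v16

wct = wind_chill_c(-10.0, 20.0)   # chart value: about -18 deg C
```

    The chart's rounded cells come from evaluating this function on a grid of temperatures and wind speeds.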

  1. In Situ Techniques for Monitoring Electrochromism: An Advanced Laboratory Experiment

    ERIC Educational Resources Information Center

    Saricayir, Hakan; Uce, Musa; Koca, Atif

    2010-01-01

    This experiment employs current technology to enhance and extend existing lab content. The basic principles of spectroscopic and electroanalytical techniques and their use in determining material properties are covered in some detail in many undergraduate chemistry programs. However, there are limited examples of laboratory experiments with in…

  2. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.

  3. Benefits of advanced software techniques for mission planning systems

    NASA Technical Reports Server (NTRS)

    Gasquet, A.; Parrod, Y.; Desaintvincent, A.

    1994-01-01

    The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.

  4. Parameter estimation techniques and application in aircraft flight testing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical papers presented at the symposium by selected representatives from industry, universities, and various Air Force, Navy, and NASA installations are given. The topics covered include the newest developments in identification techniques, the most recent flight-test experience, and the projected potential for the near future.

  5. Some advanced testing techniques for concentrator photovoltaic cells and lenses

    SciTech Connect

    Wiczer, J.J.; Chaffin, R.J.; Hibray, R.E.

    1982-09-01

    The authors describe two separate test techniques for evaluating concentrator photovoltaic components. For convenient characterization of concentrator solar cells, they have developed a method for measuring the entire illuminated I-V curve of a photovoltaic cell with a single flash of intense simulated sunlight. This method reduces the heat input to the cell and the time required to test a cell, thus making possible quick indoor measurements of photovoltaic conversion efficiency at concentrated illumination levels without the use of elaborate cell mounting fixtures or heat sink attachments. The other test method provides a technique to analyze the spatially dependent, spectral distribution of intense sunlight collected and focused by lenses designed for use in photovoltaic concentrator systems. This information is important in the design of multijunction photovoltaic receivers, secondary concentrators, and in optimizing the performance of conventional silicon cell concentrator systems.

  6. Characterization of PTFE Using Advanced Thermal Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Blumm, J.; Lindemann, A.; Meyer, M.; Strasser, C.

    2010-10-01

    Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer used in numerous industrial applications. It is often referred to by its trademark name, Teflon. Thermal characterization of a PTFE material was carried out using various thermal analysis and thermophysical properties test techniques. The transformation energetics and specific heat were measured employing differential scanning calorimetry. The thermal expansion and the density changes were determined employing pushrod dilatometry. The viscoelastic properties (storage and loss modulus) were analyzed using dynamic mechanical analysis. The thermal diffusivity was measured using the laser flash technique. Combining thermal diffusivity data with specific heat and density allows calculation of the thermal conductivity of the polymer. Measurements were carried out from - 125 °C up to 150 °C. Additionally, measurements of the mechanical properties were carried out down to - 170 °C. The specific heat tests were conducted into the fully molten regions up to 370 °C.
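    The property combination mentioned above is a simple product: thermal conductivity λ = α·c_p·ρ, where α is the flash-method diffusivity, c_p the specific heat, and ρ the density. The sketch below uses illustrative PTFE-like room-temperature values, not the measured data of the paper.

```python
# Thermal conductivity from laser-flash diffusivity, specific heat, density.
# With these units the conversion factor is exactly 1:
#   mm^2/s * J/(g*K) * g/cm^3  ->  W/(m*K)

def thermal_conductivity(diffusivity_mm2_s, cp_j_gk, density_g_cm3):
    return diffusivity_mm2_s * cp_j_gk * density_g_cm3

# Illustrative PTFE-like values near room temperature.
lam = thermal_conductivity(0.11, 1.0, 2.2)
```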

  7. Developments and advances concerning the hyperpolarisation technique SABRE.

    PubMed

    Mewis, Ryan E

    2015-10-01

    To overcome the inherent sensitivity issue in NMR and MRI, hyperpolarisation techniques are used. Signal Amplification By Reversible Exchange (SABRE) is a hyperpolarisation technique that utilises parahydrogen, a molecule that possesses a nuclear singlet state, as the source of polarisation. A metal complex is required to break the singlet order of parahydrogen and, by doing so, facilitates polarisation transfer to analyte molecules ligated to the same complex through the J-coupled network that exists. The increased signal intensities that the analyte molecules possess as a result of this process have led to investigations whereby their potential as MRI contrast agents has been probed and to understand the fundamental processes underpinning the polarisation transfer mechanism. As well as discussing literature relevant to both of these areas, the chemical structure of the complex, the physical constraints of the polarisation transfer process and the successes of implementing SABRE at low and high magnetic fields are discussed. PMID:26264565

  8. Advance techniques for monitoring human tolerance to +Gz accelerations.

    NASA Technical Reports Server (NTRS)

    Pelligra, R.; Sandler, H.; Rositano, S.; Skrettingland, K.; Mancini, R.

    1972-01-01

    Standard techniques for monitoring the acceleration-stressed human subject have been augmented by measuring (1) temporal, brachial and/or radial arterial blood flow, and (2) indirect systolic and diastolic blood pressure at 60-sec intervals. Results show that the response of blood pressure to positive accelerations is complex and dependent on an interplay of hydrostatic forces, diminishing venous return, redistribution of blood, and other poorly defined compensatory reflexes.

  9. Added Value of Assessing Adnexal Masses with Advanced MRI Techniques

    PubMed Central

    Thomassin-Naggara, I.; Balvay, D.; Rockall, A.; Carette, M. F.; Ballester, M.; Darai, E.; Bazot, M.

    2015-01-01

    This review will present the added value of perfusion and diffusion MR sequences to characterize adnexal masses. These two functional MR techniques are readily available in routine clinical practice. We will describe the acquisition parameters and a method of analysis to optimize their added value compared with conventional images. We will then propose a model of interpretation that combines the anatomical and morphological information from conventional MRI sequences with the functional information provided by perfusion and diffusion weighted sequences. PMID:26413542

  10. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
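    The parameter-identification step can be illustrated on the one-degree-of-freedom pendulum analog mentioned above. This is a minimal stand-in for the MATLAB/SimMechanics workflow, using synthetic ring-down data: the damping ratio is recovered from the logarithmic decrement of successive peaks and the natural frequency from their spacing.

```python
import math

def ringdown(omega_n, zeta, t):
    """Free response of a lightly damped oscillator (unit initial amplitude)."""
    wd = omega_n * math.sqrt(1.0 - zeta ** 2)
    return math.exp(-zeta * omega_n * t) * math.cos(wd * t)

# "Measured" response of an assumed true system.
omega_true, zeta_true = 4.0, 0.05
dt = 0.001
theta = [ringdown(omega_true, zeta_true, k * dt) for k in range(20000)]

# Locate successive positive peaks of the decaying oscillation.
peaks = [(k * dt, theta[k]) for k in range(1, len(theta) - 1)
         if theta[k] > theta[k - 1] and theta[k] > theta[k + 1]
         and theta[k] > 0.0]
(t1, a1), (t2, a2) = peaks[0], peaks[1]

delta = math.log(a1 / a2)                          # logarithmic decrement
zeta_hat = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
omega_hat = 2.0 * math.pi / (t2 - t1) / math.sqrt(1.0 - zeta_hat ** 2)
```

    In the actual application the "measurement" would be tank force data, and an optimizer would adjust the pendulum parameters until the simulated and measured responses match.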

  11. Development of processing techniques for advanced thermal protection materials

    NASA Technical Reports Server (NTRS)

    Selvaduray, Guna S.

    1994-01-01

    The effort, which was focused on the research and development of advanced materials for use in Thermal Protection Systems (TPS), has involved chemical and physical testing of refractory ceramic tiles, fabrics, threads and fibers. This testing has included determination of the optical properties, thermal shock resistance, high temperature dimensional stability, and tolerance to environmental stresses. Materials have also been tested in the Arc Jet 2 x 9 Turbulent Duct Facility (TDF), the 1 atmosphere Radiant Heat Cycler, and the Mini-Wind Tunnel Facility (MWTF). A significant part of the effort hitherto has gone towards modifying and upgrading the test facilities so that meaningful tests can be carried out. Another important effort during this period has been the creation of a materials database. Computer systems administration and support have also been provided. These are described in greater detail below.

  12. Advanced materials and techniques for fibre-optic sensing

    NASA Astrophysics Data System (ADS)

    Henderson, Philip J.

    2014-06-01

    Fibre-optic monitoring systems came of age in about 1999 upon the emergence of the world's first significant commercialising company - a spin-out from the UK's collaborative MAST project. By using embedded fibre-optic technology, the MAST project successfully measured transient strain within high-performance composite yacht masts. Since then, applications have extended from smart composites into civil engineering, energy, military, aerospace, medicine and other sectors. Fibre-optic sensors come in various forms, and may be subject to embedment, retrofitting, and remote interrogation. The unique challenges presented by each implementation require careful scrutiny before widespread adoption can take place. Accordingly, various aspects of design and reliability are discussed spanning a range of representative technologies that include resonant microsilicon structures, MEMS, Bragg gratings, advanced forms of spectroscopy, and modern trends in nanotechnology. Keywords: Fibre-optic sensors, fibre Bragg gratings, MEMS, MOEMS, nanotechnology, plasmon.
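As an aside on the fibre Bragg gratings mentioned above, the basic strain readout reduces to a one-line conversion. The sketch below assumes a typical effective photo-elastic coefficient for silica fibre (p_e ≈ 0.22, an illustrative value; real sensors are calibrated individually).

```python
# Strain from a fibre Bragg grating wavelength shift: dλ/λ ≈ (1 - p_e)·ε,
# so ε ≈ dλ / (λ·(1 - p_e)). p_e = 0.22 is an assumed typical value.

def fbg_strain(base_wavelength_nm, shifted_wavelength_nm, p_e=0.22):
    """Return axial strain inferred from a Bragg wavelength shift."""
    shift = shifted_wavelength_nm - base_wavelength_nm
    return shift / (base_wavelength_nm * (1.0 - p_e))

# Example: a 1550 nm grating shifting by 1.2 nm.
strain = fbg_strain(1550.0, 1551.2)   # dimensionless strain
microstrain = strain * 1e6
```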

  13. Advanced techniques for characterization of ion beam modified materials

    DOE PAGES Beta

    Zhang, Yanwen; Debelle, Aurélien; Boulle, Alexandre; Kluth, Patrick; Tuomisto, Filip

    2014-10-30

    Understanding the mechanisms of damage formation in materials irradiated with energetic ions is essential for the field of ion-beam materials modification and engineering. Utilizing incident ions, electrons, photons, and positrons, various analysis techniques, including Rutherford backscattering spectrometry (RBS), electron RBS, Raman spectroscopy, high-resolution X-ray diffraction, small-angle X-ray scattering, and positron annihilation spectroscopy, are routinely used or gaining increasing attention in characterizing ion beam modified materials. The distinctive information, recent developments, and some perspectives in these techniques are reviewed in this paper. Applications of these techniques are discussed to demonstrate their unique ability for studying ion-solid interactions and the corresponding radiation effects in modified depths ranging from a few nm to a few tens of μm, and to provide information on electronic and atomic structure of the materials, defect configuration and concentration, as well as phase stability, amorphization and recrystallization processes. Finally, such knowledge contributes to our fundamental understanding over a wide range of extreme conditions essential for enhancing material performance and also for design and synthesis of new materials to address a broad variety of future energy applications.

  14. Advanced techniques for characterization of ion beam modified materials

    SciTech Connect

    Zhang, Yanwen; Debelle, Aurélien; Boulle, Alexandre; Kluth, Patrick; Tuomisto, Filip

    2014-10-30

    Understanding the mechanisms of damage formation in materials irradiated with energetic ions is essential for the field of ion-beam materials modification and engineering. Utilizing incident ions, electrons, photons, and positrons, various analysis techniques, including Rutherford backscattering spectrometry (RBS), electron RBS, Raman spectroscopy, high-resolution X-ray diffraction, small-angle X-ray scattering, and positron annihilation spectroscopy, are routinely used or gaining increasing attention in characterizing ion beam modified materials. The distinctive information, recent developments, and some perspectives in these techniques are reviewed in this paper. Applications of these techniques are discussed to demonstrate their unique ability for studying ion-solid interactions and the corresponding radiation effects in modified depths ranging from a few nm to a few tens of μm, and to provide information on electronic and atomic structure of the materials, defect configuration and concentration, as well as phase stability, amorphization and recrystallization processes. Finally, such knowledge contributes to our fundamental understanding over a wide range of extreme conditions essential for enhancing material performance and also for design and synthesis of new materials to address a broad variety of future energy applications.

  15. Advanced techniques for constrained internal coordinate molecular dynamics.

    PubMed

    Wagner, Jeffrey R; Balaraman, Gouthaman S; Niesen, Michiel J M; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan

    2013-04-30

    Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle, and torsional coordinates instead of a Cartesian coordinate representation. Freezing high-frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed to make the CICMD method robust and widely usable. In this article, we have designed a new framework for (1) initializing velocities for nonindependent CICMD coordinates, (2) efficient computation of center of mass velocity during CICMD simulations, (3) using advanced integrators such as Runge-Kutta, Lobatto, and adaptive CVODE for CICMD simulations, and (4) cancelling out the "flying ice cube effect" that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this article, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse-graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided "freezing and thawing" of degrees of freedom in the molecule on the fly during molecular dynamics simulations and is shown to fold four proteins to their native topologies. With these advancements, we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138

  16. Advanced Techniques for Constrained Internal Coordinate Molecular Dynamics

    PubMed Central

    Wagner, Jeffrey R.; Balaraman, Gouthaman S.; Niesen, Michiel J. M.; Larsen, Adrien B.; Jain, Abhinandan; Vaidehi, Nagarajan

    2013-01-01

    Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle and torsional coordinates instead of a Cartesian coordinate representation. Freezing high frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed in order to make the CICMD method robust and widely usable. In this paper we have designed a new framework for 1) initializing velocities for non-independent CICMD coordinates, 2) efficient computation of center of mass velocity during CICMD simulations, 3) using advanced integrators such as Runge-Kutta, Lobatto and adaptive CVODE for CICMD simulations, and 4) cancelling out the “flying ice cube effect” that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this paper, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided “freezing and thawing” of degrees of freedom in the molecule on the fly during MD simulations, and is shown to fold four proteins to their native topologies. With these advancements we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138

  17. Cost estimation model for advanced planetary programs, fourth edition

    NASA Technical Reports Server (NTRS)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base, which comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision had a twofold objective: to increase the flexibility of the model in dealing with the broad scope of scenarios under consideration for future missions, and to maintain, and where possible improve, confidence in the model's capabilities, with an expected accuracy of 20%. Model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.

  18. A new technique for direction of arrival estimation for ionospheric multipath channels

    NASA Astrophysics Data System (ADS)

    Guldogan, Mehmet B.; Arıkan, Orhan; Arıkan, Feza

    2009-09-01

    A novel array signal processing technique is proposed to estimate HF channel parameters, including the number of paths and their respective directions of arrival (DOA), delays, Doppler shifts, and amplitudes. The proposed technique utilizes the Cross Ambiguity Function (CAF) and is hence called the CAF-DF technique. The CAF-DF technique iteratively processes the array output data and provides reliable estimates of the DOA, delay, Doppler shift, and amplitude corresponding to each HF-propagated wave impinging on the antenna array. Results obtained for both real and simulated data at different signal-to-noise ratio (SNR) values indicate the superior performance of the proposed technique over the well-known MUltiple SIgnal Classification (MUSIC) technique.
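The core of a CAF-based search can be illustrated with a toy single-path channel. This sketch is a simplified stand-in for the CAF-DF processing, with assumed integer delay and Doppler bins: it evaluates the discrete cross ambiguity function on a grid and locates its peak.

```python
import cmath
import random

def caf_peak(s, r, max_delay, max_doppler):
    """Locate the (delay, Doppler-bin) peak of the discrete cross ambiguity
    function CAF(tau, nu) = |sum_n r[n]·conj(s[n-tau])·exp(-2πi·nu·n/N)|."""
    N = len(s)
    best, best_val = (0, 0), -1.0
    for tau in range(max_delay + 1):
        for nu in range(max_doppler + 1):
            acc = 0j
            for n in range(N):
                acc += (r[n] * s[(n - tau) % N].conjugate()
                        * cmath.exp(-2j * cmath.pi * nu * n / N))
            if abs(acc) > best_val:
                best_val, best = abs(acc), (tau, nu)
    return best

# Synthetic single-path channel: circular delay of 5 samples, Doppler of 3 bins.
random.seed(1)
N = 64
s = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
r = [s[(n - 5) % N] * cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
delay, doppler = caf_peak(s, r, max_delay=10, max_doppler=6)
```

At the true (delay, Doppler) pair the phase terms cancel and the sum collapses to the signal energy, which is why the peak lands there; the iterative CAF-DF algorithm repeats such searches per path and per antenna element.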

  19. Feedback techniques and SPS Ecloud instabilities - design estimates

    SciTech Connect

    Fox,J.D.; Mastorides, T.; Ndabashimiye, G.; Rivetta, C.; Van Winkle, D.; Byrd, J.; Vay, J-L.; Hofle, W.; Rumolo, G.; de Maria, R.

    2009-05-04

    The SPS at high intensities exhibits transverse single-bunch instabilities with signatures consistent with an Ecloud driven instability. While the SPS has a coupled-bunch transverse feedback system, control of Ecloud driven motion requires a much wider control bandwidth capable of sensing and controlling motion within each bunched beam. This paper draws beam dynamics data from the measurements and simulations of this SPS instability, and estimates system requirements for a feedback system with 2-4 GS/sec. sampling rates to damp Ecloud-driven transverse motion in the SPS at intensities desired for high-current LHC operation.

  20. Advances in dental veneers: materials, applications, and techniques

    PubMed Central

    Pini, Núbia Pavesi; Aguiar, Flávio Henrique Baggio; Lima, Débora Alves Nunes Leite; Lovadino, José Roberto; Terada, Raquel Sano Suga; Pascotto, Renata Corrêa

    2012-01-01

    Laminate veneers are a conservative treatment of unaesthetic anterior teeth. The continued development of dental ceramics offers clinicians many options for creating highly aesthetic and functional porcelain veneers. This evolution of materials, ceramics, and adhesive systems permits improvement of the aesthetic of the smile and the self-esteem of the patient. Clinicians should understand the latest ceramic materials in order to be able to recommend them and their applications and techniques, and to ensure the success of the clinical case. The current literature was reviewed to search for the most important parameters determining the long-term success, correct application, and clinical limitations of porcelain veneers. PMID:23674920

  1. Advances in dental local anesthesia techniques and devices: An update

    PubMed Central

    Saxena, Payal; Gupta, Saurabh K.; Newaskar, Vilas; Chandra, Anil

    2013-01-01

    Although local anesthesia remains the backbone of pain control in dentistry, research continues to seek new and better means of managing pain. Most of this research focuses on improvements in anesthetic agents, delivery devices, and the techniques involved. Newer technologies have been developed that can assist the dentist in providing enhanced pain relief with reduced injection pain and fewer adverse effects. This overview informs practicing dentists about newer devices and methods of pain control, comparing them with earlier ones on the basis of available research and clinical studies. PMID:24163548

  2. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class, 4-feature discrimination problem with previously reported results and to 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
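The principle of computing a minimum probability of error by numerical integration can be shown in one dimension. This is a toy sketch, not the paper's M-class N-feature algorithm: for two 1-D Gaussian classes, the Bayes error is the integral of the pointwise minimum of the weighted class densities.

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_error_1d(mu1, mu2, sigma, prior1=0.5, lo=-12.0, hi=14.0, step=0.001):
    """Numerically integrate min(P1·f1, P2·f2): the minimum probability of
    error for two 1-D Gaussian classes."""
    err, x = 0.0, lo
    while x < hi:
        err += min(prior1 * gaussian(x, mu1, sigma),
                   (1 - prior1) * gaussian(x, mu2, sigma)) * step
        x += step
    return err

# Equal-variance classes at means 0 and 2: the analytic Bayes error is
# Φ(-1) ≈ 0.1587, which the numeric integral should reproduce.
err = bayes_error_1d(0.0, 2.0, 1.0)
```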

  3. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
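A Markov reliability model of the kind these tools generate can be solved transiently with a few lines of numerics. The sketch below is a minimal example, unrelated to the RMG/ASSURE implementations: it integrates a two-component parallel system without repair and checks the result against the closed form.

```python
import math

def markov_unreliability(lam, t_end, dt=1e-4):
    """Transient solution of a tiny Markov reliability model by explicit Euler:
    two identical parallel components, failure rate lam each, no repair.
    States: both up -> one up -> system failed."""
    p2, p1, pf = 1.0, 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        d2 = -2 * lam * p2          # leave "both up" at rate 2·lam
        d1 = 2 * lam * p2 - lam * p1  # enter/leave "one up"
        df = lam * p1               # absorb into "failed"
        p2 += d2 * dt
        p1 += d1 * dt
        pf += df * dt
    return pf

lam, t_end = 0.01, 10.0
numeric = markov_unreliability(lam, t_end)
# Closed form for a 2-of-2 parallel system: R(t) = 2e^(-λt) - e^(-2λt).
analytic = 1.0 - (2 * math.exp(-lam * t_end) - math.exp(-2 * lam * t_end))
```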

  4. Advanced terahertz techniques for quality control and counterfeit detection

    NASA Astrophysics Data System (ADS)

    Ahi, Kiarash; Anwar, Mehdi

    2016-04-01

    This paper reports our methods for the detection of counterfeit electronics. These versatile techniques are also useful in quality control applications. Terahertz pulsed laser systems can characterize materials and thus make it possible to distinguish between the materials used in authentic components and their counterfeit clones. Components with material defects can also be identified in this manner. In this work, different refractive indices and absorption coefficients were observed for counterfeit components compared to their authentic counterparts. The presence of unexpected ingredient materials was detected in counterfeit components by Fourier transform analysis of the transmitted terahertz pulse. The thicknesses of different layers are obtainable by analyzing the reflected terahertz pulse, and unexpected layers are detectable in the same manner. Recycled, sanded, and blacktopped counterfeit electronic components were detected as a result of these analyses. Counterfeit ICs with die dislocations were detected by depicting the terahertz raster-scanning data in a coordinate plane, which yields terahertz images. In the same manner, raster scanning of the reflected pulse gives terahertz images of the component surfaces, which were used to investigate contaminant materials and sanded points on the surfaces. The results of the latter technique reveal the recycled counterfeit components.
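The refractive-index measurement mentioned above follows from pulse timing: a sample of thickness d adds a transmission delay of (n - 1)·d/c relative to free space. A minimal sketch with illustrative (not measured) numbers:

```python
# Refractive index from a terahertz transmission delay: Δt = (n - 1)·d / c,
# so n = 1 + c·Δt / d.
C = 299_792_458.0  # speed of light, m/s

def refractive_index(delay_s, thickness_m):
    return 1.0 + C * delay_s / thickness_m

# Example: a 6.6 ps extra delay through a 1 mm package wall (illustrative).
n = refractive_index(6.6e-12, 1.0e-3)
```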

  5. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
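The distinction the paper draws can be illustrated with a toy first-order model: output-error estimation simulates the candidate model and matches the simulated output to the measurement, rather than regressing directly on measured samples. This sketch uses a hypothetical model and values, not MIT's top-oil temperature model, and recovers known parameters by grid search.

```python
def simulate(a, b, u, y0=0.0):
    """First-order discrete model y[k+1] = a·y[k] + b·u[k] (a stand-in for a
    top-oil temperature rise model driven by load u)."""
    y = [y0]
    for k in range(len(u) - 1):
        y.append(a * y[k] + b * u[k])
    return y

# "Measured" response generated from known parameters and a square-wave load.
u = [1.0 if (k // 20) % 2 == 0 else 0.2 for k in range(200)]
true_a, true_b = 0.90, 0.50
measured = simulate(true_a, true_b, u)

def output_error(a, b):
    """Sum of squared differences between measurement and a full simulation."""
    sim = simulate(a, b, u)
    return sum((m - s) ** 2 for m, s in zip(measured, sim))

# Output-error estimation: pick the parameters whose simulated output best
# matches the measurement (grid search here; an optimizer in practice).
grid = [(a / 100, b / 100) for a in range(80, 100) for b in range(40, 61)]
a_hat, b_hat = min(grid, key=lambda p: output_error(*p))
```

Because the simulated output never feeds measured (hence noisy) samples back into the regressors, output-error estimates avoid the bias that an equation-error least-squares fit incurs when the measurements are noisy.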

  6. Estimation of Missing Precipitation Data using Soft Computing based Spatial Interpolation Techniques

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2007-12-01

    Deterministic and stochastic weighting methods are the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. Traditional spatial interpolation techniques can be integrated with soft computing techniques to improve the estimation of missing precipitation data. An association rule mining based spatial interpolation approach, universal function approximation based kriging, optimal function approximation, and clustering methods are developed and investigated in the current study to estimate missing precipitation values at a gaging station. Historical daily precipitation data obtained from 15 rain gauging stations in a temperate climatic region, Kentucky, USA, are used to test this approach and derive conclusions about the efficacy of these methods in estimating missing precipitation data. Results suggest that the use of soft computing techniques in conjunction with a spatial interpolation technique can improve the precipitation estimates and help to address a few limitations of traditional spatial interpolation techniques.
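As a baseline for the deterministic weighting methods mentioned above, inverse distance weighting estimates a missing gage value from its neighbors. A minimal sketch with illustrative coordinates and values:

```python
def idw_estimate(target, gages, power=2.0):
    """Inverse-distance-weighted estimate of precipitation at `target` from
    (x, y, value) triples: the deterministic baseline that soft-computing
    variants aim to improve."""
    num = den = 0.0
    for x, y, value in gages:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value  # target coincides with a gage
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

# Two gages equidistant from the target: the estimate is their average.
gages = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0)]
estimate = idw_estimate((1.0, 0.0), gages)
```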

  7. Development of Advanced In-Situ Techniques for Chemistry Monitoring and Corrosion Mitigation in SCWO Environments

    SciTech Connect

    Macdonald, D. D.; Lvov, S. N.

    2000-03-31

    This project is developing sensing technologies and corrosion monitoring techniques for use in supercritical water oxidation (SCWO) systems, which reduce the volume of mixed low-level nuclear waste by oxidizing organic components in a closed-cycle system where CO2 and other gaseous oxides are produced, leaving the radioactive elements concentrated in ash. The technique uses water at supercritical temperatures under highly oxidizing conditions, maintained by a high fugacity of molecular oxygen in the system, which causes high corrosion rates in even the most corrosion-resistant reactor materials. This project addresses the corrosion problem through development of (a) advanced electrodes and sensors for in situ potentiometric monitoring of pH in high subcritical and supercritical aqueous solutions; (b) an approach for evaluating the association constants for 1-1 aqueous electrolytes using a flow-through electrochemical thermocell; (c) an electrochemical noise sensor for the in situ measurement of corrosion rate in subcritical and supercritical aqueous systems; and (d) a model for estimating the effect of pressure on reaction rates, including corrosion reactions, in high subcritical and supercritical aqueous systems. The project achieved all objectives, except for installing some of the sensors into a fully operating SCWO system.

  8. Advanced coding techniques for few mode transmission systems.

    PubMed

    Okonkwo, Chigo; van Uden, Roy; Chen, Haoshuo; de Waardt, Huug; Koonen, Ton

    2015-01-26

    We experimentally verify the advantage of employing advanced coding schemes such as space-time coding and 4-dimensional modulation formats to enhance the transmission performance of a 3-mode transmission system. The performance gain of space-time block codes for extending the optical signal-to-noise ratio tolerance in multiple-input multiple-output optical coherent spatial division multiplexing transmission systems is evaluated with respect to single-mode transmission performance. By exploiting the spatial diversity that few-mode fibers offer, significant OSNR gains of 3.2, 4.1, 4.9, and 6.8 dB at the hard-decision forward error correcting limit are demonstrated for DP-QPSK, 8, 16, and 32 QAM, respectively, relative to single-mode fiber back-to-back performance. Furthermore, by employing 4D constellations, 6 × 28 Gbaud 128 set-partitioned quadrature amplitude modulation is shown to outperform conventional 8 QAM transmission performance, whilst carrying an additional 0.5 bit/symbol. PMID:25835899

  9. Advanced Cell Culture Techniques for Cancer Drug Discovery

    PubMed Central

    Lovitt, Carrie J.; Shelper, Todd B.; Avery, Vicky M.

    2014-01-01

    Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterizes the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggests that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor. PMID:24887773

  10. Recent Advances in Spaceborne Precipitation Radar Measurement Techniques and Technology

    NASA Technical Reports Server (NTRS)

    Im, Eastwood; Durden, Stephen L.; Tanelli, Simone

    2006-01-01

    NASA is currently developing advanced instrument concepts and technologies for future spaceborne atmospheric radars, with an over-arching objective of making such instruments more capable in supporting future science needs and more cost-effective. Two such examples are the Second-Generation Precipitation Radar (PR-2) and the Nexrad-In-Space (NIS). PR-2 is a 14/35-GHz dual-frequency rain radar with a deployable 5-meter, wide-swath scanned membrane antenna, a dual-polarized/dual-frequency receiver, and a real-time digital signal processor. It is intended for Low Earth Orbit (LEO) operations to provide greatly enhanced rainfall profile retrieval accuracy while consuming only a fraction of the mass of the current TRMM Precipitation Radar (PR). NIS is designed to be a 35-GHz Geostationary Earth Orbiting (GEO) radar for providing hourly monitoring of the life cycle of hurricanes and tropical storms. It uses a 35-m, spherical, lightweight membrane antenna and Doppler processing to acquire 3-dimensional information on the intensity and vertical motion of hurricane rainfall.

  11. Coal and Coal Constituent Studies by Advanced EMR Techniques

    SciTech Connect

    Alex I. Smirnov; Mark J. Nilges; R. Linn Belford; Robert B. Clarkson

    1998-03-31

    Advanced electron magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. We have achieved substantial progress on upgrading the high field (HF) EMR (W-band, 95 GHz) spectrometers that are especially advantageous for such studies. In particular, we have built a second W-band instrument (Mark II) in addition to our Mark I. Briefly, Mark II features: (i) an Oxford custom-built 7 T superconducting magnet, scannable from 0 to 7 T at up to 0.5 T/min; (ii) a water-cooled coaxial solenoid with up to ±550 G scan under digital (15-bit resolution) computer control; (iii) a custom-engineered precision feedback circuit driving this solenoid, based on an Ultrastab 860R sensor with linearity better than 5 ppm and resolution of 0.05 ppm; (iv) an Oxford CF 1200 cryostat for variable-temperature studies from 1.8 to 340 K. During this grant period we completed several key upgrades of both Mark I and Mark II, particularly the microwave bridge, W-band probehead, and computer interfaces. We utilize these improved instruments for HF EMR studies of spin-spin interaction and the existence of different paramagnetic species in carbonaceous solids.

  12. Confidence region estimation techniques for nonlinear regression: three case studies.

    SciTech Connect

    Swiler, Laura Painton (Sandia National Laboratories, Albuquerque, NM); Sullivan, Sean P. (University of Texas, Austin, TX); Stucky-Mack, Nicholas J. (Harvard University, Cambridge, MA); Roberts, Randall Mark; Vugrin, Kay White

    2005-10-01

    This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and Log-Likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.
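The linear approximation method can be sketched end to end for a toy two-parameter exponential model: fit by Gauss-Newton, then take standard errors from the residual variance times the diagonal of the inverse of JᵀJ. This is an illustrative reconstruction under assumed data, not the study's implementation.

```python
import math

def model(a, b, t):
    return a * math.exp(-b * t)

ts = list(range(10))
# Synthetic observations: a known curve plus small alternating deviations.
true_a, true_b = 2.0, 0.3
ys = [model(true_a, true_b, t) + 0.01 * (-1) ** t for t in ts]

# Gauss-Newton: iterate theta <- theta + (J^T J)^-1 J^T r with an analytic
# Jacobian (d/da = e^{-bt}, d/db = -a t e^{-bt}).
a, b = 1.5, 0.2
for _ in range(100):
    J = [(math.exp(-b * t), -a * t * math.exp(-b * t)) for t in ts]
    r = [y - model(a, b, t) for y, t in zip(ys, ts)]
    jtj = [[sum(Ji[p] * Ji[q] for Ji in J) for q in (0, 1)] for p in (0, 1)]
    jtr = [sum(Ji[p] * ri for Ji, ri in zip(J, r)) for p in (0, 1)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    da = (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
    db = (jtj[0][0] * jtr[1] - jtj[1][0] * jtr[0]) / det
    a, b = a + da, b + db

# Linear-approximation standard errors: s^2 · diag((J^T J)^-1).
s2 = sum((y - model(a, b, t)) ** 2 for y, t in zip(ys, ts)) / (len(ts) - 2)
se_a = math.sqrt(s2 * jtj[1][1] / det)
se_b = math.sqrt(s2 * jtj[0][0] / det)
```

An elliptical confidence region follows from the same matrix; the F-test and log-likelihood regions differ precisely where this local linearization is a poor description of the nonlinear model.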

  13. Estimation of soil hydraulic properties with microwave techniques

    NASA Technical Reports Server (NTRS)

    Oneill, P. E.; Gurney, R. J.; Camillo, P. J.

    1985-01-01

    Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model-generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for the saturated hydraulic conductivity, saturated matric potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring, and the other parameters agreed with values in the literature.

  14. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance in a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer for 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. The mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700 to 1,400 between 1987 and 1991, whereas the estimates for November indicated an increase from 983 in 1987 to 1,592 in 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas; the assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
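The mark-resight idea reduces to scaling a count by the sighting probability of the marked animals. A minimal sketch using the abstract's counted total and sighting rate; the marked total here is an assumed round number, not a figure from the study.

```python
def mark_resight_estimate(total_counted, marked_in_pop, marked_seen):
    """Simple mark-resight population estimate: the sighting probability of
    marked animals (marked_seen / marked_in_pop) is assumed to hold for the
    whole herd, so N ≈ total_counted / sighting probability."""
    p_sight = marked_seen / marked_in_pop
    return total_counted / p_sight

# Illustrative numbers in the spirit of the abstract: 427 deer counted while
# 54% of the known marked animals were sighted (100 marked is assumed).
estimate = mark_resight_estimate(427, 100, 54)
```

With these inputs the estimate falls near the low end of the April range reported above, which is consistent with the method's logic though the study's actual marked totals are not given here.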

  15. Development of a technique for estimating noise covariances using multiple observers

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1988-01-01

    Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.

  16. Advanced experimental techniques for transonic wind tunnels - Final lecture

    NASA Technical Reports Server (NTRS)

    Kilgore, Robert A.

    1987-01-01

    A philosophy of experimental techniques is presented, suggesting that in order to be successful, one should like what one does, have the right tools, stick to the job, avoid diversions, work hard, interact with people, be informed, keep it simple, be self-sufficient, and strive for perfection. Sources of information, such as bibliographies, newsletters, technical reports, and technical contacts and meetings, are recommended. It is pointed out that adaptive-wall test sections eliminate or reduce wall-interference effects and magnetic suspension and balance systems eliminate support-interference effects, while the problem of flow quality remains with all wind tunnels. It is predicted that in the future it will be possible to obtain wind tunnel results at the proper Reynolds number, and the effects of flow unsteadiness, wall interference, and support interference will be eliminated or greatly reduced.

  17. Automated angiogenesis quantification through advanced image processing techniques.

    PubMed

    Doukas, Charlampos N; Maglogiannis, Ilias; Chatziioannou, Aristotle; Papapetropoulos, Andreas

    2006-01-01

    Angiogenesis, the formation of blood vessels in tumors, is an interactive process between tumor, endothelial and stromal cells that creates a network for oxygen and nutrient supply, necessary for tumor growth. Angiogenic activity is therefore considered a suitable indicator for detecting both tumor growth and inhibition. The angiogenic potential is usually estimated by counting the number of blood vessels in particular sections. One of the most popular assay tissues for studying the angiogenesis phenomenon is the developing chick embryo and its chorioallantoic membrane (CAM), a highly vascular structure lining the inner surface of the egg shell. The aim of this study was to develop and validate an automated image analysis method that would give an unbiased quantification of the micro-vessel density and growth in angiogenic CAM images. The presented method has been validated by comparing automated results to manual counts over a series of digital chick embryo photos. The results indicate the high accuracy of the tool, which has thus been used extensively for tumor growth detection at different stages of embryonic development. PMID:17946107

  18. Techniques for estimating blood pressure variation using video images.

    PubMed

    Sugita, Norihiro; Obara, Kazuma; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Homma, Noriyasu

    2015-08-01

    It is important to know about a sudden blood pressure change that occurs in everyday life and may pose a danger to human health. However, monitoring the blood pressure variation in daily life is difficult because a bulky and expensive sensor is needed to measure the blood pressure continuously. In this study, a new non-contact method is proposed to estimate the blood pressure variation using video images. In this method, the pulse propagation time difference or instantaneous phase difference is calculated between two pulse waves obtained from different parts of a subject's body captured by a video camera. The forehead, left cheek, and right hand are selected as regions to obtain pulse waves. Both the pulse propagation time difference and instantaneous phase difference were calculated from the video images of 20 healthy subjects performing the Valsalva maneuver. These indices are considered to have a negative correlation with the blood pressure variation because they approximate the pulse transit time obtained from a photoplethysmograph. However, the experimental results showed that the correlation coefficients between the blood pressure and the proposed indices were approximately 0.6 for the pulse wave obtained from the right hand. This result is considered to be due to the difference in the transmission depth into the skin between the green and infrared light used as light sources for the video image and conventional photoplethysmogram, respectively. In addition, the difference in the innervation of the face and hand may be related to the results. PMID:26737225
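The instantaneous phase difference between two pulse waves can be sketched with a single-bin quadrature projection at the pulse frequency. The sampling rate, pulse frequency, and function names below are illustrative assumptions, not the authors' implementation:

```python
import math

def phase_at(signal, fs, f0):
    # Project the signal onto quadrature references at frequency f0
    # (a single-bin DFT over the whole record).
    re = sum(x * math.cos(2 * math.pi * f0 * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f0 * k / fs) for k, x in enumerate(signal))
    return math.atan2(im, re)

def phase_difference(sig_a, sig_b, fs, f0):
    # Phase of A relative to B, wrapped to [-pi, pi).
    d = phase_at(sig_a, fs, f0) - phase_at(sig_b, fs, f0)
    return (d + math.pi) % (2 * math.pi) - math.pi
```

The record length should cover an integer number of pulse cycles so that the cross terms in the projection cancel; in practice the pulse waves from the forehead, cheek, and hand regions would first be band-pass filtered around the heart rate.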

  19. A Fast Goal Recognition Technique Based on Interaction Estimates

    NASA Technical Reports Server (NTRS)

    E-Martin, Yolanda; R-Moreno, Maria D.; Smith, David E.

    2015-01-01

    Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring users' needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high-quality results.
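The cost-comparison ranking can be sketched as a Boltzmann-style posterior over cost differences, in the spirit of the Ramirez and Geffner formulation the abstract cites; the function name and the inverse-temperature parameter `beta` are assumptions for illustration:

```python
import math

def goal_posteriors(cost_with_obs, cost_without_obs, beta=1.0):
    # For each goal g, delta(g) = cost of a plan for g that embeds the
    # observations minus cost of an unconstrained plan for g.  Goals whose
    # plans change little when forced through the observations get high
    # likelihood: P(O|g) ~ exp(-beta * delta(g)).
    deltas = {g: cost_with_obs[g] - cost_without_obs[g] for g in cost_with_obs}
    weights = {g: math.exp(-beta * d) for g, d in deltas.items()}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}
```

The plan-graph approach of the paper replaces the expensive planner calls that produce `cost_with_obs` and `cost_without_obs` with propagated cost and interaction estimates; the normalization step stays the same.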

  20. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket (IPR) system is expected to become more widely used with the development of deuterium and argon gas propellants and hexagonal magnetohydrodynamic (MHD) techniques, because power is generated indirectly from the ionization chamber; the design targets a thrust of 1.2 N at 40 kW of electric power with high efficiency. The proposed work studies MHD power generation through the ionization of deuterium gas and the combination of two ion species (deuterium ions and argon ions) at the acceleration stage. The IPR consists of three parts: (1) a hexagonal MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, an energy of about 1312 kJ/mol is required to ionize the deuterium gas. The ionized deuterium leaves the RF ionization chamber and passes through the MHD generator to the nozzle with enhanced velocity; a voltage is then generated across the two pairs of electrodes in the MHD section, and thrust is produced by mixing deuterium and argon ions at the acceleration stage. The IPR system was simulated in MATLAB. Comparison of the simulation results with theoretical and previous results indicates that the proposed method achieves the design thrust value at 40 kW for the simulated IPR system.

  1. Advanced Manufacturing Techniques Demonstrated for Fabricating Developmental Hardware

    NASA Technical Reports Server (NTRS)

    Redding, Chip

    2004-01-01

    NASA Glenn Research Center's Engineering Development Division has been working in support of innovative gas turbine engine systems under development by Glenn's Combustion Branch. These one-of-a-kind components require operation under extreme conditions. High-temperature ceramics were chosen for fabrication because of the hostile operating environment. During the designing process, it became apparent that traditional machining techniques would not be adequate to produce the small, intricate features of the conceptual design, which was to be produced by stacking over a dozen thin layers with many small features that would then be aligned and bonded together into a one-piece unit. Instead of using traditional machining, we produced computer models in Pro/ENGINEER (Parametric Technology Corporation (PTC), Needham, MA) to the specifications of the research engineer. The computer models were exported in stereolithography standard (STL) format and used to produce full-size rapid prototype polymer models. These semi-opaque plastic models were used for visualization and design verification. The computer models also were exported in International Graphics Exchange Specification (IGES) format and sent to Glenn's Thermal/Fluids Design & Analysis Branch and Applied Structural Mechanics Branch for profiling heat transfer and mechanical strength analysis.

  2. Advances in Current Rating Techniques for Flexible Printed Circuits

    NASA Technical Reports Server (NTRS)

    Hayes, Ron

    2014-01-01

    Twist Capsule Assemblies are power transfer devices commonly used in spacecraft mechanisms that require electrical signals to be passed across a rotating interface. Flexible printed circuits (flex tapes, see Figure 2) are used to carry the electrical signals in these devices. Determining the current rating for a given trace (conductor) size can be challenging. Because of the thermal conditions present in this environment, the most appropriate approach is to assume that the only means by which heat is removed from the trace is through the conductor itself, so that when the flex tape is long the temperature rise in the trace can be extreme. While this technique represents a worst-case thermal situation that yields conservative current ratings, this conservatism may lead to overly cautious designs when not all traces are used at their full rated capacity. A better understanding of how individual traces behave when they are not all in use is the goal of this research. In the testing done in support of this paper, a representative flex tape used for a flight Solar Array Drive Assembly (SADA) application was tested by energizing individual traces (conductors in the tape) in a vacuum chamber and measuring the temperatures of the tape using both fine-gauge thermocouples and infrared thermographic imaging. We find that traditional derating schemes used for bundles of wires do not apply to the configuration tested. We also determine that single active traces located in the center of a flex tape operate at lower temperatures than those on the outside edges.
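The conduction-only worst case described above has a simple closed-form bound: with Joule heating distributed uniformly along the trace and heat removed only by conduction along the copper to sinks at both ends, the temperature profile is parabolic and peaks mid-span. The geometry, copper properties, and function name below are illustrative assumptions, not the flight tape from the paper:

```python
def max_trace_temp_rise(current, length, width, thickness,
                        rho=1.7e-8, k=400.0):
    # Heat per unit length q' = I^2 * rho / A; steady-state conduction
    # k * A * T'' + q' = 0 with T = 0 at both ends gives a parabola with
    #   dT_max = I^2 * rho * L^2 / (8 * k * A^2)   [kelvin]
    # rho: copper resistivity [ohm*m], k: copper conductivity [W/(m*K)],
    # A = width * thickness: trace cross-section [m^2].
    area = width * thickness
    return current**2 * rho * length**2 / (8.0 * k * area**2)
```

Under this model the worst-case rise grows with the square of tape length (doubling the length quadruples the rise), which is why long flex tapes can show extreme predicted temperatures even at modest currents.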

  3. Advances in array detectors for X-ray diffraction techniques.

    PubMed

    Hanley, Quentin S; Denton, M Bonner

    2005-09-01

    Improved focal plane array detector systems are described that provide faster readout, random addressing, and can even be employed to simultaneously measure position, intensity and energy. This latter capability promises to rekindle interest in Laue techniques. Simulations of three varieties of foil mask spectrometer in both on- and off-axis configurations indicate that systems of stacked silicon detectors can provide energy measurements within 1% of the true value based on the use of single 'foils' and approximately 10000 photons. An eight-detector hybrid design can provide energy coverage from 4 to 60 keV. Energy resolution can be improved by increased integration time or higher-flux experiments. An off-axis spectrometer design in which the angle between the incident beam and the detector system is 45 degrees results in a shift in the optimum energy response of the spectrometer system. In the case of a 200 microm-thick silicon absorber, the energy optimum shifts from 8.7 keV to 10.3 keV as the angle of incidence goes from 0 to 45 degrees. These new designs make better use of incident photons, lower the impact of source flicker through simultaneous rather than sequential collection of intensities, and improve the energy range relative to previously reported systems. PMID:16120985

  4. Recent advances in the surface forces apparatus (SFA) technique

    NASA Astrophysics Data System (ADS)

    Israelachvili, J.; Min, Y.; Akbulut, M.; Alig, A.; Carver, G.; Greene, W.; Kristiansen, K.; Meyer, E.; Pesika, N.; Rosenberg, K.; Zeng, H.

    2010-03-01

    The surface forces apparatus (SFA) has been used for many years to measure the physical forces between surfaces, such as van der Waals (including Casimir) and electrostatic forces in vapors and liquids, adhesion and capillary forces, forces due to surface and liquid structure (e.g. solvation and hydration forces), polymer, steric and hydrophobic interactions, bio-specific interactions as well as friction and lubrication forces. Here we describe recent developments in the SFA technique, specifically the SFA 2000, its simplicity of operation and its extension into new areas of measurement of both static and dynamic forces as well as both normal and lateral (shear and friction) forces. The main reason for the greater simplicity of the SFA 2000 is that it operates on one central simple-cantilever spring to generate both coarse and fine motions over a total range of seven orders of magnitude (from millimeters to ångstroms). In addition, the SFA 2000 is more spacious and modular, so that new attachments and extra parts can easily be fitted for performing more extended types of experiments (e.g. extended strain friction experiments and higher-rate dynamic experiments) as well as traditionally non-SFA type experiments (e.g. scanning probe microscopy and atomic force microscopy) and for studying different types of systems.

  5. Advanced signal processing technique for damage detection in steel tubes

    NASA Astrophysics Data System (ADS)

    Amjad, Umar; Yadav, Susheel Kumar; Dao, Cac Minh; Dao, Kiet; Kundu, Tribikram

    2016-04-01

    In recent years, ultrasonic guided waves have gained attention for reliable testing and characterization of metals and composites. Guided wave modes are excited and detected by PZT (Lead Zirconate Titanate) transducers in either transmission or reflection mode. In this study, guided waves are excited and detected in the transmission mode and the phase change of the propagating wave modes is recorded. In most of the other studies reported in the literature, the change in the received signal strength (amplitude) is investigated with varying degrees of damage, while in this study the change in phase is correlated with the extent of damage. Feature extraction techniques are used for extracting phase and time-frequency information. The main advantage of this approach is that the bonding condition between the transducer and the specimen does not affect the phase, while it can affect the strength of the recorded signal. Therefore, if the specimen is not damaged but the transducer-specimen bonding has deteriorated, then the received signal strength is altered but the phase remains the same, and thus false-positive predictions of damage can be avoided.

  6. TDR Technique for Estimating the Intensity of Evapotranspiration of Turfgrasses

    PubMed Central

    Janik, Grzegorz; Wolski, Karol; Daniel, Anna; Albert, Małgorzata; Skierucha, Wojciech; Wilczek, Andrzej; Szyszkowski, Paweł; Walczak, Amadeusz

    2015-01-01

    The paper presents a method for precise estimation of evapotranspiration of selected turfgrass species. The evapotranspiration functions, whose domains are only two relatively easy-to-measure parameters, were developed separately for each of the grass species. Those parameters are the temperature and the volumetric moisture of soil at the depth of 2.5 cm. Evapotranspiration has the character of a modified logistic function with empirical parameters. It assumes the form ETR(θ2.5 cm, T2.5 cm) = A/(1 + B · e−C·(θ2.5 cm · T2.5 cm)), where: ETR(θ2.5 cm, T2.5 cm) is evapotranspiration [mm·h−1], θ2.5 cm is volumetric moisture of soil at the depth of 2.5 cm [m3·m−3], T2.5 cm is soil temperature at the depth of 2.5 cm [°C], and A, B, and C are empirical coefficients calculated individually for each of the grass species, with units [mm·h−1], [—], and [(m3·m−3·°C)−1], respectively. The values of evapotranspiration calculated on the basis of the presented function can be used as input data for the design of automatic irrigation control systems ensuring optimum moisture conditions in the active layer of lawn swards. PMID:26448964
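Evaluating the fitted function is direct once the per-species coefficients are known; the coefficient values in the usage note below are placeholders, since A, B, and C are fitted individually for each grass species:

```python
import math

def etr(theta, temp, a, b, c):
    # Modified logistic form from the paper:
    #   ETR(theta, T) = A / (1 + B * exp(-C * theta * T))   [mm/h]
    # theta: volumetric soil moisture at 2.5 cm [m^3/m^3]
    # temp:  soil temperature at 2.5 cm [deg C]
    return a / (1.0 + b * math.exp(-c * theta * temp))
```

ETR rises monotonically with the product theta * T and saturates at A, so A sets the species' maximum hourly evapotranspiration while B and C shape the transition.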

  7. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.; Stebbins, J. P.; Smith, A. W.; Pullen, K. E.

    1973-01-01

    A method for the prediction of propellant-material compatibility for periods of time up to ten years is presented. Advanced sensitive measurement techniques used in the prediction method are described. These include: neutron activation analysis, radioactive tracer technique, and atomic absorption spectroscopy with a graphite tube furnace sampler. The results of laboratory tests performed to verify the prediction method are presented.

  8. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  9. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss "small-group apprenticeships (SGAs)" as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments…

  10. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research.

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    Discusses small-group apprenticeships (SGAs) as a method for introducing cell culture techniques to high school participants. Teaches cell culture practices and introduces advance imaging techniques to solve various biomedical engineering problems. Clarifies and illuminates the value of small-group laboratory apprenticeships. (Author/KHR)

  11. Endoscopic therapy for early gastric cancer: Standard techniques and recent advances in ESD

    PubMed Central

    Kume, Keiichiro

    2014-01-01

    The technique of endoscopic submucosal dissection (ESD) is now a well-known endoscopic therapy for early gastric cancer. ESD was introduced to resect large specimens of early gastric cancer in a single piece. ESD can provide precision of histologic diagnosis and can also reduce the recurrence rate. However, the drawback of ESD is its technical difficulty, and, consequently, it is associated with a high rate of complications, the need for advanced endoscopic techniques, and a lengthy procedure time. Various advances in the devices and techniques used for ESD have contributed to overcoming these drawbacks. PMID:24914364

  12. Geostatistical characterization of the soil of Aguascalientes, México, by using spatial estimation techniques.

    PubMed

    Magdaleno-Márquez, Ricardo; de la Luz Pérez-Rea, María; Castaño, Víctor M

    2016-01-01

    Four spatial estimation techniques available in commercial computational packages are evaluated and compared, namely: regularized splines interpolation, tension splines interpolation, inverse distance weighted interpolation, and ordinary Kriging estimation, in order to establish the best representation of the shallow stratigraphic configuration in the city of Aguascalientes, in Central Mexico. Data from 478 sample points were used, along with the software ArcGIS (Environmental Systems Research Institute, Inc. (ESRI), ArcGIS, ver. 9.3, Redlands, California 2008), to calculate the spatial estimates. Each technique was evaluated based on the root mean square error, calculated by validating the generated estimates against measured data from 64 sample points that were not used in the spatial estimation process. The present study shows that, for the estimation of the hard-soil layer, ordinary Kriging offered the best performance among the evaluated techniques. PMID:27386362
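The validation step generalizes to any of the four estimators: interpolate at held-out points and compute the root mean square error against the measured values. A minimal sketch using inverse distance weighting (one of the four techniques compared), with made-up coordinates rather than the Aguascalientes data:

```python
import math

def idw(known, x, y, power=2):
    # known: list of (xi, yi, zi) sample points; inverse-distance-weighted
    # estimate at (x, y).  An exact hit returns the measured value.
    num = den = 0.0
    for xi, yi, zi in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi
        w = 1.0 / d2 ** (power / 2.0)
        num += w * zi
        den += w
    return num / den

def rmse(known, held_out, power=2):
    # Root mean square error over validation points not used for fitting.
    errs = [(idw(known, x, y, power) - z) ** 2 for x, y, z in held_out]
    return math.sqrt(sum(errs) / len(errs))
```

The same `rmse` harness would score splines or kriging surfaces by swapping out the interpolator, which is exactly the comparison the study performs with its 64 withheld points.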

  13. Use of environmental isotope tracer and GIS techniques to estimate basin recharge

    NASA Astrophysics Data System (ADS)

    Odunmbaku, Abdulganiu A. A.

    The extensive use of ground water only began with the advances in pumping technology in the early portion of the 20th century. Groundwater provides the majority of the fresh water supply for municipal, agricultural and industrial uses, primarily because of the little to no treatment it requires. Estimating the volume of groundwater available in a basin is a daunting task, and no accurate measurements can be made. Usually water budgets and simulation models are used to estimate the volume of water in a basin. Precipitation, land surface cover and subsurface geology are factors that affect recharge; these factors affect percolation, which in turn affects groundwater recharge. Depending on precipitation, soil chemistry, groundwater chemical composition, gradient and depth, the age and rate of recharge can be estimated. The present research proposes to estimate the recharge in the Mimbres, Tularosa and Diablo Basins using the chloride environmental isotope, the chloride mass-balance approach and GIS, and to determine the effect of elevation on recharge rate. The Mimbres and Tularosa Basins are located in southern New Mexico and extend southward into Mexico; the Diablo Basin is located in Texas and extends southward. This research utilizes the chloride mass-balance approach to estimate the recharge rate from groundwater data collected from wells, and from precipitation. The data were analysed statistically to eliminate duplication, outliers, and incomplete data. Cluster analysis, Piper diagrams and statistical significance tests were performed on the groundwater parameters; the infiltration rate was determined using the chloride mass-balance technique. The data were then analysed spatially using ArcGIS 10. Regions of active recharge were identified in the Mimbres and Diablo Basins, but could not be clearly identified in the Tularosa Basin.
CMB recharge for Tularosa Basin yields 0.04037mm/yr (0.0016in/yr), Diablo Basin was 0.047mm/yr (0.0016 in/yr), and 0.2153mm/yr (0.00848in
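The chloride mass-balance calculation itself is a one-liner: at steady state the chloride flux deposited by precipitation equals the flux carried away by recharge, so R = P × Cl_precip / Cl_gw. The concentrations in the usage note are illustrative values, not the basin measurements:

```python
def cmb_recharge(precip_mm_yr, cl_precip_mg_l, cl_groundwater_mg_l):
    # Chloride mass balance: recharge water carries all chloride deposited
    # by precipitation, so the enrichment of chloride in groundwater
    # relative to precipitation reflects the recharge fraction.
    #   R [mm/yr] = P [mm/yr] * Cl_precip / Cl_gw
    return precip_mm_yr * cl_precip_mg_l / cl_groundwater_mg_l
```

For example, 250 mm/yr of precipitation at 0.4 mg/L chloride over groundwater at 2000 mg/L gives 0.05 mm/yr, the same sub-0.1 mm/yr order as the Tularosa and Diablo estimates quoted above: tiny recharge rates arise when groundwater chloride is thousands of times more concentrated than precipitation.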

  14. Parameter estimation and tests of General Relativity with GW transients in Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Vitale, Salvatore

    2016-03-01

    The Advanced LIGO observatories have successfully completed their first observation run. Data were collected from September 2015 to January 2016, with a sensitivity a few times better than initial instruments in the hundreds of Hertz band. Bayesian parameter estimation and model selection algorithms can be used to estimate the astrophysical parameters of gravitational-wave sources, as well as to perform tests of General Relativity in its strong-field dynamical regime. In this talk we will describe the methods devised to characterize transient gravitational wave sources and their applications in the advanced gravitational-wave detector era.

  15. Recent Advances in Stable Isotope Techniques for N2O Source Partitioning in Soils

    NASA Astrophysics Data System (ADS)

    Baggs, E.; Mair, L.; Mahmood, S.

    2007-12-01

    The use of 13C, 15N and 18O enables us to overcome uncertainties associated with soil C and N processes and to assess the links between species diversity and ecosystem function. Recent advances in stable isotope techniques enable determination of process rates, and are fundamental for examining interactions between C and N cycles. Here we will introduce the 15N-, 18O- and 13C-enrichment techniques we have developed to distinguish between different N2O-producing processes in situ in soils, presenting selected results, and will critically assess their potential, alone and in combination with molecular techniques, to help address key research questions for soil biogeochemistry and microbial ecology. We have developed 15N- 18O-enrichment techniques to distinguish between, and to quantify, N2O production during ammonia oxidation, nitrifier denitrification and denitrification. This provides a great advantage over natural abundance approaches as it enables quantification of N2O from each microbial source, which can be coupled with quantification of N2 production, and used to examine interactions between different processes and cycles. These approaches have also provided new insights into the N cycle and how it interacts with the C cycle. For example, we now know that ammonia oxidising bacteria significantly contribute to N2O emissions from soils, both via the traditionally accepted ammonia oxidation pathway, and also via denitrification (nitrifier denitrification) which can proceed even under aerobic conditions. We are also linking emissions from each source to diversity and activity of relevant microbial functional groups, for example through the development and application of a specific nirK primer for the nitrite reductase in ammonia oxidising bacteria. 
Recently, isotopomers have been proposed as an alternative for source partitioning of N2O at natural abundance levels; they offer the potential to investigate N2O production from nitrate ammonification, and overcome the

  16. A Time Series Separation and Reconstruction (TSSR) Technique to Estimate Daily Suspended Sediment Concentrations

    EPA Science Inventory

    High suspended sediment concentrations (SSCs) from natural and anthropogenic sources are responsible for biological impairments of many streams, rivers, lakes, and estuaries, but techniques to estimate sediment concentrations or loads accurately at the daily temporal resolution a...

  17. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    EPA Science Inventory

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  18. A GIS TECHNIQUE FOR ESTIMATING NATURAL ATTENUATION RATES AND MASS BALANCES: JOURNAL ARTICLE

    EPA Science Inventory

    NRMRL-ADA-01308 Durant, ND, Srinivasan, P, Faust, CR, Burnell, DK, Klein, KL, and Burden*, D.S. A GIS Technique for Estimating Natural Attenuation Rates and Mass Balances. Battelle's Sixth International ...

  19. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of the mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = sum(i=1 to m) p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, neither of these normal-based techniques provides satisfactory results.
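When the component densities are known and only the mixing proportions are estimated, the ML solution reduces to a simple EM iteration. This sketch assumes two known normal components and is not the authors' code:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_proportions(data, components, iters=50):
    # components: list of (mu, sigma) pairs; the densities f_i are fixed
    # and only the mixing proportions p_i are estimated (ML via EM).
    m = len(components)
    p = [1.0 / m] * m
    for _ in range(iters):
        totals = [0.0] * m
        for x in data:
            # E-step: responsibility of each component for sample x
            f = [p[i] * normal_pdf(x, *components[i]) for i in range(m)]
            s = sum(f)
            for i in range(m):
                totals[i] += f[i] / s
        # M-step: proportions are the average responsibilities
        p = [t / len(data) for t in totals]
    return p
```

An MD estimator would instead minimize a distance (e.g. Cramér-von Mises) between the empirical CDF and the mixture CDF over the p_i, which is what gives it robustness to symmetric departures from normality.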

  20. A Novel Microcharacterization Technique in the Measurement of Strain and Orientation Gradient in Advanced Materials

    NASA Technical Reports Server (NTRS)

    Garmestai, H.; Harris, K.; Lourenco, L.

    1997-01-01

    Representation of the morphology and evolution of the microstructure during processing, and their relation to properties, requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and in the matrix of a layered structure or a functionally gradient material, and their variation, are among the parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials, including the stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. These techniques are compared with the more conventional x-ray diffraction and indentation techniques.

  1. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

    Precise low-speed motion control is important for the industrial applications of both micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise position and instantaneous velocity measurement, as well as estimation of disturbances, which involve direct drive motor force ripple, guideway friction, cutting force, etc. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The essentials of designing these three disturbance estimators are introduced. For designing time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression performance. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators give not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters can give unbiased estimates of a slowly varying disturbance and the instantaneous velocity, while the time-delayed estimators cannot. Simulation and experimental results from the X axis of a 2.5-axis prototype micro-milling machine are provided.
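The state-augmentation idea can be sketched for a one-axis drive: append a slowly varying disturbance d to the state vector and let the Kalman filter estimate it alongside velocity. The scalar model, tuning values, and function name below are illustrative assumptions, not the paper's machine model:

```python
def kf_disturbance(us, ys, dt, q_d=1e-4, r=0.01):
    # State x = [v, d]: axis velocity plus an augmented, slowly varying
    # disturbance d.  Dynamics: v' = u + d, d modeled as a random walk
    # with variance q_d per step.  Measurement: y = v + noise (variance r).
    v, d = 0.0, 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]
    for u, y in zip(us, ys):
        # Predict with F = [[1, dt], [0, 1]], Q = diag(0, q_d)
        v = v + (u + d) * dt
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1],
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q_d]]
        # Update with H = [1, 0]
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        e = y - v
        v, d = v + k0 * e, d + k1 * e
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return v, d
```

Because d is driven only by process noise q_d, the filter converges to an unbiased estimate of a slowly varying disturbance, which is the advantage over the conventional disturbance observer that the abstract notes.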

  2. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    NASA Technical Reports Server (NTRS)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    Studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.

  3. A high-bandwidth amplitude estimation technique for dynamic mode atomic force microscopy.

    PubMed

    Karvinen, K S; Moheimani, S O R

    2014-02-01

    While often overlooked, one of the prerequisites for high-speed amplitude modulation atomic force microscopy is a high-bandwidth amplitude estimation technique. Conventional techniques, such as RMS to DC conversion and the lock-in amplifier, have proven useful, but offer limited measurement bandwidth and are not suitable for high-speed imaging. Several groups have developed techniques, but many of these are either difficult to implement or lack robustness. In this contribution, we briefly outline existing amplitude estimation methods and propose a new high-bandwidth estimation technique, inspired by techniques employed in microwave and RF circuit design, which utilizes phase cancellation to significantly improve the performance of the lock-in amplifier. We conclude with the design and implementation of a custom circuit to experimentally demonstrate the improvements and discuss its application in high-speed and multifrequency atomic force microscopy. PMID:24593371
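    The lock-in amplifier that this record takes as its baseline can be illustrated in a few lines: the noisy carrier is mixed with quadrature references and low-pass filtered, and the amplitude follows from the I/Q pair. The carrier frequency, amplitude, and filter length below are illustrative, not the paper's values.

```python
import numpy as np

# Minimal lock-in amplitude estimator (illustrative numbers): demodulate a
# noisy carrier with quadrature references and a boxcar low-pass filter;
# A = 2*sqrt(I^2 + Q^2) recovers the carrier amplitude.
fs, f0, A_true = 1e6, 50e3, 0.8        # sample rate, carrier, amplitude [a.u.]
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(2)
signal = A_true * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(0, 0.1, t.size)

i_mix = signal * np.sin(2 * np.pi * f0 * t)    # in-phase mixing
q_mix = signal * np.cos(2 * np.pi * f0 * t)    # quadrature mixing
w = int(fs / f0) * 10                          # average over 10 carrier cycles
I, Q = i_mix[-w:].mean(), q_mix[-w:].mean()    # crude low-pass (boxcar)
A_est = 2 * np.hypot(I, Q)
print(A_est)   # close to 0.8
```

    The averaging window sets the measurement bandwidth: a longer boxcar rejects more noise but responds more slowly to amplitude changes, which is exactly the trade-off the proposed high-bandwidth technique targets.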

  4. A high-bandwidth amplitude estimation technique for dynamic mode atomic force microscopy

    SciTech Connect

    Karvinen, K. S.; Moheimani, S. O. R.

    2014-02-15

    While often overlooked, one of the prerequisites for high-speed amplitude modulation atomic force microscopy is a high-bandwidth amplitude estimation technique. Conventional techniques, such as RMS to DC conversion and the lock-in amplifier, have proven useful, but offer limited measurement bandwidth and are not suitable for high-speed imaging. Several groups have developed techniques, but many of these are either difficult to implement or lack robustness. In this contribution, we briefly outline existing amplitude estimation methods and propose a new high-bandwidth estimation technique, inspired by techniques employed in microwave and RF circuit design, which utilizes phase cancellation to significantly improve the performance of the lock-in amplifier. We conclude with the design and implementation of a custom circuit to experimentally demonstrate the improvements and discuss its application in high-speed and multifrequency atomic force microscopy.

  5. Sinogram smoothing techniques for myocardial blood flow estimation from dose-reduced dynamic computed tomography

    PubMed Central

    Modgil, Dimple; Alessio, Adam M.; Bindschadler, Michael D.; La Rivière, Patrick J.

    2014-01-01

    Dynamic contrast-enhanced computed tomography (CT) could provide an accurate and widely available technique for myocardial blood flow (MBF) estimation to aid in the diagnosis and treatment of coronary artery disease. However, one of its primary limitations is the radiation dose imparted to the patient. We are exploring techniques to reduce the patient dose by either reducing the tube current or by reducing the number of temporal frames in the dynamic CT sequence. Both of these dose reduction techniques result in noisy data. In order to extract the MBF information from the noisy acquisitions, we have explored several data-domain smoothing techniques. In this work, we investigate two specific smoothing techniques: sinogram restoration in both the spatial and temporal domains, and use of the Karhunen-Loeve (KL) transform to provide temporal smoothing in the sinogram domain. The KL transform smoothing technique has been previously applied to dynamic image sequences in positron emission tomography. We apply a quantitative two-compartment blood flow model to estimate MBF from the time-attenuation curves and determine which smoothing method provides the most accurate MBF estimates in a series of simulated dynamic contrast-enhanced cardiac CT acquisitions at different dose levels. As measured by root mean square percentage error (%RMSE) in MBF estimates, sinogram smoothing generally provides the best MBF estimates, except at the lowest simulated dose levels (tube current = 25 mAs, 2 or 3 s temporal spacing), where the KL transform method provides the best MBF estimates. The KL transform technique provides improved MBF estimates compared to conventional processing only at very low doses (< 7 mSv). Results suggest that the proposed smoothing techniques could provide high fidelity MBF information and allow for substantial radiation dose savings. PMID:25642441
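    The KL-transform temporal smoothing idea can be sketched on synthetic data: each sinogram bin traces a smooth time-attenuation curve, so the eigenvector (KL) transform of the temporal covariance concentrates the signal in a few components, and zeroing the rest suppresses noise. The curve shapes, sizes, and noise level below are synthetic, not the paper's data.

```python
import numpy as np

# Sketch of KL-domain temporal smoothing (synthetic data): project the noisy
# time series onto the leading eigenvectors of the temporal covariance and
# discard the trailing, noise-dominated components.
rng = np.random.default_rng(3)
n_bins, n_frames = 500, 30
t = np.linspace(0, 1, n_frames)
clean = np.outer(rng.uniform(0.5, 2.0, n_bins), t * np.exp(-3 * t))  # TACs
noisy = clean + rng.normal(0, 0.05, clean.shape)

C = np.cov(noisy, rowvar=False)            # temporal covariance (30 x 30)
w, V = np.linalg.eigh(C)                   # eigenvalues in ascending order
keep = V[:, -3:]                           # retain 3 leading KL components
smoothed = (noisy - noisy.mean(0)) @ keep @ keep.T + noisy.mean(0)

err_noisy = np.sqrt(((noisy - clean) ** 2).mean())
err_kl = np.sqrt(((smoothed - clean) ** 2).mean())
print(err_noisy, err_kl)   # KL smoothing reduces the RMS error
```

    The smoothed curves would then feed the two-compartment model fit; retaining too few components biases the curves, retaining too many readmits noise.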

  6. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. The performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.
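    The maximum-likelihood decision rule underlying these receiver structures has a simple form on an additive white Gaussian noise channel: with equiprobable signals, ML detection reduces to choosing the candidate waveform closest in Euclidean distance to the received vector. The toy QPSK-like constellation and noise level below are illustrative only.

```python
import numpy as np

# Toy illustration of the maximum-likelihood decision rule on an AWGN
# channel: pick the candidate signal vector minimizing ||r - s||^2.
rng = np.random.default_rng(4)
candidates = np.array([[1.0, 1.0], [1.0, -1.0],
                       [-1.0, 1.0], [-1.0, -1.0]])   # QPSK-like signal set

def ml_decide(r):
    """Return the index of the candidate closest to the received vector r."""
    return int(np.argmin(((candidates - r) ** 2).sum(axis=1)))

errors, n = 0, 20000
for _ in range(n):
    k = rng.integers(4)
    r = candidates[k] + rng.normal(0, 0.4, 2)        # transmitted + noise
    errors += ml_decide(r) != k
print(errors / n)   # empirical symbol error rate
```

    Bounds such as the channel cutoff rate and the bit error probability transfer function bound mentioned in the abstract predict how this empirical error rate scales with signal-to-noise ratio.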

  7. A Straightforward Method for Advance Estimation of User Charges for Information in Numeric Databases.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo

    1986-01-01

    Describes a method for advance estimation of user charges for queries in relational data model-based numeric databases when charges are based on data retrieved. Use of this approach is demonstrated by sample queries to an imaginary marketing database. The principles and methods of this approach and its relevance are discussed. (MBR)

  8. Estimating steatosis and fibrosis: Comparison of acoustic structure quantification with established techniques

    PubMed Central

    Karlas, Thomas; Berger, Joachim; Garnov, Nikita; Lindner, Franziska; Busse, Harald; Linder, Nicolas; Schaudinn, Alexander; Relke, Bettina; Chakaroun, Rima; Tröltzsch, Michael; Wiegand, Johannes; Keim, Volker

    2015-01-01

    AIM: To compare ultrasound-based acoustic structure quantification (ASQ) with established non-invasive techniques for grading and staging fatty liver disease. METHODS: Type 2 diabetic patients at risk of non-alcoholic fatty liver disease (n = 50) and healthy volunteers (n = 20) were evaluated using laboratory analysis and anthropometric measurements, transient elastography (TE), controlled attenuation parameter (CAP), proton magnetic resonance spectroscopy (1H-MRS; only available for the diabetic cohort), and ASQ. ASQ parameters mode, average and focal disturbance (FD) ratio were compared with: (1) the extent of liver fibrosis estimated from TE and non-alcoholic fatty liver disease (NAFLD) fibrosis scores; and (2) the amount of steatosis, which was classified according to CAP values. RESULTS: Forty-seven diabetic patients (age 67.0 ± 8.6 years; body mass index 29.4 ± 4.5 kg/m²) with reliable CAP measurements and all controls (age 26.5 ± 3.2 years; body mass index 22.0 ± 2.7 kg/m²) were included in the analysis. All ASQ parameters showed differences between healthy controls and diabetic patients (P < 0.001, respectively). The ASQ FD ratio (logarithmic) correlated with the CAP (r = -0.81, P < 0.001) and 1H-MRS (r = -0.43, P = 0.004) results. The FD ratio [CAP < 250 dB/m: 107 (102-109), CAP between 250 and 300 dB/m: 106 (102-114); CAP between 300 and 350 dB/m: 105 (100-112), CAP ≥ 350 dB/m: 102 (99-108)] as well as mode and average parameters, were reduced in cases with advanced steatosis (ANOVA P < 0.05). However, none of the ASQ parameters showed a significant difference in patients with advanced fibrosis, as determined by TE and the NAFLD fibrosis score (P > 0.08, respectively). CONCLUSION: ASQ parameters correlate with steatosis, but not with fibrosis in fatty liver disease. Steatosis estimation with ASQ should be further evaluated in biopsy-controlled studies. PMID:25945002

  9. Advanced combustion techniques for controlling NO sub x emissions of high altitude cruise aircraft

    NASA Technical Reports Server (NTRS)

    Rudey, R. A.; Reck, G. M.

    1976-01-01

    An array of experiments designed to explore the potential of advanced combustion techniques for controlling the emissions of aircraft into the upper atmosphere is discussed. Of particular concern are the oxides of nitrogen (NOx) emitted into the stratosphere. The experiments utilize a wide variety of approaches, varying from advanced combustor concepts to fundamental flame tube experiments. Results are presented which indicate that substantial reductions in cruise NOx emissions should be achievable in future aircraft engines. A major NASA program is described which focuses the many fundamental experiments into a planned evolution and demonstration of the prevaporized-premixed combustion technique in a full-scale engine.

  10. POC-Scale Testing of an Advanced Fine Coal Dewatering Equipment/Technique

    SciTech Connect

    Karekh, B K; Tao, D; Groppo, J G

    1998-08-28

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a moisture level of 20% will be an important step in the successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 45 months, beginning September 30, 1994. This report discusses technical progress made during the quarter from January 1 - March 31, 1998.

  11. Superconvergence of the derivative patch recovery technique and a posteriori error estimation

    SciTech Connect

    Zhang, Z.; Zhu, J.Z.

    1995-12-31

    The derivative patch recovery technique developed by Zienkiewicz and Zhu for the finite element method is analyzed. It is shown that, for one dimensional problems and two dimensional problems using tensor product elements, the patch recovery technique yields superconvergence recovery for the derivatives. Consequently, the error estimator based on the recovered derivative is asymptotically exact.
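    The superconvergence result can be illustrated in one dimension: the slope of a piecewise-linear interpolant is O(h) accurate in general, but averaging the two element slopes adjacent to an interior node (the simplest patch recovery on a uniform mesh) yields an O(h^2), i.e. superconvergent, nodal derivative. This is a toy analogue of the Zienkiewicz-Zhu procedure, not their full least-squares patch fit.

```python
import numpy as np

# 1D illustration of derivative patch recovery on a uniform mesh: averaging
# the two element slopes meeting at an interior node recovers a nodal
# derivative that is markedly more accurate than either element slope.
h = 0.05
x = np.arange(0, 1 + h / 2, h)
u = np.sin(x)                                 # nodal values of u(x) = sin(x)
slopes = np.diff(u) / h                       # element-wise derivative, O(h)
recovered = 0.5 * (slopes[:-1] + slopes[1:])  # patch-recovered nodal values

exact = np.cos(x[1:-1])                       # exact derivative at interior nodes
err_element = np.abs(slopes[:-1] - exact).max()
err_recovered = np.abs(recovered - exact).max()
print(err_element, err_recovered)   # recovered error is much smaller
```

    The recovered derivative is precisely what an error estimator of Zienkiewicz-Zhu type compares against the raw finite element derivative, which is why superconvergence of the recovery makes the estimator asymptotically exact.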

  12. Modulation/demodulation techniques for satellite communications. Part 3: Advanced techniques. The nonlinear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the nonlinear satellite channel is presented. The underlying principle used throughout is the development of receiver structures based on the maximum likelihood decision rule and approximations to it. The bit error probability transfer function bounds, developed in great detail in Part 4, are applied to these modulation/demodulation techniques. The effects of various degrees of receiver mismatch are considered both theoretically and by numerous illustrative examples.

  13. Image enhancement and advanced information extraction techniques for ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.

    1975-01-01

    The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm, designed to overcome the acreage estimation inaccuracy caused by the coarse spatial resolution of the ERTS MSS, was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high contrast targets such as lakes and ponds.

  14. Estimation of infiltration parameters and the irrigation coefficients with the surface irrigation advance distance.

    PubMed

    Beibei, Zhou; Quanjiu, Wang; Shuai, Tan

    2014-01-01

    A theory based on the Manning roughness equation, the Philip equation and the water balance equation was developed which employs only the advance distance to calculate the infiltration parameters and irrigation coefficients in both border irrigation and surge irrigation. The improved procedure was validated with both border irrigation and surge irrigation experiments. The main results are as follows. The infiltration parameters of the Philip equation could be calculated accurately using only the water advance distance during the irrigation process, as verified against the experimental data. With the calculated parameters and the water balance equation, the irrigation coefficients were also estimated. The water advance velocity should be measured at about 0.5 m to 1.0 m behind the advancing water front in the experimental corn fields. PMID:25061664
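    Once the water-balance step has converted the measured advance distances into cumulative infiltration depths, fitting the Philip equation is a linear least-squares problem, since I(t) = S*sqrt(t) + A*t is linear in the sorptivity S and the steady rate A. The times and depths below are synthetic, and the water-balance conversion itself (the paper's contribution) is assumed already done.

```python
import numpy as np

# Hedged sketch: given opportunity times t and cumulative infiltration
# depths I derived from the advance data, recover the Philip parameters
# S (sorptivity) and A (steady infiltration rate) by linear least squares.
t = np.array([5, 10, 20, 40, 60, 90], float) * 60        # seconds
I_obs = 0.004 * np.sqrt(t) + 1.2e-5 * t                  # synthetic depths [m]

M = np.column_stack([np.sqrt(t), t])                     # design matrix
(S, A), *_ = np.linalg.lstsq(M, I_obs, rcond=None)
print(S, A)   # recovers S = 0.004 m/s^0.5 and A = 1.2e-5 m/s
```

    With noisy field data the same fit applies unchanged; only the residuals grow.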

  15. Improvement of color reproduction in color digital holography by using spectral estimation technique.

    PubMed

    Xia, Peng; Shimozato, Yuki; Ito, Yasunori; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2011-12-01

    We propose a color digital holography technique that uses spectral estimation to improve the color reproduction of objects. In conventional color digital holography, the holograms contain insufficient spectral information, and the color of the reconstructed images depends only on the reflectances at the three discrete wavelengths used in recording the holograms. The color-composite image of the three reconstructed images is therefore not accurate in color reproduction. In our proposed method, a spectral estimation technique reported in multispectral imaging is applied: the continuous spectrum of the object can be estimated and the color reproduction improved. The effectiveness of the proposed method was confirmed by a numerical simulation and an experiment, in which the average color differences decreased from 35.81 to 7.88 and from 43.60 to 25.28, respectively. PMID:22193005
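    The spectral estimation idea borrowed from multispectral imaging can be sketched as a learned linear mapping: an estimation matrix fit on training reflectance spectra maps the three recorded reflectances to a continuous spectrum. The wavelength grid, band positions, and synthetic smooth-spectrum family below are assumptions for illustration, not the paper's data.

```python
import numpy as np

# Hedged sketch of linear spectral estimation from three wavelength samples:
# fit a 3-band -> full-spectrum mapping on (synthetic) training spectra,
# then reconstruct an unseen spectrum from its three sampled reflectances.
rng = np.random.default_rng(5)
wl = np.linspace(400, 700, 31)              # wavelength grid [nm]
bands = np.array([5, 15, 25])               # indices of the 3 recording wavelengths

def smooth_spectrum():
    c = rng.uniform(-0.3, 0.3, 3)
    return 0.5 + c[0] * np.sin(wl / 90) + c[1] * np.cos(wl / 130) + c[2] * (wl / 700)

train = np.array([smooth_spectrum() for _ in range(200)])   # (200, 31)
R = train[:, bands]                                         # 3-band samples

# Centered least-squares fit of the estimation matrix W.
W, *_ = np.linalg.lstsq(R - R.mean(0), train - train.mean(0), rcond=None)

test_s = smooth_spectrum()
est = train.mean(0) + (test_s[bands] - R.mean(0)) @ W
rmse = float(np.sqrt(((est - test_s) ** 2).mean()))
print(rmse)   # essentially zero for this smooth low-dimensional family
```

    Real reflectance spectra are not exactly low-dimensional, so in practice the reconstruction error is nonzero but still far smaller than treating the three samples as the whole spectrum.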

  16. Feasibility Studies of Applying Kalman Filter Techniques to Power System Dynamic State Estimation

    SciTech Connect

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jarek

    2007-08-01

    The lack of dynamic information in power system operations is mainly attributable to the static modeling of traditional state estimation, which is the basis driving many other operations functions. This paper investigates the feasibility of applying Kalman filter techniques to enable the inclusion of dynamic modeling in the state estimation process and the estimation of power system dynamic states. The proposed Kalman-filter-based dynamic state estimation is tested on a multi-machine system with both large and small disturbances. Sensitivity studies of the dynamic state estimation performance with respect to measurement characteristics (sampling rate and noise level) are presented as well. The study results show that there is a promising path forward to implementing Kalman-filter-based dynamic state estimation with the emerging phasor measurement technologies.
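    The noise-level sensitivity study can be mimicked on a toy second-order system: a Kalman filter tracks a rotor-angle/speed pair from noisy angle measurements, and rerunning the filter with a larger measurement variance shows the estimates degrading. The dynamics matrix, sampling rate, and noise levels below are hypothetical stand-ins, not the multi-machine model of the paper.

```python
import numpy as np

# Sketch of dynamic state estimation on a toy swing-equation-like system:
# a Kalman filter estimates angle and speed from noisy angle measurements;
# increasing the measurement noise R degrades the speed estimate.
dt = 0.02                                            # phasor-rate sampling [s]
A = np.array([[1, dt], [-0.5 * dt, 1 - 0.1 * dt]])   # linearized, lightly damped
H = np.array([[1.0, 0.0]])                           # only the angle is measured
Q = np.diag([1e-8, 1e-6])

def run(meas_std, steps=2000, seed=6):
    """Return the RMS error of the speed estimate for a given noise level."""
    rng = np.random.default_rng(seed)
    R = np.array([[meas_std ** 2]])
    x_true = np.array([[0.3], [0.0]])
    x, P = np.zeros((2, 1)), np.eye(2)
    sq_err = 0.0
    for _ in range(steps):
        x_true = A @ x_true
        z = x_true[0, 0] + rng.normal(0, meas_std)
        x, P = A @ x, A @ P @ A.T + Q            # predict
        K = P @ H.T / (H @ P @ H.T + R)          # update (scalar innovation)
        x = x + K * (z - (H @ x)[0, 0])
        P = (np.eye(2) - K @ H) @ P
        sq_err += float((x[1, 0] - x_true[1, 0]) ** 2)
    return float(np.sqrt(sq_err / steps))

print(run(1e-3), run(1e-2))   # estimation error grows with measurement noise
```

    The same pattern holds for sampling rate: coarser sampling enlarges the prediction step and with it the steady-state error, which is the other axis of the paper's sensitivity study.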

  17. Characterization techniques for semiconductors and nanostructures: a review of recent advances

    NASA Astrophysics Data System (ADS)

    Acher, Olivier

    2015-01-01

    Optical spectroscopy techniques are widely used for the characterization of semiconductors and nanostructures. Confocal Raman microscopy is useful to retrieve chemical and molecular information at the ultimate submicrometer resolution of optical microscopy. Fast imaging capabilities, 3D confocal ability, and multiple excitation wavelengths have increased the power of the technique while making it simpler to use for material scientists. Recently, the development of Tip Enhanced Raman Spectroscopy (TERS) has opened the way to the use of Raman information at the nanoscale, by combining the resolution of scanning probe microscopy with the chemical selectivity of Raman spectroscopy. Significant advances have been reported in the field of profiling the atomic composition of multilayers using the Glow Discharge Optical Emission Spectroscopy technique, including real-time determination of etched depth by interferometry. This allows the construction of precise atomic profiles of sophisticated multilayers with a few nm resolution. Ellipsometry is another widely used technique to determine the profile of multilayers, and recent developments have provided enhanced spatial resolution useful for the investigation of patterned materials. In addition to the advances in the different characterization techniques, the capability to observe the same regions at micrometer scale at different stages of material elaboration, or with different instruments, is becoming a critical issue. Several advances have been made to allow precise re-localization and co-localization of observations with different complementary characterization techniques.

  18. Inverse problem solution techniques as applied to indirect in situ estimation of fish target strength.

    PubMed

    Stepnowski, A; Moszyński, M

    2000-05-01

    In situ indirect methods of fish target strength (TS) estimation are analyzed in terms of the inverse techniques recently applied to the problem in question. The solution of this problem requires finding the unknown probability density function (pdf) of fish target strength from acoustic echoes, which can be estimated by solving the integral equation, relating pdf's of echo variable, target strength, and beam pattern of the echosounder transducer. In the first part of the paper the review of existing indirect in situ TS-estimation methods is presented. The second part introduces the novel TS-estimation methods, viz.: Expectation, Maximization, and Smoothing (EMS), Windowed Singular Value Decomposition (WSVD), Regularization and Wavelet Decomposition, which are compared using simulations as well as actual data from acoustic surveys. The survey data, acquired by the dual-beam digital echosounder, were thoroughly analyzed by numerical algorithms and the target strength and acoustical backscattering length pdf's estimates were calculated from fish echoes received in the narrow beam channel of the echosounder. Simultaneously, the estimates obtained directly from the dual-beam system were used as a reference for comparison of the estimates calculated by the newly introduced inverse techniques. The TS estimates analyzed in the paper are superior to those obtained from deconvolution or other conventional techniques, as the newly introduced methods partly avoid the problem of ill-conditioned equations and matrix inversion. PMID:10830379
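    The windowed SVD idea can be illustrated on a generic ill-conditioned discretized integral equation: a naive inverse amplifies measurement noise through the smallest singular values, while truncating them stabilizes the estimate. The smoothing kernel below is a toy stand-in for the echosounder beam-pattern kernel of the paper.

```python
import numpy as np

# Toy windowed/truncated SVD inverse for an ill-conditioned integral
# equation g = K f + noise: keep only singular values above a threshold.
rng = np.random.default_rng(7)
n = 60
x = np.linspace(0, 1, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)    # smoothing kernel
f_true = np.exp(-((x - 0.5) ** 2) / 0.01)                # target density shape
g = K @ f_true + rng.normal(0, 1e-3, n)                  # noisy observation

U, s, Vt = np.linalg.svd(K)
coeffs = U.T @ g
f_naive = Vt.T @ (coeffs / s)                    # full (unstable) inverse
k = int((s > 1e-2 * s[0]).sum())                 # truncation window
f_tsvd = Vt[:k].T @ (coeffs[:k] / s[:k])         # windowed SVD solution

rel = lambda f: np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
print(rel(f_naive), rel(f_tsvd))   # truncation reduces the error drastically
```

    The truncation threshold plays the same stabilizing role as the regularization parameter in the Tikhonov and wavelet approaches the abstract lists: all three suppress the noise-dominated components of the inverse.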

  19. A time series deformation estimation in the NW Himalayas using SBAS InSAR technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Venkataraman, G.

    2012-12-01

    A time series land deformation study of the north-western Himalayan region is presented. Synthetic aperture radar (SAR) interferometry (InSAR) is an important tool for measuring land displacement caused by different geological processes [1]. Frequent spatial and temporal decorrelation in the Himalayan region is a strong impediment to precise deformation estimation using the conventional interferometric SAR approach. In such cases, advanced DInSAR approaches such as PSInSAR and the Small Baseline Subset (SBAS) method can be used to estimate earth surface deformation. The SBAS technique [2] is a DInSAR approach which uses twelve or more repeat SAR acquisitions in different combinations of properly chosen data subsets to generate DInSAR interferograms using a two-pass interferometric approach, finally leading to mean deformation velocity maps and displacement time series. Herein, the SBAS algorithm has been used for time series deformation estimation in the NW Himalayan region. ENVISAT ASAR IS2 swath data from 2003 to 2008 have been used for quantifying slow deformation. The Himalayan region is a very active tectonic belt, and active orogeny plays a significant role in the land deformation process [3]. The geomorphology of the region is unique and reacts adversely to climate change, bringing landslides and subsidence. Settlements on the hill slopes are prone to landslides, landslips, rockslides and soil creep. These hazards have hampered the overall progress of the region: they obstruct roads and the flow of traffic, break communication, block flowing water in streams, create temporary reservoirs, and bring down soil cover that adds enormous silt and gravel to the streams. It has been observed that average deformation varies from -30.0 mm/year to 10 mm/year in the NW Himalayan region. References [1] Massonnet, D., Feigl, K.L., Rossi, M. and Adragna, F. (1994) Radar interferometry mapping of

  20. 75 FR 81643 - In the Matter of Certain Semiconductor Products Made by Advanced Lithography Techniques and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... certain claims of U.S. Patent No. 6,042,998. 75 FR. 44,015 (July 27, 2010). The complaint named two... COMMISSION In the Matter of Certain Semiconductor Products Made by Advanced Lithography Techniques and... for ] importation, and sale within the United States after importation of certain...

  1. Advanced Diffusion-Weighted Magnetic Resonance Imaging Techniques of the Human Spinal Cord

    PubMed Central

    Andre, Jalal B.; Bammer, Roland

    2012-01-01

    Unlike those of the brain, advances in diffusion-weighted imaging (DWI) of the human spinal cord have been challenged by the more complicated and inhomogeneous anatomy of the spine, the differences in magnetic susceptibility between adjacent air and fluid-filled structures and the surrounding soft tissues, and the inherent limitations of the initially used echo-planar imaging techniques used to image the spine. Interval advances in DWI techniques for imaging the human spinal cord, with the specific aims of improving the diagnostic quality of the images, and the simultaneous reduction in unwanted artifacts have resulted in higher-quality images that are now able to more accurately portray the complicated underlying anatomy and depict pathologic abnormality with improved sensitivity and specificity. Diffusion tensor imaging (DTI) has benefited from the advances in DWI techniques, as DWI images form the foundation for all tractography and DTI. This review provides a synopsis of the many recent advances in DWI of the human spinal cord, as well as some of the more common clinical uses for these techniques, including DTI and tractography. PMID:22158130

  2. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  3. Recognizing and Managing Complexity: Teaching Advanced Programming Concepts and Techniques Using the Zebra Puzzle

    ERIC Educational Resources Information Center

    Crabtree, John; Zhang, Xihui

    2015-01-01

    Teaching advanced programming can be a challenge, especially when the students are pursuing different majors with diverse analytical and problem-solving capabilities. The purpose of this paper is to explore the efficacy of using a particular problem as a vehicle for imparting a broad set of programming concepts and problem-solving techniques. We…

  4. Fabrication of advanced electrochemical energy materials using sol-gel processing techniques

    NASA Technical Reports Server (NTRS)

    Chu, C. T.; Chu, Jay; Zheng, Haixing

    1995-01-01

    Advanced materials play an important role in electrochemical energy devices such as batteries, fuel cells, and electrochemical capacitors. They are being used as both electrodes and electrolytes. Sol-gel processing is a versatile solution technique used in the fabrication of ceramic materials with tailored stoichiometry, microstructure, and properties. The application of sol-gel processing in the fabrication of advanced electrochemical energy materials will be presented. The potential of sol-gel-derived materials for electrochemical energy applications will be discussed along with some examples of successful applications. Sol-gel-derived metal oxide electrode materials such as V2O5 cathodes have been demonstrated in solid-state thin film batteries; solid electrolyte materials such as beta-alumina for advanced secondary batteries were prepared by the sol-gel technique long ago; and high surface area transition metal compounds for capacitive energy storage applications can also be synthesized with this method.

  5. A comparison of population air pollution exposure estimation techniques with personal exposure estimates in a pregnant cohort.

    PubMed

    Hannam, Kimberly; McNamee, Roseanne; De Vocht, Frank; Baker, Philip; Sibley, Colin; Agius, Raymond

    2013-08-01

    There is increasing evidence of the harmful effects for mother and fetus of maternal exposure to air pollutants. Most studies use large retrospective birth outcome datasets and make a best estimate of personal exposure (PE) during pregnancy periods. We compared estimates of personal NOx and NO2 exposure of pregnant women in the North West of England with exposure estimates derived using different modelling techniques. A cohort of 85 pregnant women was recruited from Manchester and Blackpool. Participants completed a time-activity log and questionnaire at 13-22 weeks gestation and were provided with personal Ogawa samplers to measure their NOx/NO2 exposure. PE was compared to monthly averages, the nearest stationary monitor to the participants' home, a weighted average of the closest monitors to the home and work locations, proximity to major roads, as well as to background modelled concentrations (DEFRA), inverse distance weighting (IDW), ordinary kriging (OK), and a land use regression model with and without temporal adjustment. PE was most strongly correlated with monthly adjusted DEFRA (NO2r = 0.61, NOxr = 0.60), OK and IDW (NO2r = 0.60; NOxr = 0.62) concentrations. Correlations were stronger in Blackpool than in Manchester. Where there is evidence of high temporal variability in exposure, estimation methods that rely solely on spatial information should be adjusted temporally, with the improvement in estimation expected to grow as temporal variability increases. PMID:23800727

  6. An Automated Technique for Estimating Daily Precipitation over the State of Virginia

    NASA Technical Reports Server (NTRS)

    Follansbee, W. A.; Chamberlain, L. W., III

    1981-01-01

    Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24 hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. To present reliable precipitation estimates for every cell in Virginia in near real time on a daily ongoing basis, the techniques require on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
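    The "winning method" step of the second program can be sketched directly: each candidate 24-hour estimate is scored against the control raingage readings, and the method with the lowest error is selected for the day. The gage names, method names, and values below are hypothetical placeholders.

```python
# Sketch of daily method selection against control gages (hypothetical
# numbers): score each candidate estimate by RMSE at the control gages
# and pick the method with the lowest error for the day.
control = {"A": 12.0, "B": 3.5, "C": 0.0}        # control gage readings [mm]

methods = {
    "ir_only":      {"A": 15.0, "B": 1.0, "C": 4.0},
    "ir_plus_vis":  {"A": 13.0, "B": 3.0, "C": 0.5},
    "climatology":  {"A": 6.0,  "B": 6.0, "C": 6.0},
}

def rmse(est):
    """Root mean square error of an estimate against the control gages."""
    return (sum((est[g] - control[g]) ** 2 for g in control) / len(control)) ** 0.5

winner = min(methods, key=lambda m: rmse(methods[m]))
print(winner)   # -> ir_plus_vis
```

    The final field would then blend the control gage readings back into the winning method's estimate, as the abstract describes.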

  7. Standard practice for reporting uniaxial strength data and estimating Weibull distribution parameters for advanced ceramics

    NASA Astrophysics Data System (ADS)

    1994-04-01

    This practice covers the evaluation and subsequent reporting of uniaxial strength data and the estimation of probability distribution parameters for advanced ceramics that fail in a brittle fashion. The failure strength of advanced ceramics is treated as a continuous random variable. Typically, a number of test specimens with well-defined geometry are failed under well-defined isothermal loading conditions. The load at which each specimen fails is recorded. The resulting failure stresses are used to obtain parameter estimates associated with the underlying population distribution. This practice is restricted to the assumption that the distribution underlying the failure strengths is the two parameter Weibull distribution with size scaling. Furthermore, this practice is restricted to test specimens (tensile, flexural, pressurized ring, etc.) that are primarily subjected to uniaxial stress states. Section 8 outlines methods to correct for bias errors in the estimated Weibull parameters and to calculate confidence bounds on those estimates from data sets where all failures originate from a single flaw population (that is, a single failure mode). In samples where failures originate from multiple independent flaw populations (for example, competing failure modes), the methods outlined in Section 8 for bias correction and confidence bounds are not applicable. Measurements of the strength at failure are taken for one of two reasons: either for a comparison of the relative quality of two materials, or the prediction of the probability of failure (or, alternatively, the fracture strength) for a structure of interest. This practice will permit estimates of the distribution parameters that are needed for either.
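    The core estimation step the practice standardizes, maximum-likelihood fitting of the two-parameter Weibull distribution, can be sketched as follows; the bias correction and confidence bounds of Section 8 are omitted, and the sample strengths are synthetic.

```python
import math
import random

# Minimal two-parameter Weibull MLE sketch: solve the one-dimensional ML
# equation for the shape parameter m by bisection, then compute the scale.
def weibull_mle(x):
    n, logs = len(x), [math.log(v) for v in x]
    mean_log = sum(logs) / n

    def g(m):  # ML estimating equation for the shape parameter (increasing in m)
        sm = sum(v ** m for v in x)
        sml = sum((v ** m) * math.log(v) for v in x)
        return sml / sm - 1.0 / m - mean_log

    lo, hi = 0.05, 100.0
    for _ in range(200):                      # bisection on g(m) = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    m = 0.5 * (lo + hi)
    scale = (sum(v ** m for v in x) / n) ** (1.0 / m)
    return m, scale

# Synthetic strengths: true shape 12, true scale 350 (units, e.g. MPa).
random.seed(8)
strengths = [350 * random.weibullvariate(1.0, 12.0) for _ in range(200)]
m_hat, s_hat = weibull_mle(strengths)
print(m_hat, s_hat)   # near the true shape 12 and scale 350
```

    With the uncorrected MLE in hand, the practice's Section 8 procedures would then unbias the shape estimate and attach confidence bounds before any strength-scaling prediction.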

  8. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.

    1972-01-01

    The search for advanced measurement techniques for determining long term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from those existing techniques which were judged able to meet, or to be adaptable to meet, the requirements; areas of refinement or change were recommended to improve others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of a new technique, volatile metal chelate analysis. Rivaling neutron activation analysis in terms of sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.

  9. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystems Concepts. Volume 3; Program Cost Estimates

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.

    2000-01-01

    The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data describing performance, weight, dimensions, etc. was provided along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and the program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.

  10. Australia's Black Saturday fires - Comparison of techniques for estimating emissions from vegetation fires

    NASA Astrophysics Data System (ADS)

    Paton-Walsh, Clare; Emmons, Louisa K.; Wiedinmyer, Christine

    2012-12-01

    We present a comparison of techniques for estimating atmospheric emissions from fires, using Australia's 2009 “Black Saturday” wildfires as a case study. Most of the fires started on Saturday the 7th of February 2009 (a date now known as “Black Saturday”) and then spread rapidly, fanned by gale-force winds, creating several firestorms and killing 173 people. The fires continued into early March, when rain and cooler conditions allowed them to be extinguished. In this study, we compare two new techniques (and one more established method) for estimating the total emissions of a number of atmospheric trace gases from these fires. One of the new techniques is a “bottom-up” technique that combines existing inventories of fuel loads, combustion efficiencies and emission factors with an estimate of burned area derived from MODIS rapid response daily fire counts. The other new method is a “top-down” approach using MODIS aerosol optical depth as a proxy for the total amounts of trace gases emitted by the fires. There are significant differences between the estimates of emissions from these fires using the different methods, highlighting the uncertainties associated with fire emission estimates. These differences are discussed along with their likely causes, and are used as a vehicle to explore the merits of the different methods and ways to further constrain fire emissions in the future.

  11. Integration of Spatial Interpolation Techniques and Association Rules for Estimation of Missing Precipitation Data

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2006-12-01

    Deterministic and stochastic weighting methods are the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. These methods may not always provide accurate estimates, due to the spatial and temporal variability of rainfall available at point measurements in space and also due to limitations of spatial interpolation techniques. Since an interpolated value of a variable at a point in space depends on observed values at all other points in space, temporal associations among observations in space can be beneficial in interpolation. An association rule mining (ARM) based spatial interpolation approach is proposed and investigated in the current study to estimate missing precipitation values at a gaging station. A stochastic spatial interpolation technique and three deterministic weighting methods are used in the current study. Historical daily precipitation data obtained from 15 rain gauging stations in a temperate climatic region (Kentucky, USA) are used to test this approach and derive conclusions about its efficacy for estimating missing precipitation data. Results suggest that the use of association rule mining in conjunction with any spatial interpolation technique can improve the precipitation estimates and help to address one of the major limitations of any spatial interpolation technique.
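
    As a minimal sketch of the deterministic weighting methods the abstract refers to, inverse-distance weighting estimates the missing value at a gage from values at the surrounding gages; the coordinates and rainfall amounts below are hypothetical.

```python
import math

def idw_estimate(target, gauges, power=2.0):
    """Inverse-distance-weighted estimate of a missing value at `target`
    (x, y) from observations at the surrounding gauges."""
    num = den = 0.0
    for (x, y), value in gauges:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value  # target coincides with a recording gauge
        w = d ** -power
        num += w * value
        den += w
    return num / den

# Hypothetical daily rainfall (mm) at four surrounding gauges
gauges = [((0.0, 1.0), 12.0), ((1.0, 0.0), 8.0),
          ((-1.0, 0.0), 10.0), ((0.0, -1.0), 6.0)]
print(idw_estimate((0.0, 0.0), gauges))  # equidistant gauges -> 9.0
```

    The ARM refinement proposed in the study would then restrict the weighting to the subset of gages whose records are temporally associated with the target gage.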

  12. Estimation of convective rain volumes utilizing the area-time-integral technique

    NASA Technical Reports Server (NTRS)

    Johnson, L. Ronald; Smith, Paul L.

    1990-01-01

    Interest in the possibility of developing useful estimates of convective rainfall with Area-Time Integral (ATI) methods is increasing. The basis of the ATI technique is the observed strong correlation between rainfall volumes and ATI values. This means that rainfall can be estimated by just determining the ATI values, if previous knowledge of the relationship to rain volume is available to calibrate the technique. Examples are provided of the application of the ATI approach to gage, radar, and satellite measurements. For radar data, the degree of transferability in time and among geographical areas is examined. Recent results on transferability of the satellite ATI calculations are presented.
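
    The ATI idea reduces to a single calibration constant: rain volume is observed to be nearly proportional to the area-time integral, so a slope fitted on storms with known volumes can be applied to new ATI values. A minimal sketch with hypothetical numbers:

```python
def calibrate_ati(storms):
    """Least-squares slope through the origin for V = k * ATI, fitted on
    (ati, volume) pairs from storms with known rain volume."""
    num = sum(ati * vol for ati, vol in storms)
    den = sum(ati * ati for ati, _ in storms)
    return num / den

# Hypothetical calibration storms: (ATI in km^2*h, volume in 10^3 m^3)
history = [(50.0, 120.0), (80.0, 195.0), (20.0, 47.0)]
k = calibrate_ati(history)
print(round(k * 65.0, 1))  # estimated volume for a new storm's ATI -> 157.5
```

    Transferability, as examined in the abstract, amounts to asking whether a k calibrated in one region or season remains valid in another.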

  13. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source images are poor, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. PMID:24464989

  14. Application of rainfall estimates using radar-raingauge merging techniques for hydrological simulations

    NASA Astrophysics Data System (ADS)

    Nanding, Nergui; Rico-Ramirez, Miguel Angel; Han, Dawei

    2015-04-01

    Rainfall estimates by weather radar have become an important alternative to raingauge measurements for hydrological modelling over poorly gauged catchments, due to their capability to provide spatially distributed rainfall with a high resolution in space and time. However, the potential of radar rainfall estimates has often been limited by a variety of sources of error. More recently, research has shown that combining radar rainfall estimates with raingauge measurements yields better rainfall estimates that are also able to capture the spatial variability of precipitation. However, the impact of using merged rainfall products, as compared with conventional raingauge inputs, across different hydrological model structures and catchment areas remains unclear and has yet to be addressed. In this study, we analysed the flow simulations of different sized catchments across Northern England using rainfall inputs from different radar-raingauge merging techniques, such as Kriging with radar-based correction (KRE) and Kriging with external drift (KED). Hourly rainfall estimates obtained from the different merging techniques are used as input to the hydrological models so that the simulated streamflows can be compared directly. The main purpose of this paper is to examine whether these merged rainfall estimates are useful as input to rainfall-runoff models over rural catchment areas, focusing on the improvement that radar-raingauge merging brings to runoff predictions, rather than on the rainfall estimates themselves, in relation to catchment sizes and storm events.
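
    KRE and KED are geostatistical methods beyond a short sketch, but the simplest radar-raingauge merging adjustment, a multiplicative mean-field bias correction, illustrates the principle of conditioning the radar field on the gauges (all values hypothetical):

```python
def mean_field_bias(gauge_obs, radar_at_gauges):
    """Multiplicative bias: total gauge rainfall over total radar
    rainfall at the gauge pixels."""
    return sum(gauge_obs) / sum(radar_at_gauges)

def adjust_radar(radar_field, bias):
    """Scale every radar pixel by the bias factor."""
    return [[bias * px for px in row] for row in radar_field]

gauge_obs = [4.0, 6.0, 5.0]        # hourly gauge totals (mm), hypothetical
radar_at_gauges = [3.2, 4.8, 4.0]  # collocated radar estimates (mm)
bias = mean_field_bias(gauge_obs, radar_at_gauges)
print(bias)  # -> 1.25
print(adjust_radar([[2.0, 4.0], [1.6, 0.8]], bias))  # -> [[2.5, 5.0], [2.0, 1.0]]
```

    KED goes further by treating the radar field as an external drift in the kriging system, so the correction varies in space rather than being a single factor.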

  15. Mangrove Canopy Height and Biomass Estimations by means of Pol-InSAR Techniques

    NASA Astrophysics Data System (ADS)

    Lee, S. K.; Fatoyinbo, T. E.; Trettin, C.; Simard, M.; Bandeira, S.

    2014-12-01

    Mangrove forests cover only about 1% of the Earth's terrestrial surface, but they are amongst the highest carbon-storing and carbon-exporting ecosystems globally. Estimating 3-D mangrove forest parameters has been challenging due to the complex physical environment of the forests. In previous work, remote sensing techniques have proven an excellent tool for the estimation of mangrove forest parameters. Recent experiments have successfully demonstrated global-scale estimation of mangrove structure using spaceborne remote sensing data: SRTM (InSAR), ICESat/GLAS (lidar), and Landsat ETM+ (passive optical). However, those systems had relatively low spatial and temporal resolutions. Polarimetric SAR Interferometry (Pol-InSAR) is a Synthetic Aperture Radar (SAR) remote sensing technique based on the coherent combination of both polarimetric and interferometric observables. Pol-InSAR has provided a step forward in quantitative 3D forest structure parameter estimation (e.g. forest canopy height and biomass) over a variety of forests. Recent developments of the Pol-InSAR technique with TanDEM-X (TDX) data in mangroves have shown that TDX data can be used to produce global-scale mangrove canopy height and biomass maps at accuracies comparable to airborne lidar measurements. In this study we propose to generate 12 m resolution mangrove canopy height and biomass estimates for the coastline of Mozambique using Pol-InSAR techniques applied to single-/dual-pol TDX data, validated with commercial airborne lidar. To cover all of the mangroves along the coast of Mozambique, which extends about 3000 km, about 200 TDX data sets are selected and processed. The TDX height data are calibrated with commercial airborne lidar data acquired over 150 km2 of mangroves in the Zambezi delta of Mozambique, while height and biomass estimates are validated using in-situ forest inventory measurements. The results from the study will be the first country-wide, wall-to-wall estimate of mangrove structure.

  16. Estimating numbers of greater prairie-chickens using mark-resight techniques

    USGS Publications Warehouse

    Clifton, A.M.; Krementz, D.G.

    2006-01-01

    Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as the immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from the lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
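
    The IEJHE estimator used in the study additionally models immigration and emigration; the basic mark-resight idea can be sketched with the simpler Chapman-corrected Lincoln-Petersen estimator for a single resighting occasion (all counts hypothetical):

```python
import math

def chapman_estimate(marked, sighted, marked_sighted):
    """Chapman's bias-corrected Lincoln-Petersen estimator with a
    normal-approximation 95% confidence interval."""
    n_hat = (marked + 1) * (sighted + 1) / (marked_sighted + 1) - 1
    var = ((marked + 1) * (sighted + 1)
           * (marked - marked_sighted) * (sighted - marked_sighted)
           / ((marked_sighted + 1) ** 2 * (marked_sighted + 2)))
    half = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Hypothetical resighting occasion: 20 radio-marked males, 45 birds
# sighted, of which 16 carried marks
n_hat, ci = chapman_estimate(20, 45, 16)
print(round(n_hat))  # -> 56
```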

  17. Performance and operating results from the demonstration of advanced combustion techniques for wall-fired boilers

    SciTech Connect

    Sorge, J.N.; Baldwin, A.L.

    1993-11-01

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The primary objective of the demonstration is to determine the long-term performance of advanced overfire air and low NO{sub x} burners applied in a stepwise fashion to a 500 MW boiler. A 50 percent NO{sub x} reduction target has been established for the project. The focus of this paper is to present the effects of excess oxygen level and burner settings on NO{sub x} emissions and unburned carbon levels and recent results from the phase of the project when low NO{sub x} burners were used in conjunction with advanced overfire air.

  18. Evaluation of the sensitivity and intrusion of workload estimation techniques in piloting tasks emphasizing mediational activity

    NASA Technical Reports Server (NTRS)

    Rahimi, M.; Wierwille, W. W.

    1982-01-01

    In this experiment, pilots flew an instrumented moving-base simulator. Mediational loading was elicited by having them solve a variety of navigational problems. The problems were sorted into low, medium, and high load conditions based on the number and complexity of arithmetic and geometric operations required to solve them. Workload estimation techniques based on opinion, spare mental capacity, primary task performance, and physiological measures were obtained and compared. This paper describes: (1) the ability of the techniques to discriminate statistically between the three levels of loading conditions, and (2) changes in primary task performance caused by introduction of the workload technique procedures and equipment.

  19. Advances in the surface modification techniques of bone-related implants for last 10 years

    PubMed Central

    Qiu, Zhi-Ye; Chen, Cen; Wang, Xiu-Mei; Lee, In-Seop

    2014-01-01

    When bone-related implants are placed in the human body, a variety of biological responses to the material surface occur, depending on the surface chemistry and physical state. The commonly used biomaterials (e.g. titanium and its alloys, Co–Cr alloy, stainless steel, polyetheretherketone, ultra-high molecular weight polyethylene and various calcium phosphates) have many drawbacks, such as lack of biocompatibility and improper mechanical properties. As surface modification is a very promising technology to overcome such problems, a variety of surface modification techniques have been investigated. This review paper covers recent advances in surface modification techniques for bone-related materials, including physicochemical coating, radiation grafting, plasma surface engineering, ion beam processing and surface patterning techniques. The contents are organized by technique type and the materials to which each applies, and typical examples are also described. PMID:26816626

  20. Unified Instrumentation: Examining the Simultaneous Application of Advanced Measurement Techniques for Increased Wind Tunnel Testing Capability

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A. (Editor); Bartram, Scott M.; Humphreys, William M., Jr.; Jenkins, Luther N.; Jordan, Jeffrey D.; Lee, Joseph W.; Leighty, Bradley D.; Meyers, James F.; South, Bruce W.; Cavone, Angelo A.; Ingram, JoAnne L.

    2002-01-01

    A Unified Instrumentation Test examining the combined application of Pressure Sensitive Paint, Projection Moire Interferometry, Digital Particle Image Velocimetry, Doppler Global Velocimetry, and Acoustic Microphone Array has been conducted at the NASA Langley Research Center. The fundamental purposes of conducting the test were to: (a) identify and solve compatibility issues among the techniques that would inhibit their simultaneous application in a wind tunnel, and (b) demonstrate that simultaneous use of advanced instrumentation techniques is feasible for increasing tunnel efficiency and identifying control surface actuation / aerodynamic reaction phenomena. This paper provides summary descriptions of each measurement technique used during the Unified Instrumentation Test, their implementation for testing in a unified fashion, and example results identifying areas of instrument compatibility and incompatibility. Conclusions are drawn regarding the conditions under which the measurement techniques can be operated simultaneously on a non-interference basis. Finally, areas requiring improvement for successfully applying unified instrumentation in future wind tunnel tests are addressed.

  1. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    1998-09-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean-coal product to a 20% moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 36 months beginning September 30, 1994. This report discusses technical progress made during the quarter from July 1 - September 30, 1997.

  2. An example of requirements for Advanced Subsonic Civil Transport (ASCT) flight control system using structured techniques

    NASA Technical Reports Server (NTRS)

    Mclees, Robert E.; Cohen, Gerald C.

    1991-01-01

    The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system, generated using structured techniques. The requirements definition starts with a mission analysis to identify the high-level control system requirements and functions necessary to fly the mission. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies and, in particular, design-for-validation philosophies.

  3. New test techniques and analytical procedures for understanding the behavior of advanced propellers

    NASA Technical Reports Server (NTRS)

    Stefko, G. L.; Bober, L. J.; Neumann, H. E.

    1983-01-01

    Analytical procedures and experimental techniques were developed to improve the capability to design advanced high speed propellers. Some results from the propeller lifting line and lifting surface aerodynamic analysis codes are compared with propeller force data, probe data and laser velocimeter data. In general, the code comparisons with data indicate good qualitative agreement. A rotating propeller force balance demonstrated good accuracy and reduced test time by 50 percent. Results from three propeller flow visualization techniques are shown which illustrate some of the physical phenomena occurring on these propellers.

  4. The investigation of advanced remote sensing techniques for the measurement of aerosol characteristics

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Becher, J.

    1979-01-01

    Advanced remote sensing techniques and inversion methods for the measurement of characteristics of aerosol and gaseous species in the atmosphere were investigated. Of particular interest were the physical and chemical properties of aerosols, such as their size distribution, number concentration, and complex refractive index, and the vertical distribution of these properties on a local as well as global scale. Remote sensing techniques for monitoring of tropospheric aerosols were developed as well as satellite monitoring of upper tropospheric and stratospheric aerosols. Computer programs were developed for solving multiple scattering and radiative transfer problems, as well as inversion/retrieval problems. A necessary aspect of these efforts was to develop models of aerosol properties.

  5. Combined preputial advancement and phallopexy as a revision technique for treating paraphimosis in a dog.

    PubMed

    Wasik, S M; Wallace, A M

    2014-11-01

    A 7-year-old neutered male Jack Russell terrier-cross was presented for signs of recurrent paraphimosis, despite previous surgical enlargement of the preputial ostium. Revision surgery was performed using a combination of preputial advancement and phallopexy, which resulted in complete and permanent coverage of the glans penis by the prepuce, and at 1 year postoperatively, no recurrence of paraphimosis had been observed. The combined techniques allow preservation of the normal penile anatomy, are relatively simple to perform and provide a cosmetic result. We recommend this combination for the treatment of paraphimosis in the dog, particularly when other techniques have failed. PMID:25348145

  6. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  7. Comparison of seed bank estimation techniques using six weed species in two soil types

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tests of three different seed bank estimation techniques were performed on six different weed species. Petri plate germination was compared to two emergence methods, each on two different soil types (stony loam vs. silt loam). Soil types produced equal emergence proportions, however both emergence...

  8. A solar energy estimation procedure using remote sensing techniques. [watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    The objective of this investigation is to design a remote sensing-aided procedure for daily location-specific estimation of solar radiation components over the watershed(s) of interest. This technique has been tested on the Spanish Creek Watershed, Northern California, with successful results.

  9. Using the Randomized Response Technique to Estimate the Extent of Delinquent Behavior in Schools.

    ERIC Educational Resources Information Center

    Gottfredson, Gary D.

    The Randomized Response Technique (RRT) appears to have promise in future work studying the relation of school variables to disruption or delinquent behavior. The RRT is especially useful in situations where it is difficult or undesirable to ask stigmatizing questions directly. The proportions of students in this study estimated to have used…
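
    The abstract does not specify the randomizing design; assuming Warner's classic version, each respondent answers the sensitive statement with probability p and its negation otherwise, and the true proportion is recovered from the observed "yes" rate:

```python
def warner_estimate(yes_fraction, p):
    """Warner's randomized-response estimator of the proportion pi of a
    stigmatizing behavior. Each respondent answers the sensitive question
    with probability p and its negation otherwise, so the expected "yes"
    rate is lambda = p*pi + (1 - p)*(1 - pi); solve for pi."""
    if p == 0.5:
        raise ValueError("p = 0.5 makes pi unidentifiable")
    return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

# Hypothetical survey: a spinner sends 70% of students to the sensitive
# statement, and 38% of all responses are "yes"
print(round(warner_estimate(0.38, 0.7), 2))  # -> 0.2
```

    Because no individual answer reveals which question was asked, respondents are protected even though the aggregate proportion remains estimable.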

  10. ESTIMATING CHLOROFORM BIOTRANSFORMATION IN F-344 RAT LIVER USING IN VITRO TECHNIQUES AND PHARMACOKINETIC MODELING

    EPA Science Inventory

    Linskey, C.F.1, Harrison, R.A.2., Zhao, G.3., Barton, H.A., Lipscomb, J.C4., and Evans, M.V2., 1UNC, ESE, Chapel Hill, NC ; 2USEPA, ORD, NHEERL, RTP, NC; 3 UN...

  11. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  12. Development of a surface insolation estimation technique suitable for application of polar orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Davis, P. A.; Penn, L. M. (Principal Investigator)

    1981-01-01

    A technique is developed for the estimation of total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm is constrained in application by the infrequent daytime polar-orbiter coverage.

  13. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    PubMed

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain very important in the identification process in spite of the advent and accomplishments of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive as well as revise the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches: morphological, metric, molecular and radiographic methods for sex estimation of skeletal remains. Numerous studies have shown higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. PMID:26926105

  14. Techniques to estimate generalized skew coefficients of annual peak streamflow for natural basins in Texas

    USGS Publications Warehouse

    Judd, Linda J.; Asquith, William H.; Slade, Raymond M., Jr.

    1996-01-01

    This report presents two techniques to estimate generalized skew coefficients used for log-Pearson Type III peak-streamflow frequency analysis of natural basins in Texas. A natural basin has less than 10 percent impervious cover, and less than 10 percent of its drainage area is controlled by reservoirs. The estimation of generalized skew coefficients is based on annual peak and historical peak streamflow for all U.S. Geological Survey streamflow-gaging stations having at least 20 years of annual peak-streamflow record from natural basins in Texas. Station skew coefficients calculated for each of 255 Texas stations were used to estimate generalized skew coefficients for Texas. One technique to estimate generalized skew coefficients involved the use of regression equations developed for each of eight regions in Texas, and the other involved development of a statewide map of generalized skew coefficients. The weighted mean of the weighted mean standard errors of the regression equations for the eight regions is 0.36 log10 skew units, and the weighted mean standard error of the map is 0.35 log10 skew units. The technique based on the map is preferred for estimating generalized skew coefficients because of its smooth transition from one region of the State to another.
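
    The generalized skew from such a map or regression is typically combined with the station skew by weighting each inversely by its mean square error, as in Bulletin 17B; a sketch for a hypothetical station, using the map's reported standard error of 0.35 log10 skew units:

```python
def weighted_skew(station_skew, station_mse, gen_skew, gen_mse):
    """Bulletin 17B-style weighting: each skew coefficient is weighted
    by the inverse of its mean square error (MSE)."""
    return ((gen_mse * station_skew + station_mse * gen_skew)
            / (station_mse + gen_mse))

# Hypothetical station skew of -0.2 with MSE 0.30; the map's generalized
# skew of -0.35 with MSE 0.35**2, from its 0.35 log10-unit standard error
g_w = weighted_skew(-0.2, 0.30, -0.35, 0.35 ** 2)
print(round(g_w, 3))  # -> -0.307
```

    Because short records give noisy station skews (large MSE), the weighted value is pulled toward the generalized skew, which is the point of the regional estimates in this report.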

  15. Study of advanced techniques for determining the long term performance of components

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The application of existing and new technology to the problem of determining the long-term performance capability of liquid rocket propulsion feed systems is discussed. The long-term performance of metal-to-metal valve seats in a liquid propellant fuel system is stressed. The approaches taken in conducting the analysis are: (1) advancing the technology of characterizing components through the development of new or more sensitive techniques, and (2) improving the understanding of the physics of degradation.

  16. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. nuclear gauge densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spout diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate computational fluid dynamics (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance.
Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  17. An improved technique for global solar radiation estimation using numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Shamim, M. A.; Remesan, R.; Bray, M.; Han, D.

    2015-07-01

    Global solar radiation is the driving force in the hydrological cycle, especially for evapotranspiration (ET), and is quite infrequently measured. This has led to reliance on indirect estimation techniques for data-scarce regions. This study presents an improved technique that uses information from a numerical weather prediction (NWP) model (the National Center for Atmospheric Research's Mesoscale Meteorological model version 5, MM5) for the determination of a cloud cover index (CI), a major factor in the attenuation of the incident solar radiation. The cloud cover index (CI), together with the atmospheric transmission factor (KT) and output from a global clear-sky solar radiation model, was then used for the estimation of global solar radiation for the Brue catchment, located in the southwest of England. The results clearly show an improvement in the estimated global solar radiation in comparison to the prevailing approaches.
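
    The structure of such an estimate can be sketched with a Kasten-Czeplak-type attenuation, in which clear-sky radiation is reduced as a function of the cloud cover index; the coefficients below are generic defaults, not the paper's fitted values:

```python
def cloudy_sky_radiation(clear_sky, cloud_index, a=0.75, b=3.4):
    """Attenuate clear-sky global radiation using a Kasten-Czeplak-type
    relation G = G0 * (1 - a * CI**b), with CI in [0, 1]. The default
    coefficients are generic values and should be fitted locally."""
    return clear_sky * (1.0 - a * cloud_index ** b)

# 600 W/m^2 clear-sky estimate under a heavy cloud index of 0.8
print(round(cloudy_sky_radiation(600.0, 0.8), 1))
```

    In the paper's scheme the CI itself comes from the MM5 NWP output rather than from surface observations, which is what makes the method applicable to data-scarce regions.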

  18. 5D parameter estimation of near-field sources using hybrid evolutionary computational techniques.

    PubMed

    Zaman, Fawad; Qureshi, Ijaz Mansoor

    2014-01-01

    A hybrid evolutionary computational technique is developed to jointly estimate the amplitude, frequency, range, and 2D direction of arrival (elevation and azimuth angles) of near-field sources impinging on a centrosymmetric cross array. Specifically, a genetic algorithm is used as a global optimizer, whereas pattern search and interior point algorithms are employed as rapid local search optimizers. For this, a new multiobjective fitness function is constructed, combining the mean square error and the correlation between the normalized desired and estimated vectors. The performance of the proposed hybrid scheme is compared not only with the individual responses of the genetic algorithm, interior point algorithm, and pattern search, but also with existing traditional techniques. The proposed schemes produce fairly good results in terms of estimation accuracy, convergence rate, and robustness against noise. A large number of Monte Carlo simulations are carried out to test the validity and reliability of each scheme. PMID:24701156
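The global-then-local structure can be sketched on a toy problem. Everything here is a simplification: a coarse grid search plays the role of the genetic algorithm, a compass-style pattern search is the local refiner, the signal model is a single two-parameter sinusoid rather than a 5D near-field source, and the exact way the MSE and correlation terms are combined is an assumption.

```python
import math

def make_fitness(t, y):
    """Fitness combining MSE and (1 - correlation); the combination
    rule is an assumption, not the paper's exact function."""
    def fit(params):
        a, f = params
        model = [a * math.sin(2 * math.pi * f * ti) for ti in t]
        mse = sum((m - v) ** 2 for m, v in zip(model, y)) / len(y)
        nm = math.sqrt(sum(m * m for m in model)) or 1.0
        ny = math.sqrt(sum(v * v for v in y)) or 1.0
        corr = sum(m * v for m, v in zip(model, y)) / (nm * ny)
        return mse + (1.0 - corr)
    return fit

def pattern_search(x0, step, fit, tol=1e-5):
    """Compass pattern search: probe +/- along each axis, halve the
    step when no probe improves the fitness."""
    best, best_f = list(x0), fit(x0)
    while max(step) > tol:
        improved = False
        for i in range(len(best)):
            for d in (1.0, -1.0):
                cand = list(best)
                cand[i] += d * step[i]
                fc = fit(cand)
                if fc < best_f:
                    best, best_f, improved = cand, fc, True
        if not improved:
            step = [s / 2.0 for s in step]
    return best

# Synthetic noiseless data with known parameters.
a_true, f_true = 2.07, 0.483
t = [0.05 * i for i in range(200)]
y = [a_true * math.sin(2 * math.pi * f_true * ti) for ti in t]
fit = make_fitness(t, y)

# Stage 1: coarse global search (stand-in for the genetic algorithm).
grid = [(0.25 * i, 0.1 + 0.01 * j) for i in range(1, 21) for j in range(91)]
coarse = min(grid, key=fit)

# Stage 2: local pattern-search refinement of the coarse estimate.
a_hat, f_hat = pattern_search(coarse, [0.25, 0.01], fit)
```

The point of the hybrid is visible even in this toy: the coarse stage lands in the right basin, and the cheap local stage polishes the estimate well beyond the grid resolution.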

  19. A novel technique for estimating aerosol optical thickness trends using meteorological parameters

    NASA Astrophysics Data System (ADS)

    Emetere, Moses E.; Akinyemi, M. L.; Akin-Ojo, O.

    2016-02-01

    Estimating aerosol optical thickness (AOT) over a region can be challenging if the satellite data set for that region is very scanty. A technique whose application captures real-time events is therefore most appropriate for adequate monitoring of risk indicators. A new technique, the arithmetic translation of pictorial model (ATOPM), was developed. The ATOPM uses mathematical expressions to estimate AOT from other meteorological parameters obtained from satellite or ground data sets. Six locations within a 335 × 230 km2 portion of Nigeria were chosen and analyzed using the meteorological data set (1999-2012) and MATLAB. The research affirms the use of parameters such as minimum temperature, cloud cover, relative humidity, and rainfall to estimate the aerosol optical thickness. The objective of the paper was thus met: other meteorological parameters can be used to estimate AOT when the satellite data set over an area is scanty.

  20. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    NASA Astrophysics Data System (ADS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-09-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment.

  1. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    Groppo, J.G.; Parekh, B.K.; Rawls, P.

    1995-11-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 {mu}m) clean coal. Economical dewatering of an ultra-fine clean coal product to a 20 percent moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is on proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use the advanced fine coal recovery technology due to the non-availability of an economical dewatering process. In fact, in a recent survey conducted by the U.S. DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economical fine clean coal slurry dewatering process.

  2. A comparative study of shear wave speed estimation techniques in optical coherence elastography applications

    NASA Astrophysics Data System (ADS)

    Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.

    2016-03-01

    Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator to generate a continuous shear wave propagation (SWP) and a tone-burst propagation (TBP) of 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the two actuators, a crawling wave (CrW) pattern is generated that propagates with less speed than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
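The final conversion from shear wave speed to elastic modulus mentioned above is standard and can be written down directly. This sketch uses the common soft-tissue assumptions (nearly incompressible, purely elastic medium), under which the shear modulus is G = rho * v^2 and Young's modulus is E ≈ 3G; the default density is an assumption, not a value from the paper.

```python
def elastic_modulus(shear_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus E (Pa) from shear wave speed v (m/s).

    Assumes an incompressible, purely elastic medium, so that
    G = rho * v**2 and E = 3 * G. The default density of 1000 kg/m^3
    (roughly water/soft tissue) is an illustrative assumption.
    """
    shear_modulus = density_kg_m3 * shear_speed_m_s ** 2
    return 3.0 * shear_modulus
```

Because E scales with v squared, a few percent of error in the estimated shear wave speed roughly doubles as a relative error in the reported modulus, which is why the speed-estimation accuracy compared in the study matters.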

  3. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-m wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates from conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for this deterioration by adaptively utilizing the statistics of Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.

  4. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    PubMed Central

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components. PMID:24072023

  5. Satellite angular velocity estimation based on star images and optical flow techniques.

    PubMed

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components. PMID:24072023

  6. An evaluation of population index and estimation techniques for tadpoles in desert pools

    USGS Publications Warehouse

    Jung, R.E.; Dayton, G.H.; Williamson, S.J.; Sauer, J.R.; Droege, S.

    2002-01-01

    Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) was therefore essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.
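The capture-recapture estimates (CRE) used above follow the classic two-sample mark-recapture logic. As a sketch, here is the Chapman-corrected Lincoln-Petersen estimator, a standard textbook form of that logic; the paper's dye-marking protocol and any study-specific corrections are not represented.

```python
def chapman_estimate(marked, captured, recaptured):
    """Chapman-corrected Lincoln-Petersen abundance estimate.

    marked:      animals marked (e.g. dyed tadpoles) and released
    captured:    animals caught in the second sampling occasion
    recaptured:  marked animals among the second-occasion catch

    A standard two-sample mark-recapture estimator, shown here as a
    generic stand-in for the paper's CRE method.
    """
    if recaptured > min(marked, captured):
        raise ValueError("recaptures cannot exceed marked or captured")
    return (marked + 1) * (captured + 1) / (recaptured + 1) - 1
```

For example, marking 50 tadpoles and later catching 60 of which 20 carry dye gives an estimated population of about 147, versus the 110 distinct individuals actually handled.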

  7. Advanced Time-Resolved Fluorescence Microscopy Techniques for the Investigation of Peptide Self-Assembly

    NASA Astrophysics Data System (ADS)

    Anthony, Neil R.

    The ubiquitous cross-beta sheet peptide motif is implicated in numerous neurodegenerative diseases while at the same time offering remarkable potential for constructing isomorphic high-performance bionanomaterials. Despite an emerging understanding of the complex folding landscape of cross-beta structures in determining disease etiology and final structure, we lack knowledge of the critical initial stages of nucleation and growth. In this dissertation, I advance our understanding of these key stages in the cross-beta nucleation and growth pathways using cutting-edge microscopy techniques. In addition, I present a new combined time-resolved fluorescence analysis technique with the potential to advance our current understanding of subtle molecular-level interactions that play a pivotal role in peptide self-assembly. Using the central nucleating core of Alzheimer's Amyloid-beta protein, Abeta(16-22), as a model system, and utilizing electron, time-resolved, and non-linear microscopy, I capture the initial and transient nucleation stages of peptide assembly into the cross-beta motif. In addition, I have characterized the nucleation pathway, from monomer to paracrystalline nanotubes, in terms of morphology and fluorescence lifetime, corroborating the predicted desolvation process that occurs prior to cross-beta nucleation. Concurrently, I have identified unique heterogeneous cross-beta domains contained within individual nanotube structures, which have potential bionanomaterials applications. Finally, I describe a combined fluorescence theory and analysis technique that dramatically increases the sensitivity of current time-resolved techniques. Together these studies demonstrate the potential of advanced microscopy techniques for the identification and characterization of the cross-beta folding pathway, which will further our understanding of both amyloidogenesis and bionanomaterials.

  8. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction planning, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are the most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work, a cluster-regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied over these clusters (groups) to predict the compressive strength within each cluster. Experiments show that clustering combined with regression gives minimum errors for predicting the compressive strength of concrete; the fuzzy C-means clustering algorithm also performs better than the K-means algorithm. PMID:25374939
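The two-stage cluster-then-regress idea can be sketched in a few lines. This is a generic illustration, not the paper's pipeline: a tiny 1-D k-means stands in for the K-means/fuzzy C-means step, the synthetic two-regime data stand in for real mix-design features, and per-cluster fits use ordinary least squares.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20):
    """Minimal 1-D k-means (hard clustering stand-in)."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

# Synthetic data with two regimes (hypothetical feature vs strength):
# a single global regression would fit neither regime well.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 50); y1 = 2.0 * x1 + 1.0
x2 = rng.uniform(2.0, 3.0, 50); y2 = -3.0 * x2 + 10.0
x = np.concatenate([x1, x2]); y = np.concatenate([y1, y2])

# Stage 1: cluster; Stage 2: one linear regression per cluster.
labels = kmeans_1d(x, k=2)
models = {j: np.polyfit(x[labels == j], y[labels == j], 1) for j in (0, 1)}
```

On this data the per-cluster fits recover the two regime slopes (2 and -3) exactly, whereas a single pooled regression would average them away, which is the error reduction the paper attributes to clustering.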

  9. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction planning, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are the most widely used for prediction tasks, where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work, a cluster-regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied over these clusters (groups) to predict the compressive strength within each cluster. Experiments show that clustering combined with regression gives minimum errors for predicting the compressive strength of concrete; the fuzzy C-means clustering algorithm also performs better than the K-means algorithm. PMID:25374939

  10. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique

    PubMed Central

    O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.

    2016-01-01

    The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of their modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = − 2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a
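The prediction step of the convex-hull method is a log-log power law using the reported coefficients (a = -2.31, b = 0.90). The sketch below applies that form; note that the units of volume and mass, and the exact direction of the fitted regression, are assumptions here and should be taken from the paper before reuse.

```python
import math

A, B = -2.31, 0.90  # intercept and slope reported in the abstract

def hull_mass(volume):
    """Predict body mass from convex-hull volume via the power law
    log10(mass) = A + B * log10(volume).

    The units of `volume` and of the returned mass (and which variable
    was regressed on which) are assumptions for this illustration;
    consult the paper's calibration before applying the numbers.
    """
    return 10.0 ** (A + B * math.log10(volume))
```

A useful property of the b = 0.90 exponent is slight negative allometry: doubling the hull volume multiplies the predicted mass by 2 to the 0.9, i.e. a bit less than double.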

  11. Evaluation and comparison of techniques for estimating home range and territory size

    SciTech Connect

    Ford, R.G.; Myers, J.P.

    1981-01-01

    Estimates of territory and home range size can yield widely varying results depending upon the methods of data collection and analysis. The merits of different methods were examined using the space-use patterns of Pectoral Sandpipers (Calidris melanotos) and Red Phalaropes (Phalaropus fulicarius) on their breeding grounds. Empirical data from these species were used to generate a series of computer-simulated home ranges. The efficiency of a non-probabilistic estimator of territory size (the minimum convex polygon method) was compared with that of two probabilistic techniques, one parametric (Jennrich and Turner 1969) and one nonparametric (Ford and Krumme 1979), testing their sensitivities to sample size and to temporal dependence between successive observations. All methods are sensitive to temporal dependence and sample size, but the probabilistic techniques provide better estimates from small samples. Both the minimum convex polygon method and the parametric Jennrich-Turner technique overestimate the area utilized by the species studied here, both of which deviated from a bivariate normal distribution. The Ford-Krumme approach provided the most accurate estimate of utilized area.
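The minimum convex polygon estimator compared above is easy to state precisely: take the convex hull of the animal's observed locations and report its area. A minimal self-contained sketch, using Andrew's monotone-chain hull and the shoelace formula (standard algorithms, not specific to this study):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices in
    counter-clockwise order, interior and collinear points dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1]
            - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s) / 2.0

def mcp_area(sightings):
    """Minimum convex polygon home-range area from location fixes."""
    return polygon_area(convex_hull(sightings))
```

The estimator's bias discussed in the abstract is visible in its construction: the hull area can only grow as fixes accumulate, so small samples underestimate range while outlying excursions inflate it.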

  12. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal-plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
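The core correlation-in-the-transform-domain step can be illustrated in one dimension: the shift between a known reference profile and a noisy received profile is read off the peak of their cross-correlation, computed via FFTs. This is only the open-loop measurement; the paper's weighting and closed-loop update are omitted, and the Gaussian profile and noise level are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
# Known target profile (assumed Gaussian here) and a received copy
# circularly shifted by an unknown amount plus white Gaussian noise,
# matching the abstract's per-pixel AWGN assumption.
ref = np.exp(-0.5 * ((np.arange(n) - 40) / 3.0) ** 2)
true_shift = 17
received = np.roll(ref, true_shift) + 0.02 * rng.standard_normal(n)

# Cross-correlation via the transform domain:
# c[k] = sum_m received[m + k] * ref[m]; its peak locates the shift.
xcorr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(ref))).real
est_shift = int(np.argmax(xcorr))
```

In the closed-loop tracker this integer (or a sub-pixel refinement of it) would become the feedback signal that re-points the imager before the next frame arrives.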

  13. Landslide susceptibility estimation by random forests technique: sensitivity and scaling issues

    NASA Astrophysics Data System (ADS)

    Catani, F.; Lagomarsino, D.; Segoni, S.; Tofani, V.

    2013-11-01

    Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR), and the optimal number and ranking of LCVs has never been investigated analytically, especially on large data sets. In this paper we undertake this experimentation, concentrating on the impact of model tuning choices on the final result rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF), a machine learning technique, to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types, and scales. Random forest is a combination of Bayesian trees that relates a set of predictors to the actual landslide occurrence. Because it is a nonparametric model, it is possible to incorporate a range of numerical or categorical data layers, and there is no need to select unimodal training data as, for example, in linear discriminant analysis. Many widely acknowledged landslide predisposing factors are taken into account, mainly related to lithology, land use, geomorphology, and structural and anthropogenic constraints. In addition, for each factor we also include in the predictor set a measure of the standard deviation (for numerical variables) or the variety (for categorical ones) over the map unit. As in other systems, the use of RF enables one to estimate the relative importance of the single input parameters and to select the optimal configuration of the classification model. The model is initially applied using the complete set of input variables; an iterative process is then implemented in which progressively smaller subsets of the parameter space are considered.
The impact of scale and accuracy of input variables, as well as

  14. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.

    2003-01-01

    Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. 
For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to
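The weighted flood-frequency estimates mentioned above combine a station (log-Pearson Type III) estimate with a regression estimate. USGS practice typically weights the two by their uncertainties (e.g. variances or equivalent years of record); the inverse-variance form below is a generic sketch of that idea, not the specific weighting function used in this report.

```python
def weighted_estimate(x_lp3, var_lp3, x_reg, var_reg):
    """Combine a log-Pearson Type III estimate with a regression
    estimate by inverse-variance weighting.

    x_lp3, var_lp3: station estimate and its variance
    x_reg, var_reg: regression estimate and its variance

    The inverse-variance weighting is an illustrative assumption; the
    report defines its own weighting of the two estimates.
    """
    w_lp3, w_reg = 1.0 / var_lp3, 1.0 / var_reg
    return (w_lp3 * x_lp3 + w_reg * x_reg) / (w_lp3 + w_reg)
```

With equal variances the result is a simple average; as one estimate's variance grows, the combined value moves toward the more reliable one, which is the behavior the weighting is designed to produce.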

  15. Reduced bias and threshold choice in the extremal index estimation through resampling techniques

    NASA Astrophysics Data System (ADS)

    Gomes, Dora Prata; Neves, Manuela

    2013-10-01

    In Extreme Value Analysis there are a few parameters of particular interest, among which is the extremal index, a measure of the clustering of extreme events. It is of great interest for dependent samples, the common case in many practical applications. Most semi-parametric estimators of this parameter show the same behavior: nice asymptotic properties but a high variance for small values of k, the number of upper order statistics used in the estimation, and a high bias for large values of k. The mean square error, a measure that encompasses bias and variance, usually shows a very sharp plot, requiring an adequate choice of k. Starting from classical extremal index estimators considered in the literature, the emphasis here is on deriving reduced-bias estimators with more stable sample paths, obtained through resampling techniques. An adaptive algorithm for choosing the level k, and thereby obtaining a reliable estimate of the extremal index, is used. This algorithm has shown good results, but some improvements are still required. A simulation study illustrates the properties of the estimators and the performance of the proposed adaptive algorithm.
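As a baseline for what "a measure of extreme events clustering" means, here is the classical runs estimator of the extremal index: the fraction of threshold exceedances that start a new cluster, where a cluster ends once at least r consecutive values fall below the threshold. This is one of the classical estimators the paper starts from; its resampling-based bias reduction and adaptive choice of k are not shown.

```python
def runs_extremal_index(x, u, r):
    """Runs estimator of the extremal index theta in (0, 1].

    x: sequence of observations
    u: high threshold
    r: run length; two exceedances more than r positions apart
       (i.e. separated by at least r sub-threshold values) are
       assigned to different clusters.

    theta ~ (number of clusters) / (number of exceedances); values
    near 1 mean little clustering, small values mean strong clustering.
    """
    exceedances = [i for i, v in enumerate(x) if v > u]
    if not exceedances:
        raise ValueError("no exceedances above threshold u")
    clusters = 1
    for prev, cur in zip(exceedances, exceedances[1:]):
        if cur - prev > r:
            clusters += 1
    return clusters / len(exceedances)
```

The estimator's sensitivity to the threshold (and hence to k, the number of upper order statistics) is exactly the bias-variance trade-off the abstract describes.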

  16. Applications of Advanced Nondestructive Measurement Techniques to Address Safety of Flight Issues on NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Prosser, Bill

    2016-01-01

    Advanced nondestructive measurement techniques are critical for ensuring the reliability and safety of NASA spacecraft. Techniques such as infrared thermography, THz imaging, X-ray computed tomography, and backscatter X-ray are used to detect indications of damage in spacecraft components and structures. Additionally, sensor and measurement systems are integrated into spacecraft to provide structural health monitoring, detecting damaging events that occur during flight, such as debris impacts during launch and ascent or from micrometeoroids and orbital debris, or excessive loading due to anomalous flight conditions. A number of examples are provided of how these nondestructive measurement techniques have been applied to resolve safety-critical inspection concerns for the Space Shuttle, the International Space Station (ISS), and a variety of launch vehicles and unmanned spacecraft.

  17. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of accelerated testing in constant stress-rate (dynamic fatigue) testing for two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing for glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced composite silicon nitride and a 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism that is associated with such a phenomenon of considerable strength increase or decrease.

  18. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as a means of accelerated testing in constant stress-rate ('dynamic fatigue') testing for two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing for glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates in which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced composite silicon nitride and a 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism that is associated with such a phenomenon of considerable strength increase or decrease.

  19. A technique for optimal temperature estimation for modeling sunrise/sunset thermal snap disturbance torque

    NASA Technical Reports Server (NTRS)

    Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.

    1990-01-01

    A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice per orbit impulsive disturbance torque is attributed to vehicle passage in and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar-array-like structure will be described. For this derivation the array is assumed to be a thin, cantilevered beam. The time varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov Theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
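
    The Gauss-Markov estimation step described above can be sketched as a covariance-weighted linear estimate of temperature at a target point from a distributed sensor network. The sensor layout, the exponential covariance model, and all numbers below are illustrative assumptions, not values from the paper:

```python
import math

def cov(d, sigma2=4.0, length=0.5):
    # Assumed exponential covariance model for temperature anomalies (K^2)
    return sigma2 * math.exp(-d / length)

def solve(A, b):
    # Gaussian elimination with partial pivoting; fine for small sensor networks
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gauss_markov_estimate(positions, anomalies, target):
    # Best linear (covariance-weighted) estimate of the anomaly at `target`:
    # weights w solve K w = k, estimate = w . anomalies
    n = len(positions)
    K = [[cov(abs(positions[i] - positions[j])) for j in range(n)] for i in range(n)]
    k = [cov(abs(p - target)) for p in positions]
    w = solve(K, k)
    return sum(wi * ti for wi, ti in zip(w, anomalies))

# Hypothetical 1-D sensor line along an array edge (positions in metres);
# readings are temperature anomalies about the mean, in kelvin
pos = [0.0, 1.0, 2.0, 3.0]
temps = [1.2, 0.8, -0.3, -1.0]
est_mid = gauss_markov_estimate(pos, temps, 1.5)
```

    At a sensor location the estimate reproduces that sensor's reading exactly, which is the interpolating property of this class of estimators.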

  20. Techniques for estimating flood-peak discharges from urban basins in Missouri

    USGS Publications Warehouse

    Becker, L.D.

    1986-01-01

    Techniques are defined for estimating the magnitude and frequency of future flood peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records of 96 gaged sites in Missouri and adjacent states. Final regression equations are based on a balanced, representative sampling of 37 gaged sites in Missouri. This sample included 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood peak estimates are from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The techniques for estimation are applicable to flood flows that are not significantly affected by storage caused by manmade activities. Flood peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi. (Author's abstract)
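
    A minimal illustration of the regression step: ordinary least squares on log10-transformed flood peaks against drainage area. The drainage areas and discharges below are hypothetical, and the actual equations also use impervious area or a basin development factor as predictors:

```python
import math

def fit_loglog(areas, flows):
    # Ordinary least squares on log10-transformed variables:
    #   log10(Q_T) = a + b * log10(A)
    x = [math.log10(v) for v in areas]
    y = [math.log10(v) for v in flows]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

def predict(a, b, area):
    # Back-transform the fitted line to discharge units
    return 10 ** (a + b * math.log10(area))

# Hypothetical drainage areas (sq mi) and 100-yr peak discharges (cfs)
areas = [0.5, 1.2, 3.0, 8.5, 20.0, 40.0]
flows = [380.0, 700.0, 1400.0, 3000.0, 5600.0, 9500.0]
a, b = fit_loglog(areas, flows)
q100_at_10sqmi = predict(a, b, 10.0)
```

    Fitting in log space makes the relation a power law Q = 10^a * A^b, the usual form of regional flood-frequency equations.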

  1. Advanced spatio-temporal filtering techniques for photogrammetric image sequence analysis in civil engineering material testing

    NASA Astrophysics Data System (ADS)

    Liebold, F.; Maas, H.-G.

    2016-01-01

    The paper shows advanced spatial, temporal and spatio-temporal filtering techniques which may be used to reduce noise effects in photogrammetric image sequence analysis tasks and tools. As a practical example, the techniques are validated in a photogrammetric spatio-temporal crack detection and analysis tool applied in load tests in civil engineering material testing. The load test technique is based on monocular image sequences of a test object under varying load conditions. The first image of a sequence is defined as a reference image under zero load, wherein interest points are determined and connected in a triangular irregular network structure. For each epoch, these triangles are compared to the reference image triangles to search for deformations. The result of the feature point tracking and triangle comparison process is a spatio-temporally resolved strain value field, wherein cracks can be detected, located and measured via local discrepancies. The strains can be visualized as a color-coded map. In order to improve the measuring system and to reduce noise, the strain values of each triangle must be treated in a filtering process. The paper shows the results of various filter techniques in the spatial and in the temporal domain as well as spatio-temporal filtering techniques applied to these data. The best results were obtained by a bilateral filter in the spatial domain and by a spatio-temporal EOF (empirical orthogonal function) filtering technique.

  2. Recursive estimation techniques for detection of small objects in infrared image data

    NASA Astrophysics Data System (ADS)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
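
    The two ingredients named in the abstract, a weighted autocorrelation update with finite memory and a matched filter for colored noise, can be sketched as follows. The window size, forgetting factor, target signature, and deterministic "noise" below are illustrative assumptions:

```python
import math

def update_autocorr(R, x, lam=0.95):
    # Weighted autocorrelation update: R <- lam*R + x x^T
    # (the forgetting factor lam gives the algorithm finite memory)
    n = len(x)
    return [[lam * R[i][j] + x[i] * x[j] for j in range(n)] for i in range(n)]

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def detection_statistic(R, target, frame):
    # Matched filter for colored noise: s = (R^-1 q)^T x
    w = solve(R, target)
    return sum(wi * xi for wi, xi in zip(w, frame))

# Hypothetical 3-pixel window; deterministic low-amplitude background
target = [1.0, 2.0, 1.0]
R = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
for k in range(50):
    noise = [0.1 * math.sin(k + i) for i in range(3)]
    R = update_autocorr(R, noise, lam=0.9)

noise_frame = [0.1 * math.sin(50 + i) for i in range(3)]
target_frame = [n + t for n, t in zip(noise_frame, target)]
s_noise = detection_statistic(R, target, noise_frame)
s_target = detection_statistic(R, target, target_frame)
```

    Because R stays positive definite, adding the target signature to a frame raises the statistic by q^T R^-1 q > 0, which is what makes thresholding s a detector.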

  3. The use of spectral data in wheat yield estimation - An assessment of techniques explored in LACIE

    NASA Technical Reports Server (NTRS)

    Stuff, R. G.; Barnett, T. L.

    1979-01-01

    The object of the paper is to assess the results of the Large Area Crop Inventory Experiment (LACIE) and closely related research on yield estimation techniques based on remote sensing variables. The exploratory research conducted during LACIE substantiated the hypothesis of yield related information contained in Landsat multispectral scanner data and indicated some of its empirical characteristics. It is noted that leaf area and possibly other foliage features can be derived from spectral data for yield estimation through agrometeorological models and that multiple vegetative and grain related features may be discernible in Landsat-derived wheat spectra at different points in the crop development.

  4. Regressions by leaps and bounds and biased estimation techniques in yield modeling

    NASA Technical Reports Server (NTRS)

    Marquina, N. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. It was observed that OLS was not adequate as an estimation procedure when the independent or regressor variables were involved in multicollinearities. This was shown to cause the presence of small eigenvalues of the extended correlation matrix A'A. It was demonstrated that the biased estimation techniques and the all-possible subset regression could help in finding a suitable model for predicting yield. Latent root regression was an excellent tool that found how many predictive and nonpredictive multicollinearities there were.

  5. Review of recent advances in analytical techniques for the determination of neurotransmitters

    PubMed Central

    Perry, Maura; Li, Qiang; Kennedy, Robert T.

    2009-01-01

    Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cell and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472

  6. Biomass estimation of wetland vegetation in Poyang Lake area using ENVISAT advanced synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Liao, Jingjuan; Shen, Guozhuang; Dong, Lei

    2013-01-01

    Biomass estimation of wetlands plays a role in understanding dynamic changes of the wetland ecosystem. Poyang Lake is the largest freshwater lake in China, with an area of about 3000 km2. The lake's wetland ecosystem plays a significant role in China's environmental change. Synthetic aperture radar (SAR) data are a good choice for biomass estimation during rainy and dry seasons in this region. In this paper, we discuss neural network algorithms (NNAs) to retrieve wetland biomass using the alternating-polarization ENVISAT advanced synthetic aperture radar (ASAR) data. Two field measurements were carried out coinciding with the satellite overpasses through the hydrological cycle from April to November. A radiative transfer model of forest canopy, the Michigan Microwave Canopy Scattering (MIMICS) model, was modified to fit herbaceous wetland ecosystems. With both ASAR and MIMICS simulations as input data, the NNA-estimated biomass was validated with ground-measured data. This study indicates the capability of an NNA combined with a modified MIMICS model to retrieve wetland biomass from SAR imagery. Finally, the overall biomass of Poyang Lake wetland vegetation has been estimated: it reached 1.09×10⁹, 1.86×10⁸, and 9.87×10⁸ kg in April, July, and November 2007, respectively.

  7. Cost estimates for near-term deployment of advanced traffic management systems. Final report

    SciTech Connect

    Stevens, S.S.; Chin, S.M.

    1993-02-15

    The objective of this study is to provide cost estimates for engineering, design, installation, operation, and maintenance of Advanced Traffic Management Systems (ATMS) in the largest 75 metropolitan areas in the United States. This report gives estimates for deployment costs for ATMS in the next five years, subject to the qualifications and caveats set out in the following paragraphs. The report considers infrastructure components required to fully realize a functional ATMS over each of two highway networks (as discussed in the Section describing our general assumptions) under each of the four architectures identified in the MITRE Intelligent Vehicle Highway Systems (IVHS) Architecture studies. The architectures are summarized in this report in Table 2. Estimates are given for eight combinations of highway networks and architectures. We estimate that it will cost between $8.5 Billion (minimal network) and $26 Billion (augmented network) to proceed immediately with deployment of ATMS in the largest 75 metropolitan areas. Costs are given in 1992 dollars, and are not adjusted for future inflation. Our estimates are based partially on completed project costs, which have been adjusted to 1992 dollars. We assume that a particular architecture will be chosen; projected costs are broken down by architecture.

  8. Impact of advanced microstructural characterization techniques on modeling and analysis of radiation damage

    SciTech Connect

    Garner, F.A.; Odette, G.R.

    1980-01-01

    The evolution of radiation-induced alterations of dimensional and mechanical properties has been shown to be a direct and often predictable consequence of radiation-induced microstructural changes. Recent advances in understanding of the nature and role of each microstructural component in determining the property of interest has led to a reappraisal of the type and priority of data needed for further model development. This paper presents an overview of the types of modeling and analysis activities in progress, the insights that prompted these activities, and specific examples of successful and ongoing efforts. A review is presented of some problem areas that in the authors' opinion are not yet receiving sufficient attention and which may benefit from the application of advanced techniques of microstructural characterization. Guidelines based on experience gained in previous studies are also provided for acquisition of data in a form most applicable to modeling needs.

  9. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.

  10. Advanced MRI techniques to improve our understanding of experience-induced neuroplasticity.

    PubMed

    Tardif, Christine Lucas; Gauthier, Claudine Joëlle; Steele, Christopher John; Bazin, Pierre-Louis; Schäfer, Andreas; Schaefer, Alexander; Turner, Robert; Villringer, Arno

    2016-05-01

    Over the last two decades, numerous human MRI studies of neuroplasticity have shown compelling evidence for extensive and rapid experience-induced brain plasticity in vivo. To date, most of these studies have consisted of simply detecting a difference in structural or functional images with little concern for their lack of biological specificity. Recent reviews and public debates have stressed the need for advanced imaging techniques to gain a better understanding of the nature of these differences - characterizing their extent in time and space, their underlying biological and network dynamics. The purpose of this article is to give an overview of advanced imaging techniques for an audience of cognitive neuroscientists that can assist them in the design and interpretation of future MRI studies of neuroplasticity. The review encompasses MRI methods that probe the morphology, microstructure, function, and connectivity of the brain with improved specificity. We underline the possible physiological underpinnings of these techniques and their recent applications within the framework of learning- and experience-induced plasticity in healthy adults. Finally, we discuss the advantages of a multi-modal approach to gain a more nuanced and comprehensive description of the process of learning. PMID:26318050

  11. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three dimensional computer model and a single two dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  12. Estimation of flooded area in the Bahr El-Jebel basin using remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Shamseddin, M. A. H.; Hata, T.; Tada, A.; Bashir, M. A.; Tanakamaru, T.

    2006-07-01

    Despite the importance of estimating the Sudd (swamp) area for any hydrological project in southern Sudan, there is no broad agreement on its size, owing to the region's inaccessibility and civil war. In this study, remote sensing techniques are used to estimate the Bahr El-Jebel flooded area. MODIS-Terra (Moderate Resolution Imaging Spectroradiometer) level 1B satellite images are analyzed using an unsupervised classification method. The annual mean of the Bahr El-Jebel flooded area is estimated at 20 400 km2, which is 96% of the Sutcliffe and Park (1999) estimate based on a water balance model, and only 53% of the SEBAL (Surface Energy Balance Algorithm for Land) model estimate. The accuracy of the classification is 71%. The study also found that the swelling and shrinkage pattern of the Sudd area through the year follows the trends of Lake Victoria outflow. Two evaporation methods (open-water evaporation and the SEBAL model) were used to estimate the annual storage volume of the Bahr El-Jebel River with a water balance model, and storage changes over time were generated for the study years.

  13. A spline-based parameter estimation technique for static models of elastic structures

    NASA Technical Reports Server (NTRS)

    Dutt, P.; Taasan, S.

    1986-01-01

    The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.

  14. A spline-based parameter estimation technique for static models of elastic structures

    NASA Technical Reports Server (NTRS)

    Dutt, P.; Ta'asan, S.

    1989-01-01

    The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.

  15. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  16. Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.

    1990-01-01

    Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.
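
    The "remove the liquid water contribution" step can be caricatured as fitting a scale factor for a known water signature and keeping the residual. This is a linearized stand-in for the paper's spectrum matching, and every spectrum below is a made-up illustration:

```python
def remove_component(observed, component):
    # Least-squares scale c minimizing ||observed - c*component||^2,
    # then the residual spectrum observed - c*component
    c = sum(o * w for o, w in zip(observed, component)) / \
        sum(w * w for w in component)
    residual = [o - c * w for o, w in zip(observed, component)]
    return c, residual

# Hypothetical samples: leaf spectrum = scaled water signature + constituent residue
water = [0.9, 0.7, 0.4, 0.6, 0.8]
cellulose = [0.00, 0.05, 0.12, 0.05, 0.00]
leaf = [1.5 * w + s for w, s in zip(water, cellulose)]
c, residual = remove_component(leaf, water)
```

    In the linear least-squares version the residual is the part of the leaf spectrum orthogonal to the water signature; the actual technique fits equivalent liquid water thickness with a nonlinear absorption model before subtracting.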

  17. Advanced techniques and technology for efficient data storage, access, and transfer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Miller, Warner

    1991-01-01

    Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
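
    The two-stage structure, a predictive preprocessor feeding an adaptive entropy coder, can be sketched with a unit-delay predictor and a fixed-parameter Golomb-Rice codeword. This is a simplified illustration of the general scheme; the NASA/CCSDS standard selects the Rice parameter k adaptively per block rather than fixing it:

```python
def zigzag(e):
    # Map signed prediction residuals to nonnegative integers: 0,-1,1,-2,... -> 0,1,2,3,...
    return 2 * e if e >= 0 else -2 * e - 1

def preprocess(samples):
    # Unit-delay predictor: residual e[i] = x[i] - x[i-1] (x[-1] taken as 0),
    # then zigzag-map to the "standard form" nonnegative-integer source
    prev = 0
    out = []
    for x in samples:
        out.append(zigzag(x - prev))
        prev = x
    return out

def rice_encode(value, k):
    # Golomb-Rice codeword: unary-coded quotient, then k remainder bits
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "0{}b".format(k))
    return bits

# Hypothetical slowly varying sensor samples
code = "".join(rice_encode(v, 2) for v in preprocess([10, 12, 11, 11]))
```

    Small residuals map to short codewords, so the predictor's job is exactly what the abstract says: reshape the input so the second stage sees a source it codes near-optimally.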

  18. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification of normal or pathological gait.
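
    The comparison metrics used in the abstract, maximum and RMS differences between joint-angle series, are simple to compute. The angle values below are invented for illustration, not data from the study:

```python
import math

def rms_difference(a, b):
    # Root-mean-square difference between two joint-angle time series (degrees)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def max_difference(a, b):
    # Largest pointwise disagreement between the two series (degrees)
    return max(abs(x - y) for x, y in zip(a, b))

# Hypothetical flexion/extension angles from PCT vs OpenSim over one gait cycle
pct = [5.0, 20.0, 40.0, 55.0, 30.0]
opensim = [9.5, 24.0, 45.5, 60.0, 36.0]
rms = rms_difference(pct, opensim)
peak = max_difference(pct, opensim)
```

    A near-constant offset between two series, as the study reports, inflates the RMS difference even when the waveforms' shapes agree, which is why the authors look at both metrics.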

  19. Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation

    NASA Astrophysics Data System (ADS)

    Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.

    A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
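
    A sketch of the Deming regression step and its use in the EC tracer method. The slope formula below is the standard closed form for a given error-variance ratio delta; the concentrations are hypothetical, and the York method additionally weights each point by its measured uncertainties:

```python
import math

def deming(x, y, delta=1.0):
    # Deming regression: measurement error in both variables;
    # delta is the assumed ratio of error variances var(y)/var(x)
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - yb) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / (n - 1)
    b = (syy - delta * sxx + math.sqrt((syy - delta * sxx) ** 2
                                       + 4 * delta * sxy ** 2)) / (2 * sxy)
    a = yb - b * xb
    return a, b

def secondary_oc(oc, ec, a, b):
    # EC tracer method: OC_pri = a + b*EC (b = primary OC/EC ratio,
    # a = non-combustion OC), and OC_sec = OC_tot - OC_pri
    return [o - (a + b * e) for o, e in zip(oc, ec)]

# Hypothetical EC and total OC concentrations (ug/m^3)
ec = [0.5, 1.0, 1.5, 2.0, 2.5]
oc = [2.1, 3.2, 4.6, 5.4, 6.8]
a, b = deming(ec, oc, delta=1.0)
oc_sec = secondary_oc(oc, ec, a, b)
```

    With delta -> infinity the slope collapses to the ordinary least-squares estimate, which is why OLS is unsuitable here: it implicitly assumes EC is measured without error.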

  20. Comparison of Machine Learning Techniques for Estimating the Power Consumption of Household Electric Appliances

    NASA Astrophysics Data System (ADS)

    Murata, Hiroshi; Onoda, Takashi; Yoshimoto, Katsuhisa; Nakano, Yukio

    A non-intrusive monitoring system estimates the behavior of individual electric appliances from measurement of the total household load demand curve. The total load demand curve is measured at the entrance of the power line into the house. The power consumption of individual appliances can be estimated using several machine learning techniques by analyzing the characteristic frequency contents of the household load curve. In this paper, we present results of applying several regression methods, such as multi-layered perceptrons (MLP), radial basis function networks (RBFN), and support vector regressors (SVR), to estimate the power consumption of an air conditioner. Our experiments show that RBFN achieves the best accuracy for the non-intrusive monitoring system.
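
    One of the compared regressors, an RBF network, reduces to a linear solve when a Gaussian basis is centered on each training sample. The feature/load pairs below are invented stand-ins for the paper's frequency-content features, and this interpolating variant omits the regularization a practical RBFN would use:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, gamma=0.5):
    # RBF network: one Gaussian basis per training sample; output weights
    # come from the linear interpolation conditions K w = y
    K = [[math.exp(-gamma * (xi - xj) ** 2) for xj in xs] for xi in xs]
    return solve(K, ys)

def rbf_predict(xs, w, gamma, x):
    return sum(wi * math.exp(-gamma * (x - xi) ** 2) for wi, xi in zip(w, xs))

# Hypothetical frequency-content feature vs air-conditioner power draw (kW)
feat = [0.0, 1.0, 2.0, 3.0, 4.0]
load = [0.1, 0.8, 1.9, 2.5, 2.7]
w = rbf_fit(feat, load, gamma=0.5)
estimate = rbf_predict(feat, w, 0.5, 2.4)
```

    Because the Gaussian kernel matrix is positive definite for distinct samples, the fit reproduces the training targets exactly; MLP and SVR trade that exactness for smoother generalization.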

  1. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    SciTech Connect

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
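
    The first group of parameters named above, midband fit, spectral slope, and 0-MHz intercept, come from a linear regression of the calibrated power spectrum over the analysis bandwidth. The frequencies and dB values below are hypothetical illustrations:

```python
def spectral_fit(freqs_mhz, power_db):
    # Linear regression of the calibrated power spectrum (dB) on frequency (MHz);
    # returns spectral slope (dB/MHz), 0-MHz intercept (dB), and midband fit
    # (the fitted value at the centre of the analysis band, dB)
    n = len(freqs_mhz)
    fb = sum(freqs_mhz) / n
    pb = sum(power_db) / n
    slope = sum((f - fb) * (p - pb) for f, p in zip(freqs_mhz, power_db)) / \
            sum((f - fb) ** 2 for f in freqs_mhz)
    intercept = pb - slope * fb
    centre = (min(freqs_mhz) + max(freqs_mhz)) / 2.0
    midband = intercept + slope * centre
    return slope, intercept, midband

# Hypothetical spectrum samples across a 2-10 MHz analysis bandwidth
freqs = [2.0, 4.0, 6.0, 8.0, 10.0]
power = [-48.0, -51.5, -56.0, -60.5, -63.0]
slope, intercept, midband = spectral_fit(freqs, power)
```

    The three quantities are not independent (midband = intercept + slope × centre frequency), which is why studies like this one combine them with scatterer-property estimates rather than using all three together.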

  2. Comparison of different techniques for streamflow-related extremes estimation in ungauged catchments

    NASA Astrophysics Data System (ADS)

    Rossi, Giuseppe; Caporali, Enrica; Chiarello, Valentina

    2013-04-01

High and low flows and associated floods and droughts are natural phenomena caused by opposite meteorological extremes, affected by various, but similar, catchment processes. Knowledge of peak flow and low flow discharges is fundamental in all water-related studies and infrastructure design. They are estimated starting from measurements of river discharges at stream gauging stations. The lack of observations at the site of interest, as well as the inaccuracy of the measurements, inevitably makes the development of predictive models necessary. Regional analysis is the classical approach to estimate river flow characteristics at sites where little or no data exist. Once the homogeneous regions are defined, specific interpolation techniques are needed to regionalize the hydrological variables. Here, two different techniques are chosen for estimating streamflow-related variables: top-kriging and multivariate analysis. Top-kriging is chosen because it is directly connected to the hydrographic network structure and geometric organization, while multivariate analysis, based on natural logarithms of seven geomorphoclimatic characteristics, is able to take the catchment properties into account. These methods are applied over the geographical space of Tuscany Region, in Central Italy. The results are validated using a cross-validation procedure and are also compared with classical interpolation approaches (e.g. ordinary kriging). With the aim of defining the most suitable procedure for streamflow extremes estimation, the results are compared through different error measures (mean square error, mean relative error, etc.).
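The cross-validation procedure described above can be sketched as a leave-one-out loop: drop each gauged site in turn, predict it from the remaining sites, and score the interpolator with the error measures named (mean square error, mean relative error). The inverse-distance-weighting stand-in below and all names are illustrative, not the study's top-kriging or multivariate implementation:

```python
import numpy as np

def loo_errors(predict, sites, values):
    """Leave-one-out cross-validation: drop each gauged site in turn,
    predict it from the rest, and return (MSE, mean relative error)."""
    preds = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        preds.append(predict(sites[mask], values[mask], sites[i]))
    preds = np.asarray(preds)
    mse = np.mean((preds - values) ** 2)
    mre = np.mean(np.abs(preds - values) / values)
    return mse, mre

def idw(xy, v, x0, power=2.0):
    """Inverse-distance weighting, a simple stand-in interpolator."""
    d = np.linalg.norm(xy - x0, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * v) / np.sum(w))
```

Running `loo_errors` once per candidate interpolator gives directly comparable error tables, which is the ranking logic the abstract describes.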

  3. Random sets technique for information fusion applied to estimation of brain functional images

    NASA Astrophysics Data System (ADS)

    Smith, Therese M.; Kelly, Patrick A.

    1999-05-01

A new mathematical technique for information fusion based on random sets, developed and described by Goodman, Mahler and Nguyen (The Mathematics of Data Fusion, Kluwer, 1997) can be useful for estimation of functional brain images. Many image estimation algorithms employ prior models that incorporate general knowledge about sizes, shapes and locations of brain regions. Recently, algorithms have been proposed using specific prior knowledge obtained from other imaging modalities (for example, Bowsher, et al., IEEE Trans. Medical Imaging, 1996). However, there is more relevant information than is presently used. A technique that permits use of additional prior information about activity levels would improve the quality of prior models, and hence, of the resulting image estimate. The use of random sets provides this capability because it allows seemingly non-statistical (or ambiguous) information such as that contained in inference rules to be represented and combined with observations in a single statistical model, corresponding to a global joint density. This paper illustrates the use of this approach by constructing an example global joint density function for brain functional activity from measurements of functional activity, anatomical information, clinical observations and inference rules. The estimation procedure is tested on a data phantom with Poisson noise.

  4. Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis

    PubMed Central

    Smith, Jeremy D.; Ferris, Abbie E.; Heise, Gary D.; Hinrichs, Richard N.; Martin, Philip E.

    2014-01-01

    The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment. PMID:24837164

  5. Innovative Techniques for Estimating Illegal Activities in a Human-Wildlife-Management Conflict

    PubMed Central

    Cross, Paul; St. John, Freya A. V.; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management. PMID:23341973
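The RRT figure quoted above follows from the standard forced-response design, in which the observed proportion of "yes" answers is a known mixture of truthful answers and answers forced by a randomizing device. A sketch with illustrative device probabilities; the study's actual design parameters are not given here:

```python
import math

def rrt_prevalence(n_yes, n, p_truth=0.75, p_forced_yes=0.10):
    """Forced-response RRT estimator. Device: answer truthfully with
    probability p_truth, forced 'yes' with probability p_forced_yes,
    forced 'no' otherwise. Observed P(yes) = p_truth*pi + p_forced_yes,
    so pi can be recovered from the sample proportion of 'yes'."""
    lam = n_yes / n                                   # observed 'yes' rate
    pi_hat = (lam - p_forced_yes) / p_truth           # invert the mixture
    se = math.sqrt(lam * (1 - lam) / n) / p_truth     # binomial SE, inflated
    return max(0.0, pi_hat), se
```

Note the privacy/precision trade-off: the standard error is the plain binomial error divided by `p_truth`, so stronger randomization (smaller `p_truth`) protects respondents but widens the confidence interval.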

  6. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    PubMed

    Cross, Paul; St John, Freya A V; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management. PMID:23341973

  7. Reliability and Efficacy of Water Use Estimation Techniques and their Impact on Water Management and Policy

    NASA Astrophysics Data System (ADS)

    Singh, A.; Deeds, N.; Kelley, V.

    2012-12-01

Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and have shown declining trends over many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow-meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include using a) site-specific energy conversion factors to convert the total amount of energy used (for pumping stations) to water pumped, b) reporting nozzle package (on center pivot irrigation systems) specifications and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness of each of these water use estimation techniques for regulatory purposes. Reliability and effectiveness of direct flow-metering devices were also addressed. Findings indicate that due to site-specific variability and hydrogeologic heterogeneity, alternative methods for estimating water use can have significant uncertainties associated with water use estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.
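The energy-conversion alternative method amounts to inverting the pump energy equation: metered energy, together with site-specific total dynamic head and pump efficiency, implies a pumped volume. A hedged sketch; the function name and the default efficiency are illustrative assumptions, and the abstract's point is precisely that the head and efficiency inputs carry large site-specific uncertainty:

```python
RHO_G = 9810.0  # specific weight of water, N/m^3 (rho * g)

def volume_from_energy(kwh, head_m, pump_eff=0.6):
    """Invert the pump energy equation E = rho*g*H*V / eff to estimate
    pumped volume V in m^3 from metered energy. A +/-10% error in head
    or efficiency propagates directly into the volume estimate."""
    joules = kwh * 3.6e6          # kWh -> J
    return joules * pump_eff / (RHO_G * head_m)
```

For example, 1000 kWh against a 60 m head at 60% efficiency corresponds to roughly 3.7 thousand cubic meters pumped under these assumptions.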

  8. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-05-01

We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  9. Determination of Electromagnetic Properties of Mesh Material Using Advanced Radiometer Techniques

    NASA Technical Reports Server (NTRS)

    Arrington, R. F.; Blume, H. J. C.

    1985-01-01

The need for a large diameter deployable antenna to map soil moisture with a 10 kilometer or better resolution using a microwave radiometer is discussed. A 6 meter deployable antenna is also needed to map sea surface temperature on the Navy Remote Ocean Sensor System (NROSS). Both of these deployable antennas require a mesh membrane material as the reflecting surface. The determination of the electromagnetic properties of mesh materials is a difficult problem. The Antenna and Microwave Research Branch (AMRB) of Langley Research Center was asked by NRL to measure the material to be used on NROSS. A cooperative program was initiated to measure this mesh material using two advanced radiometer techniques.

  10. Measuring the microbiome: perspectives on advances in DNA-based techniques for exploring microbial life

    PubMed Central

    Bunge, John; Gilbert, Jack A.; Moore, Jason H.

    2012-01-01

    This article reviews recent advances in ‘microbiome studies’: molecular, statistical and graphical techniques to explore and quantify how microbial organisms affect our environments and ourselves given recent increases in sequencing technology. Microbiome studies are moving beyond mere inventories of specific ecosystems to quantifications of community diversity and descriptions of their ecological function. We review the last 24 months of progress in this sort of research, and anticipate where the next 2 years will take us. We hope that bioinformaticians will find this a helpful springboard for new collaborations with microbiologists. PMID:22308073

  11. Techniques for measurement of the thermal expansion of advanced composite materials

    NASA Technical Reports Server (NTRS)

    Tompkins, Stephen S.

    1989-01-01

Techniques available to measure small thermal displacements in flat laminates and structural tubular elements of advanced composite materials are described. Emphasis is placed on laser interferometry and the laser interferometric dilatometer system used at the National Aeronautics and Space Administration (NASA) Langley Research Center. Thermal expansion data are presented for graphite-fiber reinforced 6061 and 2024 aluminum laminates and for graphite-fiber reinforced AZ91C and QH21A magnesium laminates before and after processing to minimize or eliminate thermal strain hysteresis. Data are also presented on the effects of reinforcement volume content on thermal expansion of silicon-carbide whisker and particulate reinforced aluminum.

  12. [Recent advances in the techniques of protein-protein interaction study].

    PubMed

    Wang, Ming-Qiang; Wu, Jin-Xia; Zhang, Yu-Hong; Han, Ning; Bian, Hong-Wu; Zhu, Mu-Yuan

    2013-11-01

Protein-protein interactions play key roles in the development of organisms and the response to biotic and abiotic stresses. Several wet-lab methods have been developed to study this challenging area, including the yeast two-hybrid system, tandem affinity purification, co-immunoprecipitation, GST pull-down, bimolecular fluorescence complementation, fluorescence resonance energy transfer and surface plasmon resonance analysis. In this review, we discuss the theoretical principles and relative advantages and disadvantages of these techniques, with an emphasis on recent advances that compensate for their limitations. PMID:24579310

  13. Advanced SuperDARN meteor wind observations based on raw time series analysis technique

    NASA Astrophysics Data System (ADS)

    Tsutsumi, M.; Yukimatu, A. S.; Holdsworth, D. A.; Lester, M.

    2009-04-01

The meteor observation technique based on SuperDARN raw time series analysis has been upgraded. This technique extracts meteor information as a byproduct and does not degrade the quality of normal SuperDARN operations. In the upgrade the radar operating system (RADOPS) has been modified so that it can oversample every 15 km during the normal operations, which have a range resolution of 45 km. As an alternative method for better range determination a frequency domain interferometry (FDI) capability was also coded in RADOPS, where the operating radio frequency can be changed every pulse sequence. Test observations were conducted using the CUTLASS Iceland East and Finland radars, where oversampling and FDI operation (two frequencies separated by 3 kHz) were simultaneously carried out. Meteor ranges obtained with both ranging techniques agreed very well. The ranges were then combined with the interferometer data to estimate meteor echo reflection heights. Although there were still some ambiguities in the arrival angles of echoes because of the rather long antenna spacing of the interferometers, the heights and arrival angles of most meteor echoes were determined more accurately than previously. Wind velocities were successfully estimated over the height range of 84 to 110 km. The FDI technique developed here can be further applied to the common SuperDARN operation, and future study of fine horizontal structures of F region plasma irregularities is anticipated.

  14. A Multiple Criteria Decision Modelling approach to selection of estimation techniques for fitting extreme floods

    NASA Astrophysics Data System (ADS)

    Duckstein, L.; Bobée, B.; Ashkar, F.

    1991-09-01

    The problem of fitting a probability distribution, here log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six techniques of fitting considered include classical techniques (maximum likelihood, moments of logarithms of flows) and new methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method consists of fitting the distribution using moments of different order, in particular the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean), -1 (harmonic mean) and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: - the two statistical criteria of mean square error and bias; - the two computational criteria of program availability and ease of use; - the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely, composite programming, ELECTRE, and MCQA, are applied to rank the estimation techniques.
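Of the three MCDM techniques named, composite programming is the simplest to sketch: each estimation technique's criteria (statistical, computational, user-related) are first mapped to value functions on [0, 1], then the techniques are ranked by their weighted Lp distance from the ideal point. The function name, weights, and scores below are hypothetical:

```python
import numpy as np

def composite_ranking(scores, weights, p=2.0):
    """Composite programming: scores[i, j] is the value of technique i on
    criterion j, scaled to [0, 1] with 1 = best. Techniques are ranked by
    weighted Lp distance to the ideal point (all criteria equal to 1)."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    dist = (w * (1.0 - scores) ** p).sum(axis=1) ** (1.0 / p)
    return np.argsort(dist)   # best technique (smallest distance) first
```

With p = 1 the criteria trade off linearly; larger p penalizes a technique that is very poor on any single criterion, which is one way the choice of metric encodes decision-maker preferences.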

  15. A biomechanical review of the techniques used to estimate or measure resistive forces in swimming.

    PubMed

    Sacilotto, Gina B D; Ball, Nick; Mason, Bruce R

    2014-02-01

Resistive or drag forces encountered during free swimming greatly influence the swim performance of elite competitive swimmers. Understanding the factors that affect the drag encountered will enhance performance within the sport. However, the current techniques used to experimentally measure or estimate drag values are questioned for their consistency, which limits investigation of these factors. This paper aims to further the understanding of how the resistive forces in swimming are measured and calculated. All techniques outlined demonstrate both strengths and weaknesses in the overall assessment of free swimming. By reviewing all techniques in this area, the reader should be able to select the one best suited to what researchers want to gain from the testing. PMID:24676518

  16. A comparison of conventional and advanced ultrasonic inspection techniques in the characterization of TMC materials

    NASA Astrophysics Data System (ADS)

    Holland, Mark R.; Handley, Scott M.; Miller, James G.; Reighard, Mark K.

    Results obtained with a conventional ultrasonic inspection technique as well as those obtained with more advanced ultrasonic NDE methods in the characterization of an 8-ply quasi-isotropic titanium matrix composite (TMC) specimen are presented. Images obtained from a conventional ultrasonic inspection of TMC material are compared with those obtained using more sophisticated ultrasonic inspection methods. It is suggested that the latter techniques are able to provide quantitative images of TMC material. They are able to reveal the same potential defect indications while simultaneously providing more quantitative information concerning the material's inherent properties. Band-limited signal loss and slope-of-attenuation images provide quantitative data on the inherent material characteristics and defects in TMC.

  17. A comparison of conventional and advanced ultrasonic inspection techniques in the characterization of TMC materials

    NASA Technical Reports Server (NTRS)

    Holland, Mark R.; Handley, Scott M.; Miller, James G.; Reighard, Mark K.

    1992-01-01

    Results obtained with a conventional ultrasonic inspection technique as well as those obtained with more advanced ultrasonic NDE methods in the characterization of an 8-ply quasi-isotropic titanium matrix composite (TMC) specimen are presented. Images obtained from a conventional ultrasonic inspection of TMC material are compared with those obtained using more sophisticated ultrasonic inspection methods. It is suggested that the latter techniques are able to provide quantitative images of TMC material. They are able to reveal the same potential defect indications while simultaneously providing more quantitative information concerning the material's inherent properties. Band-limited signal loss and slope-of-attenuation images provide quantitative data on the inherent material characteristics and defects in TMC.

  18. Recent advances in coupling capillary electrophoresis based separation techniques to ESI and MALDI MS

    PubMed Central

    Zhong, Xuefei; Zhang, Zichuan; Jiang, Shan; Li, Lingjun

    2014-01-01

    Coupling capillary electrophoresis (CE) based separation techniques to mass spectrometry creates a powerful platform for analysis of a wide range of biomolecules from complex samples because it combines the high separation efficiency of CE and the sensitivity and selectivity of MS detection. ESI and MALDI, as the most common soft ionization techniques employed for CE and MS coupling, offer distinct advantages for biomolecular characterization. This review is focused primarily on technological advances in combining CE and chip-based CE with ESI and MALDI MS detection in the past five years. Selected applications in the analyses of metabolites, peptides, and proteins with the recently developed CE-MS platforms are also highlighted. PMID:24170529

  19. Recent Advances and New Techniques in Visualization of Ultra-short Relativistic Electron Bunches

    SciTech Connect

    Xiang, Dao; /SLAC

    2012-06-05

Ultrashort electron bunches with rms length of ~1 femtosecond (fs) can be used to generate ultrashort x-ray pulses in FELs that may open up many new regimes in ultrafast sciences. It is also envisioned that ultrashort electron bunches may excite ~TeV/m wake fields for plasma wake field acceleration and high field physics studies. Recent success in using a 20 pC electron beam to drive an x-ray FEL at LCLS has stimulated world-wide interest in using low charge beams (1-20 pC) to generate ultrashort x-ray pulses (0.1-10 fs) in FELs. Accurate measurement of the length (preferably the temporal profile) of the ultrashort electron bunch is essential for understanding the physics associated with bunch compression and transportation. However, ever shorter electron bunches greatly challenge present beam diagnostic methods. In this paper we review recent advances in the measurement of ultrashort relativistic electron bunches, focusing on several techniques and their variants that provide state-of-the-art temporal resolution and are capable of breaking the 1 fs time barrier. Methods to further improve the resolution of these techniques are discussed, as are techniques for measuring the beam longitudinal phase space and the x-ray pulse shape in an x-ray FEL.

  20. Individual Particle Analysis of Ambient PM 2.5 Using Advanced Electron Microscopy Techniques

    SciTech Connect

    Gerald J. Keeler; Masako Morishita

    2006-12-31

The overall goal of this project was to demonstrate a combination of advanced electron microscopy techniques that can be effectively used to identify and characterize individual particles and their sources. Specific techniques used include high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM), STEM energy dispersive X-ray spectrometry (EDX), and energy-filtered TEM (EFTEM). A series of ambient PM2.5 samples were collected in communities in southwestern Detroit, MI (close to multiple combustion sources) and Steubenville, OH (close to several coal-fired utility boilers). High-resolution TEM (HRTEM) imaging showed a series of nano-metal particles including transition metals and the elemental composition of individual particles in detail. Submicron and nano-particles with Al, Fe, Ti, Ca, U, V, Cr, Si, Ba, Mn, Ni, K and S were observed and characterized from the samples. Among the identified nano-particles, combinations of Al, Fe, Si, Ca and Ti nano-particles embedded in carbonaceous particles were observed most frequently. These particles showed characteristics very similar to those of ultrafine coal fly ash particles reported previously. By utilizing HAADF-STEM, STEM-EDX, and EF-TEM, this investigation was able to gain information on the size, morphology, structure, and elemental composition of individual nano-particles collected in Detroit and Steubenville. The results showed that the contributions of local combustion sources - including coal-fired utilities - to ultrafine particle levels were significant. Although this combination of advanced electron microscopy techniques by itself cannot identify source categories, these techniques can be utilized as complementary analytical tools capable of providing detailed information on individual particles.

  1. Global precipitation estimates based on a technique for combining satellite-based estimates, rain gauge analysis, and NWP model precipitation information

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.

    1995-01-01

The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
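The inverse error variance weighting described above can be sketched as follows; the field layout (one 2-D grid per input source, per-source error variances, NaN marking data voids) is an illustrative assumption, not the SGM implementation:

```python
import numpy as np

def merge_fields(fields, error_vars):
    """Combine gridded estimates by weighting each field with its inverse
    error variance; cells marked NaN (data voids) get zero weight, so the
    merged value falls back to whichever sources cover that cell."""
    fields = np.asarray(fields, dtype=float)                # (k, ny, nx)
    w = (1.0 / np.asarray(error_vars, dtype=float)).reshape(-1, 1, 1)
    w = np.broadcast_to(w, fields.shape).copy()
    w[np.isnan(fields)] = 0.0                               # voids: no weight
    f = np.nan_to_num(fields)
    return (w * f).sum(axis=0) / w.sum(axis=0)
```

A cell still void in every observational field would divide by zero here; in the scheme the abstract describes, that is exactly where the NWP model estimate is substituted.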

  2. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of age (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increase with increasing age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet a larger sample should be used to confirm these results. PMID:26698389

  3. Estimating the sources of global sea level rise with data assimilation techniques

    PubMed Central

    Hay, Carling C.; Morrow, Eric; Kopp, Robert E.; Mitrovica, Jerry X.

    2013-01-01

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world. PMID:22543163
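The Kalman machinery above can be illustrated with a scalar random-walk filter; the smoother the authors describe adds a backward pass, omitted here. The state is a melt rate, the observations are rates inferred from tide gauges, and NaN entries model the data gaps the abstract mentions. All names and noise levels are illustrative assumptions:

```python
import numpy as np

def kalman_filter(obs, obs_var, proc_var, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter. At each step the state variance
    grows by proc_var (predict); where an observation exists, the state is
    pulled toward it by the Kalman gain (update). NaN observations are
    treated as gaps: the prediction is carried forward with inflated
    uncertainty, which is how the filter accommodates missing records."""
    x, p = x0, p0
    estimates = []
    for z, r in zip(obs, obs_var):
        p += proc_var                      # predict (random-walk dynamics)
        if not np.isnan(z):                # update only where data exist
            k = p / (p + r)                # Kalman gain
            x += k * (z - x)
            p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates)
```

In the study's setting the state and observation are vectors (GIS and WAIS melt rates, a network of gauges related through sea-level fingerprints), but the predict/update structure is the same.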

  4. Estimating the sources of global sea level rise with data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Hay, Carling C.; Morrow, Eric; Kopp, Robert E.; Mitrovica, Jerry X.

    2013-02-01

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world.

  5. Comparison of different automatic adaptive threshold selection techniques for estimating discharge from river width

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico

    2015-04-01

    River discharge monitoring is critical for, e.g., water resource planning, climate change studies, and hazard monitoring. River discharge has been measured at in situ gauges for more than a century. Despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task in defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images, so low values in the histogram usually represent water. Accordingly, applying a threshold to the image histogram to separate it into two classes is one of the most efficient techniques for building a water mask. Despite its simple definition, finding the appropriate threshold value for each image is the most critical issue: the threshold varies with changes in water level, river extent, atmosphere, sunlight radiation, and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm for estimating river discharge. To this end, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
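    As a concrete illustration of one such automatic technique, the sketch below applies Otsu's between-class-variance threshold to a synthetic bimodal near-infrared histogram to build a water mask, then fits a power-law rating curve Q = a W^b in log space. All pixel values, widths, and discharges are synthetic, and the choice of Otsu's method is ours for illustration, not necessarily one of the algorithms compared in the abstract.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold that maximizes between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # cumulative class probability
    mu = np.cumsum(p * centers)          # cumulative class mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

rng = np.random.default_rng(1)
water = rng.normal(0.05, 0.02, 2000)     # dark water pixels (NIR)
land = rng.normal(0.35, 0.05, 8000)      # brighter land pixels
band = np.concatenate([water, land])

t = otsu_threshold(band)
width_px = int((band < t).sum())         # pixels classified as water
print(round(t, 2), width_px)

# Empirical rating curve fitted in log space: log Q = log a + b log W
W = np.array([120.0, 150.0, 200.0, 260.0, 310.0])   # widths, m
Q = np.array([350.0, 520.0, 900.0, 1500.0, 2100.0]) # discharges, m^3/s
b, loga = np.polyfit(np.log(W), np.log(Q), 1)
Q_est = np.exp(loga) * 180.0 ** b        # discharge estimate at W = 180 m
```

Swapping `otsu_threshold` for another automatic selector while keeping the rating-curve step fixed is exactly the comparison design the abstract describes.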

  6. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    PubMed Central

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss small-group apprenticeships (SGAs) as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments using both flow cytometry and laser scanning cytometry during the 1-month summer apprenticeship. In addition to effectively and efficiently teaching cell biology laboratory techniques, this course design provided an opportunity for research training, career exploration, and mentoring. Students participated in active research projects, working with a skilled interdisciplinary team of researchers in a large research institution with access to state-of-the-art instrumentation. The instructors, composed of graduate students, laboratory managers, and principal investigators, worked well together to present a real and worthwhile research experience. The students enjoyed learning cell culture techniques while contributing to active research projects. The institution's researchers were equally enthusiastic to instruct and serve as mentors. In this article, we clarify and illuminate the value of small-group laboratory apprenticeships to the institution and the students by presenting the results and experiences of seven middle and high school participants and their instructors. PMID:12587031

  7. Biotechnology apprenticeship for secondary-level students: teaching advanced cell culture techniques for research.

    PubMed

    Lewis, Jennifer R; Kotur, Mark S; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A; Ferrell, Nick; Sullivan, Kathryn D; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss small-group apprenticeships (SGAs) as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments using both flow cytometry and laser scanning cytometry during the 1-month summer apprenticeship. In addition to effectively and efficiently teaching cell biology laboratory techniques, this course design provided an opportunity for research training, career exploration, and mentoring. Students participated in active research projects, working with a skilled interdisciplinary team of researchers in a large research institution with access to state-of-the-art instrumentation. The instructors, composed of graduate students, laboratory managers, and principal investigators, worked well together to present a real and worthwhile research experience. The students enjoyed learning cell culture techniques while contributing to active research projects. The institution's researchers were equally enthusiastic to instruct and serve as mentors. In this article, we clarify and illuminate the value of small-group laboratory apprenticeships to the institution and the students by presenting the results and experiences of seven middle and high school participants and their instructors. PMID:12587031

  8. Advancement of an Infra-Red Technique for Whole-Field Concentration Measurements in Fluidized Beds

    PubMed Central

    Medrano, Jose A.; de Nooijer, Niek C. A.; Gallucci, Fausto; van Sint Annaland, Martin

    2016-01-01

    For a better understanding and description of the mass transport phenomena in dense multiphase gas-solids systems such as fluidized bed reactors, detailed and quantitative experimental data on the concentration profiles are required, which demands advanced non-invasive concentration monitoring techniques with high spatial and temporal resolution. A novel technique based on the selective detection of a gas component in a gas mixture using its infra-red properties has been further developed. The first-stage development was carried out using a very small sapphire reactor and CO2 as tracer gas. Although the measuring principle was demonstrated, real application was hindered by the small reactor dimensions, related to the high costs and difficult handling of large sapphire plates. In this study, a new system has been developed that allows working at much larger scales and yet with higher resolution. In the new system, propane is used as tracer gas and quartz as reactor material. A thorough optimization and calibration of the technique is presented, which is subsequently applied for whole-field measurements with high temporal resolution. The developed technique allows the use of a relatively inexpensive configuration for the measurement of detailed concentration fields and can be applied to a large variety of important chemical engineering topics. PMID:26927127
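    The measurement principle rests on infra-red absorption: transmitted intensity falls off with tracer concentration along the optical path (Beer-Lambert law), so a calibrated pixel intensity can be inverted to a concentration. The sketch below uses hypothetical calibration constants (eps and L are assumed values, not from the paper).

```python
import math

EPS = 2.5   # effective absorptivity, 1/(mole fraction * cm)  [assumed]
L = 1.0     # optical path length through the bed, cm          [assumed]

def concentration(I, I0, eps=EPS, path=L):
    """Invert Beer-Lambert, I = I0*exp(-eps*C*L):  C = -ln(I/I0)/(eps*L)."""
    return -math.log(I / I0) / (eps * path)

I0 = 1000.0                          # background intensity, no tracer
I = I0 * math.exp(-EPS * 0.2 * L)    # synthetic pixel with C = 0.2
print(round(concentration(I, I0), 3))  # → 0.2
```

Applying this inversion independently to every pixel of the IR camera image is what turns a transmission snapshot into a whole-field concentration map.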

  9. Advancement of an Infra-Red Technique for Whole-Field Concentration Measurements in Fluidized Beds.

    PubMed

    Medrano, Jose A; de Nooijer, Niek C A; Gallucci, Fausto; van Sint Annaland, Martin

    2016-01-01

    For a better understanding and description of the mass transport phenomena in dense multiphase gas-solids systems such as fluidized bed reactors, detailed and quantitative experimental data on the concentration profiles is required, which demands advanced non-invasive concentration monitoring techniques with a high spatial and temporal resolution. A novel technique based on the selective detection of a gas component in a gas mixture using infra-red properties has been further developed. The first stage development was carried out using a very small sapphire reactor and CO₂ as tracer gas. Although the measuring principle was demonstrated, the real application was hindered by the small reactor dimensions related to the high costs and difficult handling of large sapphire plates. In this study, a new system has been developed, that allows working at much larger scales and yet with higher resolution. In the new system, propane is used as tracer gas and quartz as reactor material. In this study, a thorough optimization and calibration of the technique is presented which is subsequently applied for whole-field measurements with high temporal resolution. The developed technique allows the use of a relatively inexpensive configuration for the measurement of detailed concentration fields and can be applied to a large variety of important chemical engineering topics. PMID:26927127

  10. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    SciTech Connect

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-12-15

    Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Methods: Each phase of onboard 4D-CBCT is considered a deformation from one selected phase (the prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model extracted from the planning 4D-CT by principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on the data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso (XCAT) phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average position, and phase shift between lesion and body respiratory cycles. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in the prior images and the “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using an orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion
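    The MM step amounts to writing the deformation field as a mean field plus a few PCA eigen-motions and solving for the mode weights from sparse projection data. The toy below uses a synthetic 1-D field as a stand-in for the full 3-D DFMs and plain least squares in place of the paper's iterative optimizer; the mode shapes, weights, and sampling are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
npts = 200
mean_dfm = np.linspace(0, 5, npts)                       # mean deformation, mm
modes = np.stack([np.sin(np.linspace(0, np.pi, npts)),
                  np.cos(np.linspace(0, np.pi, npts))])  # two PCA eigen-motions

w_true = np.array([3.0, -1.5])                           # "patient" weights
dfm_true = mean_dfm + w_true @ modes

# "Limited-angle" data: only 15 scattered samples of the field are observed
idx = rng.choice(npts, size=15, replace=False)
obs = dfm_true[idx] + rng.normal(0, 0.05, idx.size)

# Data-fidelity constraint: least squares for the mode weights
A = modes[:, idx].T
w_est, *_ = np.linalg.lstsq(A, obs - mean_dfm[idx], rcond=None)
print(np.round(w_est, 2))
```

Because only a handful of weights are unknown, sparse limited-angle data can constrain them, which is why the MM stage works where voxel-wise reconstruction would be hopelessly underdetermined; the FD stage then relaxes this low-dimensional parameterization locally.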

  11. Charge mitigation techniques using glow and corona discharges for advanced gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Campsie, P.; Cunningham, L.; Hendry, M.; Hough, J.; Reid, S.; Rowan, S.; Hammond, G. D.

    2011-11-01

    Charging of silica test masses in gravitational wave detectors could potentially become a significant low-frequency noise source for advanced detectors. Charging noise has already been observed and confirmed in the GEO600 detector and is thought to have been observed in one of the LIGO detectors. In this paper, two charge mitigation techniques using glow and corona discharges were investigated to create repeatable and robust procedures. The glow discharge procedure was used to mitigate charge under vacuum and would be intended to be used in the instance where an optic has become charged while the detector is in operation. The corona discharge procedure was used to discharge samples at atmospheric pressure and would be intended to be used to discharge the detector optics during the cleaning of the optics. Both techniques were shown to reduce both polarities of surface charge on fused silica to a level that would not limit advanced LIGO. Measurements of the transmission of samples that had undergone the charge mitigation procedures showed no significant variation in transmission, at a sensitivity of ~ 200 ppm, in TiO2-doped Ta2O5/SiO2 multi-layer coated fused silica.

  12. Development of Advanced Nuclide Separation and Recovery Methods using Ion-Exchange Techniques in Nuclear Backend

    NASA Astrophysics Data System (ADS)

    Miura, Hitoshi

    The development of compact separation and recovery methods using selective ion-exchange techniques is very important for reprocessing and high-level liquid waste (HLLW) treatment in the nuclear backend field. Selective nuclide separation techniques are effective for the volume reduction of wastes and the utilization of valuable nuclides, and are expected to support the construction of an advanced nuclear fuel cycle system and the rationalization of waste treatment. In order to accomplish selective nuclide separation, the design and synthesis of novel adsorbents are essential for the development of compact and precise separation processes. The present paper deals with the preparation of highly functional and selective hybrid microcapsules enclosing nano-adsorbents in alginate gel polymer matrices by sol-gel methods, their characterization, and the clarification of their selective adsorption properties by batch and column methods. The selective separation of Cs, Pd, and Re in real HLLW was further accomplished using the novel microcapsules, and an advanced nuclide separation system was proposed based on the combination of selective processes using microcapsules.
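    Adsorbent selectivity in such batch experiments is commonly quantified by the uptake q and the distribution coefficient Kd, computed from initial and equilibrium solution concentrations. The sketch below shows those standard formulas with illustrative numbers (not data from the paper).

```python
def uptake(c0, ce, volume_ml, mass_g):
    """q = (C0 - Ce) * V / m, e.g. mg of nuclide per g of adsorbent."""
    return (c0 - ce) * (volume_ml / 1000.0) / mass_g

def distribution_coefficient(c0, ce, volume_ml, mass_g):
    """Kd = (C0 - Ce)/Ce * V/m, in mL/g; large Kd means strong selectivity."""
    return (c0 - ce) / ce * volume_ml / mass_g

c0, ce = 100.0, 4.0   # mg/L before and after contact (illustrative)
q = uptake(c0, ce, volume_ml=50.0, mass_g=0.1)
kd = distribution_coefficient(c0, ce, volume_ml=50.0, mass_g=0.1)
print(q, kd)  # → 48.0 mg/g, 12000.0 mL/g
```

Comparing Kd values for Cs, Pd, and Re against competing ions is how the selectivity claims for such microcapsules are typically established.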

  13. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder

    PubMed Central

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A.; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C.; Tenembaum, Silvia N.; Banwell, Brenda; Greenberg, Benjamin M.; Bennett, Jeffrey L.; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T.

    2016-01-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  14. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    SciTech Connect

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  15. Advanced MRI Techniques in the Evaluation of Complex Cystic Breast Lesions

    PubMed Central

    Popli, Manju Bala; Gupta, Pranav; Arse, Devraj; Kumar, Pawan; Kaur, Prabhjot

    2016-01-01

    OBJECTIVE The purpose of this research work was to evaluate complex cystic breast lesions by advanced MRI techniques and to correlate imaging with histologic findings. METHODS AND MATERIALS In a cross-sectional design from September 2013 to August 2015, 50 patients having sonographically detected complex cystic lesions of the breast were included in the study. Morphological characteristics were assessed. Dynamic contrast-enhanced MRI along with diffusion-weighted imaging and MR spectroscopy were used to further classify lesions into benign and malignant categories. All the findings were correlated with histopathology. RESULTS Of the 50 complex cystic lesions, 32 proved to be benign and 18 were malignant on histopathology. MRI features of heterogeneous enhancement on CE-MRI (13/18), Type III kinetic curve (13/18), reduced apparent diffusion coefficient (18/18), and tall choline peak (17/18) were strong predictors of malignancy. Thirteen of the 18 lesions showed a combination of Type III curve, reduced apparent diffusion coefficient value, and tall choline peak. CONCLUSIONS Advanced MRI techniques like dynamic imaging, diffusion-weighted sequences, and MR spectroscopy provide a high level of diagnostic confidence in the characterization of complex cystic breast lesions, thus allowing early diagnosis and significantly reducing patient morbidity and mortality. From our study, lesions showing heterogeneous contrast enhancement, Type III kinetic curve, diffusion restriction, and tall choline peak were significantly associated with malignant complex cystic lesions of the breast. PMID:27330299
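    One of the quantitative measures used here, the apparent diffusion coefficient (ADC), follows directly from a two-b-value diffusion-weighted acquisition. The sketch below computes it from synthetic signal intensities; the b-value and signals are illustrative, not patient data.

```python
import math

def adc(s0, sb, b=800.0):
    """ADC = ln(S0/Sb) / b, in mm^2/s when b is in s/mm^2."""
    return math.log(s0 / sb) / b

# Restricted diffusion (malignant pattern): signal stays high at b = 800
adc_malignant = adc(s0=1000.0, sb=520.0)
# Free diffusion (benign pattern): strong signal loss at b = 800
adc_benign = adc(s0=1000.0, sb=180.0)
print(f"{adc_malignant:.2e} {adc_benign:.2e}")
```

A reduced ADC, i.e. a lesion whose signal decays little with increasing b, is the "diffusion restriction" that the study found in all 18 malignant lesions.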

  16. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    DOE PAGES Beta

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  17. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder.

    PubMed

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C; Tenembaum, Silvia N; Banwell, Brenda; Greenberg, Benjamin M; Bennett, Jeffrey L; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T; Cabre, Philippe; Marignier, Romain; Tedder, Thomas; van Pelt, Danielle; Broadley, Simon; Chitnis, Tanuja; Wingerchuk, Dean; Pandit, Lekha; Leite, Maria Isabel; Apiwattanakul, Metha; Kleiter, Ingo; Prayoonwiwat, Naraporn; Han, May; Hellwig, Kerstin; van Herle, Katja; John, Gareth; Hooper, D Craig; Nakashima, Ichiro; Sato, Douglas; Yeaman, Michael R; Waubant, Emmanuelle; Zamvil, Scott; Stüve, Olaf; Aktas, Orhan; Smith, Terry J; Jacob, Anu; O'Connor, Kevin

    2015-07-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  18. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    PubMed Central

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed. PMID:25610632

  19. Multicohort model for prevalence estimation of advanced malignant melanoma in the USA: an increasing public health concern.

    PubMed

    Lin, Amy Y; Wang, Peter F; Li, Haojie; Kolker, Jennifer A

    2012-12-01

    The aim of the study was to estimate the prevalence of advanced cutaneous malignant melanoma in the USA in 2010 and to project prevalence estimates to the year 2015. An Excel-based, multicohort natural disease history model was developed. It used incidence, recurrence, all-cause mortality, and US population data from the up-to-date Surveillance, Epidemiology, and End Results (SEER) program, the US census, and the literature. The prevalence was stratified by tumor stage, sex, and age. The model estimated that there were 800 735 malignant melanoma cases (258 per 100 000 individuals) in the USA in 2010, of which 10.4% were in advanced stages, including stage III (22 per 100 000 individuals) and stage IV (four per 100 000 individuals). Among these advanced cases, 58.8% were men. In total, 42.1% of patients with advanced malignant melanoma were 65 years of age or older; of these elderly patients with advanced disease, 65.7% were men. The total number of cases and the number of advanced cases were projected to increase from 2010 to 2015 by 24.4% and 21.0%, respectively. There will be approximately one million malignant melanoma cases (306 per 100 000 individuals) in the USA in 2015. The prevalence of advanced malignant melanoma is expected to increase in the next few years. Advanced malignant melanoma disproportionately affects men and the elderly in the USA. PMID:22990665
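    The essence of a multicohort prevalence model is to carry each annual incident cohort forward with a survival probability and sum the survivors in the target year. The toy below uses illustrative counts and a flat survival rate, not SEER data or the study's stage-specific inputs.

```python
def prevalence(target_year, incidence_by_year, annual_survival):
    """Sum survivors of every incident cohort diagnosed up to target_year."""
    total = 0.0
    for year, cases in incidence_by_year.items():
        if year <= target_year:
            total += cases * annual_survival ** (target_year - year)
    return total

# Illustrative annual incident cohorts and an assumed flat survival rate
incidence = {2006: 900, 2007: 950, 2008: 1000, 2009: 1050, 2010: 1100}
p2010 = prevalence(2010, incidence, annual_survival=0.8)
print(round(p2010))  # → 3435
```

The published model refines each term: stage-specific incidence, recurrence between stages, and age- and sex-specific all-cause mortality in place of the single survival constant.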

  20. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, J. F.; Lin, B.; Nehrir, A. R.; Obland, M. D.; Liu, Z.; Browell, E. V.; Chen, S.; Kooi, S. A.; Fan, T. F.

    2015-12-01

    Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and Atmospheric Carbon and Transport (ACT) - America airborne investigation are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are being investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the mission science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of intervening optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the Earth's surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency modulation is also discussed in this paper. These techniques are used in a new data processing architecture to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.
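    The sidelobe-suppression claim rests on a basic property of pseudo-random phase codes: their autocorrelation is sharply peaked, unlike the periodic sidelobes of a pure sine modulation. The toy below cross-correlates a noisy delayed copy of a random +/-1 chip sequence with the transmitted code to recover the range bin; code length, delay, and noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_delay = 1024, 137
code = rng.choice([-1.0, 1.0], size=n)    # BPSK chip sequence (transmitted)

# Received "surface echo": delayed copy of the code plus detector noise
echo = np.roll(code, true_delay) + rng.normal(0, 0.5, n)

# Circular cross-correlation via FFT; the peak marks the round-trip delay
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
est_delay = int(np.argmax(corr))
print(est_delay)  # → 137
```

A thin-cloud return would simply add a second, smaller correlation peak at a shorter delay; because the random code has no periodic sidelobes, the two peaks stay separable, which is the property exploited for IPDA measurements through clouds.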

  1. Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron

    2015-10-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency modulation is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques that push the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, which is shown to be useful for making tree canopy measurements.

  2. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS CO2 Column Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron

    2015-01-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface, as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency modulation is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques that push the resolution of the lidar beyond the limit implied by the bandwidth of the modulation, which is shown to be useful for making tree canopy measurements.

  3. Parameter estimation in PS-InSAR deformation studies using integer least-squares techniques

    NASA Astrophysics Data System (ADS)

    Hanssen, R. F.; Ferretti, A.

    2002-12-01

Interferometric synthetic aperture radar (InSAR) methods are increasingly used for measuring deformations of the earth's surface. Unfortunately, in many cases the problem of temporal decorrelation hampers successful measurements over longer time intervals. The permanent scatterers approach (PS-InSAR) for processing time series of SAR interferograms proves to be a good alternative by recognizing and analyzing single scatterers with a reliable phase behavior in time. Ambiguity resolution, or phase unwrapping, is the process of resolving the unknown cycle ambiguities in the radar data, and is one of the main problems in InSAR data analysis. In a single interferogram, the problem of phase unwrapping and parameter estimation is usually solved in separate consecutive computations. It is often assumed that the final result of the phase unwrapping is a deterministic signal, used as input for the parameter estimation, e.g. of elevation and deformation. As a result, errors in the ambiguity resolution are usually not propagated into the final results, which can lead to a serious underestimation of errors in the parameters and consequently in the geophysical models which use these parameters. In fact, however, the resolved phase ambiguities are stochastic as well, even though they are described with a probability mass function instead of a probability density function. In this paper, the integer least-squares technique for integrated ambiguity resolution and parameter estimation is applied to PS-InSAR data analysis, using a three-step procedure. First, a standard least-squares adjustment is performed, assuming the ambiguities are float parameters, leading to the real-valued 'float' solution. Second, the ambiguities are resolved using the float ambiguity estimates. Third, if the second step was successful, the integer estimates are used to correct the float solution estimate.
It has been proved that the integer least-squares estimator is an optimal method in the sense that it
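The three-step procedure described in the abstract can be sketched on a toy mixed integer/real model. This is a minimal illustration under assumed data, with nearest-integer rounding standing in for the full integer least-squares search; it is not the PS-InSAR implementation.

```python
# Toy three-step integer least-squares flow: (1) float solution with
# ambiguities treated as reals, (2) ambiguity fixing (rounding here;
# full ILS searches a decorrelated integer lattice), (3) re-estimate
# the real parameter with the fixed integers. All data are synthetic.
import math

def solve(M, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lstsq(A, y):
    """Least squares via the normal equations (A^T A) x = A^T y."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) for c in range(n)] for r in range(n)]
    Aty = [sum(A[i][r] * y[i] for i in range(m)) for r in range(n)]
    return solve(AtA, Aty)

# Observation model y = a*x + 2*pi*n, one real parameter x and two
# integer ambiguities shared by pairs of observations (hypothetical).
TWO_PI = 2 * math.pi
a = [1.0, 2.0, 3.0, 4.0]
B = [[1, 0], [1, 0], [0, 1], [0, 1]]
x_true, n_true = 0.37, [3, -2]
y = [a[i] * x_true + TWO_PI * (B[i][0] * n_true[0] + B[i][1] * n_true[1])
     for i in range(4)]

# Step 1: float solution -- ambiguities treated as real parameters.
A_float = [[a[i], TWO_PI * B[i][0], TWO_PI * B[i][1]] for i in range(4)]
x_f, n1_f, n2_f = lstsq(A_float, y)

# Step 2: fix the ambiguities to integers.
n_fix = [round(n1_f), round(n2_f)]

# Step 3: correct the observations and re-estimate x alone.
y_corr = [y[i] - TWO_PI * (B[i][0] * n_fix[0] + B[i][1] * n_fix[1]) for i in range(4)]
x_fixed = lstsq([[a[i]] for i in range(4)], y_corr)[0]
```

The point the abstract makes is that step 2 is itself stochastic: treating `n_fix` as error-free understates the uncertainty of `x_fixed`.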

  4. System Design Techniques for Reducing the Power Requirements of Advanced life Support Systems

    NASA Technical Reports Server (NTRS)

    Finn, Cory; Levri, Julie; Pawlowski, Chris; Crawford, Sekou; Luna, Bernadette (Technical Monitor)

    2000-01-01

The high power requirement associated with overall operation of regenerative life support systems is a critical technological challenge. Optimization of individual processors alone will not be sufficient to produce an optimized system. System studies must be used in order to improve the overall efficiency of life support systems. Current research efforts at NASA Ames Research Center are aimed at developing approaches for reducing system power and energy usage in advanced life support systems. System energy integration and energy reuse techniques are being applied to advanced life support, in addition to advanced control methods for efficient distribution of power and thermal resources. An overview of current results of this work will be presented. The development of integrated system designs that reuse waste heat from sources such as crop lighting and solid waste processing systems will reduce overall power and cooling requirements. Using an energy integration technique known as Pinch analysis, system heat exchange designs are being developed that match hot and cold streams according to specific design principles. For various designs, the potential savings for power, heating and cooling are being identified and quantified. The use of state-of-the-art control methods for distribution of resources, such as system cooling water or electrical power, will also reduce overall power and cooling requirements. Control algorithms are being developed which dynamically adjust the use of system resources by the various subsystems and components in order to achieve an overall goal, such as smoothing of power usage and/or heat rejection profiles, while maintaining adequate reserves of food, water, oxygen, and other consumables, and preventing excessive build-up of waste materials. Reductions in the peak loading of the power and thermal systems will lead to lower overall requirements. Computer simulation models are being used to test various control system designs.

  5. Study of solid oxide fuel cell interconnects, protective coatings and advanced physical vapor deposition techniques

    NASA Astrophysics Data System (ADS)

    Gannon, Paul Edward

High energy conversion efficiency, decreased environmentally-sensitive emissions and fuel flexibility have attracted increasing attention toward solid oxide fuel cell (SOFC) systems for stationary, transportation and portable power generation. Critical durability and cost issues, however, continue to impede wide-spread deployment. Many intermediate temperature (600-800°C) planar SOFC systems employ metallic alloy interconnect components, which physically connect individual fuel cells into electric series, facilitate gas distribution to appropriate SOFC electrode chambers (fuel/anode and oxidant[air]/cathode) and provide SOFC stack mechanical support. These demanding multifunctional requirements challenge commercially-available and inexpensive metallic alloys due to corrosion and related effects. Many ongoing investigations are aimed at enabling inexpensive metallic alloys (via bulk and/or surface modifications) as SOFC interconnects (SOFC(IC)s). In this study, two advanced physical vapor deposition (PVD) techniques, large area filtered vacuum arc deposition (LAFAD) and filtered arc plasma-assisted electron beam PVD (FA-EBPVD), were used to deposit a wide variety of protective nanocomposite (amorphous/nanocrystalline) ceramic thin-film (<5 μm) coatings on commercial and specialty stainless steels with different surface finishes. Both bare and coated steel specimens were subjected to SOFC(IC)-relevant exposures and evaluated using complementary surface analysis techniques. Significant improvements were observed under simulated SOFC(IC) exposures with many coated specimens at ~800°C relative to uncoated specimens: stable surface morphology; low area specific resistance (ASR < 100 mΩ·cm² for >1,000 hours); and dramatically reduced Cr volatility (>30-fold). Analyses and discussions of SOFC(IC) corrosion, advanced PVD processes and protective coating behavior are intended to advance understanding and accelerate the development of durable and commercially-viable SOFC

  6. Comparison of Erosion Rates Estimated by Sediment Budget Techniques and Suspended Sediment Monitoring and Regulatory Implications

    NASA Astrophysics Data System (ADS)

    O'Connor, M.; Eads, R.

    2007-12-01

Watersheds in the northern California Coast Range have been designated as "impaired" with respect to water quality because of excessive sediment loads and/or high water temperature. Sediment budget techniques have typically been used by regulatory authorities to estimate current erosion rates and to develop targets for future desired erosion rates. This study examines erosion rates estimated by various methods for portions of the Gualala River watershed, designated as having water quality impaired by sediment under provisions of the Clean Water Act Section 303(d), located in northwest Sonoma County (~90 miles north of San Francisco). The watershed is underlain by Jurassic age sedimentary and meta-sedimentary rocks of the Franciscan formation. The San Andreas Fault passes through the western edge of the watershed, and other active faults are present. A substantial portion of the watershed is mantled by rock slides and earth flows, many of which are considered dormant. The Coast Range is geologically young, and rapid rates of uplift are believed to have contributed to high erosion rates. This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by a proposed vineyard development project in the watershed, and the need to document conditions in the project area, assess project environmental impacts and meet regulatory requirements pertaining to water quality. Erosion rate estimates were previously developed using sediment budget techniques for relatively large drainage areas (~100 to 1,000 km2) by the North Coast Regional Water Quality Control Board and US EPA and by the California Geological Survey. In this study, similar sediment budget techniques were used for smaller watersheds (~3 to 8 km2), and were supplemented by a suspended sediment monitoring program utilizing Turbidity Threshold Sampling techniques (as described in a companion study in this session).
The duration of the monitoring program to date

  7. Technique for estimating the 2- to 500-year flood discharges on unregulated streams in rural Missouri

    USGS Publications Warehouse

    Alexander, Terry W.; Wilson, Gary L.

    1995-01-01

A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from streamflow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
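Regional estimating equations of this kind are power laws in the basin characteristics. A minimal sketch of applying one, with hypothetical placeholder coefficients rather than the published Missouri values:

```python
# Hedged sketch of a regional power-law estimating equation of the form
# Q_T = a * A^b * S^c, with drainage area A in square miles and
# main-channel slope S in feet per mile. The coefficients a, b, c below
# are hypothetical placeholders, NOT the published regression values.

def design_flood(area_mi2, slope_ft_per_mi, a, b, c):
    """Regional-regression estimate of a T-year flood discharge (cfs)."""
    return a * area_mi2 ** b * slope_ft_per_mi ** c

# Hypothetical coefficients for an illustrative 100-year equation.
q100 = design_flood(area_mi2=250.0, slope_ft_per_mi=8.0,
                    a=120.0, b=0.72, c=0.35)
```

In practice each region and recurrence interval has its own fitted `a`, `b`, `c`, and the equation is valid only inside the drainage-area and slope ranges the stations span.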

  8. Estimation of flood environmental effects using flood zone mapping techniques in Halilrood Kerman, Iran.

    PubMed

    Boudaghpour, Siamak; Bagheri, Majid; Bagheri, Zahra

    2014-01-01

High flood occurrences with large environmental damages have a growing trend in Iran. Dynamic movements of water during a flood cause different environmental damages in geographical areas with different characteristics such as topographic conditions. In general, environmental effects and damages caused by a flood in an area can be investigated from different points of view. The current study aims to detect the environmental effects of flood occurrences in the Halilrood catchment area of Kerman province in Iran using flood zone mapping techniques. The intended flood zone map was produced in four steps. Steps 1 to 3 pave the way to calculate and estimate the flood zone map in the study area, while step 4 determines the estimation of environmental effects of flood occurrence. Based on our studies, a wide range of accuracy for estimating the environmental effects of flood occurrence was achieved using flood zone mapping techniques. Moreover, it was identified that the existence of the Jiroft dam in the study area can decrease the flood zone from 260 hectares to 225 hectares and can also decrease flood peak intensity by 20%. As a result, 14% of the flood zone in the study area can be saved environmentally. PMID:25649059

  9. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    SciTech Connect

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-03

The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette, with the maximum amount of distilled water held by the stomach measured in ml. Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30-36 hours. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.

  10. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    NASA Astrophysics Data System (ADS)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette, with the maximum amount of distilled water held by the stomach measured in ml. Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30-36 hours. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.
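The allometric model quoted in both records can be evaluated directly. A one-line sketch, assuming W is body weight in grams (as the abstract's 200 g example suggests) and volume is in ml:

```python
# Allometric stomach-volume model from the abstract:
# volume (ml) = 0.0000089 * W^2.93, W assumed to be body weight in grams.
def stomach_volume_ml(weight_g):
    return 0.0000089 * weight_g ** 2.93

v100 = stomach_volume_ml(100.0)
v200 = stomach_volume_ml(200.0)
```

Because the exponent exceeds 1, stomach volume grows faster than body weight: doubling W multiplies the predicted volume by 2^2.93 (about 7.6x).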

  11. Laparoscopic total pelvic exenteration using transanal minimal invasive surgery technique with en bloc bilateral lymph node dissection for advanced rectal cancer.

    PubMed

    Hayashi, Kengo; Kotake, Masanori; Kakiuchi, Daiki; Yamada, Sho; Hada, Masahiro; Kato, Yosuke; Hiranuma, Chikashi; Oyama, Kaeko; Hara, Takuo

    2016-12-01

    A 59-year-old man presenting with fecal occult blood visited our hospital. He was diagnosed with advanced lower rectal cancer, which was contiguous with the prostate and the left seminal vesicle. There were no metastatic lesions with lymph nodes or other organs. We performed laparoscopic total pelvic exenteration (LTPE) using transanal minimal invasive surgery technique with bilateral en bloc lateral lymph node dissection for advanced primary rectal cancer after neoadjuvant chemoradiotherapy. The total operative time was 760 min, and the estimated blood loss was 200 ml. LTPE is not well established technically, but it has many advantages including good visibility of the surgical field, less blood loss, and smaller wounds. A laparoscopic approach may be an appropriate choice for treating locally advanced lower rectal cancer, which requires TPE. PMID:27460130

  12. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to identify protein-coding regions in DNA accurately is rising. Several techniques for DNA feature extraction, which involve various cross-disciplinary fields, have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions while completely eliminating background noise. A comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895

  13. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique.

    PubMed

    Roy, Manidipa; Barman, Soma

    2014-01-01

The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to identify protein-coding regions in DNA accurately is rising. Several techniques for DNA feature extraction, which involve various cross-disciplinary fields, have come up in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions while completely eliminating background noise. A comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
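The 3-base periodicity that both the least-norm and SDFT methods exploit can be illustrated with the simpler periodogram idea the paper compares against (the least-norm estimator itself is not reproduced here): map each base to a binary indicator sequence and measure the DFT power at frequency 1/3.

```python
# Period-3 detection via the modified-periodogram idea: total DFT power
# of the four base-indicator sequences evaluated at f = 1/3.
# Sequences below are synthetic toys, not real genes.
import cmath

def period3_power(seq):
    n = len(seq)
    w = cmath.exp(-2j * cmath.pi / 3)  # DFT kernel at frequency 1/3
    total = 0.0
    for base in "ACGT":
        x = [1.0 if s == base else 0.0 for s in seq]
        X = sum(x[i] * w ** i for i in range(n))
        total += abs(X) ** 2
    return total / n

coding_like = period3_power("ATGGCC" * 30)   # strong 3-base structure
noncoding_like = period3_power("ACGT" * 45)  # period 4, no period-3 energy
```

Coding-like stretches give a sharp peak at 1/3; the paper's contribution is a subspace (least-norm) estimator that sharpens this peak and suppresses the background noise that the plain periodogram leaves in place.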

  14. TRAC-PF1: an advanced best-estimate computer program for pressurized water reactor analysis

    SciTech Connect

    Liles, D.R.; Mahaffy, J.H.

    1984-02-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos National Laboratory to provide advanced best-estimate predictions of postulated accidents in light water reactors. The TRAC-PF1 program provides this capability for pressurized water reactors and for many thermal-hydraulic experimental facilities. The code features either a one-dimensional or a three-dimensional treatment of the pressure vessel and its associated internals; a two-phase, two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field; flow-regime-dependent constitutive equation treatment; optional reflood tracking capability for both bottom flood and falling-film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. This report describes the thermal-hydraulic models and the numerical solution methods used in the code. Detailed programming and user information also are provided.

  15. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet, the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery to general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products which supply essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  16. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), whose motto this year was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis - Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  17. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    NASA Astrophysics Data System (ADS)

    Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru

    2015-07-01

    High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.
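The conventional estimator that the proposed compensation builds on can be sketched in one dimension: displacement between two frames is the lag that maximizes the normalized cross-correlation of a speckle kernel. This is a minimal synthetic illustration of baseline speckle tracking; the paper's out-of-plane compensation itself is not reproduced.

```python
# Minimal 1-D speckle tracking: integer displacement via the lag that
# maximizes normalized cross-correlation (NCC). Signals are synthetic
# Gaussian "speckle"; frame 2 is frame 1 circularly shifted by 3 samples.
import random

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def track(ref, frame, start, length, max_lag):
    """Best integer shift of ref[start:start+length] within `frame`."""
    kernel = ref[start:start + length]
    scores = {lag: ncc(kernel, frame[start + lag:start + lag + length])
              for lag in range(-max_lag, max_lag + 1)}
    return max(scores, key=scores.get)

random.seed(1)
ref = [random.gauss(0, 1) for _ in range(128)]
frame = ref[-3:] + ref[:-3]  # circular shift by +3 samples
shift = track(ref, frame, start=40, length=32, max_lag=8)
```

The paper's observation is that out-of-plane motion lowers the peak NCC value itself; that decorrelation, measured in a reference frame, is what the proposed method uses to compensate the lateral velocity estimate.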

  18. Bearing estimation based on orthogonal projections technique with a horizontal linear array in shallow water

    NASA Astrophysics Data System (ADS)

    Yi, Feng; Sun, Chao; Bai, Xiao-Hui

    2012-11-01

A new signal-subspace high-resolution bearing estimation method based on the orthogonal projections technique is proposed in this paper. Firstly, the received data are processed step by step to form a set of basis vectors for the signal subspace, using an orthogonal projections algorithm that does not construct or eigen-decompose the covariance matrix. This procedure retains linear computational complexity and guarantees maximum signal energy in the spanned signal subspace. The algorithm then exploits the singular value decomposition of the matrix comprised of the signal subspace and the modal subspace, which is also obtained from the received data, and the source bearings are estimated by detecting the intersection between the estimated signal subspace and the modal subspace. The computational complexity of the proposed method is compared to that of the subspace intersection method, and its performance is compared to those of conventional bearing estimation methods, including conventional beamforming (CBF) and minimum variance distortionless response (MVDR) beamforming. The performance of the proposed method under different conditions, such as sensor number, inter-sensor spacing, received signal-to-noise ratio (SNR), and snapshot number, is also investigated. Numerical simulation results in typical shallow water demonstrate the effectiveness of the proposed method.
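The CBF baseline mentioned above can be sketched as delay-and-sum beamforming over a horizontal line array: scan candidate bearings and pick the steering direction with maximum output power. This is a narrowband plane-wave toy with assumed array parameters, not the paper's subspace method.

```python
# Conventional beamformer (CBF) sketch: for each candidate bearing,
# steer the array and accumulate output power over snapshots.
# Plane-wave, narrowband model; all parameters are assumed.
import cmath, math

def cbf_bearing(snapshots, d_over_lambda, angles_deg):
    """Return the candidate bearing (deg) with maximum beam power."""
    m = len(snapshots[0])  # number of sensors
    best_p, best_a = -1.0, None
    for a in angles_deg:
        phi = 2 * math.pi * d_over_lambda * math.sin(math.radians(a))
        w = [cmath.exp(-1j * phi * k) for k in range(m)]  # steering vector
        p = sum(abs(sum(w[k].conjugate() * x[k] for k in range(m))) ** 2
                for x in snapshots)
        if p > best_p:
            best_p, best_a = p, a
    return best_a

# One noiseless plane wave from 20 deg on an 8-element,
# half-wavelength-spaced array (single snapshot).
m, true_deg, dol = 8, 20.0, 0.5
phi = 2 * math.pi * dol * math.sin(math.radians(true_deg))
snapshot = [cmath.exp(-1j * phi * k) for k in range(m)]
est = cbf_bearing([snapshot], dol, [a / 2 for a in range(-180, 181)])
```

CBF's resolution is limited by the array aperture, which is exactly the limitation the subspace methods in the abstract aim to overcome.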

  19. Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Xu, Zongxue; Singh, Vijay P.

    2016-09-01

The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is presently no effective approach to monitor or estimate Sr at the catchment scale. To fill the gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after the plausibility of Sr derived from the improved MCT had been evaluated. The results show that: (i) Sr estimates of different catchments varied greatly, from ∼10 mm to ∼200 mm, with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporation of more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation in Sr values is an integrated consequence of variations in rainfall, snowmelt water and evapotranspiration, with Sr values most sensitive to variations in ecosystem evapotranspiration. In addition, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in the influencing factors.
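The core mass-curve idea can be sketched with a running water balance: Sr is taken as the largest cumulative deficit the root zone must bridge when evaporative demand exceeds supply (here rain plus an assumed snowmelt input, mirroring the snowmelt module added in the study). The daily numbers below are hypothetical.

```python
# Mass Curve Technique sketch: root zone storage capacity as the
# maximum running deficit of evaporative demand over water supply.
# All daily inputs (mm/day) are hypothetical illustrations.

def mct_storage_capacity(supply, demand):
    """Max cumulative deficit of demand over supply (reservoir mass curve)."""
    deficit, sr = 0.0, 0.0
    for s, d in zip(supply, demand):
        deficit = max(0.0, deficit + d - s)  # deficit resets when supply catches up
        sr = max(sr, deficit)
    return sr

rain      = [5, 0, 0, 0, 10, 0, 0, 2, 0, 20]
snowmelt  = [0, 3, 3, 0,  0, 0, 0, 0, 0,  0]
et_demand = [4, 4, 4, 4,  4, 4, 4, 4, 4,  4]
supply = [r + m for r, m in zip(rain, snowmelt)]
sr = mct_storage_capacity(supply, et_demand)
```

Because Sr is driven by the worst dry spell, longer records (or longer return periods, as the abstract notes) give larger and more stable estimates than short ones.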

  20. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-01-01

In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs, and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals, using digital image processing techniques. The proposed approach is demonstrated and evaluated, using both simulated and experimentally-acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image-processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies. PMID:26384112
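The quantity being estimated reduces to distance over time: CV is the inter-electrode spacing divided by the MUAP propagation delay between adjacent channels. A back-of-envelope sketch with a plain integer-lag cross-correlation delay estimate (the paper's image-processing estimator, like the MLE baseline, refines this idea); the sampling rate, electrode spacing, and pulse shape are all assumed.

```python
# CV back-of-envelope: delay between two S-EMG channels from integer-lag
# cross-correlation, then CV = spacing / delay. Synthetic MUAP-like
# pulse; fs and spacing are assumed values, not from the paper.

def delay_samples(ch1, ch2, max_lag):
    """Integer lag (>=1) maximizing cross-correlation of ch2 against ch1."""
    best, best_lag = float("-inf"), 0
    n = len(ch1)
    for lag in range(1, max_lag + 1):
        score = sum(ch1[i] * ch2[i + lag] for i in range(n - max_lag))
        if score > best:
            best, best_lag = score, lag
    return best_lag

fs = 2048.0        # sampling rate, Hz (assumed)
spacing_m = 0.005  # 5 mm inter-electrode distance (assumed)

pulse = [0, 1, 3, 7, 3, 1, 0]            # MUAP-like waveform
ch1 = pulse + [0] * 20
ch2 = [0] * 5 + pulse + [0] * 15          # same pulse, 5 samples later
lag = delay_samples(ch1, ch2, max_lag=10)
cv = spacing_m / (lag / fs)               # m/s
```

With these assumed numbers the delay is 5 samples, giving a CV near 2 m/s, within the physiological range for muscle fibers.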

  1. Technique for estimating magnitude and frequency of peak flows in Maryland

    USGS Publications Warehouse

    Dillow, Jonathan J.A.

    1996-01-01

Methods are presented for estimating peak-flow magnitudes of selected frequencies for drainage basins in Maryland. The methods were developed by generalized least-squares regression techniques using data from 219 streamflow-gaging stations in and near Maryland, and apply to peak flows with recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years. The State is divided into five hydrologic regions: the Appalachian Plateaus and Allegheny Ridges region, the Blue Ridge and Great Valley region, the Piedmont region, the Western Coastal Plain region, and the Eastern Coastal Plain region. Sets of equations for calculating peak discharges based on physical basin characteristics and their associated standard errors of prediction are provided for each of the five hydrologic regions. Basin characteristics and flood-frequency characteristics are tabulated for 236 streamflow-gaging stations in Maryland and surrounding States. Methods of estimating peak flows at both gaged and ungaged sites in Maryland are presented.

  2. Technique for estimating flood-peak discharges and frequencies on rural streams in Illinois

    USGS Publications Warehouse

    Curtis, G.W.

    1987-01-01

    Flood-peak discharges and frequencies are presented for 394 gaged sites in Illinois, Indiana, and Wisconsin for recurrence intervals ranging from 2 to 100 years. A technique is presented for estimating flood-peak discharges at recurrence intervals ranging from 2 to 500 years for nonregulated streams in Illinois with drainage areas ranging from 0.02 to 10,000 square miles. Multiple-regression analyses, using basin characteristics and peak streamflow data from 268 of the 394 gaged sites, were used to define the flood-frequency relation. The most significant independent variables for estimating flood-peak discharge are drainage area, slope, rainfall intensity, and a regional factor. Examples are given to show a step-by-step procedure for calculating a 50-year flood for a site on an ungaged stream, a site at a gaged location, and a site near a gaged location. (USGS)

  3. Use of LANDSAT 2 data technique to estimate silverleaf sunflower infestation

    NASA Technical Reports Server (NTRS)

    Richardson, A. J.; Escobar, D. E.; Gausman, H. W.; Everitt, J. H. (Principal Investigator)

    1982-01-01

    The feasibility of using the Earth Resources Technology Satellite (LANDSAT-2) multispectral scanner (MSS) was tested to distinguish silverleaf sunflowers (Helianthus argophyllus Torr. and Gray) from other plant species and to estimate the hectarage percent of their infestation. Sunflowers gave high mean digital counts in all four LANDSAT MSS bands, manifested as a pinkish image response on the LANDSAT color composite imagery. Photo- and LANDSAT-estimated hectare percentages for silverleaf sunflower within a 23,467 ha study area were 9.1 and 9.5%, respectively. The geographic occurrence of sunflower areas on the line-printer recognition map was in good agreement with their known aerial photographic locations.

  4. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools have traditionally been developed in sequential mode, with codes optimized for single-core computing only. However, the increasing complexity of power grid models requires more intensive computation, and traditional simulation tools will soon be unable to meet grid operation requirements. Power system simulation tools therefore need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large state estimation problems within one second and achieves a near-linear speedup of 9,800 with 10,000 cores for the contingency analysis application. A performance evaluation is presented to show its effectiveness.

  5. A technique for estimating time of concentration and storage coefficient values for Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.; Garklavs, George; Oberg, Kevin A.

    1982-01-01

    Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)e to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
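The two-step technique can be followed numerically. Only the (TC + R)e relation comes from the abstract; the basin values and the regional ratio R/(TC + R) below are assumed for illustration.

```python
# Step 1: estimate the sum (TC + R)e from main-channel length L (mi) and
# slope S (ft/mi) using the relation in the abstract.
# Step 2: split the sum with a regional ratio R/(TC + R).
L = 12.0               # main-channel length (mi), hypothetical basin
S = 8.0                # main-channel slope (ft/mi), hypothetical basin
ratio = 0.55           # assumed regional value of R / (TC + R)

tc_plus_r = 35.2 * L ** 0.39 * S ** -0.78   # (TC + R)e, hours
R_e = ratio * tc_plus_r                     # estimated storage coefficient
TC_e = tc_plus_r - R_e                      # estimated time of concentration
```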

  6. AN EVALUATION OF TWO GROUND-BASED CROWN CLOSURE ESTIMATION TECHNIQUES COMPARED TO CROWN CLOSURE ESTIMATES DERIVED FROM HIGH RESOLUTION IMAGERY

    EPA Science Inventory

    Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...

  8. A technique for estimating seed production of common moist soil plants

    USGS Publications Warehouse

    Laubhan, Murray K.

    1992-01-01

    Seeds of native herbaceous vegetation adapted to germination in hydric soils (i.e., moist-soil plants) provide waterfowl with nutritional resources including essential amino acids, vitamins, and minerals that occur only in small amounts or are absent in other foods. These elements are essential for waterfowl to successfully complete aspects of the annual cycle such as molt and reproduction. Moist-soil vegetation also has the advantages of consistent production of foods across years with varying water availability, low management costs, high tolerance to diverse environmental conditions, and low deterioration rates of seeds after flooding. The amount of seed produced differs among plant species and varies annually depending on environmental conditions and management practices. Further, many moist-soil impoundments contain diverse vegetation, and seed production by a particular plant species usually is not uniform across an entire unit. Consequently, estimating total seed production within an impoundment is extremely difficult. The chemical composition of seeds also varies among plant species. For example, beggartick seeds contain high amounts of protein but only an intermediate amount of minerals. In contrast, barnyardgrass is a good source of minerals but is low in protein. Because of these differences, it is necessary to know the amount of seed produced by each plant species if the nutritional resources provided in an impoundment are to be estimated. The following technique for estimating seed production takes into account the variation resulting from different environmental conditions and management practices as well as differences in the amount of seed produced by various plant species. The technique was developed to provide resource managers with the ability to make quick and reliable estimates of seed production. Although on-site information must be collected, the amount of field time required is small (i.e., about 1 min per sample); sampling normally is

  9. An experimental result of estimating an application volume by machine learning techniques.

    PubMed

    Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko

    2015-01-01

    In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system using machine learning techniques that periodically detects a user's context on a smartphone. We selected the Android operating system because it has the largest market share and the highest flexibility in its development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume is easily forgotten, because users must push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application, including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, and estimates whether the volume should be adjusted using machine learning techniques via Weka. PMID:25713755
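The learning step can be illustrated with a tiny nearest-neighbour sketch: recall the volume the user chose in the most similar logged context. The actual application ran Weka classifiers on Android; the feature encoding and the distance here are simplifying assumptions.

```python
# Minimal sketch: 1-nearest-neighbour recall of the volume a user chose in
# the most similar logged context (location cluster, hour, foreground app).
def predict_volume(history, context):
    # history: list of ((location_id, hour, app_id), volume) pairs
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda rec: dist(rec[0], context))[1]

history = [
    ((0, 9, 1), 5),    # weekday morning, messaging app -> medium volume
    ((1, 22, 2), 0),   # home, late night, game -> muted
    ((2, 8, 3), 7),    # commute, music app -> loud
]
vol = predict_volume(history, (1, 23, 2))   # late-night context
```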

  10. Remote sensing techniques for mapping range sites and estimating range yield

    NASA Technical Reports Server (NTRS)

    Benson, L. A.; Frazee, C. J.; Waltz, F. A.; Reed, C.; Carey, R. L.; Gropper, J. L.

    1974-01-01

    Image interpretation procedures for determining range yield and for extrapolating range information were investigated for an area of the Pine Ridge Indian Reservation in southwestern South Dakota. Soil and vegetative data collected in the field utilizing a grid sampling design, and digital film data from color infrared film and black and white films, were analyzed statistically using correlation and regression techniques. The pattern recognition techniques used were K-class, mode seeking, and thresholding. The herbage yield equation derived for the detailed test site was used to predict yield for an adjacent similar field. The herbage yield estimate for the adjacent field was 1744 lbs. of dry matter per acre, which compared favorably with the mean yield of 1830 lbs. of dry matter per acre based upon ground observations. An inverse relationship was also observed between vegetative cover and the ratio of MSS 5 to MSS 7 of ERTS-1 imagery.

  11. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    PubMed Central

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A. C. M.

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with a cooperative approach, and perform simulation experiments to illustrate its accuracy in localization. PMID:22363204
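A toy version of the spring-relaxation idea, for one unknown node and three anchors with noise-free ranges: each range mismatch acts like a stretched or compressed spring pulling on the position estimate, which is relaxed iteratively toward equilibrium. The geometry, step size, and iteration count are assumptions; real deployments use noisy RSSI-derived ranges and many cooperating nodes.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # ideal measured ranges

pos = np.array([8.0, 8.0])           # poor initial guess
for _ in range(200):
    force = np.zeros(2)
    for anchor, r in zip(anchors, ranges):
        vec = pos - anchor
        d = np.linalg.norm(vec)
        # spring force along the anchor direction, proportional to the
        # (measured - current) distance error
        force += (r - d) * vec / d
    pos += 0.1 * force               # relax one step toward equilibrium
```

The update is gradient descent on the summed squared range errors, which is what makes the physical "spring" picture computationally cheap for a sensor node.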

  12. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project comprised 277 full-scale drop tests at three different quarries in Austria, recording key parameters of the rock fall trajectories. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two parameters selected, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood, and root mean square error (RMSE) minimization, are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
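Of the calibration techniques named above, RMSE minimization is the simplest to sketch: choose the parameter whose simulated runouts minimize the root-mean-square error against observations. The "model" and data below are synthetic stand-ins, not the rock fall code or the drop-test measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic observations: runouts generated by a "true" coefficient k = 0.6
observed = 50.0 * 0.6 + rng.normal(0, 1.0, 40)

def simulate(k):
    # trivial stand-in for the stochastic rock fall model
    return 50.0 * k

# Grid search over candidate parameter values, scoring each by RMSE
candidates = np.linspace(0.3, 0.9, 61)
rmse = [np.sqrt(np.mean((observed - simulate(k)) ** 2)) for k in candidates]
best_k = candidates[int(np.argmin(rmse))]
```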

  13. Advances of Peripheral Nerve Repair Techniques to Improve Hand Function: A Systematic Review of Literature

    PubMed Central

    P, Mafi; S, Hindocha; M, Dhital; M, Saleh

    2012-01-01

    Concepts of neuronal damage and repair date back to ancient times. Research in this topic has been growing ever since, and numerous nerve repair techniques have evolved throughout the years. Due to our greater understanding of nerve injuries and repair, we now distinguish between the central and peripheral nervous systems. In this review, we have chosen to concentrate on peripheral nerve injuries, and in particular those involving the hand. There are no reviews bringing together and summarizing the latest research evidence concerning the most up-to-date techniques used to improve hand function. Therefore, by identifying and evaluating all the published literature in this field, we have summarized all the available information about the advances in peripheral nerve techniques used to improve hand function. The most important ones are the use of resorbable poly[(R)-3-hydroxybutyrate] (PHB), epineural end-to-end suturing, graft repair, nerve transfer, side-to-side neurorrhaphy and end-to-side neurorrhaphy between the median, radial and ulnar nerves, nerve transplant, nerve repair, external neurolysis and epineural sutures, adjacent neurotization without nerve suturing, the Agee endoscopic operation, tourniquet-induced anesthesia, toe transfer and meticulous intrinsic repair, free autologous nerve grafting, use of distally based neurocutaneous flaps, and tubulization. At the same time, we found that the patient’s age, tension of the repair, time of repair, level of injury and scar formation following surgery affect the prognosis. Despite the thorough findings of this systematic review, we suggest that further research in this field is needed. PMID:22431951

  14. New advanced surface modification technique: titanium oxide ceramic surface implants: long-term clinical results

    NASA Astrophysics Data System (ADS)

    Szabo, Gyorgy; Kovacs, Lajos; Barabas, Jozsef; Nemeth, Zsolt; Maironna, Carlo

    2001-11-01

    The purpose of this paper is to discuss the background to advanced surface modification technologies and to present a new technique, involving the formation of a titanium oxide ceramic coating, with relatively long-term results of its clinical utilization. Three general approaches are used to modify surfaces: the addition of material, the removal of material, and the change of material already present. Surface properties can also be changed without the addition or removal of material, through laser or electron-beam thermal treatment. The new technique outlined in this paper relates to the production of a corrosion-resistant, 2000-2500 Å thick ceramic oxide layer with a coherent crystalline structure on the surface of titanium implants. The layer is grown electrochemically from the bulk of the metal and is modified by heat treatment. Such oxide ceramic-coated implants have a number of advantageous properties relative to implants covered with various other coatings: a higher external hardness, a greater force of adherence between the titanium and the oxide ceramic coating, a virtually perfect insulation between the organism and the metal (no possibility of metal allergy), etc. The coated implants were subjected to various physical, chemical, electron-microscopic, and other tests for qualitative characterization. Finally, these implants (plates and screws for maxillofacial osteosynthesis, and dental root implants) were applied in surgical practice over a period of 10 years. The tests and the experience acquired demonstrated the good properties of the titanium oxide ceramic-coated implants.

  15. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    SciTech Connect

    Lebedev, G. V. Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-15

    According to the rules of nuclear safety, measurements of the subcriticality of reactors should be carried out while performing nuclear hazardous operations. An advanced shooting-neutron-source technique is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. To realize this technique, it is recommended to operate a PNS with a frequency of 1–20 Hz. The PNS is stopped after a steady-state (on average) number of neutrons in the reactor volume is achieved. The change in the number of neutrons in the reactor volume is then measured over time with a sampling interval of ∼0.1 s. The results of these measurements, together with a system of point-kinetics equations, are used to calculate the sought subcriticality. The basic idea of the proposed technique is elaborated in a series of experiments on the Kvant assembly. The conditions that should be met in order to obtain a positive measurement result are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.
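The measurement principle can be caricatured with a one-group toy model: after the pulsed source stops, the prompt neutron population decays exponentially, and the fitted decay constant yields the reactivity. This deliberately ignores the delayed-neutron terms of the full point-kinetics system used in practice; beta and Lambda below are typical values, not the Kvant assembly's.

```python
import numpy as np

# Simplified relation (prompt decay only): n(t) ~ exp(-alpha * t), with
#   rho = beta - alpha * Lambda
beta = 0.0065          # delayed-neutron fraction, typical value
Lam = 1e-4             # prompt-neutron generation time (s), typical value
rho_true = -0.01       # assumed subcritical reactivity (to be "measured")

alpha = (beta - rho_true) / Lam          # prompt decay constant (1/s)
t = np.arange(0, 0.05, 0.001)            # discretely sampled decay window
n = 1e6 * np.exp(-alpha * t)             # simulated detector counts

alpha_fit = -np.polyfit(t, np.log(n), 1)[0]   # slope of log-counts
rho_est = beta - alpha_fit * Lam
```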

  16. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    NASA Astrophysics Data System (ADS)

    Lebedev, G. V.; Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-01

    According to the rules of nuclear safety, measurements of the subcriticality of reactors should be carried out while performing nuclear hazardous operations. An advanced shooting-neutron-source technique is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. To realize this technique, it is recommended to operate a PNS with a frequency of 1-20 Hz. The PNS is stopped after a steady-state (on average) number of neutrons in the reactor volume is achieved. The change in the number of neutrons in the reactor volume is then measured over time with a sampling interval of ˜0.1 s. The results of these measurements, together with a system of point-kinetics equations, are used to calculate the sought subcriticality. The basic idea of the proposed technique is elaborated in a series of experiments on the Kvant assembly. The conditions that should be met in order to obtain a positive measurement result are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.

  17. Locking the Advanced LIGO Gravitational Wave Detector: with a focus on the Arm Length Stabilization Technique

    NASA Astrophysics Data System (ADS)

    Staley, Alexa

    2015-11-01

    This thesis begins with an introduction to the theory of general relativity and gravitational waves. Common astrophysical sources are described in Chapter 2. Chapter 3 begins with a description of the installed instrument. A discussion of the detector design sensitivity, limiting noise sources, and estimated detection rates is also given. At the end of Chapter 3, the complications of lock acquisition are highlighted. The arm length stabilization system was introduced to Advanced LIGO as a partial way to solve the difficulties of locking. Chapter 4 discusses the motivation for the use of this scheme and explains the methodology. A detailed discussion of the arm length stabilization model is given, along with the noise budget, in Chapters 5 and 6 respectively. The full lock sequence is described in Chapter 7. The thesis concludes with the current status of the interferometers. (Abstract shortened by UMI.)

  18. Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques

    NASA Astrophysics Data System (ADS)

    Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.

    2012-04-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by NFC over the Nile Basin. The algorithm uses a set of rainfall predictors that come from multi-spectral Infrared cloud top observations and self-calibrate them to a set of predictands that come from the more accurate, but less frequent, Microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have become recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients with newly coming MW rain rates, and calibration using static
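The two-step structure can be sketched in miniature: (1) separate rain from no-rain pixels with a simple discriminant on an IR predictor, then (2) regress rain rate on the predictor for the raining pixels only. The threshold rule, single predictor, and data below are synthetic assumptions; SCaMPR itself calibrates discriminant analysis and stepwise regression against MW rain rates over many predictors.

```python
import numpy as np

rng = np.random.default_rng(5)
tb = rng.uniform(190, 280, 500)        # IR brightness temperature (K)

# Synthetic "truth": cold cloud tops rain, rate linear in the cold deficit
raining = tb < 230
rate = np.where(raining, 0.5 * (230 - tb), 0.0)        # mm/h
rate = rate + np.where(raining, rng.normal(0, 0.5, 500), 0.0)

# Step 1: a threshold discriminant separating rain from no-rain (toy rule)
threshold = tb[raining].max()

# Step 2: linear regression of rain rate on the predictor, rain pixels only
slope, intercept = np.polyfit(tb[raining], rate[raining], 1)
```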

  19. Recent Advances of Portable Multi-Sensor Technique of Volcanic Plume Measurement

    NASA Astrophysics Data System (ADS)

    Shinohara, H.

    2005-12-01

    A technique has been developed to estimate the chemical composition of volcanic gases based on the measurement of volcanic plumes at a distance from the source vent, using a portable multi-sensor system consisting of a humidity sensor, an SO2 electrochemical sensor and a CO2 IR analyzer (Shinohara, 2005). Since a volcanic plume is a mixture of the atmosphere and volcanic gases, the volcanic gas composition can be estimated by subtracting the atmospheric background from the plume data. This technique enables us to estimate concentration ratios of the major volcanic gas species (i.e., H2O, CO2 and SO2) without any complicated chemical analyses, even for gases emitted from an inaccessible open vent. Since the portable multi-sensor system is light (~5 kg) and small enough to carry in a medium-size backpack, we could apply this technique to measure volcanic plumes at the summits of various volcanoes, including those requiring a tough climb, such as Villarrica volcano, Chile. We further improved the sensor system and measurement techniques, including application of the LI-840 IR H2O and CO2 analyzer, an H2S electrochemical sensor and an H2 semiconductor sensor. The new LI-840 analyzer enables us to measure H2O concentration in the plume with a response time similar to that for CO2 concentration. The H2S electrochemical sensor from Komyo Co. has a chemical filter to remove SO2, achieving a low cross-sensitivity (0.1%) to SO2, so we can measure SO2/H2S ratios up to 1000. The semiconductor sensor can measure H2 concentration in the range from the atmospheric background level (~0.5 ppm) to ~50 ppm. The response of the H2 sensor is slower (90% response time = ~90 s) than that of the other sensors, particularly in the low concentration range, and the measurement is still semi-quantitative, with errors up to ±50%. The H2/H2O ratios are quite variable in volcanic gases, ranging from less than 10^-5 up to 10^-1, and the ratio is largely controlled by the temperature and pressure condition of the
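The background-subtraction idea can be sketched as a regression: within a plume traverse, excess CO2 above the atmospheric background scales linearly with SO2 (which has essentially no atmospheric background), so the slope of CO2 against SO2 gives the volcanic CO2/SO2 ratio. All numbers below are synthetic assumptions, not field data.

```python
import numpy as np

rng = np.random.default_rng(3)
so2 = rng.uniform(0, 2.0, 100)       # plume SO2 (ppm), ~zero background
co2_bg = 400.0                       # atmospheric CO2 background (ppm)
true_ratio = 8.0                     # assumed volcanic CO2/SO2 ratio

# Measured CO2 = background + volcanic contribution + sensor noise
co2 = co2_bg + true_ratio * so2 + rng.normal(0, 0.5, 100)

# Slope recovers the volcanic ratio; intercept recovers the background
ratio, intercept = np.polyfit(so2, co2, 1)
```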

  20. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
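The final predictive step of such volumetric approaches is a log-log regression of mass on fitted shape volume across extant species, then extrapolation to the extinct taxon. The data and the hypothetical mammoth volume below are invented for illustration; they are not the paper's measurements.

```python
import numpy as np

# Invented extant calibration data: alpha-shape volume (m^3) vs mass (kg)
volumes = np.array([0.05, 0.12, 0.40, 1.1, 2.9, 5.6])
masses = np.array([48.0, 115.0, 400.0, 1150.0, 2900.0, 5400.0])

# Power-law fit: log(mass) = intercept + slope * log(volume)
slope, intercept = np.polyfit(np.log(volumes), np.log(masses), 1)

def predict_mass(volume_m3):
    return np.exp(intercept + slope * np.log(volume_m3))

mammoth_mass = predict_mass(3.8)   # hypothetical alpha-shape volume (m^3)
```

A slope near 1 on these invented data reflects roughly constant body density; published equations report fitted slopes with confidence intervals rather than assuming isometry.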

  1. Estimating annual CO(2) flux for Lutjewad station using three different gap-filling techniques.

    PubMed

    Dragomir, Carmelia M; Klaassen, Wim; Voiculescu, Mirela; Georgescu, Lucian P; van der Laan, Sander

    2012-01-01

    Long-term measurements of CO(2) flux can be obtained using the eddy covariance technique, but these datasets are affected by gaps which hinder the estimation of robust long-term means and annual ecosystem exchanges. We compare results obtained using three gap-filling techniques: multiple regression (MR), multiple imputation (MI), and artificial neural networks (ANNs), applied to a one-year dataset of hourly CO(2) flux measurements collected in Lutjewad, over a flat agricultural area near the Wadden Sea dike in the north of the Netherlands. The dataset was separated into two subsets: a learning set and a validation set. The performances of the gap-filling techniques were analysed by calculating statistical criteria: coefficient of determination (R(2)), root mean square error (RMSE), mean absolute error (MAE), maximum absolute error (MaxAE), and mean square bias (MSB). Gap-filling accuracy is seasonally dependent, with better results in cold seasons. The highest accuracy is obtained using the ANN technique, which is also less sensitive to environmental/seasonal conditions. We argue that filling gaps directly on measured CO(2) fluxes is more advantageous than the common method of filling gaps on calculated net ecosystem change, because ANN is an empirical method and smaller scatter is expected when gap filling is applied directly to measurements. PMID:22566781
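Of the three techniques compared, multiple regression (MR) is the easiest to sketch: fit the flux against environmental drivers over the observed hours, then predict the missing hours. The drivers, the linear model, and the synthetic data below are simplifications; the paper's best performer was the ANN.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.uniform(0, 25, n)          # air temperature (degC), synthetic
par = rng.uniform(0, 1500, n)         # radiation driver, synthetic
flux = 2.0 + 0.15 * temp - 0.004 * par + rng.normal(0, 0.3, n)

gap = np.zeros(n, dtype=bool)
gap[100:150] = True                   # an artificial 50-hour gap

# Fit the regression on the available hours only
X = np.column_stack([np.ones(n), temp, par])
coef, *_ = np.linalg.lstsq(X[~gap], flux[~gap], rcond=None)

# Fill the gap with regression predictions; the synthetic truth is kept
# aside here purely to score the fill
filled = flux.copy()
filled[gap] = X[gap] @ coef
rmse = np.sqrt(np.mean((filled[gap] - flux[gap]) ** 2))
```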

  2. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Roberts, J.W.

    1990-01-01

    Multiple-regression equations are presented for estimating flood-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at ungaged sites on rural, unregulated streams in Ohio. The average standard errors of prediction for the equations range from 33.4% to 41.4%. Peak discharge estimates determined by log-Pearson Type III analysis using data collected through the 1987 water year are reported for 275 streamflow-gaging stations. Ordinary least-squares multiple-regression techniques were used to divide the State into three regions and to identify a set of basin characteristics that help explain station-to-station variation in the log-Pearson estimates. Contributing drainage area, main-channel slope, and storage area were identified as suitable explanatory variables. Generalized least-squares procedures, which include historical flow data and account for differences in the variance of flows at different gaging stations, spatial correlation among gaging station records, and variable lengths of station record, were used to estimate the regression parameters. Weighted peak-discharge estimates, computed as a function of the log-Pearson Type III and regression estimates, are reported for each station. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site located on the same stream. Limitations and shortcomings cited in an earlier report on the magnitude and frequency of floods in Ohio are addressed in this study. Geographic bias is no longer evident for the Maumee River basin of northwestern Ohio. No bias is found to be associated with the forested-area characteristic for the range used in the regression analysis (0.0 to 99.0%), nor is this characteristic significant in explaining peak discharges. Surface-mined area likewise is not significant in explaining peak discharges, and the regression equations are not biased when applied to basins having approximately 30% or less

  3. Spectral Estimation Techniques for time series with Long Gaps: Applications to Paleomagnetism and Geomagnetic Depth Sounding

    NASA Astrophysics Data System (ADS)

    Smith-Boughner, Lindsay

    Many Earth systems cannot be studied directly. One cannot measure the velocities of convecting fluid in the Earth's core, but one can measure the magnetic field generated by these motions at the surface. Examining how the magnetic field changes over long periods of time using power spectral density estimation provides insight into the dynamics driving the system. Changes in the magnetic field can also be used to study Earth properties - variations in magnetic fields outside the Earth, like the ring current, induce currents to flow in the Earth, generating magnetic fields. Estimating the transfer function between the external changes and the induced response characterizes the electromagnetic response of the Earth. From this response, inferences can be made about the electrical conductivity of the Earth. However, these types of time series, and many others, have long breaks in the record with no samples available, which limits the analysis. Standard methods require interpolation or section averaging, with the associated problems of introducing bias or reducing the frequency resolution. Extending the methods of Fodor and Stark (2000), who adapted a set of orthogonal multi-tapers to compensate for breaks in sampling, an algorithm and software package for applying these techniques is developed. Methods of empirically estimating the average transfer function of a set of tapers and confidence intervals are also tested. These methods are extended to cross-spectral, coherence and transfer function estimation in the presence of noise. With these methods, new analysis of a highly interrupted ocean sediment core from the Oligocene (Hartl et al., 1993) reveals a quasi-periodic signal in the calibrated paleointensity time series at 2.5 cpMy. The power in the magnetic field during this period appears to be dominated by reversal rate processes, with less overall power than in the early Oligocene. Previous analysis of the early Oligocene by Constable et al. (1998) detected a signal near 8 cp
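As background for the gapped-data extension, a standard Thomson multitaper PSD estimate on an ungapped series looks like the sketch below; the Fodor-and-Stark approach re-orthogonalizes such Slepian tapers over only the observed samples. The time-bandwidth product NW and taper count are conventional choices here, not values from this work.

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 1.0
n = 1024
t = np.arange(n) / fs
# Synthetic series: a 0.1-cycle/sample line plus white noise
x = np.sin(2 * np.pi * 0.1 * t) + np.random.default_rng(1).normal(0, 0.5, n)

# Slepian (DPSS) tapers; average the eigenspectra for the multitaper PSD
tapers = dpss(n, NW=4, Kmax=7)
psd = np.zeros(n // 2 + 1)
for taper in tapers:
    spec = np.fft.rfft(taper * x)
    psd += np.abs(spec) ** 2
psd /= len(tapers) * fs

freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_freq = freqs[psd.argmax()]
```

Averaging over orthogonal tapers trades a small bandwidth smearing (about NW/n) for a large variance reduction relative to a single periodogram.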

  4. Convex interpolation techniques for the estimation of erupted mass and granulometry from the deposit

    NASA Astrophysics Data System (ADS)

    Spanu, A.; De'Michieli Vitturi, M.; Barsotti, S.

    2012-12-01

    Tephra deposits are often the only available information on past volcanic eruptions that can be used to characterize their eruptive styles and quantify their intensities. In this work, the term tephra refers to volcanic particles in the range φ−5 to φ4 released into the atmosphere during explosive events. Historically, several methods have been proposed to estimate total erupted volume and grain size distribution from deposit sampling data. A widely used approach for volume estimation is based on best-fitting thickness data with exponential segments or power-law curves, whereas the Voronoi tessellation method has recently been adopted to estimate the total grain size distribution. It can be difficult to sample a deposit, and the resulting scarcity of measurements can significantly affect the accuracy of these estimates. Furthermore, between release from the vent and deposition, particles are subject to different physical and chemical processes, such as aggregation and breakup on impact, so the deposit may not be representative of the initial granulometry. Nevertheless, data obtained from the deposit are widely used as input parameters in numerical dispersion models. In this work we quantify how this partial information affects the estimates of volume and total grain size distribution by comparing the results obtained with three different convex interpolation techniques: Voronoi, Delaunay, and Natural Neighbor. In order to have complete data at the ground, we created a synthetic deposit using the Vol-calpuff dispersal code, thereby knowing the erupted and deposited mass and granulometry. We tested the three methods over several datasets obtained by randomly sampling the simulated deposit, characterized by different sizes and distributions. Here, we focused on a typical eruption at Mt. Etna characterized by a plume 3000 m a.g.l. high, lasting for 9 hours, with a total erupted mass of 10^10 kg
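The Voronoi estimate can be sketched numerically: assign each point of the domain to its nearest sample and weight each sample's mass loading by the area of its Voronoi cell. This is an illustrative rasterized version under invented values (domain size, loadings, grid resolution), not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_total_mass(sample_xy, loading, extent, n=400):
    """Total deposit mass by discrete Voronoi weighting: each sample's
    mass loading (kg/m^2) is applied over the grid cells closest to it
    (a rasterized Voronoi tessellation)."""
    (x0, x1), (y0, y1) = extent
    xs = np.linspace(x0, x1, n)
    ys = np.linspace(y0, y1, n)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    _, owner = cKDTree(sample_xy).query(grid)    # nearest sample per cell
    cell_area = (x1 - x0) / n * (y1 - y0) / n
    areas = np.bincount(owner, minlength=len(loading)) * cell_area
    return float(np.dot(areas, loading))

# usage: uniform 2 kg/m^2 loading over a 1 km x 1 km domain -> ~2e6 kg
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1000, size=(30, 2))
mass = voronoi_total_mass(pts, np.full(30, 2.0), ((0, 1000), (0, 1000)))
print(mass)  # ~2e6
```

With sparse or clustered samples the cell areas, and hence the mass estimate, change — which is exactly the sampling sensitivity the study quantifies.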

  5. Fuel Distribution Estimate via Spin Period to Precession Period Ratio for the Advanced Composition Explorer

    NASA Technical Reports Server (NTRS)

    DeHart, Russell; Smith, Eric; Lakin, John

    2015-01-01

    The spin period to precession period ratio of a non-axisymmetric spin-stabilized spacecraft, the Advanced Composition Explorer (ACE), was used to estimate the remaining mass and distribution of fuel within its propulsion system. This analysis was undertaken once telemetry suggested that two of the four fuel tanks had no propellant remaining, contrary to pre-launch expectations of the propulsion system performance. Numerical integration of possible fuel distributions was used to calculate moments of inertia for the spinning spacecraft. A Fast Fourier Transform (FFT) of output from a dynamics simulation was employed to relate calculated moments of inertia to spin and precession periods. The resulting modeled ratios were compared to the actual spin period to precession period ratio derived from the effect of post-maneuver nutation angle on sun sensor measurements. A Monte Carlo search was performed to tune free parameters using the observed spin period to precession period ratio over the life of the mission. This novel analysis of spin and precession periods indicates that at the time of launch, propellant was distributed unevenly between the two pairs of fuel tanks, with one pair having approximately 20% more propellant than the other pair. Furthermore, it indicates that the pair of tanks with less fuel had expelled all of its propellant by 2014 and that approximately 46 kg of propellant remains in the other two tanks, an amount that closely matches the operational fuel accounting estimate. Keywords: Fuel Distribution, Moments of Inertia, Precession, Spin, Nutation
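The FFT step that extracts periods from a simulated (or measured) attitude trace can be illustrated on a synthetic signal; the 12 s spin and 180 s precession periods below are invented for illustration, not ACE values:

```python
import numpy as np

def dominant_period(signal, dt):
    """Return the period of the strongest spectral line via an FFT."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return 1.0 / freqs[np.argmax(spec)]

# usage: synthetic sun-sensor trace mixing a 12 s spin and a weaker
# 180 s precession modulation
dt = 0.1
t = np.arange(0, 3600, dt)
trace = np.sin(2 * np.pi * t / 12.0) + 0.3 * np.sin(2 * np.pi * t / 180.0)
spin = dominant_period(trace, dt)
print(spin)  # ~12 s
```

Repeating the measurement after masking the dominant line (or picking the second-strongest peak) yields the precession period, and hence the spin-to-precession ratio compared against the modeled fuel distributions.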

  6. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

    Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
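Program CAPTURE fits a family of closed-population models to the photographic capture histories. As a minimal sketch of the capture-recapture idea (not the models used in the study), the Chapman bias-corrected Lincoln-Petersen estimator for two sessions is:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.
    n1: animals marked (photo-identified) in session 1
    n2: animals captured in session 2
    m2: session-2 animals already marked in session 1
    """
    if m2 == 0:
        raise ValueError("no recaptures: abundance is unbounded")
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# usage: 10 identified in winter 1, 12 in winter 2, 6 seen in both
# (all numbers hypothetical, not the Hemis data)
print(lincoln_petersen(10, 12, 6))  # ~19.43
```

CAPTURE generalizes this by allowing capture probabilities to vary by time, by individual, and by behavioral response to capture.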

  7. Planning and scheduling the Hubble Space Telescope: Practical application of advanced techniques

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.

    1994-01-01

    NASA's Hubble Space Telescope (HST) is a major astronomical facility that was launched in April, 1990. In late 1993, the first of several planned servicing missions refurbished the telescope, including corrections for a manufacturing flaw in the primary mirror. Orbiting above the distorting effects of the Earth's atmosphere, the HST provides an unrivaled combination of sensitivity, spectral coverage and angular resolution. The HST is arguably the most complex scientific observatory ever constructed and effective use of this valuable resource required novel approaches to astronomical observation and the development of advanced software systems including techniques to represent scheduling preferences and constraints, a constraint satisfaction problem (CSP) based scheduler and a rule based planning system. This paper presents a discussion of these systems and the lessons learned from operational experience.

  8. Planning and scheduling the Hubble Space Telescope: Practical application of advanced techniques

    NASA Astrophysics Data System (ADS)

    Miller, Glenn E.

    1994-10-01

    NASA's Hubble Space Telescope (HST) is a major astronomical facility that was launched in April, 1990. In late 1993, the first of several planned servicing missions refurbished the telescope, including corrections for a manufacturing flaw in the primary mirror. Orbiting above the distorting effects of the Earth's atmosphere, the HST provides an unrivaled combination of sensitivity, spectral coverage and angular resolution. The HST is arguably the most complex scientific observatory ever constructed and effective use of this valuable resource required novel approaches to astronomical observation and the development of advanced software systems including techniques to represent scheduling preferences and constraints, a constraint satisfaction problem (CSP) based scheduler and a rule based planning system. This paper presents a discussion of these systems and the lessons learned from operational experience.

  9. Vibrio parahaemolyticus: a review on the pathogenesis, prevalence, and advance molecular identification techniques

    PubMed Central

    Letchumanan, Vengadesh; Chan, Kok-Gan; Lee, Learn-Han

    2014-01-01

    Vibrio parahaemolyticus is a Gram-negative halophilic bacterium that is found in estuarine, marine and coastal environments. V. parahaemolyticus is the leading causal agent of human acute gastroenteritis following the consumption of raw, undercooked, or mishandled marine products. In rare cases, V. parahaemolyticus causes wound infection, ear infection or septicaemia in individuals with pre-existing medical conditions. V. parahaemolyticus has two hemolysin virulence factors: thermostable direct hemolysin (tdh), a pore-forming protein that contributes to the invasiveness of the bacterium in humans, and TDH-related hemolysin (trh), which plays a role similar to tdh in the disease pathogenesis. In addition, the bacterium also encodes adhesins and type III secretion systems (T3SS1 and T3SS2) to ensure its survival in the environment. This review aims at discussing V. parahaemolyticus growth and characteristics, pathogenesis, prevalence and advances in molecular identification techniques. PMID:25566219

  10. Effects of age, system experience, and navigation technique on driving with an advanced traveler information system.

    PubMed

    Dingus, T A; Hulse, M C; Mollenhauer, M A; Fleischman, R N; McGehee, D V; Manakkal, N

    1997-06-01

    This paper explores the effects of age, system experience, and navigation technique on driving, navigation performance, and safety for drivers who used TravTek, an Advanced Traveler Information System. The first two studies investigated various route guidance configurations on the road in a specially equipped instrumented vehicle with an experimenter present. The third was a naturalistic quasi-experimental field study that collected data unobtrusively from more than 1200 TravTek rental car drivers with no in-vehicle experimenter. The results suggest that with increased experience, drivers become familiar with the system and develop strategies for substantially more efficient and safer use. The results also showed that drivers over age 65 had difficulty driving and navigating concurrently. They compensated by driving slowly and more cautiously. Despite this increased caution, older drivers made more safety-related errors than did younger drivers. The results also showed that older drivers benefited substantially from a well-designed ATIS driver interface. PMID:9302887

  11. Visualisation of Ecohydrological Processes and Relationships for Teaching Using Advanced Techniques

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Gutierrez-Jurado, H. A.; Yang, Y.; Deng, Z.

    2014-12-01

    Ecohydrology is an emerging discipline with rapid research growth, which calls for enhancing ecohydrology education at both the undergraduate and postgraduate levels. In other hydrology disciplines, hydrological processes are commonly observed in the environment (e.g. streamflow, infiltration) or easily demonstrated in labs (e.g. Darcy's column). It is relatively difficult to demonstrate ecohydrological concepts and processes (e.g. the soil-vegetation water relationship) in teaching. In this presentation, we report examples of using advanced techniques to illustrate ecohydrological concepts, relationships, and processes, with measurements from a native vegetation catchment in South Australia. They include LIDAR images showing the relationship between topography-controlled hydroclimatic conditions and vegetation distribution; electrical resistivity tomography-derived images showing stem structures; continuous stem water potential monitoring showing diurnal variations of plant water status, root zone moisture depletion during dry spells, and responses to precipitation inputs; and sapflow measurements demonstrating environmental stress on plant stomatal behaviour.

  12. Integrating advanced materials simulation techniques into an automated data analysis workflow at the Spallation Neutron Source

    SciTech Connect

    Borreguero Calvo, Jose M; Campbell, Stuart I; Delaire, Olivier A; Doucet, Mathieu; Goswami, Monojoy; Hagen, Mark E; Lynch, Vickie E; Proffen, Thomas E; Ren, Shelly; Savici, Andrei T; Sumpter, Bobby G

    2014-01-01

    This presentation will review developments on the integration of advanced modeling and simulation techniques into the analysis step of experimental data obtained at the Spallation Neutron Source. A workflow framework for the purpose of refining molecular mechanics force-fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high performance computers, a message broker interface for communications between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model in order to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.

  13. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    NASA Astrophysics Data System (ADS)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  14. Robotic-assisted laparoscopic anterior pelvic exenteration in patients with advanced ovarian cancer: Farghaly's technique.

    PubMed

    Farghaly, S A

    2010-01-01

    The safety and efficacy of the robotic-assisted laparoscopic approach to anterior pelvic exenteration is evaluated in patients with advanced ovarian cancer undergoing anterior pelvic exenteration for involvement of the urinary bladder during primary cytoreduction surgery. All patients undergo preoperative lab work, imaging studies and bowel preparation prior to surgery. The da Vinci surgical system is used to perform urinary cystectomy, total hysterectomy, bilateral salpingo-oophorectomy, and bilateral pelvic lymphadenectomy (including obturator, hypogastric, external iliac, and common iliac lymph nodes). In addition, debulking to less than 1 cm is performed. The anterior pelvic exenteration procedure involves wide perivesical dissection. Then the robot is locked, and an ileal conduit is created via a 6 cm lower midline incision. Operative time can be maintained at 4.6 hours with a mean blood loss of 215 ml and a hospital stay of five days. Farghaly's technique of robotic-assisted laparoscopic anterior pelvic exenteration in patients with advanced ovarian cancer is safe, feasible, and cost-effective with acceptable operative, pathological and short- and long-term clinical outcomes. It retains the advantage of minimally invasive surgery. PMID:20882872

  15. Techniques for estimating the unknown functions of incomplete experimental spectral and correlation response matrices

    NASA Astrophysics Data System (ADS)

    Antunes, Jose; Borsoi, Laurent; Delaune, Xavier; Piteau, Philippe

    2016-02-01

    In this paper, we propose straightforward analytical and numerical approximate methods to estimate the unknown terms of incomplete spectral or correlation matrices, when the cross-spectra or cross-correlations available from multiple measurements do not cover all pairs of transducer locations. The proposed techniques may be applied whenever the available data include the auto-spectra at all measurement locations, as well as selected cross-spectra that involve all measurement locations. The suggested methods can also be used for checking the consistency between the spectral or correlation functions pertaining to measurement matrices, in cases of suspicious data. After presenting the proposed spectral estimation formulations, we discuss their merits and limitations. Then we illustrate their use on a realistic simulation of a multi-supported tube subjected to turbulence excitation from cross-flow. Finally, we show the effectiveness of the proposed techniques by extracting the modal responses of the simulated flow-excited tube, using the SOBI (Second Order Blind Identification) method, from an incomplete response matrix.
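A basic consistency check of the kind mentioned for suspicious data: any valid cross-spectral matrix at a given frequency must be Hermitian and positive semidefinite, which in particular enforces the coherence bound |S_xy|^2 <= S_xx * S_yy for every transducer pair. This is a generic sketch, not the authors' estimation formulas:

```python
import numpy as np

def check_spectral_matrix(S, tol=1e-9):
    """Consistency checks for a cross-spectral matrix at one frequency:
    S must be Hermitian and positive semidefinite, which implies the
    coherence bound |S_xy|^2 <= S_xx * S_yy for every pair (x, y)."""
    hermitian = np.allclose(S, S.conj().T, atol=tol)
    eigvals = np.linalg.eigvalsh((S + S.conj().T) / 2)
    psd = eigvals.min() >= -tol * max(1.0, eigvals.max())
    return bool(hermitian and psd)

# usage: a valid 2x2 spectral matrix vs. one violating the coherence bound
good = np.array([[2.0, 1.0 + 0.5j], [1.0 - 0.5j, 1.0]])
bad = np.array([[2.0, 2.0 + 1.0j], [2.0 - 1.0j, 1.0]])
print(check_spectral_matrix(good), check_spectral_matrix(bad))  # True False
```

Estimating a missing cross-term then amounts to choosing a value that keeps the completed matrix positive semidefinite — the feasible set the paper's approximate methods select from.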

  16. Characterization of water movement in a reconstructed slope in Keokuk, Iowa, using advanced geophysical techniques

    NASA Astrophysics Data System (ADS)

    Schettler, Megan Elizabeth

    This project addresses the topic of evaluating water movement inside a hillslope using a combination of conventional and advanced geophysical techniques. While slope dynamics have been widely studied, ground water movement in hills is still poorly understood. A combination of piezometers, ground-penetrating radar (GPR), and electrical resistivity (ER) surveys were used in an effort to monitor fluctuations in the subsurface water level in a reengineered slope near Keokuk, Iowa. This information, integrated with rainfall data, formed a picture of rainfall-groundwater response dynamics. There were two hypotheses: 1) that the depth and fluctuation of the water table could be accurately sensed using a combination of monitoring wells, ground-penetrating radar and resistivity surveys; and 2) that the integration of data from the instrumentation array and the geophysical surveys would enable the characterization of water movement in the slope in response to rainfall events. This project also sought to evaluate the utility and limitations of using these techniques in landslide and hydrology studies, advance our understanding of hillslope hydrology, and improve our capacity to better determine when slope failure may occur. Results from monitoring wells, stratigraphy, and resistivity surveys at the study site indicated the presence of a buried swale, channelizing subsurface storm flow and creating variations in groundwater. Although there was some success in defining hydrologic characteristics and response of the slope using this integrated approach, it was determined that GPR was ultimately not well suited to this site. However, the use of GPR as part of an integrated approach to study hillslope hydrology still appears to hold potential, and future work to further evaluate the applicability and potential of this approach would be warranted.

  17. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for the energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology, including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of
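The event-triggering idea — invoke the estimator only when the measurement shows "considerable novelty" relative to the prediction — can be sketched with a scalar random-walk Kalman filter. The trigger test, noise levels, and scenario below are illustrative stand-ins, not the paper's formulation:

```python
import numpy as np

def event_triggered_kf(zs, x0, q, r, threshold):
    """Scalar random-walk Kalman filter that performs the measurement
    update only when the normalized innovation exceeds a threshold;
    otherwise the prediction is carried forward unchanged."""
    x, p = x0, 1.0
    updates = 0
    estimates = []
    for z in zs:
        p = p + q                      # predict (random-walk model)
        innov = z - x
        s = p + r                      # innovation variance
        if innov ** 2 / s > threshold ** 2:
            k = p / s                  # update only on novel data
            x = x + k * innov
            p = (1 - k) * p
            updates += 1
        estimates.append(x)
    return np.array(estimates), updates

# usage: a step change (e.g. a wind-power spike) triggers an update;
# quiet stretches skip the estimator entirely
rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(50), np.full(50, 5.0)])
zs = truth + 0.1 * rng.standard_normal(100)
est, n_updates = event_triggered_kf(zs, 0.0, 1e-4, 0.01, 3.0)
print(n_updates, est[-1])
```

The computational saving comes from `n_updates` being far smaller than the number of samples when the state evolves slowly.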

  18. Correlation techniques as applied to pose estimation in space station docking

    NASA Astrophysics Data System (ADS)

    Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-08-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow and lighting irregularity compensation are discussed.
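The correlation approach described above can be sketched as: correlate the image with a synthetic spot model, then refine the integer-pixel correlation peak by three-point parabolic interpolation to reach sub-pixel precision. The Gaussian spot model, window sizes, and test coordinates below are assumptions for illustration, not the flight algorithm:

```python
import numpy as np
from scipy.signal import fftconvolve

def subpixel_peak_1d(c, i):
    """Parabolic (three-point) interpolation around peak index i."""
    denom = c[i - 1] - 2 * c[i] + c[i + 1]
    return i + 0.5 * (c[i - 1] - c[i + 1]) / denom

def spot_centroid(image, sigma=2.0, half=6):
    """Estimate a spot centre to sub-pixel precision by correlating
    with a centred Gaussian model and interpolating the peak."""
    ax = np.arange(-half, half + 1)
    gx, gy = np.meshgrid(ax, ax)
    model = np.exp(-(gx ** 2 + gy ** 2) / (2 * sigma ** 2))
    corr = fftconvolve(image, model[::-1, ::-1], mode="same")  # correlation
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    x = subpixel_peak_1d(corr[iy, :], ix)
    y = subpixel_peak_1d(corr[:, ix], iy)
    return x, y

# usage: synthetic Gaussian spot centred at (20.3, 14.7)
yy, xx = np.mgrid[0:32, 0:40]
img = np.exp(-((xx - 20.3) ** 2 + (yy - 14.7) ** 2) / (2 * 2.0 ** 2))
x, y = spot_centroid(img)
print(x, y)  # close to (20.3, 14.7)
```

For a smooth, well-sampled correlation peak the parabolic refinement recovers the centroid to a few hundredths of a pixel, comfortably inside the 1/20th-pixel requirement quoted above.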

  19. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    NASA Technical Reports Server (NTRS)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.

  20. Estimation of thermospheric zonal and meridional winds using a Kalman filter technique

    NASA Astrophysics Data System (ADS)

    Lomidze, Levan; Scherliess, Ludger

    2015-11-01

    Knowledge of the thermospheric neutral wind and its horizontal components is critical for an improved understanding of F region dynamics and morphology. However, to date their reliable estimation remains a challenge because of difficulties in both measurement and modeling. We present a new method to estimate the climatology of the zonal and meridional components of thermospheric neutral wind at low and middle latitudes using a Kalman filter technique. First, the climatology of the magnetic meridional wind is obtained by assimilating seasonal maps of F region ionosphere peak parameters (NmF2 and hmF2), obtained from Constellation Observing System for Meteorology, Ionosphere, and Climate radio occultation data, into the Global Assimilation of Ionospheric Measurements Full Physics (GAIM-FP) model. GAIM-FP provides the 3-D electron density throughout the ionosphere, together with the magnetic meridional wind. Next, the global zonal and meridional wind components are estimated using a newly developed Thermospheric Wind Assimilation Model (TWAM). TWAM combines magnetic meridional wind data obtained from GAIM-FP with a physics-based 3-D thermospheric neutral wind model using an implicit Kalman filter technique. Ionospheric drag and ion diffusion velocities, needed for the wind calculation, are also taken from GAIM-FP. The obtained wind velocities are in close agreement with measurements made by interferometers and with wind values from the Horizontal Wind Model 93 (HWM93) over Millstone Hill, Arecibo, and Arequipa during December and June solstices, and March equinox. In addition, it is shown that compared to HWM93 the winds from TWAM significantly improve the accuracy of the Ionosphere/Plasmasphere Model in reproducing the observed electron density variation over the Weddell Sea Anomaly.
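The Kalman filter machinery underlying TWAM can be illustrated with the textbook predict/update cycle (TWAM itself uses an implicit formulation coupled to a physics-based wind model; the matrices and wind values below are toy numbers, not model output):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One textbook Kalman filter cycle (predict, then update)."""
    x = F @ x                        # state prediction
    P = F @ P @ F.T + Q              # covariance prediction
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# usage: estimate a [zonal, meridional] wind state when only the
# meridional component is observed (as with the assimilated data)
F = np.eye(2); Q = 1e-3 * np.eye(2)
H = np.array([[0.0, 1.0]]); R = np.array([[0.25]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for z in [18.0, 21.0, 19.5, 20.5]:     # noisy meridional samples (m/s)
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x[1])  # converges toward ~20 m/s
```

In TWAM the unobserved (zonal) component is constrained not by H but by the thermospheric momentum equations, which couple the two components physically.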

  1. A sensitivity/intrusion comparison of mental workload estimation techniques using a flight task emphasizing perceptual piloting activities

    NASA Technical Reports Server (NTRS)

    Casali, J. G.; Wierwille, W. W.

    1982-01-01

    In a literature review it was found that little research effort has been directly applied to the problem of specifying a viable workload estimation technique for a given pilot/aircrew problem. Furthermore, the relative sensitivity and intrusion of most techniques have not been studied. The present investigation is concerned with a comparative evaluation of eight workload estimation techniques under identical experimental conditions in a flight simulator. The objective of this comparison was to determine the relative sensitivity and intrusion of each estimation technique when applied to a piloting situation that emphasizes the use of perceptual processes. No differential intrusion was observed, but six of the eight techniques showed sensitivity to changes in perceptual load. All of the sensitive techniques displayed monotonic increases in measured values across the three loading levels considered.

  2. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    SciTech Connect

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard deviations
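The linear interpolation (LI) background estimate can be sketched in one dimension: interpolate the background across the vessel region from the flanking background pixels and subtract, leaving the iodine signal. The profile below is synthetic, chosen so the recovery is exact:

```python
import numpy as np

def interpolate_background(profile, lo, hi):
    """Estimate the background under a vessel (pixels lo..hi inclusive)
    by linear interpolation from the flanking background pixels."""
    bg = profile.astype(float).copy()
    bg[lo:hi + 1] = np.interp(np.arange(lo, hi + 1),
                              [lo - 1, hi + 1],
                              [profile[lo - 1], profile[hi + 1]])
    return bg

# usage: linear ramp background plus a boxcar "vessel" of height 25
x = np.arange(100, dtype=float)
background = 0.5 * x + 10.0
profile = background.copy()
profile[40:61] += 25.0                  # vessel attenuation signal
est_bg = interpolate_background(profile, 40, 60)
signal = profile - est_bg
print(signal[40:61].mean())  # ~25.0
```

A linear background is recovered exactly; the study's comparison shows how LA, MF, and CDD fare when the true background is not so well behaved.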

  3. Advancements in sensing and perception using structured lighting techniques: an LDRD final report.

    SciTech Connect

    Novick, David Keith; Padilla, Denise D.; Davidson, Patrick A. Jr.; Carlson, Jeffrey J.

    2005-09-01

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Advancements in Sensing and Perception using Structured Lighting Techniques". There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Although there has been nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky and heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight conditions. Eye-safety issues are a primary concern for currently available laser-based sensors. Passive, stereo-imaging sensors are available for 3D sensing but suffer from several limitations: they are computationally intensive, require a lighted environment (natural or man-made light source), and do not work for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis using structured lighting. We have a diverse customer base for indoor mapping applications, and this research extends our current technology's lifecycle and opens a new market base for outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and

  4. Framework for the mapping of the monthly average daily solar radiation using an advanced case-based reasoning and a geostatistical technique.

    PubMed

    Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon

    2014-04-15

    For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for PV system installation. Therefore, this study aimed to develop a framework for the mapping of the MADSR using an advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data in the geographic scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to the geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. It was determined that the MADSR map developed through the proposed framework offers improved accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for PV system installation. PMID:24635702
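    Step (iii) above, estimating the field at unmeasured locations from measured stations, can be sketched with a simple spatial interpolator. Inverse-distance weighting stands in here for the paper's CBR/geostatistical model, and the station coordinates and MADSR values are hypothetical:

    ```python
    import numpy as np

    def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
        """Inverse-distance-weighted estimate of a field (e.g. MADSR) at
        unmeasured locations from measured station values."""
        xy_known = np.asarray(xy_known, dtype=float)
        z_known = np.asarray(z_known, dtype=float)
        out = []
        for q in np.asarray(xy_query, dtype=float):
            d = np.linalg.norm(xy_known - q, axis=1)
            if np.any(d < 1e-12):              # query coincides with a station
                out.append(z_known[np.argmin(d)])
                continue
            w = 1.0 / d**power
            out.append(np.dot(w, z_known) / w.sum())
        return np.array(out)

    # Hypothetical stations (x, y in km) with measured MADSR (kWh/m^2/day)
    stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
    madsr = [3.8, 4.1, 3.9, 4.3]
    print(idw_interpolate(stations, madsr, [(5, 5), (1, 1)]))
    ```

    A real geostatistical workflow would replace the inverse-distance weights with a kriging variogram model, but the mapping step (interpolate, then rasterize into a GIS) is the same.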

  5. Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.

    1985-01-01

    A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for the low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method, which uses piecewise constant source and doublet panels, includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 deg from a flat plate. A solid core model was used in the initial part of the jet with a simple entrainment model. Preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet on the Grumman 698-411 design over a range of flight conditions. Furthermore, coupled viscous/potential flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Application of the program to the General Dynamics STOL fighter design was equally encouraging. Very close agreement was observed between experiment and calculation for the effects of power on pressure distribution, lift, and lift-curve slope.

  6. Advancing the Frontiers in Nanocatalysis, Biointerfaces, and Renewable Energy Conversion by Innovations of Surface Techniques

    SciTech Connect

    Somorjai, G.A.; Frei, H.; Park, J.Y.

    2009-07-23

    The challenge of chemistry in the 21st century is to achieve 100% selectivity of the desired product molecule in multipath reactions ('green chemistry') and develop renewable energy based processes. Surface chemistry and catalysis play key roles in this enterprise. Development of in situ surface techniques such as high-pressure scanning tunneling microscopy, sum frequency generation (SFG) vibrational spectroscopy, time-resolved Fourier transform infrared methods, and ambient pressure X-ray photoelectron spectroscopy enabled the rapid advancement of three fields: nanocatalysts, biointerfaces, and renewable energy conversion chemistry. In materials nanoscience, synthetic methods have been developed to produce monodisperse metal and oxide nanoparticles (NPs) in the 0.8-10 nm range with controlled shape, oxidation states, and composition; these NPs can be used as selective catalysts since chemical selectivity appears to be dependent on all of these experimental parameters. New spectroscopic and microscopic techniques have been developed that operate under reaction conditions and reveal the dynamic change of molecular structure of catalysts and adsorbed molecules as the reactions proceed with changes in reaction intermediates, catalyst composition, and oxidation states. SFG vibrational spectroscopy detects amino acids, peptides, and proteins adsorbed at hydrophobic and hydrophilic interfaces and monitors the change of surface structure and interactions with coadsorbed water. Exothermic reactions and photons generate hot electrons in metal NPs that may be utilized in chemical energy conversion. The photosplitting of water and carbon dioxide, an important research direction in renewable energy conversion, is discussed.

  7. Updates in advanced diffusion-weighted magnetic resonance imaging techniques in the evaluation of prostate cancer

    PubMed Central

    Vargas, Hebert Alberto; Lawrence, Edward Malnor; Mazaheri, Yousef; Sala, Evis

    2015-01-01

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is considered part of the standard imaging protocol for the evaluation of patients with prostate cancer. It has been proven valuable as a functional tool for qualitative and quantitative analysis of prostate cancer beyond anatomical MRI sequences such as T2-weighted imaging. This review discusses ongoing controversies in DW-MRI acquisition, including the optimal number of b-values to be used for prostate DWI, and summarizes the current literature on the use of advanced DW-MRI techniques. These include intravoxel incoherent motion imaging, which better accounts for the non-mono-exponential behavior of the apparent diffusion coefficient as a function of b-value and the influence of perfusion at low b-values. Another technique is diffusion kurtosis imaging (DKI). Metrics from DKI reflect excess kurtosis of tissues, representing its deviation from Gaussian diffusion behavior. Preliminary results suggest that DKI findings may have more value than findings from conventional DW-MRI for the assessment of prostate cancer. PMID:26339460
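    The kurtosis model mentioned above extends the mono-exponential signal decay with a quadratic term in b: S(b) = S0 exp(-b D + b² D² K / 6). Because the log-signal is then a quadratic in b, D and K can be recovered with an ordinary polynomial fit; the tissue parameter values below are hypothetical:

    ```python
    import numpy as np

    # Synthetic DW-MRI signal following the kurtosis model (assumed values):
    # S(b) = S0 * exp(-b*D + (1/6) * b^2 * D^2 * K)
    S0, D, K = 1000.0, 1.0e-3, 0.9        # D in mm^2/s, K dimensionless
    b = np.array([0, 250, 500, 750, 1000, 1500, 2000.0])  # s/mm^2
    S = S0 * np.exp(-b * D + (b**2) * (D**2) * K / 6.0)

    # log-signal is quadratic in b, so a degree-2 polyfit recovers D and K
    c2, c1, c0 = np.polyfit(b, np.log(S), 2)
    D_est = -c1
    K_est = 6.0 * c2 / D_est**2
    print(D_est, K_est)  # recovers ~1.0e-3 and ~0.9
    ```

    Setting K = 0 collapses the model back to conventional mono-exponential ADC estimation, which is why DKI metrics capture the deviation from Gaussian diffusion that the abstract describes.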

  8. Advanced Modeling Techniques to Study Anthropogenic Influences on Atmospheric Chemical Budgets

    NASA Technical Reports Server (NTRS)

    Mathur, Rohit

    1997-01-01

    This research work is a collaborative effort between research groups at MCNC and the University of North Carolina at Chapel Hill. The overall objective of this research is to improve the level of understanding of the processes that determine the budgets of chemically and radiatively active compounds in the atmosphere through development and application of advanced methods for calculating the chemical change in atmospheric models. The research performed during the second year of this project focused on four major aspects: (1) The continued development and refinement of multiscale modeling techniques to address the issue of the disparate scales of the physico-chemical processes that govern the fate of atmospheric pollutants; (2) Development and application of analysis methods utilizing process and mass balance techniques to increase the interpretive powers of atmospheric models and to aid in complementary analysis of model predictions and observations; (3) Development of meteorological and emission inputs for initial application of the chemistry/transport model over the north Atlantic region; and, (4) The continued development and implementation of a totally new adaptive chemistry representation that changes the details of what is represented as the underlying conditions change.

  9. Advanced system identification techniques for wind turbine structures with special emphasis on modal parameters

    SciTech Connect

    Bialasiewicz, J.T.

    1995-06-01

    The goal of this research is to develop advanced system identification techniques that can be used to accurately measure the frequency response functions of a wind-turbine structure immersed in wind noise. To allow for accurate identification, the authors have developed a special test signal called the Pseudo-Random Binary Sequence (PRBS). The Matlab program that generates this signal allows the user to interactively tailor its parameters for the frequency range of interest based on the response of the wind turbine under test. By controlling NREL's Mobile Hydraulic Shaker System, which is attached to the wind turbine structure, the PRBS signal produces the wide-band excitation necessary to perform system identification in the presence of wind noise. The techniques presented here will enable researchers to obtain modal parameters from an operating wind turbine, including frequencies, damping coefficients, and mode shapes. More importantly, the algorithms they have developed and tested (so far using input-output data from a simulated structure) permit state-space representation of the system under test, particularly the modal state-space representation. This is the only system description that reveals the internal behavior of the system, such as the interaction between the physical parameters, and which, in contrast to transfer functions, is valid for non-zero initial conditions.
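    A PRBS is conventionally generated with a maximal-length linear-feedback shift register; the abstract's Matlab generator is not available, so the sketch below shows the standard PRBS7 construction (polynomial x⁷ + x⁶ + 1, period 127). The seed and output conventions are illustrative choices:

    ```python
    def prbs7(length, seed=0x01):
        """PRBS7 (x^7 + x^6 + 1) via a Fibonacci LFSR; period 2**7 - 1 = 127.
        Returns a list of 0/1 values, e.g. to drive a shaker as +/- excitation."""
        state = seed & 0x7F
        out = []
        for _ in range(length):
            bit = (state >> 6) & 1                   # output the MSB
            fb = ((state >> 6) ^ (state >> 5)) & 1   # feedback taps at bits 7 and 6
            state = ((state << 1) | fb) & 0x7F
            out.append(bit)
        return out

    seq = prbs7(254)
    print(sum(seq[:127]))  # a maximal-length period contains 64 ones, 63 zeros
    ```

    The flat, wide-band spectrum of such a sequence is what makes it attractive for identification in wind noise; in practice the register length and chip rate are tuned to place the excitation energy over the modal frequency band of interest.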

  10. Advanced 3D-Sonographic Imaging as a Precise Technique to Evaluate Tumor Volume

    PubMed Central

    Pflanzer, R.; Hofmann, M.; Shelke, A.; Habib, A.; Derwich, W.; Schmitz-Rixen, T.; Bernd, A.; Kaufmann, R.; Bereiter-Hahn, J.

    2014-01-01

    Determination of tumor volume in subcutaneously inoculated xenograft models is a standard procedure for clinical and preclinical evaluation of tumor response to treatment. Practitioners frequently use a hands-on caliper method in conjunction with a simplified formula to assess tumor volume. Non-invasive and more precise techniques such as MR or (μ)CT imaging exist but come with various drawbacks in terms of radiation, complex setup, or elevated cost of investigation. Therefore, we propose an advanced three-dimensional sonographic imaging technique to determine small tumor volumes in xenografts with high precision and minimized observer variability. We present a study on xenograft carcinoma tumors from which volumes and shapes were calculated with the standard caliper method as well as with a clinically available three-dimensional ultrasound scanner and subsequent processing software. Statistical analysis reveals the suitability of this non-invasive approach for the purpose of a quick and precise calculation of tumor volume in small rodents. PMID:25500076
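    The "simplified formula" used with calipers is commonly V = L·W²/2, whereas a 3D-imaging estimate is closer to an ellipsoid model with all three axes measured. A quick comparison (the dimensions are hypothetical, and the abstract does not specify which formulas this particular study used):

    ```python
    import math

    def caliper_volume(length, width):
        """Common simplified xenograft formula: V = (L * W^2) / 2."""
        return length * width**2 / 2.0

    def ellipsoid_volume(length, width, height):
        """Ellipsoid model V = (pi/6) * L * W * H, needing a third axis that
        calipers cannot easily measure but 3D ultrasound provides."""
        return math.pi / 6.0 * length * width * height

    # Hypothetical tumor, dimensions in mm
    L, W, H = 10.0, 8.0, 6.0
    print(caliper_volume(L, W))        # 320.0 mm^3
    print(ellipsoid_volume(L, W, H))   # ~251.3 mm^3
    ```

    The gap between the two numbers illustrates why caliper estimates carry large observer variability for non-spherical tumors, which is the motivation for the sonographic approach.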

  11. Techniques for estimating peak-flow frequency relations for North Dakota streams

    USGS Publications Warehouse

    Williams-Sether, Tara

    1992-01-01

    This report presents techniques for estimating peak-flow frequency relations for North Dakota streams. In addition, a generalized skew coefficient analysis was completed for North Dakota to test the validity of using the generalized skew coefficient map in Bulletin 17B of the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data, 1982, 'Guidelines for Determining Flood Flow Frequency.' The analysis indicates that the generalized skew coefficient map in Bulletin 17B provides accurate estimates of generalized skew coefficient values for natural-flow streams in North Dakota. Peak-flow records through 1988 for 192 continuous- and partial-record streamflow gaging stations that had 10 or more years of record were used in a generalized least-squares regression analysis that relates peak flows for selected recurrence intervals to selected basin characteristics. Peak-flow equations were developed for recurrence intervals of 2, 10, 15, 25, 50, 100, and 500 years for three hydrologic regions in North Dakota. The peak-flow equations are applicable to natural-flow streams that have drainage areas of less than or equal to 1,000 square miles. The standard error of estimate for the three hydrologic regions ranges from 60 to 70 percent for the 100-year peak-flow equations. Methods are presented for transferring peak-flow data from gaging stations to ungaged sites on the same stream and for determining peak flows for ungaged sites on ungaged streams. Peak-flow relations, weighted estimates of peak flow, and selected basin characteristics are tabulated for the 192 gaging stations used in the generalized skew coefficient and regression analyses. Peak-flow relations also are provided for 63 additional gaging stations that were not used in the generalized skew coefficient and regression analyses. These 63 gaging stations generally represent streams that are significantly controlled by regulation and those that have drainage areas greater than 1,000 square miles.
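    The regional peak-flow equations described above relate a quantile such as the 100-year flow to basin characteristics through a power law fit in log space. A minimal sketch, using only drainage area as the predictor, ordinary rather than generalized least squares, and entirely hypothetical gaging-station data:

    ```python
    import numpy as np

    # Hypothetical gaged basins: drainage area A (mi^2), 100-yr peak flow Q (ft^3/s)
    A = np.array([12.0, 45.0, 110.0, 300.0, 640.0, 980.0])
    Q = np.array([850.0, 2100.0, 4200.0, 8900.0, 15000.0, 20500.0])

    # Regional regression of the form Q = a * A^b, fit as a line in log space
    b_coef, log_a = np.polyfit(np.log10(A), np.log10(Q), 1)
    a_coef = 10**log_a

    def peak_flow(area_sq_mi):
        """Estimate the peak-flow quantile at an ungaged site in the region."""
        return a_coef * area_sq_mi**b_coef

    print(round(peak_flow(200.0)))
    ```

    The report's actual equations include additional basin characteristics and were fit by generalized least squares, which down-weights stations with short or cross-correlated records; the log-space power-law structure is the common element.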

  12. Geochemical Estimates of Paleorecharge in the Pasco Basin: Evaluation of the Chloride Mass Balance Technique

    NASA Astrophysics Data System (ADS)

    Murphy, Ellyn M.; Ginn, Timothy R.; Phillips, Jerry L.

    1996-04-01

    The Pasco Basin in southeastern Washington State provides a unique hydrogeologic setting for evaluating the chloride mass balance technique for estimating recharge. This basin was affected by late Pleistocene catastrophic floods when glacial dams in western Montana and northern Idaho were breached. It is estimated that multiple Missoula floods occurred between ˜13,000 and 15,000 years B.P. and reached a high water elevation of ˜350 m. These floods removed accumulated chloride from the sediment profile, effectively resetting the chloride mass balance clock at the beginning of the Holocene. The rate of chloride accumulation qCl in the sediments was determined by two methods and compared. The first method measured qCl by dividing the calculated natural fallout of 36Cl by a measured ratio of 36Cl/Cl in the pore water, while the second method used the total mass of chloride in the profile divided by the length of time that atmospheric chloride had accumulated since the last flood. Although the two methods are based on different approaches, they showed close agreement. In laboratory studies the sediment to water ratio for chloride extraction was sensitive to the grain size of the sediments; low extraction ratios in silt loam sediments led to significant underestimation of pore water chloride concentration. Br/Cl ratios were useful for distinguishing nonatmospheric (e.g., rock) sources of chloride. Field studies showed little spatial variability in estimated recharge at a given site within the basin but showed significant topographic control on recharge rates in this semiarid environment. An extension of the conventional chloride mass balance model was used to evaluate chloride profiles under transient, time-varying annual precipitation conditions. This model was inverted to determine the paleorecharge history for a given soil chloride profile, and the parameters of the root extraction model required to estimate paleoprecipitation
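    The chloride mass balance logic is simple enough to state directly: at steady state, recharge R = P · Cl_precip / Cl_porewater, and the chloride inventory in the profile divided by the annual atmospheric deposition gives the time since the profile was last flushed (here, by a Missoula flood). A sketch with hypothetical semiarid values:

    ```python
    def cmb_recharge(precip_mm_yr, cl_precip_mg_L, cl_porewater_mg_L):
        """Chloride mass balance: R = P * Cl_p / Cl_pw. Chloride deposited by
        precipitation is concentrated into the small recharging flux, so high
        pore-water Cl implies low recharge."""
        return precip_mm_yr * cl_precip_mg_L / cl_porewater_mg_L

    def accumulation_time(total_cl_g_m2, precip_mm_yr, cl_precip_mg_L):
        """Years to accumulate the profile's chloride inventory; 1 mm/yr of
        rain at 1 mg/L deposits 1 mg/m^2/yr = 0.001 g/m^2/yr."""
        deposition_g_m2_yr = precip_mm_yr * cl_precip_mg_L * 1e-3
        return total_cl_g_m2 / deposition_g_m2_yr

    # Hypothetical values, not the Pasco Basin measurements
    print(cmb_recharge(180.0, 0.35, 60.0))        # recharge, mm/yr
    print(accumulation_time(800.0, 180.0, 0.35))  # years since last flushing
    ```

    The study's refinements, the 36Cl-based deposition estimate and the transient extension of the model, replace the constant deposition and steady-state assumptions made in this sketch.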

  13. EPS in Environmental Microbial Biofilms as Examined by Advanced Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Neu, T. R.; Lawrence, J. R.

    2006-12-01

    Biofilm communities are highly structured associations of cellular and polymeric components which are involved in biogenic and geogenic environmental processes. Furthermore, biofilms are also important in medical (infection), industrial (biofouling) and technological (biofilm engineering) processes. The interfacial microbial communities in a specific habitat are highly dynamic and change according to the environmental parameters affecting not only the cellular but also the polymeric constituents of the system. Through their EPS, biofilms interact with dissolved, colloidal and particulate compounds from the bulk water phase. For a long time the focus in biofilm research was on the cellular constituents of biofilms, and the polymer matrix has been rather neglected. The polymer matrix is produced not only by different bacteria and archaea but also by eukaryotic micro-organisms such as algae and fungi. The mostly unidentified mixture of EPS compounds is responsible for many biofilm properties and is involved in biofilm functionality. The chemistry of the EPS matrix represents a mixture of polymers including polysaccharides, proteins, nucleic acids, neutral polymers, charged polymers, amphiphilic polymers and refractory microbial polymers. The analysis of the EPS may be done destructively by means of extraction and subsequent chemical analysis or in situ by means of specific probes in combination with advanced imaging. In the last 15 years laser scanning microscopy (LSM) has been established as an indispensable technique for studying microbial communities. LSM with 1-photon and 2-photon excitation in combination with fluorescence techniques allows 3-dimensional investigation of fully hydrated, living biofilm systems. This approach is able to reveal data on biofilm structural features as well as biofilm processes and interactions. The fluorescent probes available allow the quantitative assessment of cellular as well as polymer distribution. For this purpose

  14. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing, and scientific collaboration stimulated thinking on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS), and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all of the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  15. A Monte Carlo Technique to Estimate Emissions from R&D Facilities

    SciTech Connect

    Ballinger, Marcel Y.; Duchsherer, Cheryl J.

    2012-08-01

    buildings was analyzed using a statistical technique called Positive Matrix Factorization (PMF) to identify the number and composition of major contributing sources to measured stack concentrations.5 In this analysis, a method is described that uses a Monte Carlo technique to produce a distribution of emission estimates from the measured data. The method is applied to one of the chemicals previously identified as more significant than others using the ranking process.4 Variations on the method are investigated and compared to determine their effects on the emission estimates.
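    The details of the facility's Monte Carlo procedure are not given in this excerpt, but the general idea, resampling sparse stack measurements to build a distribution of annual emission estimates rather than a single point value, can be sketched as a simple bootstrap. All concentrations, the flow rate, and the unit conversions below are hypothetical:

    ```python
    import random

    def mc_emission_estimates(concs_mg_m3, flow_m3_s, n_trials=10000, seed=42):
        """Monte Carlo distribution of annual emissions from sparse stack samples.
        Each trial resamples the measured concentrations with replacement and
        pairs the mean with the stack flow to give one estimate in kg/yr."""
        rng = random.Random(seed)
        seconds_per_year = 365.25 * 24 * 3600
        estimates = []
        for _ in range(n_trials):
            sample = [rng.choice(concs_mg_m3) for _ in concs_mg_m3]
            mean_conc = sum(sample) / len(sample)
            estimates.append(mean_conc * flow_m3_s * seconds_per_year * 1e-6)  # mg -> kg
        return sorted(estimates)

    # Hypothetical measured stack concentrations (mg/m^3) and stack flow
    est = mc_emission_estimates([0.02, 0.05, 0.01, 0.08, 0.03], flow_m3_s=20.0)
    lo, med, hi = est[len(est) // 20], est[len(est) // 2], est[-len(est) // 20]
    print(round(lo, 1), round(med, 1), round(hi, 1))  # 5th/50th/95th percentile, kg/yr
    ```

    Reporting percentiles of the resulting distribution, rather than one number, is what lets an emission estimate carry its measurement uncertainty.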

  16. Recent advances and on-going challenges of estimating past elevation from climate proxy data

    NASA Astrophysics Data System (ADS)

    Snell, K. E.; Peppe, D. J.; Eiler, J. M.; Wernicke, B. P.; Koch, P. L.

    2012-12-01

    The methods currently available to reconstruct paleoelevation dominantly rely on diverse sedimentary archives of past climate. The spatial and temporal distributions of these records are used to extract information about differences in elevation from site to site, and through geologic time. As such, our understanding of past elevations is only as good as our ability to understand past climate and to put these records into a reasonable chronologic framework. Currently, most techniques either exploit the difference in temperature or the difference in the hydrogen and/or oxygen isotopic composition of precipitation between high and low elevation sites. Temperature data dominantly come from leaf margin analysis of fossil plants; biomarkers preserved in sediments; and clumped isotope thermometry of paleosol and lacustrine carbonates and carbonate cements. Constraints on the isotopic composition of precipitation come from many of the same sedimentary archives: paleosol and lacustrine carbonates, carbonate cements and authigenic clays. Reconstructed gradients in temperature and isotopic composition are then compared with modern "lapse rates" to translate climate proxy data into elevation estimates. There are still many challenges in reconstructing past elevations from paleoclimate proxy data in this way. For example, modern lapse rates are generally empirical rather than based on thermodynamic principles alone, and so may vary for reasons that are not always understood. In addition, unrecognized differences in seasonal bias for the different sedimentary archives can lead to inaccurately averaged records and/or over-estimates of errors in each method. Finally, to appropriately estimate elevation, the effects of climate change must be accounted for by matching inferred high-elevation sites with known low-elevation sites of similar age and geographic location. 
This requires excellent chronologic control and correlation across terrestrial basins (or independent knowledge of
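    The core arithmetic of the temperature-based approach above is the lapse-rate conversion: a temperature difference between coeval low- and high-elevation proxy sites divided by an assumed lapse rate gives an elevation difference. A minimal sketch, with hypothetical proxy temperatures and a typical empirical terrestrial lapse rate:

    ```python
    def paleoelevation_from_temperature(t_low_C, t_high_C, lapse_rate_C_per_km=5.9):
        """Elevation difference (km) from the temperature difference between
        coeval low- and high-elevation proxy sites. The lapse rate is an
        assumed empirical value (~5-6 C/km is common), which is exactly the
        source of uncertainty the abstract discusses."""
        return (t_low_C - t_high_C) / lapse_rate_C_per_km

    # Hypothetical clumped-isotope temperatures from coeval, nearby basins
    print(paleoelevation_from_temperature(28.0, 13.0))  # ~2.5 km of relief
    ```

    Isotope-based reconstructions follow the same template with an isotopic lapse rate (per mil of δ18O or δD per km) in place of the thermal one; in both cases the elevation estimate inherits any error in the assumed gradient.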

  17. Advanced sensing and control techniques to facilitate semi-autonomous decommissioning. 1998 annual progress report

    SciTech Connect

    Schalkoff, R.J.; Geist, R.M.; Dawson, D.M.

    1998-06-01

    This research is intended to advance the technology of semi-autonomous teleoperated robotics as applied to Decontamination and Decommissioning (D and D) tasks. Specifically, research leading to a prototype dual-manipulator mobile work cell is underway. This cell is supported and enhanced by computer vision, virtual reality, and advanced robotics technology. This report summarizes work after approximately 1.5 years of a 3-year project. The autonomous, non-contact creation of a virtual environment from an existing, real environment (virtualization) is an integral part of the workcell functionality. This requires that the virtual world be geometrically correct. To this end, the authors have encountered severe sensitivity in quadric estimation. As a result, alternative procedures for geometric rendering, iterative correction approaches, new calibration methods and associated hardware, and calibration quality examination software have been developed. Following geometric rendering, the authors have focused on improving the color and texture recognition components of the system. In particular, the authors have moved beyond first-order illumination modeling to include higher-order diffuse effects. This allows us to combine the surface geometric information, obtained from the laser projection and surface recognition components of the system, with a stereo camera image. Low-level controllers for Puma 560 robotic arms were designed and implemented using QNX. The resulting QNX/PC-based low-level robot control system is called QRobot. A high-level trajectory generator and application programming interface (API), as well as a new, flexible robot control API, were required. Force/torque sensors and interface hardware have been identified and ordered. A simple 3-D OpenGL-based graphical Puma 560 robot simulator was developed and interfaced with ARCL and RCCL to assist in the development of robot motion programs.

  18. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling framework optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model removes the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure with respect to standard event-based approaches.

  19. Inversion Technique for Estimating Emissions of Volcanic Ash from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Pelley, Rachel; Cooke, Michael; Manning, Alistair; Thomson, David; Witham, Claire; Hort, Matthew

    2014-05-01

    When using dispersion models such as NAME (Numerical Atmospheric-dispersion Modelling Environment) to predict the dispersion of volcanic ash, a source term defining the mass release rate of ash is required. Inversion modelling using observations of the ash plume provides a method of estimating the source term for use in NAME. Our inversion technique makes use of satellite retrievals, calculated using data from the SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument on-board the MSG (Meteosat Second Generation) satellite, as the ash observations. InTEM (Inversion Technique for Emission Modelling) is the UK Met Office's inversion modelling system. Recently the capability to estimate time- and height-varying source terms has been implemented and applied to volcanic ash. InTEM uses a probabilistic approach to fit NAME model concentrations to satellite retrievals. This is achieved by applying Bayes' theorem to give a cost function for the source term. Source term profiles with lower costs generate model concentrations that better fit the satellite retrievals. InTEM uses the global optimisation technique, simulated annealing, to find the minimum of the cost function. The use of a probabilistic approach allows the uncertainty in the satellite retrievals to be incorporated into the inversion technique. InTEM makes use of satellite retrievals of both ash column loadings and of cloud-free regions. We present a system that allows InTEM to be used during an eruption. The system is automated and can produce source term updates up to four times a day. To allow automation, hourly satellite retrievals of ash are routinely produced using conservative detection limits. The conservative detection limits provide good detection of the ash plume while limiting the number of false alarms. Regions which are flagged as ash contaminated or free from cloud (both meteorological and ash) are used in the InTEM system. This approach is shown to improve the concentrations in the
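    The optimisation step described above, minimising a cost function over candidate source terms with simulated annealing, can be sketched generically. The toy problem below uses a tiny linear source-receptor matrix in place of NAME, and every number in it is hypothetical; only the annealing schedule and acceptance rule reflect the named technique:

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step, n_iter=5000, t0=1.0, seed=0):
        """Perturb the source-term vector, accept worse states with probability
        exp(-dcost/T) while T cools, and keep the best state seen."""
        rng = random.Random(seed)
        x, c = list(x0), cost(x0)
        best_x, best_c = list(x), c
        for i in range(n_iter):
            temp = t0 * (1.0 - i / n_iter) + 1e-9
            cand = [max(0.0, xi + rng.uniform(-step, step)) for xi in x]  # mass >= 0
            cc = cost(cand)
            if cc < c or rng.random() < math.exp(-(cc - c) / temp):
                x, c = cand, cc
                if c < best_c:
                    best_x, best_c = list(x), c
        return best_x, best_c

    # Toy linear source-receptor problem: obs = M @ true_source
    M = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
    true_source = [2.0, 4.0]
    obs = [sum(m * s for m, s in zip(row, true_source)) for row in M]

    def cost(src):
        # sum of squared mismatches between modelled and "observed" loadings
        return sum((sum(m * s for m, s in zip(row, src)) - o) ** 2
                   for row, o in zip(M, obs))

    best, best_cost = simulated_annealing(cost, x0=[1.0, 1.0], step=0.5)
    print([round(v, 1) for v in best], round(best_cost, 4))
    ```

    In the real system the cost function also encodes retrieval uncertainties and prior constraints from Bayes' theorem, and the unknowns are emission rates per time-height bin rather than two scalars.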

  20. ROV advanced magnetic survey for revealing archaeological targets and estimating medium magnetization

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev

    2013-04-01

    Magnetic survey is one of the most widely applied geophysical methods for searching for and localizing objects with contrasting magnetic properties (for instance, in Israel detailed magnetic surveys have been successfully applied at more than 60 archaeological sites (Eppelbaum, 2010, 2011; Eppelbaum et al., 2011, 2010)). However, a land magnetic survey at comparatively large archaeological sites (with observation grids of 0.5 x 0.5 or 1 x 1 m) may occupy 5-10 days. At the same time, the new Remote Operation Vehicle (ROV) generation of small and maneuverable vehicles can fly at levels of a few (and even one) meters over the earth's surface (following the relief forms or flying straight). Such an ROV survey with precise magnetic field measurements (at a frequency of 20-25 observations per second) may be performed in 10-30 minutes, moreover at different levels over the earth's surface. Such geophysical investigations should have an extremely low exploitation cost. Finally, measurements of geophysical fields at different observation levels could provide new, unique geophysical-archaeological information (Eppelbaum, 2005; Eppelbaum and Mishne, 2011). The developed interpretation methodology for advanced analysis of magnetic anomalies (Khesin et al., 1996; Eppelbaum et al., 2001; Eppelbaum et al., 2011) may be successfully applied to ROV magnetic surveys for the delineation of archaeological objects and the estimation of the averaged magnetization of the geological medium.
This methodology includes: (1) non-conventional procedure for elimination of secondary effect of magnetic temporary variations, (2) calculation of rugged relief influence by the use of a correlation method, (3) estimation of medium magnetization, (4) application of various informational and wavelet algorithms for revealing low anomalous effects against the strong noise background, (5) advanced procedures for magnetic anomalies quantitative analysis (they are applicable in conditions of rugged relief, inclined magnetization, and an unknown level of the total

  1. Advanced fabrication techniques for hydrogen-cooled engine structures. Final report, October 1975-June 1982

    SciTech Connect

    Buchmann, O.A.; Arefian, V.V.; Warren, H.A.; Vuigner, A.A.; Pohlman, M.J.

    1985-11-01

    Described is a program for development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples were constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures by two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19,000 cycles for the channels and 16,000 cycles for pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.

  2. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    SciTech Connect

    Ling, Khoo Mei; Ghaffar, Mazlan Abd.

    2014-09-03

    This study examines the movement of food and the estimation of gastric emptying time using X-radiography in the clownfish (Amphiprion ocellaris) fed in captivity. Fish were voluntarily fed to satiation, after being deprived of food for 72 hours, using pellets treated with barium sulphate (BaSO{sub 4}). The movement of the food was monitored at different times after feeding. A total of 36 hours was needed for the food to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship between satiation meal size and body weight was allometric, with a power value of 1.28.

  3. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

    A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions, such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
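
    The two-condition inversion described above can be sketched numerically. The minimal model below assumes each zone's inflow is proportional to its transmissivity times the head difference between the zone and the borehole water level; the proportionality constant c and all numbers are illustrative, not from the paper.

```python
def invert_zone_properties(q_ambient, q_pumped, dh_w, c=1.0):
    """Recover per-zone transmissivity T and head h (relative to the
    ambient water level) from flows measured under ambient and
    quasi-steady pumped conditions.

    Simplified model: q = c * T * (h - h_water_level); pumping lowers
    the water level by dh_w, so the flow difference isolates T, and
    the ambient flow then gives h.
    """
    zones = []
    for qa, qp in zip(q_ambient, q_pumped):
        T = (qp - qa) / (c * dh_w)   # transmissivity from the flow change
        h = qa / (c * T)             # zone head from the ambient flow
        zones.append((T, h))
    return zones
```

    Consistent with the abstract, it is the difference between the two flow conditions, together with the measured water-level change dh_w, that makes the inversion unique.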

  4. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    NASA Astrophysics Data System (ADS)

    Ling, Khoo Mei; Ghaffar, Mazlan Abd.

    2014-09-01

    This study examines the movement of food and the estimation of gastric emptying time using X-radiography in the clownfish (Amphiprion ocellaris) fed in captivity. Fish were voluntarily fed to satiation, after being deprived of food for 72 hours, using pellets treated with barium sulphate (BaSO4). The movement of the food was monitored at different times after feeding. A total of 36 hours was needed for the food to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship between satiation meal size and body weight was allometric, with a power value of 1.28.

  5. A new auditory threshold estimation technique for low frequencies: Proof of concept

    PubMed Central

    Lichtenhan, Jeffery T.; Cooper, Nigel P.; Guinan, John J.

    2012-01-01

    Objectives Presently available non-behavioral methods to estimate auditory thresholds perform less well at frequencies below 1 kHz than at 1 kHz and above. For many uses, such as providing accurate infant hearing aid amplification for low-frequency vowels, we need an accurate non-behavioral method to estimate low-frequency thresholds. Here we develop a novel technique to estimate low-frequency cochlear thresholds based on the use of a previously-reported waveform. We determine how well the method works by comparing the resulting thresholds to thresholds from onset-response compound action potentials (CAPs) and single auditory-nerve (AN) fibers in cats. A long-term goal is to translate this technique for use in humans. Design An electrode near the cochlea records a combination of cochlear microphonic (CM) and neural responses. In response to low-frequency, near threshold-level tones, the CM is almost sinusoidal while the neural responses occur preferentially at one phase of the tone. If the tone is presented again but with its polarity reversed, the neural response keeps the same shape, but shifts ½ cycle in time. Averaging responses to tones presented separately at opposite polarities overlaps and interleaves the neural responses and yields a waveform in which the CM is cancelled and the neural response appears twice each tone cycle, i.e. the resulting neural response is mostly at twice the tone frequency. We call the resultant waveform “the auditory nerve overlapped waveform” (ANOW). ANOW level functions were measured in anesthetized cats from 10 to 80 dB SPL in 10 dB steps using tones between 0.3 and 1 kHz. As a response metric, we calculated the magnitude of the ANOW component at twice the tone frequency (ANOW2f). The ANOW threshold was the sound level where the interpolated ANOW2f crossed a statistical criterion that was higher than 95% of the noise floor distribution. ANOW thresholds were compared to onset-CAP thresholds from the same recordings and
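
    The polarity-alternation idea can be illustrated with synthetic signals: a cochlear microphonic (CM) that inverts with tone polarity and a toy phase-locked neural response that does not. Averaging the two responses cancels the CM and leaves energy at twice the tone frequency. All waveform shapes here are illustrative assumptions, not the paper's recordings.

```python
import numpy as np

fs, f_tone = 10000.0, 500.0
t = np.arange(0.0, 0.1, 1.0 / fs)            # 50 full tone cycles

def neural(phase):
    """Toy phase-locked neural response: fires near one tone phase."""
    return np.maximum(np.sin(phase), 0.0) ** 4

cm = np.sin(2 * np.pi * f_tone * t)          # cochlear microphonic
resp_plus = cm + neural(2 * np.pi * f_tone * t)
resp_minus = -cm + neural(2 * np.pi * f_tone * t + np.pi)

anow = 0.5 * (resp_plus + resp_minus)        # CM cancels; neural interleaves

spec = np.abs(np.fft.rfft(anow)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
mag_f = spec[np.argmin(np.abs(freqs - f_tone))]       # residual CM at f
mag_2f = spec[np.argmin(np.abs(freqs - 2 * f_tone))]  # ANOW_2f metric
```

    With these synthetic inputs the component at the tone frequency vanishes while the doubled-frequency component (the ANOW_2f metric of the abstract) remains.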

  6. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    SciTech Connect

    Unal, Cetin; Williams, Brian; Mc Clure, Patrick; Nelson, Ralph A

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for design and for understanding nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models underlying predictive simulations. The cost-saving goals of these programs will require minimizing the number of validation experiments. Utilization of more multi-scale, multi-physics models introduces complexities into the validation of predictive tools, and traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying uncertainties so that results can be extended to areas of the validation domain that are not directly tested with experiments, which might include extending the modeling and simulation (M&S) capabilities for application to full-scale systems. 
The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  7. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of that style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm, based on a maximum likelihood criterion, for estimating the explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in the acoustic features of speech. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data: simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values correlate with human perception.
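
    In the simplest noiseless case with fixed variances, estimating the explanatory (style) variables of a multiple-regression mean model reduces to a least-squares problem, which the sketch below illustrates; the matrices H and b and the two-dimensional style vector are invented for the example, and the real technique operates on full MRHSMM state statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multiple-regression mean model: stacked state means are
# mu = H @ s + b, where s holds style intensities (e.g. joy, sadness).
H = rng.normal(size=(6, 2))       # regression matrices, stacked
b = rng.normal(size=6)            # bias (neutral-style) means
s_true = np.array([0.8, -0.3])    # true style intensities

observed = H @ s_true + b         # observed feature means (noiseless)

# For equal variances, the ML estimate of s is ordinary least squares.
s_hat, *_ = np.linalg.lstsq(H, observed - b, rcond=None)
```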

  8. Air quality modelling over Bogota, Colombia: Combined techniques to estimate and evaluate emission inventories

    NASA Astrophysics Data System (ADS)

    Zárate, Erika; Carlos Belalcázar, Luis; Clappier, Alain; Manzi, Veronica; Van den Bergh, Hubert

    Two versions of the Emission Inventory (EI) are generated for the city of Bogota, Colombia. In the first version (EI-1), CORINAIR traffic emission factors (EFs) are used. In the second (EI-2), bulk traffic EFs calculated for the city, using in situ measurements and inverse modelling techniques at street level, are used. EI-2 traffic emissions are 5, 4, and 3 times larger than the corresponding values in EI-1 for CO, PM10 and NMVOCs, respectively. The main goal of this study is to evaluate the two versions of the EI when introduced into a mesoscale air quality model. The AOT (accumulated exposure over a threshold) index is calculated for comparison between observed and simulated concentrations of primary pollutants. Simulated concentrations using EI-2 are closer to the observed values. This comparison allows us to draw some conclusions about the methodology used to calculate the EFs. Local factors such as driving behavior, altitude, vehicle technology, and an aged fleet cannot be fully included and corrected for in the standard methodologies, and seem to be more important than obtaining very detailed and precise information on the classification of the fleet or driving speeds. Under financially limited and fast-changing circumstances, as in the case of many developing countries, a simple methodology to estimate bulk traffic EFs and to evaluate the EI is of utmost importance. The use of combined techniques, such as in situ measurements to estimate bulk traffic EFs and further evaluation of the inventories with numerical models, proved to be a useful tool for this purpose.

  9. HPC Usage Behavior Analysis and Performance Estimation with Machine Learning Techniques

    SciTech Connect

    Zhang, Hao; You, Haihang; Hadri, Bilel; Fahey, Mark R

    2012-01-01

    Most researchers with little high performance computing (HPC) experience have difficulty using supercomputing resources productively. To address this issue, we investigated usage behaviors on Kraken, the world's fastest academic supercomputer, and built a knowledge-based recommendation system to improve user productivity. Six clustering techniques, along with three cluster-validation measures, were implemented to investigate the underlying patterns of usage behaviors. Besides a manually defined category for very large job submissions, six behavior categories were identified, which cleanly separated data-intensive jobs from compute-intensive jobs. Job statistics for each behavior category were then used to develop a knowledge-based recommendation system that can provide users with instructions about choosing appropriate software packages, setting job parameter values, and estimating job queuing time and runtime. Experiments, comprising 127 job submissions by users from different research fields, were conducted to evaluate the performance of the proposed recommendation system. Positive feedback indicated the usefulness of the provided information. An average runtime estimation accuracy of 64.2%, with a 28.9% job termination rate, was achieved in the experiments, almost double the average accuracy in the Kraken dataset.
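
    As a sketch of the clustering step, here is a minimal k-means implementation applied to hypothetical two-dimensional job features (log core-hours vs. memory per core); the study compared six clustering techniques and three validation measures, so these feature values and the simple farthest-point seeding are illustrative only.

```python
import numpy as np

def init_centers(X, k):
    """Farthest-point (maximin) seeding to avoid degenerate starts."""
    idx = [0]
    while len(idx) < k:
        d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(axis=2).min(axis=1)
        idx.append(int(d2.argmax()))
    return X[idx].copy()

def kmeans(X, k, iters=100):
    """Plain k-means: assign to nearest center, recompute cluster means."""
    centers = init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two hypothetical behavior groups (features: log core-hours, GB per core).
jobs = np.array([[1.0, 8.0], [1.2, 7.5], [0.8, 8.3],
                 [6.0, 1.0], [6.5, 1.2], [5.8, 0.7]])
labels, centers = kmeans(jobs, k=2)
```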

  10. Estimation of seismic building structural types using multi-sensor remote sensing and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Geiß, Christian; Aravena Pelizari, Patrick; Marconcini, Mattia; Sengara, Wayan; Edwards, Mark; Lakes, Tobia; Taubenböck, Hannes

    2015-06-01

    Detailed information about seismic building structural types (SBSTs) is crucial for accurate earthquake vulnerability and risk modeling, as it reflects the main load-bearing structures of buildings and, thus, their behavior under seismic load. However, for numerous urban areas in earthquake-prone regions this information is mostly outdated, unavailable, or simply nonexistent. To this end, we present an effective approach to estimate SBSTs by combining scarce in situ observations, multi-sensor remote sensing data, and machine learning techniques. In particular, an approach is introduced which deploys a sequential procedure comprising five main steps, namely calculation of features from remote sensing data, feature selection, outlier detection, generation of synthetic samples, and supervised classification under consideration of both Support Vector Machines and Random Forests. Experimental results obtained for a representative study area, including large parts of the city of Padang (Indonesia), assess the capabilities of the presented approach and confirm its great potential for a reliable area-wide estimation of SBSTs and an effective earthquake loss modeling based on remote sensing, which should be further explored in future research activities.

  11. New Algorithms for Estimating Spacecraft Position Using Scanning Techniques for Deep Space Network Antennas

    NASA Technical Reports Server (NTRS)

    Chen, Lingli; Fathpour, Nanaz; Mehra, Raman K.

    2005-01-01

    As more and more nonlinear estimation techniques become available, our interest is in finding out what performance improvement, if any, they can provide for practical nonlinear problems that have been traditionally solved using linear methods. In this paper we examine the problem of estimating spacecraft position using conical scan (conscan) for NASA's Deep Space Network antennas. We show that for additive disturbances on the antenna power measurement, the problem can be transformed into a linear one, and we present a general solution to this problem, with the least-squares solution reported in the literature as a special case. We also show that for additive disturbances on antenna position, the problem is a truly nonlinear one, and we present two approximate solutions, based on linearization and the Unscented Transformation respectively, and one 'exact' solution based on the Markov Chain Monte Carlo (MCMC) method. Simulations show that, with the amount of data collected in practice, linear methods perform almost the same as MCMC methods. It is only when we artificially reduce the amount of collected data and increase the level of noise that nonlinear methods show significantly better accuracy than that achieved by linear methods, at the expense of more computation.
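
    The linear transformation mentioned in the abstract can be sketched for an assumed Gaussian beam: during a conical scan the beam center traces a circle of radius R, and taking the log of the received power makes the pointing offset (dx, dy) appear linearly in the cosine and sine of the scan angle, so ordinary least squares recovers it. The beam constant mu, scan radius, and noise-free measurements are illustrative assumptions, not DSN parameters.

```python
import numpy as np

mu, R = 50.0, 0.05                     # beam-shape constant, scan radius
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dx, dy = 0.012, -0.007                 # true (unknown) pointing offset

# Gaussian beam: P = P0 * exp(-mu * r^2), r = |beam center - source|
r2 = (R * np.cos(theta) - dx) ** 2 + (R * np.sin(theta) - dy) ** 2
power = 1.0 * np.exp(-mu * r2)

# log P = const + (2*mu*R*dx) cos(theta) + (2*mu*R*dy) sin(theta)
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(A, np.log(power), rcond=None)
dx_hat = coef[1] / (2 * mu * R)
dy_hat = coef[2] / (2 * mu * R)
```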

  12. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.

  13. Floods in Kansas and techniques for estimating their magnitude and frequency on unregulated streams

    USGS Publications Warehouse

    Clement, R.W.

    1987-01-01

    Techniques are presented for generalizing the skewness coefficient of log-Pearson Type III distributions of annual maximum discharges and for estimating flood magnitudes with selected recurrence intervals from 2 to 100 years. A weighted least-squares (WLS) regression model was used to generalize the coefficients of station skewness, resulting in a root-mean-square error of prediction of 0.35, compared with 0.55 for the skewness map published in Bulletin 17B of the U.S. Water Resources Council. Estimates of generalized skewness were computed for each of 245 streamflow gaging stations with a minimum of 10 years of record and a contributing drainage area of less than 20,000 sq mi. The WLS regression model also was used to develop equations for estimating flood magnitude for selected recurrence intervals at ungaged stream locations, using data from 218 of the 245 streamflow gaging stations that had contributing drainage areas of less than 10,000 sq mi. The errors of prediction of the most reliable WLS equations ranged from 28 to 42%. The WLS equations were compared statistically to previously developed equations and were determined to be different from, and more accurate than, the previously published equations. Flood magnitudes and frequencies for the 245 streamflow gaging stations, based on data collected through the 1983 water year, are presented along with a summary of the seasonal distribution of annual maximum discharges and an analysis of the maximum observed discharges.
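
    A weighted least-squares fit of the kind used for the regional equations can be sketched as follows; the regression of log flood quantile on log drainage area, with years of record as weights, uses fabricated numbers for illustration, not the Kansas data.

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares: scale rows by sqrt(w), then ordinary LSQ."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

# Hypothetical stations: log10 drainage area -> log10 Q100,
# weighted by years of record at each station.
log_area = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
log_q100 = 2.0 + 0.7 * log_area          # exact synthetic relation
years = np.array([12.0, 30.0, 55.0, 20.0, 80.0])

X = np.column_stack([np.ones_like(log_area), log_area])
beta = wls_fit(X, log_q100, years)       # expect intercept 2.0, slope 0.7
```

    With noisy data the weights let long-record (more reliable) stations pull the fit harder, which is the point of using WLS rather than ordinary least squares here.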

  14. Characterization of corrosion pit initiation in aluminum using advanced electron microscopy techniques

    NASA Astrophysics Data System (ADS)

    Elswick, Danielle S.

    The resistance to pitting corrosion in aluminum is due to the presence of a compact, thin (approximately 5 nm) oxide layer. Certain conditions locally attack this protective oxide layer, leading to its breakdown and resulting in the formation of corrosion pits. Numerous studies have investigated the growth and propagation stages of pitting corrosion, yet the initiation stage remains neither clearly defined nor well understood. The presence of aggressive chemical species, such as chloride, plays a critical role in the pitting phenomenon and is explored in this investigation. This dissertation focuses on the localization of pitting corrosion in high-purity aluminum in order to accurately predict where and when the pit initiation process will occur, so that microstructural changes associated with pit initiation can be readily identified and characterized using electron microscopy. A comprehensive investigation into the corrosion initiation process was undertaken utilizing advanced characterization techniques in the transmission electron microscope (TEM) coupled with high-resolution microanalysis. Localization of pitting was successful through the use of different sample geometries that reduced the length scale on which pitting events occurred. Three geometries were investigated, each with unique features for pitting corrosion. Electropolished Al needles localized pitting to a sharp tip due to a geometric field-enhancement effect, while other experiments employed an Al wire micro-electrode geometry. Both geometries minimized the area where corrosion pits initiated and were electrochemically tested using a solution that contained the chloride species. A third geometry included electron-beam evaporated Al films implanted with chloride, which induced pitting corrosion in an otherwise chloride-free environment. Localization of pitting was successfully achieved using novel sample geometries that isolated the desired stages of pitting corrosion, i.e. metastable pitting, through controlled

  15. Rain estimation from satellites: An examination of the Griffith-Woodley technique

    NASA Technical Reports Server (NTRS)

    Negri, A. J.; Adler, R. F.; Wetzel, P. J.

    1983-01-01

    The Griffith-Woodley Technique (GWT) is an approach to estimating precipitation using infrared observations of clouds from geosynchronous satellites. It is examined in three ways: an analysis of the terms in the GWT equations; a case study of infrared imagery portraying convective development over Florida; and the comparison of a simplified equation set and resultant rain map to results using the GWT. The objective is to determine the dominant factors in the calculation of GWT rain estimates. Analysis of a single day's convection over Florida produced a number of significant insights into various terms in the GWT rainfall equations. Because clouds are defined by a threshold isotherm, the majority of clouds on this day did not go through an idealized life cycle before losing their identity through merger, splitting, etc. As a result, 85% of the clouds had a defined life of 0.5 or 1 h. For these clouds the terms in the GWT which depend on cloud life history become essentially constant. The empirically derived ratio of radar echo area to cloud area takes a single value (0.02) for 43% of the sample, while the rain-rate term is 20.7 mm h-1 for 61% of the sample. For 55% of the sampled clouds the temperature weighting term is identically 1.0. Cloud area itself is highly correlated (r=0.88) with GWT-computed rain volume. An important discriminating parameter in the GWT is the temperature defining the coldest 10% of cloud area. The analysis further shows that the two dominant parameters in rainfall estimation are the existence of cold cloud and the duration of cloud over a point.
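
    The near-constant terms identified above mean that, for such short-lived clouds, a GWT-style rain-volume term reduces essentially to a product of cold-cloud area, the echo-to-cloud-area ratio, and the rain rate. A simplified sketch using the constants quoted in the abstract (the cloud areas and time interval are hypothetical):

```python
# Simplified GWT-style rain-volume term for one image interval, using
# the near-constant values quoted above (echo/cloud area ratio 0.02,
# rain rate 20.7 mm/h); the cloud areas and interval are invented.
cloud_areas_km2 = [1500.0, 800.0, 2300.0]   # cold-cloud areas (IR threshold)
echo_fraction = 0.02                        # radar-echo / cloud-area ratio
rain_rate_mm_per_h = 20.7
dt_h = 0.5                                  # image interval, hours

rain_volume_m3 = sum(
    a * 1e6 * echo_fraction * (rain_rate_mm_per_h / 1000.0) * dt_h
    for a in cloud_areas_km2
)
```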

  16. Site-effect modelling and estimation using microtremor measurements and robust H/V technique.

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Zaslavsky, Y.; Dan, H.

    2003-04-01

    Site-effect estimation from the horizontal-to-vertical spectral ratio (H/V) of microtremor (Nakamura's technique) is widespread, in spite of its embedded shortcuts and related restrictions. H/V is usually determined in a series of time windows and then averaged. In the majority of practical studies this gives a satisfactory result. However, many researchers report that the estimates obtained this way for the first-mode frequency F0, and especially its amplitude A0, are unstable or in poor agreement with earthquake-inferred observations. We address the problem in terms of signal-to-noise ratio (SNR). Following the general approach, we can consider microtremor recordings obtained by vertical and horizontal seismometers as the input and output signals of a linear dynamic system (LDS) observed with additive input and output noise. Theoretically, the averaged H/V is a biased estimate, but it tends to the real transfer function when the input and output SNR is large enough. The SNR depends on a number of factors responsible for deviation of the 1D model from the true 3D process, determined by the geological and wave-velocity structure as well as by the useful and interfering microtremor sources: their configuration, spectral intensity, remoteness, etc. In practice the SNR may vary significantly over short periods of time, thus spoiling the H/V results. Evidently, the time periods where the SNR is large are preferable, but we have no means to compute the SNR from observations. However, it is possible to judge it indirectly. For this purpose we compute H/V in the time-frequency domain, which helps to determine and collect time windows where H/V is stationary or shows evidence of data clustering. The remaining windows are deleted or given smaller weights in the averaging, thus providing robust F0 and A0 estimates. Interactive and automatic procedures using this principle have been designed and verified on a number of simulated and real data sets, showing good performance.
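
    The window-wise H/V computation with a robust combination across windows can be sketched as below; the median stands in for the paper's interactive weighting procedure, and the synthetic signals (a strong 5 Hz component added to broadband noise) are illustrative, not field data.

```python
import numpy as np

def robust_hv(h, v, fs, win=1024):
    """Horizontal/vertical spectral ratio per window, combined with a
    median across windows so outlier (low-SNR) windows have less pull."""
    taper = np.hanning(win)
    ratios = []
    for i in range(len(h) // win):
        seg = slice(i * win, (i + 1) * win)
        H = np.abs(np.fft.rfft(h[seg] * taper))
        V = np.abs(np.fft.rfft(v[seg] * taper))
        ratios.append(H / np.maximum(V, 1e-12))
    return np.fft.rfftfreq(win, 1.0 / fs), np.median(ratios, axis=0)

# Synthetic test: vertical = broadband noise; horizontal = same noise
# plus a strong 5 Hz "resonance", so H/V should peak near 5 Hz.
rng = np.random.default_rng(0)
fs, n = 100.0, 16 * 1024
t = np.arange(n) / fs
v = rng.normal(scale=0.1, size=n)
h = v + np.sin(2 * np.pi * 5.0 * t)

freqs, hv = robust_hv(h, v, fs)
f0 = freqs[np.argmax(hv)]          # estimated fundamental frequency
```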

  17. Estimation of surface energy fluxes using surface renewal and flux variance techniques over an advective irrigated agricultural site

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Estimation of surface energy fluxes over irrigated agriculture is needed to monitor crop water use. Estimates are commonly done using well-established techniques such as eddy covariance (EC) and weighing lysimetry, but implementing these to collect spatially distributed observations is complex and c...

  18. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident
    sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector--to-
    r...

  19. Uncertainty in Estimation of Bioenergy Induced Lulc Change: Development of a New Change Detection Technique.

    NASA Astrophysics Data System (ADS)

    Singh, N.; Vatsavai, R. R.; Patlolla, D.; Bhaduri, B. L.; Lim, S. J.

    2015-12-01

    Recent estimates of bioenergy-induced land use land cover change (LULCC) have large uncertainty due to misclassification errors in the LULC datasets used for analysis. These uncertainties are further compounded when data are modified by merging classes, aggregating pixels, and changing classification methods over time. Hence, the LULCC computed using these derived datasets reflects changes in classification methods, input data, and data manipulation more than actual changes on the ground. Furthermore, results are constrained by the geographic extent, update frequency, and resolution of the dataset. To overcome these limitations we have developed a change detection system to identify yearly as well as seasonal changes in LULC patterns. Our method uses hierarchical clustering, which works by grouping objects into a hierarchy based on the phenological similarity of different vegetation types. The algorithm explicitly models vegetation phenology to reduce spurious changes. We apply our technique to globally available Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data at 250-meter resolution, analyzing 10 years of bi-weekly data to predict changes in the midwestern US as a case study. The results of our analysis are presented, and its advantages over existing techniques are discussed.
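
    The phenology-based hierarchical clustering can be sketched with a naive average-linkage implementation on invented NDVI curves; a production system would use an optimized library routine and real MODIS time series rather than these toy vectors.

```python
import numpy as np

def agglomerative_labels(X, k):
    """Naive average-linkage hierarchical clustering down to k clusters,
    on feature vectors X (here: annual NDVI phenology curves)."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge the closest pair
    labels = np.empty(len(X), dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels

# Hypothetical phenology curves: a crop greens up mid-season, while
# evergreen vegetation stays flat; same-type curves should cluster.
months = np.arange(12)
crop = np.exp(-0.5 * ((months - 6.5) / 1.5) ** 2)
evergreen = np.full(12, 0.6)
rng = np.random.default_rng(3)
X = np.array([crop, crop, evergreen, evergreen]) + rng.normal(scale=0.02, size=(4, 12))
labels = agglomerative_labels(X, k=2)
```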

  20. Estimating biomass of submersed vegetation using a simple rake sampling technique

    USGS Publications Warehouse

    Kenow, K.P.; Lyon, J.E.; Hines, R.K.; Elfessi, A.

    2007-01-01

    We evaluated the use of a simple rake sampling technique for predicting the biomass of submersed aquatic vegetation. Vegetation sampled from impounded areas of the Mississippi River using the rake technique was compared with vegetation harvested from 0.33-m2 quadrats. The resulting data were used to model the relationship between rake indices and vegetation biomass (total and for individual species). We constructed linear regression models using log-transformed biomass data for sites sampled in 1999 and 2000. Data collected in 2001 were used to validate the resulting models. The coefficient of determination (R2) for predicting total biomass was 0.82 and ranged from 0.59 (Potamogeton pectinatus) to 0.89 (Ceratophyllum demersum) for individual species. Application of the model to estimate total submersed aquatic vegetation is illustrated using data collected independently of this study. The accuracy and precision of the models tested indicate that rake-method data may be used to predict total vegetation biomass and the biomass of selected species; however, the method should be tested in other regions, in other plant communities, and on other species. © 2006 Springer Science+Business Media B.V.
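
    The log-linear calibration used in such a study can be sketched as follows; the rake-index and biomass values are generated exactly from an assumed relation for illustration, not taken from the Mississippi River data.

```python
import numpy as np

# Assumed true relation for the synthetic example:
# log(biomass) = 1.0 + 0.8 * rake_index
rake_index = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
biomass = np.exp(1.0 + 0.8 * rake_index)   # g per 0.33-m^2 quadrat

# Fit the log-transformed linear model, as in the study design.
X = np.column_stack([np.ones_like(rake_index), rake_index])
b, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)

def predict_biomass(idx):
    """Back-transformed biomass prediction from a rake index."""
    return float(np.exp(b[0] + b[1] * idx))
```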

  1. Analysis of field data to evaluate performance of optical remote sensing techniques to estimate fugitive emissions

    SciTech Connect

    Paine, R.J.; Lew, F.; Zwicker, J.O.; Feldman, H.

    1999-07-01

    The American Petroleum Institute (API) has developed data sets for the evaluation of dispersion modeling and optical remote sensing (ORS) techniques. An initial field study featuring several tracer gas releases from simulated point, area, and volume sources was conducted in early 1995 at an open field site (Duke Forest, North Carolina). A second experiment (Project OPTEX) took place at an operational petrochemical facility in Texas and featured tracer releases at heights up to 41 meters from points located in an active process unit. This paper discusses the results of an analysis to evaluate the capability for remote sensing techniques to estimate the magnitude and location of emission sources in an industrial complex setting. Three major issues that the paper reports on are: (1) can ORS technology be used to determine emission rates when the source locations are known; (2) can ORS technology be used to locate sources in unknown locations, therefore promising to replace or at least streamline leak detection and repair (LDAR) programs at petrochemical facilities; and (3) what are the constraints for real-time operation, interpretation, and responsiveness involving ORS technology?

  2. Dynamic rain fade compensation techniques for the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1992-01-01

    The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.

  3. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert, freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. 
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
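The "rapid estimation" idea can be illustrated with a basis-function fit to a one-tissue-compartment model: for each trial clearance rate k2 on a grid, the uptake constant K1 enters linearly and is solved in closed form, making per-pixel fitting fast. A hedged sketch with a synthetic input function (not the dissertation's actual scheme):

```python
import numpy as np

def tissue_curve(K1, k2, Ca, t):
    """One-tissue-compartment model: Ct = K1 * conv(Ca, exp(-k2 t))."""
    dt = t[1] - t[0]
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[:len(t)] * dt

t = np.arange(0, 10, 0.05)                    # minutes
Ca = t * np.exp(-t)                           # assumed arterial input function
Ct = tissue_curve(0.8, 0.4, Ca, t)            # "measured" pixel time-activity curve

# Basis-function search: for each k2 on a grid, K1 enters linearly,
# so each trial reduces to a one-parameter least-squares solve.
best = (np.inf, None, None)
for k2 in np.linspace(0.05, 1.5, 200):
    b = tissue_curve(1.0, k2, Ca, t)          # unit-K1 basis curve
    K1 = float(b @ Ct / (b @ b))
    rss = float(np.sum((Ct - K1 * b) ** 2))
    if rss < best[0]:
        best = (rss, K1, k2)
print(round(best[1], 2), round(best[2], 2))   # recovers approximately 0.8, 0.4
```

Applied pixel-by-pixel, this produces functional images of K1 and k2 without iterative nonlinear optimization at each pixel.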

  4. Use of binary logistic regression technique with MODIS data to estimate wild fire risk

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Di, Liping; Yang, Wenli; Bonnlander, Brian; Li, Xiaoyan

    2007-11-01

    Many forest fires occur across the globe each year, destroying life and property and strongly impacting ecosystems. In recent years, wildland fires and altered fire disturbance regimes have become a significant management and science problem affecting ecosystems and the wildland/urban interface across the United States and globally. In this paper, we discuss the estimation of 504 probability models for forecasting fire risk for 14 fuel types, 12 months, and one day/week/month in advance, using 19 years of historical fire data in addition to meteorological and vegetation variables. MODIS land products are utilized as a major data source, and binary logistic regression was adopted to estimate fire probability. To better model the change of fire risk with the transition of the seasons, several spatial and temporal stratification strategies were applied. To explore the possibility of real-time prediction, the MATLAB Distributed Computing Toolbox was used to accelerate the predictions. Finally, this study evaluates and validates the predictions against collected ground truth. Validation results indicate that these fire risk models achieve nearly 70% prediction accuracy and that MODIS data are a viable data source for implementing near real-time fire risk prediction.
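The core method, binary logistic regression, can be sketched with a gradient-descent fit on synthetic data. The predictor names and generating coefficients below are invented for illustration and are not any of the paper's 504 models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors: NDVI anomaly, air temperature (C), relative humidity (%).
X = np.column_stack([rng.normal(0, 1, n), rng.normal(25, 5, n), rng.uniform(10, 90, n)])
Xs = (X - X.mean(0)) / X.std(0)                       # standardize predictors
logit = -1.0 - 1.5 * Xs[:, 0] + 1.0 * Xs[:, 1] - 2.0 * Xs[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))          # synthetic fire occurrence

# Fit by gradient descent on the (convex) logistic log-loss.
A = np.column_stack([np.ones(n), Xs])
w = np.zeros(4)
for _ in range(5000):
    p = 1 / (1 + np.exp(-A @ w))
    w -= 0.1 * A.T @ (p - y) / n
print(np.round(w, 1))    # should lie near the generating values [-1, -1.5, 1, -2]
```

The fitted probability 1/(1+exp(-x'w)) is then thresholded or mapped directly to a fire-risk score per cell and lead time.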

  5. Craniospinal Irradiation Techniques: A Dosimetric Comparison of Proton Beams With Standard and Advanced Photon Radiotherapy

    SciTech Connect

    Yoon, Myonggeun; Shin, Dong Ho; Kim, Jinsung; Kim, Jong Won; Kim, Dae Woong; Park, Sung Yong; Lee, Se Byeong; Kim, Joo Young; Park, Hyeon-Jin; Park, Byung Kiu; Shin, Sang Hoon

    2011-11-01

    Purpose: To evaluate the dosimetric benefits of advanced radiotherapy techniques for craniospinal irradiation in children with cancer. Methods and Materials: Craniospinal irradiation (CSI) using three-dimensional conformal radiotherapy (3D-CRT), tomotherapy (TOMO), and proton beam treatment (PBT) in the scattering mode was planned for each of 10 patients at our institution. Dosimetric benefits and organ-specific radiation-induced cancer risks were based on comparisons of dose-volume histograms (DVHs) and on the application of organ equivalent doses (OEDs), respectively. Results: When we analyzed the organ-at-risk volumes that received 30%, 60%, and 90% of the prescribed dose (PD), we found that PBT was superior to TOMO and 3D-CRT. On average, the doses delivered by PBT to the esophagus, stomach, liver, lung, pancreas, and kidney were 19.4 Gy, 0.6 Gy, 0.3 Gy, 2.5 Gy, 0.2 Gy, and 2.2 Gy for the PD of 36 Gy, respectively, which were significantly lower than the doses delivered by TOMO (22.9 Gy, 4.5 Gy, 6.1 Gy, 4.0 Gy, 13.3 Gy, and 4.9 Gy, respectively) and 3D-CRT (34.6 Gy, 3.6 Gy, 8.0 Gy, 4.6 Gy, 22.9 Gy, and 4.3 Gy, respectively). Although the average doses delivered by PBT to the chest and abdomen were significantly lower than those of 3D-CRT or TOMO, these differences were reduced in the head-and-neck region. OED calculations showed that the risk of secondary cancers in organs such as the stomach, lungs, thyroid, and pancreas was much higher when 3D-CRT or TOMO was used than when PBT was used. Conclusions: Compared with photon techniques, PBT showed improvements in most dosimetric parameters for CSI patients, with lower OEDs to organs at risk.

  6. Application of Energy Integration Techniques to the Design of Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Levri, Julie; Finn, Cory

    2000-01-01

    Exchanging heat between hot and cold streams within an advanced life support system can save energy. This savings will reduce the equivalent system mass (ESM) of the system. Different system configurations are examined under steady-state conditions for various percentages of food growth and waste treatment. The scenarios investigated represent possible design options for a Mars reference mission. Reference mission definitions are drawn from the ALSS Modeling and Analysis Reference Missions Document, which includes definitions for space station evolution, Mars landers, and a Mars base. For each scenario, streams requiring heating or cooling are identified and characterized by mass flow, supply and target temperatures and heat capacities. The Pinch Technique is applied to identify good matches for energy exchange between the hot and cold streams and to calculate the minimum external heating and cooling requirements for the system. For each pair of hot and cold streams that are matched, there will be a reduction in the amount of external heating and cooling required, and the original heating and cooling equipment will be replaced with a heat exchanger. The net cost savings can be either positive or negative for each stream pairing, and the priority for implementing each pairing can be ranked according to its potential cost savings. Using the Pinch technique, a complete system heat exchange network is developed and heat exchangers are sized to allow for calculation of ESM. The energy-integrated design typically has a lower total ESM than the original design with no energy integration. A comparison of ESM savings in each of the scenarios is made to direct future Pinch Analysis efforts.
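The Pinch Technique's minimum-utility targets can be computed with the classic problem-table (cascade) algorithm: shift stream temperatures by half the minimum approach temperature, tally the net heat in each temperature interval, and cascade the surpluses downward. The four streams below are a hypothetical textbook-style example, not an ALSS configuration:

```python
def problem_table(streams, dTmin=10.0):
    """Minimum hot/cold utility via the pinch problem-table (cascade) algorithm.
    streams: list of (T_supply, T_target, CP); hot streams have T_supply > T_target."""
    shifted = []
    for Ts, Tt, CP in streams:
        shift = -dTmin / 2 if Ts > Tt else dTmin / 2   # hot shifted down, cold up
        shifted.append((Ts + shift, Tt + shift, CP))
    bounds = sorted({T for s in shifted for T in s[:2]}, reverse=True)
    cascade = [0.0]
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0
        for Ts, Tt, CP in shifted:
            if min(Ts, Tt) <= lo and max(Ts, Tt) >= hi:   # stream spans the interval
                net += CP * (hi - lo) if Ts > Tt else -CP * (hi - lo)
        cascade.append(cascade[-1] + net)
    Qh = max(0.0, -min(cascade))          # min hot utility closes the largest deficit
    Qc = cascade[-1] + Qh                 # min cold utility from the overall balance
    return Qh, Qc

# Hypothetical four-stream example (temperatures in C, CP in kW/C).
streams = [(250, 40, 0.15), (200, 80, 0.25), (20, 180, 0.20), (140, 230, 0.30)]
Qh, Qc = problem_table(streams)
print(round(Qh, 2), round(Qc, 2))   # 7.5 10.0
```

Any heat-exchanger network that meets these targets replaces that much external heating and cooling, which is the savings the ESM comparison then quantifies.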

  7. Combining uncertainty-estimation techniques and cost-benefit analysis to obtain consistent design-flood estimators

    NASA Astrophysics Data System (ADS)

    Botto, A.; Ganora, D.; Laio, F.; Claps, P.

    2012-04-01

    Traditionally, flood frequency analysis has been used to assess the design discharge for hydraulic infrastructures. Unfortunately, this method involves uncertainties, be they of random or epistemic nature. Despite some success in measuring uncertainty, e.g. by means of numerical simulations, exhaustive methods for their evaluation are still an open challenge to the scientific community. The proposed method aims to improve the standard models for design flood estimation, considering the hydrological uncertainties inherent with the classic flood frequency analysis, in combination with cost-benefit analysis. Within this framework, two of the main issues related to flood risk are taken into account: on the one hand statistical flood frequency analysis is complemented with suitable uncertainty estimates; on the other hand the economic value of the flood-prone land is considered, as well as the economic losses in case of overflow. Consider a case where discharge data are available at the design site: the proposed procedure involves the following steps: (i) for a given return period T the design discharge is obtained using standard statistical inference (for example, using the GEV distribution and the method of L-moments to estimate the parameters); (ii) Monte Carlo simulations are performed to quantify the parametric uncertainty related to the design-flood estimator: 10000 triplets of L-moment values are randomly sampled from their relevant multivariate distribution, and 10000 values of the T-year discharge are obtained; (iii) a procedure called the least total expected cost (LTEC) design approach is applied as described hereafter: linear cost and damage functions are proposed so that the ratio between the slope of the damage function and the slope of the cost function is equal to T. 
The expected total cost (sum of the cost plus the expected damage) is obtained for each of the 10000 design value estimators, and the estimator corresponding to the minimum total cost is
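A stylized sketch of steps (i)-(iii), substituting a two-parameter Gumbel fit for the GEV and a fixed overflow damage scaled so that, absent uncertainty, the optimum sits near the T-year quantile. This is one plausible reading of the LTEC prescription, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_lmom(x):
    """Gumbel (location xi, scale alpha) fitted by the method of L-moments."""
    xs = np.sort(x)
    n = len(xs)
    b0 = xs.mean()
    b1 = np.sum(np.arange(n) * xs) / (n * (n - 1))
    alpha = (2 * b1 - b0) / np.log(2)   # l2 = 2*b1 - b0
    return b0 - GAMMA * alpha, alpha

T = 100
obs = rng.gumbel(500.0, 150.0, size=50)       # synthetic annual maxima, m^3/s
xi, alpha = gumbel_lmom(obs)
qT = xi - alpha * np.log(-np.log(1 - 1 / T))  # (i) point design-flood estimate

# (ii) parametric bootstrap of the fitted distribution -> parameter uncertainty.
boot = np.array([gumbel_lmom(rng.gumbel(xi, alpha, size=obs.size))
                 for _ in range(2000)])

# (iii) least total expected cost: linear cost c*x plus a fixed overflow damage D,
# averaged over the bootstrapped parameter sets.
c, D = 1.0, T * 1.0 * alpha
grid = np.linspace(qT - 3 * alpha, qT + 3 * alpha, 400)
F = np.exp(-np.exp(-(grid[:, None] - boot[:, 0]) / boot[:, 1]))  # Gumbel CDF
total = c * grid + D * (1 - F).mean(axis=1)
x_ltec = grid[np.argmin(total)]
print(round(qT, 1), round(x_ltec, 1))
```

Comparing x_ltec with qT shows how accounting for parametric uncertainty shifts the economically optimal design value away from the purely statistical T-year estimate.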

  8. Comparison of the egg flotation and egg candling techniques for estimating incubation day of Canada Goose nests

    USGS Publications Warehouse

    Reiter, M.E.; Andersen, D.E.

    2008-01-01

    Both egg flotation and egg candling have been used to estimate incubation day (often termed nest age) in nesting birds, but little is known about the relative accuracy of these two techniques. We used both egg flotation and egg candling to estimate incubation day for Canada Geese (Branta canadensis interior) nesting near Cape Churchill, Manitoba, from 2000 to 2007. We modeled variation in the difference between estimates of incubation day using each technique as a function of true incubation day, as well as variation in error rates with each technique as a function of the true incubation day. We also evaluated the effect of error in the estimated incubation day on estimates of daily survival rate (DSR) and nest success using simulations. The mean difference between concurrent estimates of incubation day based on egg flotation minus egg candling at the same nest was 0.85 ± 0.06 (SE) days. The positive difference in favor of egg flotation and the magnitude of the difference in estimates of incubation day did not vary as a function of true incubation day. Overall, both egg flotation and egg candling overestimated incubation day early in incubation and underestimated incubation day later in incubation. The average difference between true hatch date and estimated hatch date did not differ from zero (days) for egg flotation, but egg candling overestimated true hatch date by about 1 d (true - estimated; days). Our simulations suggested that error associated with estimating the incubation day of nests and subsequently exposure days using either egg candling or egg flotation would have minimal effects on estimates of DSR and nest success. Although egg flotation was slightly less biased, both methods provided comparable and accurate estimates of incubation day and subsequent estimates of hatch date and nest success throughout the entire incubation period. © 2008 Association of Field Ornithologists.
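The simulation idea, propagating nest-aging error into a Mayfield-style estimate of DSR and nest success, can be sketched as follows (rates and error magnitudes are invented, not the Cape Churchill values):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_dsr(n_nests=200, incubation_days=26, true_dsr=0.98, age_error_sd=1.0):
    """Mayfield-style DSR estimate when incubation day (hence exposure days)
    is estimated with error, as when aging nests by flotation or candling."""
    exposure = failures = 0.0
    for _ in range(n_nests):
        day, failed = 0, False
        while day < incubation_days:
            day += 1
            if rng.random() > true_dsr:     # nest fails on this day
                failed = True
                break
        # aging error perturbs the inferred number of exposure days
        exposure += max(1.0, day + rng.normal(0.0, age_error_sd))
        failures += failed
    return 1.0 - failures / exposure        # Mayfield estimator

dsr = simulate_dsr()
print(round(dsr, 3), round(dsr ** 26, 2))   # DSR and implied nest success
```

Re-running with larger age_error_sd shows how much (or, per the paper's finding, how little) aging error moves the DSR and nest-success estimates.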

  9. Statistical Technique for Intermediate and Long-Range Estimation of 13-Month Smoothed Solar Flux and Geomagnetic Index

    NASA Technical Reports Server (NTRS)

    Niehuss, K. O.; Euler, H. C., Jr.; Vaughan, W. W.

    1996-01-01

    This report documents the Marshall Space Flight Center (MSFC) 13-month smoothed solar flux (F10.7) and geomagnetic index (Ap) intermediate (months) and long-range (years) statistical estimation technique, referred to as the MSFC Lagrangian Linear Regression Technique (MLLRT). Estimates of future solar activity are needed as updated input to upper atmosphere density models used for satellite and spacecraft orbital lifetime predictions. An assessment of the MLLRT computer program's products is provided for 5-year periods from the date estimates were made. This was accomplished for a number of past solar cycles.
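The 13-month smoothing itself follows the standard convention of a 13-term running mean with half weights on the two end months; a minimal sketch (the MLLRT regression machinery is not reproduced here):

```python
import numpy as np

def smooth_13m(monthly):
    """Standard 13-month running smoothing with half-weight end points,
    as used for smoothed F10.7 and Ap indices."""
    m = np.asarray(monthly, dtype=float)
    out = np.full(m.size, np.nan)          # undefined within 6 months of the ends
    for i in range(6, m.size - 6):
        out[i] = (0.5 * m[i - 6] + m[i - 5:i + 6].sum() + 0.5 * m[i + 6]) / 12.0
    return out

# Sanity check: a constant series must smooth to itself.
vals = smooth_13m(np.full(25, 70.0))
print(vals[12])   # 70.0
```

The MLLRT then regresses future smoothed values against the recent smoothed history to produce the intermediate- and long-range estimates.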

  10. High-rate-long-distance fiber-optic communication based on advanced modulation techniques.

    PubMed

    Ivankovski, Y; Mendlovic, D

    1999-09-10

    The presence of fiber attenuation and chromatic dispersion is one of the major design aspects of fiber-optic communication systems when one addresses high-rate and long-distance digital data transmission. Conventional digital communication systems implement a modulation technique that generates light pulses at the fiber input end and tries to detect them at the fiber output end. Here an advanced modulation transmission system is developed based on knowledge of the exact dispersion parameters of the fiber and the principles of space-time mathematical analogy. The information is encoded in the phase of the input light beam (a continuous laser beam). This phase is designed such that, when the signal is transmitted through a fiber with a given chromatic dispersion, high peak pulses that follow a desired bit pattern emerge at the output. Thus the continuous input energy is concentrated into short time intervals in which the information needs to be represented at the output. The proposed method provides a high rate-distance product even for fibers with high dispersion parameters, high power at the output, and also unique protection properties. Theoretical analysis of the proposed method, computer simulations, and some design aspects are given. PMID:18324062
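The space-time analogy can be sketched numerically: back-propagate the desired pulse train through the fiber's quadratic spectral phase to obtain the launch field, then forward propagation through the fiber re-forms the peaks. The fiber parameters below are assumed, and the paper's additional constraint of a constant-amplitude (phase-only) launch beam is omitted from this sketch:

```python
import numpy as np

# Time grid and a desired output bit pattern as short, high-peak pulses.
n, dt = 4096, 1e-12                      # 1 ps sampling (assumed)
t = (np.arange(n) - n // 2) * dt
bits = [1, 0, 1, 1, 0, 1]
target = sum(b * np.exp(-((t - (k - 2.5) * 100e-12) ** 2) / (2 * (5e-12) ** 2))
             for k, b in enumerate(bits))

# Fiber transfer function: quadratic spectral phase from chromatic dispersion.
beta2, L = -21.7e-27, 50e3               # s^2/m for standard SMF (assumed), 50 km
w = 2 * np.pi * np.fft.fftfreq(n, dt)
H = np.exp(-0.5j * beta2 * w ** 2 * L)

# Back-propagate the target to get the field to launch; forward propagation
# through the fiber then re-forms the pulse train exactly.
launch = np.fft.ifft(np.fft.fft(target) / H)
received = np.fft.ifft(np.fft.fft(launch) * H)
print(np.allclose(np.abs(received), np.abs(target), atol=1e-9))   # True
```

The launch field is temporally spread and nearly constant in envelope; the paper's contribution is approximating it with a purely phase-modulated continuous beam so the encoder never needs to shape amplitude.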

  11. Advanced real-time dynamic scene generation techniques for improved performance and fidelity

    NASA Astrophysics Data System (ADS)

    Bowden, Mark H.; Buford, James A.; Mayhall, Anthony J.

    2000-07-01

    Recent advances in real-time synthetic scene generation for Hardware-in-the-loop (HWIL) testing at the U.S. Army Aviation and Missile Command (AMCOM) Aviation and Missile Research, Development, and Engineering Center (AMRDEC) improve both performance and fidelity. Modeling ground target scenarios requires tradeoffs because of limited texture memory for imagery and limited main memory for elevation data. High-resolution insets have been used in the past to provide better fidelity in specific areas, such as in the neighborhood of a target. Improvements for ground scenarios include smooth transitions for high-resolution insets to reduce high spatial frequency artifacts at the borders of the inset regions and dynamic terrain paging to support large area databases. Transport lag through the scene generation system, including sensor emulation and interface components, has been dealt with in the past through the use of sub-window extraction from oversize scenes. This compensates for spatial effects of transport lag but not temporal effects. A new system has been developed and used successfully to compensate for a flashing coded beacon in the scene. Other techniques have been developed to synchronize the scene generator with the seeker under test (SUT) and to model atmospheric effects, sensor optics and electronics, and angular emissivity attenuation.

  12. Classification of human colonic tissues using FTIR spectra and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Zwielly, A.; Argov, S.; Salman, A.; Bogomolny, E.; Mordechai, S.

    2010-04-01

    One of the major public health hazards is colon cancer. There is a great necessity to develop new methods for early detection of cancer. If colon cancer is detected and treated early, a cure rate of more than 90% can be achieved. In this study we used FTIR microscopy (MSP), which has shown good potential over the last 20 years in the fields of medical diagnostics and early detection of abnormal tissues. A large database of FTIR microscopic spectra was acquired from 230 human colonic biopsies. Five different subgroups were included in our database, normal and cancer tissues as well as three stages of benign colonic polyps, namely, mild, moderate and severe polyps which are precursors of carcinoma. In this study we applied advanced mathematical and statistical techniques including principal component analysis (PCA) and linear discriminant analysis (LDA), on human colonic FTIR spectra in order to differentiate among the mentioned subgroups' tissues. Good classification accuracy between normal, polyps and cancer groups was achieved with an approximately 85% success rate. Our results showed that there is great potential for developing FTIR microspectroscopy as a simple, reagent-free, viable tool for early detection of colon cancer, in particular the early stages of premalignancy among the benign colonic polyps.
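The PCA-plus-LDA pipeline can be sketched with NumPy alone on synthetic "spectra"; the band position and two-class contrast below are invented for illustration, not features of the actual FTIR data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectra": two classes differing in the height of one absorption band.
wavenumbers = np.linspace(900, 1800, 300)
band = np.exp(-((wavenumbers - 1080) ** 2) / (2 * 30 ** 2))
normal = 1.0 * band + rng.normal(0, 0.15, (60, 300))
cancer = 1.6 * band + rng.normal(0, 0.15, (60, 300))
X = np.vstack([normal, cancer])
y = np.r_[np.zeros(60), np.ones(60)]

# PCA by SVD of the mean-centred data, keeping a few components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# Two-class Fisher LDA in the PCA subspace: w = Sw^-1 (m1 - m0).
m0, m1 = scores[y == 0].mean(0), scores[y == 1].mean(0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
proj = scores @ w
pred = (proj > (proj[y == 0].mean() + proj[y == 1].mean()) / 2).astype(float)
print((pred == y).mean())   # classification accuracy on the training set
```

The PCA step both denoises and makes the within-class scatter matrix invertible; the real five-subgroup problem extends the same construction to multiclass LDA with held-out validation.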

  13. Analysis of deformation patterns through advanced DINSAR techniques in Istanbul megacity

    NASA Astrophysics Data System (ADS)

    Balik Sanli, F.; Calò, F.; Abdikan, S.; Pepe, A.; Gorum, T.

    2014-09-01

    As a result of Turkey's economic growth and heavy migration processes from rural areas, Istanbul has experienced a high urbanization rate, with severe impacts on the environment in terms of natural resources pressure, land-cover changes and uncontrolled sprawl. As a consequence, the city became extremely vulnerable to natural and man-made hazards, inducing ground deformation phenomena that threaten buildings and infrastructures and often cause significant socio-economic losses. Therefore, the detection and monitoring of such deformation patterns is of primary importance for hazard and risk assessment as well as for the design and implementation of effective mitigation strategies. The aim of this work is to analyze the spatial distribution and temporal evolution of deformations affecting the Istanbul metropolitan area, by exploiting advanced Differential SAR Interferometry (DInSAR) techniques. In particular, we apply the Small BAseline Subset (SBAS) approach to a dataset of 43 TerraSAR-X images acquired, between November 2010 and June 2012, along descending orbits with an 11-day revisit time and a 3 m × 3 m spatial resolution. The SBAS processing allowed us to remotely detect and monitor subsidence patterns over all the urban area as well as to provide detailed information at the scale of the single building. Such SBAS measurements, effectively integrated with ground-based monitoring data and thematic maps, allow us to explore the relationship between the detected deformation phenomena and urbanization, contributing to improved urban planning and management.

  14. Recent Advance in Liquid Chromatography/Mass Spectrometry Techniques for Environmental Analysis in Japan

    PubMed Central

    Suzuki, Shigeru

    2014-01-01

    The techniques and measurement methods developed in the Environmental Survey and Monitoring of Chemicals by Japan’s Ministry of the Environment, as well as a large amount of knowledge archived in the survey, have led to the advancement of environmental analysis. Recently, technologies such as non-target liquid chromatography/high resolution mass spectrometry and liquid chromatography with micro-bore columns have further developed the field. Here, the general strategy of a method developed for the liquid chromatography/mass spectrometry (LC/MS) analysis of environmental chemicals is presented with a brief description. Also presented is a non-target analysis for the identification of environmental pollutants using a provisional fragment database and “MsMsFilter,” an elemental composition elucidation tool. This analytical method is shown to be highly effective in the identification of a model chemical, the pesticide Bendiocarb. Our improved micro-liquid chromatography injection system showed substantially enhanced sensitivity to perfluoroalkyl substances, with peak areas 32–71 times larger than those observed in conventional LC/MS. PMID:26819891

  15. Irrigated rice area estimation using remote sensing techniques: Project's proposal and preliminary results. [Rio Grande do Sul, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.

    1984-01-01

    The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.

  16. Age estimation standards for a Western Australian population using the dental age estimation technique developed by Kvaal et al.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2014-02-01

    In the present global socio-political scenario, an increasing demand exists for age estimation in living persons, such as refugees and asylum seekers, who seldom have any documentation for proof of identity. Age estimation in the living poses significant challenges because the methods need to be non-invasive, accurate and ethically viable. Methods based on the analysis of the pulp chamber are recommended for age estimation in living adults. There is, however, a paucity of studies of this nature and of population-specific standards in Western Australia. The aim of the present study is, therefore, to test the reliability and applicability of the method developed by Kvaal et al. (1995) for the purpose of developing age estimation standards for an adult Western Australian population. A total of 279 digital orthopantomograms (143 female and 136 male) of Australian individuals were analysed. A subset of the total sample (50) was removed as a cross-validation (holdout) sample. Following the method described in Kvaal et al. (1995), length and width measurements of the tooth and pulp chamber were acquired for maxillary central and lateral incisors and second premolars, and for mandibular lateral incisors, canines and first premolars. Those measurements were then used to calculate a series of ratios (length and width), which were subsequently used to formulate age estimation regression models. The most accurate model based on a single tooth was for the maxillary central incisor (SEE ±9.367 years), followed by the maxillary second premolar (SEE ±9.525 years). Regression models based on the measurement of multiple teeth improved age prediction accuracy (SEE ±7.963 years). The regression models presented here have expected accuracy rates comparable to (if not better than) those of established skeletal morphoscopic methods. This method, therefore, offers a statistically quantified methodological approach for forensic age estimation in Western Australian adults. PMID:24411636
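The underlying machinery is ordinary least squares of age on pulp-to-tooth ratios, with accuracy reported as the standard error of the estimate (SEE). A sketch on synthetic ratios with invented age trends (not the Kvaal coefficients):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic Kvaal-style ratios: pulp/tooth ratios shrink with age as secondary
# dentine is deposited. Trends and noise levels are hypothetical.
n = 200
age = rng.uniform(20, 75, n)
ratio_len = 0.45 - 0.003 * age + rng.normal(0, 0.03, n)
ratio_wid = 0.60 - 0.004 * age + rng.normal(0, 0.04, n)

# Ordinary least squares: age ~ ratios; report SEE as in the Kvaal method.
A = np.column_stack([np.ones(n), ratio_len, ratio_wid])
coef, *_ = np.linalg.lstsq(A, age, rcond=None)
resid = age - A @ coef
see = np.sqrt(resid @ resid / (n - A.shape[1]))    # standard error of estimate
print(round(float(see), 1))
```

Adding ratios from more teeth as extra columns of A is exactly how the multi-tooth models reduce the SEE relative to single-tooth models.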

  17. Advances in regional crop yield estimation over the United States using satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Johnson, D. M.; Dorn, M. F.; Crawford, C.

    2015-12-01

    Since the dawn of earth observation imagery, particularly from systems like Landsat and the Advanced Very High Resolution Radiometer, there has been an overarching desire to regionally estimate crop production remotely. Research efforts integrating space-based imagery into yield models to achieve this need have indeed paralleled these systems through the years, yet development of a truly useful crop production monitoring system has been arguably slow in coming. As a result, relatively few organizations have operationalized the concept, and this is most acute in regions of the globe where there are not even alternative sources of crop production data being collected. However, the National Agricultural Statistics Service (NASS) has continued to push for this type of data source as a means to complement its long-standing, traditional crop production survey efforts, which are financially costly to the government and create undue respondent burden on farmers. Corn and soybeans, the two largest field crops in the United States, have been the focus of satellite-based production monitoring by NASS for the past decade. Data from the Moderate Resolution Imaging Spectroradiometer (MODIS) have been seen as the most pragmatic input source for modeling yields, primarily based on its daily revisit capabilities and reasonable ground sample resolution. The research methods presented here are broad but provide a summary of what is useful and adoptable with satellite imagery in terms of crop yield estimation. Corn and soybeans will be of particular focus but other major staple crops like wheat and rice will also be presented. NASS will demonstrate that while MODIS provides a slew of vegetation-related products, the traditional normalized difference vegetation index (NDVI) is still ideal. Results using land surface temperature products, also generated from MODIS, will also be shown. Beyond the MODIS data itself, NASS research has also focused efforts on understanding a
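The core modeling step, regressing surveyed yields on peak-season NDVI, can be sketched as follows; the county-level values are invented for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red/NIR reflectance."""
    return (nir - red) / (nir + red)

# Hypothetical county-level peak-season NDVI and surveyed corn yields (bu/ac).
peak_ndvi = np.array([0.62, 0.71, 0.55, 0.80, 0.67, 0.74])
yield_obs = np.array([148., 171., 129., 195., 160., 180.])

# Simple linear model yield = a + b * NDVI, the backbone of NDVI-based forecasting.
A = np.column_stack([np.ones_like(peak_ndvi), peak_ndvi])
(a, b), *_ = np.linalg.lstsq(A, yield_obs, rcond=None)
pred = a + b * 0.70    # forecast for a county with peak NDVI of 0.70
print(round(a, 1), round(b, 1), round(pred, 1))
```

Operational versions add year effects, growing-degree or land-surface-temperature terms, and a crop mask so only pixels of the target crop contribute to the NDVI aggregate.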

  18. Analysis of advanced european nuclear fuel cycle scenarios including transmutation and economical estimates

    SciTech Connect

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-07-01

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors, in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one, introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over a period of 200 years, looking for equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible with respect to nuclear resource demand (U and Pu). Concerning the no-transmutation cases, the second scenario considerably reduces the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that elimination of the LWR MA legacy requires, on the one hand, a fraction of at most 33% (i.e., a peak value of 26 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, a maximum number of ADS plants accounting for 5% of electricity generation is predicted in the fourth scenario (i.e., 35 ADS units). Regarding the economic analysis, the estimations show an increase of the LCOE (levelized cost of electricity) - averaged over the whole period - with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios respectively, and 34% for the fourth scenario. (authors)
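The LCOE comparison rests on the standard discounted-costs-over-discounted-energy formula. The plant figures below are hypothetical, not the TR-EVOL inputs:

```python
# Levelized cost of electricity: discounted lifetime costs over discounted output.
def lcoe(capex, opex_per_year, mwh_per_year, years, rate):
    """LCOE in $/MWh; capex paid at year 0, O&M and output in years 1..years."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + opex_per_year * sum(disc)
    energy = mwh_per_year * sum(disc)
    return costs / energy

base = lcoe(4.0e9, 1.2e8, 8.0e6, 60, 0.07)   # hypothetical LWR-like reference plant
adv = lcoe(5.5e9, 1.5e8, 8.0e6, 60, 0.07)    # costlier FR/ADS-style fleet (assumed)
print(round(base, 1), round(adv, 1), round(100 * (adv / base - 1)))
```

Fleet-level scenario costs aggregate the same quantity over every plant and year in the 200-year horizon, which is how the 21-34% increases over the reference scenario arise.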

  19. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is
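The power-available chain can be illustrated with the actuator-disk momentum-theory baseline that a BEM model refines; blade-element detail and the Reynolds-dependent corrections are omitted here, and the thrust, speed, and efficiency values are assumed:

```python
import math

def power_available(thrust, v_inf, rho=1.225, radius=0.15):
    """Ideal propeller power available from actuator-disk momentum theory.
    A BEM model refines this with blade-element detail and Reynolds corrections."""
    A = math.pi * radius ** 2
    # induced velocity at the disk from the momentum balance
    vi = -v_inf / 2 + math.sqrt((v_inf / 2) ** 2 + thrust / (2 * rho * A))
    return thrust * (v_inf + vi)          # ideal power, W

# Hypothetical sUAS cruise point: 6 N of thrust at 15 m/s.
P_ideal = power_available(6.0, 15.0)
eta = 0.6                                  # assumed combined motor+prop efficiency
P_elec = P_ideal / eta                     # electrical power measurable in flight
print(round(P_ideal, 1), round(P_elec, 1))
```

Linking measurable electrical power to shaft and ideal power through an efficiency chain is the parameterization idea; the dissertation's BEM model supplies that mapping across the full flight envelope rather than via a single assumed efficiency.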

  20. The development of optical microscopy techniques for the advancement of single-particle studies

    NASA Astrophysics Data System (ADS)

    Marchuk, Kyle

    Single particle orientation and rotational tracking (SPORT) has recently become a powerful optical microscopy tool that can expose many molecular motions. Unfortunately, no single microscopy technique can yet decipher all particle motions under all environmental conditions, so current technologies have limitations. In this work, the two powerful microscopy tools of total internal reflection and interferometry are advanced to determine the position, orientation, and optical properties of metallic nanoparticles in a variety of environments. Total internal reflection is an optical phenomenon that has been applied to microscopy to produce either fluorescent or scattered light. The non-invasive far-field imaging technique is coupled with a near-field illumination scheme that allows for better axial resolution than confocal microscopy and epi-fluorescence microscopy. By controlling the incident illumination angle using total internal reflection fluorescence (TIRF) microscopy, a new type of imaging probe called "non-blinking" quantum dots (NBQDs) were super-localized in the axial direction to sub-10-nm precision. These particles were also used to study the rotational motion of microtubules being propelled by the motor protein kinesin across the substrate surface. The same instrument was modified to function under total internal reflection scattering (TIRS) microscopy to study metallic anisotropic nanoparticles and their dynamic interactions with synthetic lipid bilayers. Utilizing two illumination lasers with opposite polarization directions at wavelengths corresponding to the short and long axis surface plasmon resonance (SPR) of the nanoparticles, both the in-plane and out-of-plane movements of many particles could be tracked simultaneously. When combined with Gaussian point spread function (PSF) fitting for particle super-localization, the binding status and rotational movement could be resolved without degeneracy. TIRS microscopy was also used to
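The super-localization step can be sketched as centroid estimation on a synthetic Gaussian PSF; full 2-D Gaussian fitting refines the same idea. Pixel units are used here, and all image parameters are invented:

```python
import numpy as np

def gaussian_spot(shape, x0, y0, sigma, amp=1000.0):
    """Synthetic diffraction-limited spot modeled as a 2-D Gaussian PSF."""
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(2)
img = gaussian_spot((21, 21), 10.3, 9.7, 2.0) + rng.normal(0, 5.0, (21, 21))

# Localization by intensity-weighted centroid after background removal;
# fitting the full Gaussian PSF model refines this to the sub-10-nm regime
# reported for NBQDs (given adequate photon counts and calibration).
w = np.clip(img - np.median(img), 0, None)
y, x = np.indices(img.shape)
x_hat = float((w * x).sum() / w.sum())
y_hat = float((w * y).sum() / w.sum())
print(round(x_hat, 1), round(y_hat, 1))   # close to the true center (10.3, 9.7)
```

The sub-pixel residual between the estimate and the true center, scaled by the camera's nm-per-pixel calibration, is the localization precision that the SPORT measurements depend on.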