Science.gov

Sample records for advanced estimation techniques

  1. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
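
    The abstract names the estimation schemes but does not reproduce them. As a rough illustration only, the sketch below applies generic output-error (nonlinear least squares) parameter estimation to simulated time-domain data from a hypothetical single-mode structure with assumed mass, stiffness, and damping; it is not the PDEMOD software or the Mini-Mast model.

    ```python
    # Hypothetical sketch: output-error estimation of stiffness k and damping c
    # for a single-mode structure from noisy time-domain response data.
    # Not the PDEMOD software or the NASA Mini-Mast model described above.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    m = 1.0                                   # assumed known mass [kg]
    t = np.linspace(0.0, 10.0, 500)

    def response(params, t):
        k, c = params
        def rhs(state, t):
            x, v = state
            return [v, -(c * v + k * x) / m]
        return odeint(rhs, [0.01, 0.0], t)[:, 0]   # displacement from an initial offset

    true_params = [40.0, 0.8]
    rng = np.random.default_rng(0)
    y_meas = response(true_params, t) + 1e-4 * rng.standard_normal(t.size)

    # Output-error estimation: minimise the residual between model and "measurement"
    fit = least_squares(lambda p: response(p, t) - y_meas, x0=[20.0, 0.1])
    print("estimated stiffness and damping:", fit.x)
    ```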

  2. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.
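
    Step (1) above is a Kalman filter/smoother; the rotorcraft formulation is not given in the abstract, so the sketch below shows only a generic linear Kalman filter on a toy constant-velocity model with assumed noise covariances.

    ```python
    # Minimal linear Kalman filter (toy constant-velocity model) with assumed
    # noise covariances; a generic stand-in, not the rotorcraft algorithm above.
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = 1e-3 * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[0.05]])                     # measurement noise covariance (assumed)

    rng = np.random.default_rng(1)
    x_true = np.array([0.0, 1.0])
    x_est, P = np.zeros(2), np.eye(2)
    for _ in range(100):
        x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
        z = H @ x_true + rng.multivariate_normal([0.0], R)
        x_est = F @ x_est                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # update
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P
    print("final state estimate:", x_est, " true state:", x_true)
    ```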

  3. Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles

    NASA Astrophysics Data System (ADS)

    Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.

    2015-04-01

    The knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for planning of water resources and risk assessment. Streamflow characteristics are usually estimated starting from measurements of river discharges at stream gauging stations. However, the lack of observations at the site of interest, as well as measurement inaccuracies, inevitably leads to the need for predictive models. Regional analysis is a classical approach to estimate river flow characteristics at sites where little or no data exists. Specific techniques are needed to regionalize the hydrological variables over the considered area. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area and the nested nature of catchments. The continuous processes in space defined for the point variables are represented by a variogram. In Top-kriging, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of Tuscany Region, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also provides an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open-source statistical environment R, and are compared through different error measures. Top-kriging seems to perform better in nested and larger-scale catchments, but not for headwater catchments or where there is high variability among neighbouring catchments.

  4. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-Based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Frazin, Richard; Barrett, Harrison; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gladysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jerome; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Perrin, Marshall; Poyneer, Lisa; Pueyo, Laurent; Savransky, Dmitry; Soummer, Remi

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We provide a formal comparison of techniques through a blind data challenge and evaluate performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.
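
    The ideal and Hotelling observers referenced above are statistical detection strategies; the blind data challenge itself is not reproducible from the abstract. The sketch below is a schematic Hotelling observer on synthetic Gaussian "images" with a hypothetical planted signal, scored by an ROC area computed from the rank statistic.

    ```python
    # Schematic Hotelling (linear) observer on synthetic data, scored by ROC AUC.
    # Synthetic Gaussian "images" with a hypothetical planted signal; not the
    # coronagraph data challenge described above.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, n_train, n_test = 64, 2000, 1000
    signal = np.zeros(n_pix)
    signal[30:34] = 0.5                                    # hypothetical faint source

    def sample(n, with_signal):
        x = rng.standard_normal((n, n_pix))                # white noise for simplicity
        return x + signal if with_signal else x

    # Hotelling template: w = S^-1 (mean_signal - mean_background)
    bg, sp = sample(n_train, False), sample(n_train, True)
    S = np.cov(np.vstack([bg, sp]).T)
    w = np.linalg.solve(S, sp.mean(axis=0) - bg.mean(axis=0))

    # Apply to test data; AUC from the Mann-Whitney pairwise-comparison statistic
    t_bg, t_sp = sample(n_test, False) @ w, sample(n_test, True) @ w
    auc = (t_sp[:, None] > t_bg[None, :]).mean()
    print("Hotelling observer ROC area:", round(auc, 3))
    ```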

  5. On Advanced Estimation Techniques for Exoplanet Detection and Characterization using Ground-based Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lawson, Peter; Frazin, Richard

    2012-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012.

  6. On Advanced Estimation Techniques for Exoplanet Detection and Characterization Using Ground-based Coronagraphs

    PubMed Central

    Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry

    2015-01-01

    The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012. PMID:26347393

  7. Time-frequency and advanced frequency estimation techniques for the investigation of bat echolocation calls.

    PubMed

    Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve

    2010-02-01

    In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal. PMID:20136233
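
    The enhanced-resolution estimators developed in the paper are not reproduced in the abstract; as a baseline only, the sketch below computes the conventional spectrogram of a synthetic downward FM sweep (a crude stand-in for an FM bat call, with an assumed 250 kHz sampling rate) and tracks its dominant frequency.

    ```python
    # Conventional spectrogram of a synthetic FM sweep standing in for a bat call;
    # the paper's enhanced-resolution techniques are not reproduced here.
    import numpy as np
    from scipy.signal import chirp, spectrogram

    fs = 250_000                                    # assumed 250 kHz sampling rate
    t = np.arange(0, 0.005, 1 / fs)                 # 5 ms call
    call = chirp(t, f0=80_000, f1=30_000, t1=t[-1], method="hyperbolic")

    f, tt, Sxx = spectrogram(call, fs=fs, nperseg=256, noverlap=192)
    ridge = f[Sxx.argmax(axis=0)]                   # dominant frequency per time slice
    print("frequency ridge start/end (Hz):", ridge[0], ridge[-1])
    ```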

  9. Advanced Communication Processing Techniques

    NASA Astrophysics Data System (ADS)

    Scholtz, Robert A.

    This document contains the proceedings of the workshop Advanced Communication Processing Techniques, held May 14 to 17, 1989, near Ruidoso, New Mexico. Sponsored by the Army Research Office (under Contract DAAL03-89-G-0016) and organized by the Communication Sciences Institute of the University of Southern California, the workshop had as its objective to determine those applications of intelligent/adaptive communication signal processing that have been realized and to define areas of future research. We at the Communication Sciences Institute believe that there are two emerging areas which deserve considerably more study in the near future: (1) Modulation characterization, i.e., the automation of modulation format recognition so that a receiver can reliably demodulate a signal without using a priori information concerning the signal's structure, and (2) the incorporation of adaptive coding into communication links and networks. (Encoders and decoders which can operate with a wide variety of codes exist, but the way to utilize and control them in links and networks is an issue). To support these two new interest areas, one must have both a knowledge of (3) the kinds of channels and environments in which the systems must operate, and of (4) the latest adaptive equalization techniques which might be employed in these efforts.

  10. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M.

    1993-12-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated, and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  11. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated, and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883D, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  12. Advanced qualification techniques

    SciTech Connect

    Winokur, P.S.; Shaneyfelt, M.R.; Meisenheimer, T.L.; Fleetwood, D.M.

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML "builds in" the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish "process capability" is illustrated, and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  13. Advanced qualification techniques

    NASA Astrophysics Data System (ADS)

    Winokur, P. S.; Shaneyfelt, M. R.; Meisenheimer, T. L.; Fleetwood, D. M.

    1994-06-01

    This paper demonstrates use of the Qualified Manufacturers List (QML) methodology to qualify commercial and military microelectronics for use in space applications. QML 'builds in' the hardness of product through statistical process control (SPC) of technology parameters relevant to the radiation response, test structure to integrated circuit (IC) correlations, and techniques for extrapolating laboratory test results to low-dose-rate space scenarios. Each of these elements is demonstrated and shown to be a cost-effective alternative to expensive end-of-line IC testing. Several examples of test structure-to-IC correlations are provided and recent work on complications arising from transistor scaling and geometry is discussed. The use of a 10-keV x-ray wafer-level test system to support SPC and establish 'process capability' is illustrated and a comparison of 10-keV x-ray and Co-60 gamma irradiations is provided for a wide range of CMOS technologies. The x-ray tester is shown to be cost-effective and its use in lot acceptance/qualification is recommended. Finally, a comparison is provided between MIL-STD-883, Test Method 1019.4, which governs the testing of packaged semiconductor microcircuits in the DoD, and ESA/SCC Basic Specification No. 22900, Europe's Total Dose Steady-State Irradiation Test Method. Test Method 1019.4 focuses on conservative estimates of MOS hardness for space and tactical applications, while Basic Specification 22900 focuses on improved simulation of low-dose-rate space environments.

  14. Advanced Coating Removal Techniques

    NASA Technical Reports Server (NTRS)

    Seibert, Jon

    2006-01-01

    An important step in the repair and protection against corrosion damage is the safe removal of the oxidation and protective coatings without further damaging the integrity of the substrate. Two such methods that are proving to be safe and effective in this task are liquid nitrogen and laser removal operations. Laser technology used for the removal of protective coatings is currently being researched and implemented in various areas of the aerospace industry. Delivering thousands of focused energy pulses, the laser ablates the coating surface by heating and dissolving the material applied to the substrate. The metal substrate will reflect the laser and redirect the energy to any remaining protective coating, thus preventing any collateral damage the substrate may suffer throughout the process. Liquid nitrogen jets are comparable to blasting with an ultra-high-pressure water jet but without the residual liquid that requires collection and removal. As the liquid nitrogen reaches the surface it is transformed into gaseous nitrogen and reenters the atmosphere without any contamination to surrounding hardware. These innovative technologies simplify corrosion repair by eliminating hazardous chemicals and repetitive manual labor from the coating removal process. One very significant advantage is the reduction of particulate contamination exposure to personnel. With the removal of coatings adjacent to sensitive flight hardware, a benefit of each technique for the space program is that no contamination such as beads, water, or sanding residue is left behind when the job is finished. One primary concern is the safe removal of coatings from thin aluminum honeycomb face sheet. NASA recently conducted thermal testing on liquid nitrogen systems and found that no damage occurred on 1/6" aluminum substrates. Wright Patterson Air Force Base, in conjunction with Boeing and NASA, is currently testing the laser removal technique for process qualification. Other applications of liquid

  15. Advanced Wavefront Control Techniques

    SciTech Connect

    Olivier, S S; Brase, J M; Avicola, K; Thompson, C A; Kartz, M W; Winters, S; Hartley, R; Wihelmsen, J; Dowla, F V; Carrano, C J; Bauman, B J; Pennington, D M; Lande, D; Sawvel, R M; Silva, D A; Cooke, J B; Brown, C G

    2001-02-21

    In this project, work was performed in four areas: (1) advanced modeling tools for deformable mirrors, (2) low-order wavefront correctors with Alvarez lenses, (3) a direct phase-measuring heterodyne wavefront sensor, and (4) high-spatial-frequency wavefront control using spatial light modulators.

  16. Techniques in Advanced Language Teaching.

    ERIC Educational Resources Information Center

    Ager, D. E.

    1967-01-01

    For ease of presentation, advanced grammar teaching techniques are briefly considered under the headings of structuralism (belief in the effectiveness of presenting grammar rules) and contextualism (belief in the maximum use by students of what they know in the target language). The structuralist's problem of establishing a syllabus is discussed…

  17. Advances in wound debridement techniques.

    PubMed

    Nazarko, Linda

    2015-06-01

    Dead and devitalised tissue interferes with the process of wound healing. Debridement is a natural process that occurs in all wounds and is crucial to healing; it reduces the bacterial burden in a wound and promotes effective inflammatory responses that encourage the formation of healthy granulation tissue (Wolcott et al, 2009). Wound care should be part of holistic patient care. Recent advances in debridement techniques include: biosurgery, hydrosurgery, mechanical debridement, and ultrasound. Biosurgery and mechanical debridement can be practiced by nonspecialist nurses and can be provided in a patient's home, thus increasing the patient's access to debridement therapy and accelerating wound healing.

  18. Advanced techniques of laser telemetry

    NASA Astrophysics Data System (ADS)

    Donati, S.; Gilardini, A.

    The relationships which govern a laser telemeter, noise sources, and measurement accuracy with pulsed and sinusoidal intensity modulation techniques are discussed. Developments in telemetry instrumentation and optical detection are considered. Meteorological interferometers, geodimeters, and military telemeters are described. Propagation attenuation and signal-to-noise ratios are treated. It is shown that accuracy depends on the product of measurement time and received power. The frequency-scanning technique of CW and long-pulse telemetry, multifrequency techniques, pulse compression, and the vernier technique are outlined.

  19. Splitting advancement genioplasty: a new genioplasty technique.

    PubMed

    Celik, M; Tuncer, S; Büyükçayir, I

    1999-08-01

    A new genioplasty technique has been described and performed on 16 patients since 1995. The technique has been developed to avoid some undesired results of the current osseous genioplasty techniques and to achieve a more natural appearance in advancement genioplasty. According to the authors' technique, a rectangular part of the outer table of the mentum is split away from the mandible, and is advanced and fixated to the mandible. This technique can be used for advancement cases but not for reduction genioplasty. The technique was performed on 16 patients with only minor complications, including one case of wound dehiscence, one hematoma, and one case of osteomyelitis, which was managed with systemic antibiotic therapy. Aesthetic results were found to be satisfactory according to an evaluation by the authors. When the results were evaluated using pre- and postoperative photos, lip position and projection of the mentum were found to be natural in shape and appearance. During the late postoperative period, new bone formation between the advanced segment and the mandible was demonstrated radiographically. Advantages of the technique include more contact surface for bony healing, a natural position of the lower lip, more natural projection of the mentum, tridimensional movement of the mentum, and improvement in the soft tissue of the neck. The disadvantages of the technique are the potential risk of infection due to dead space from the advancement, manipulation problems during surgery, and possible mental nerve injury. Splitting advancement genioplasty was found to be a useful technique for advancement genioplasty, and is a more physiological osteotomy than most osseous genioplasty techniques. PMID:10454320

  20. Stitching Techniques Advance Optics Manufacturing

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Because NASA depends on the fabrication and testing of large, high-quality aspheric (nonspherical) optics for applications like the James Webb Space Telescope, it sought an improved method for measuring large aspheres. Through Small Business Innovation Research (SBIR) awards from Goddard Space Flight Center, QED Technologies, of Rochester, New York, upgraded and enhanced its stitching technology for aspheres. QED developed the SSI-A, which earned the company an R&D 100 award, and also developed a breakthrough machine tool called the aspheric stitching interferometer. The equipment is applied to advanced optics in telescopes, microscopes, cameras, medical scopes, binoculars, and photolithography.

  1. Advanced Spectroscopy Technique for Biomedicine

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan

    This chapter presents an overview of the applications of optical spectroscopy in biomedicine. We focus on the optical design aspects of advanced biomedical spectroscopy systems, Raman spectroscopy system in particular. Detailed components and system integration are provided. As examples, two real-time in vivo Raman spectroscopy systems, one for skin cancer detection and the other for endoscopic lung cancer detection, and an in vivo confocal Raman spectroscopy system for skin assessment are presented. The applications of Raman spectroscopy in cancer diagnosis of the skin, lung, colon, oral cavity, gastrointestinal tract, breast, and cervix are summarized.

  2. Advanced techniques in abdominal surgery.

    PubMed Central

    Monson, J R

    1993-01-01

    Almost every abdominal organ is now amenable to laparoscopic surgery. Laparoscopic appendicectomy is a routine procedure which also permits identification of other conditions initially confused with an inflamed appendix. However, assessment of appendiceal inflammation is more difficult. Almost all colonic procedures can be performed laparoscopically, at least partly, though resection for colonic cancer is still controversial. For simple patch repair of perforated duodenal ulcers laparoscopy is ideal, and inguinal groin hernia can be repaired satisfactorily with a patch of synthetic mesh. Many upper abdominal procedures, however, still take more time than the open operations. These techniques reduce postoperative pain and the incidence of wound infections and allow a much earlier return to normal activity compared with open surgery. They have also brought new disciplines: surgeons must learn different hand-eye coordination, meticulous haemostasis is needed to maintain picture quality, and delivery of specimens may be problematic. The widespread introduction of laparoscopic techniques has emphasised the need for adequate training (operations that were straight-forward open procedures may require considerable laparoscopic expertise) and has raised questions about trainee surgeons acquiring adequate experience of open procedures. PMID:8257893

  3. Advanced prosthetic techniques for below knee amputations.

    PubMed

    Staats, T B

    1985-02-01

    Recent advances in the evaluation of the amputation stump, the materials that are available for prosthetic application, techniques of improving socket fit, and prosthetic finishings promise to dramatically improve amputee function. Precision casting techniques for providing optimal fit of the amputation stump using materials such as alginate are described. The advantages of transparent check sockets for fitting the complicated amputation stump are described. Advances in research that promise to provide more functional prosthetic feet and faster and more reliable socket molding are the use of CAD-CAM (computer aided design-computer aided manufacturing) and the use of gait analysis techniques to aid in the alignment of the prosthesis after socket fitting. Finishing techniques to provide a more natural appearing prosthesis are described. These advances will gradually spread to the entire prosthetic profession.

  4. Simulations of motor unit number estimation techniques

    NASA Astrophysics Data System (ADS)

    Major, Lora A.; Jones, Kelvin E.

    2005-06-01

    Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
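
    The abstract describes the motivation but not the model equations; as a toy illustration of the basic MUNE arithmetic (maximal CMAP amplitude divided by an estimate of the mean single-unit amplitude), with made-up unit counts and amplitudes, a sketch might look like this.

    ```python
    # Toy incremental-stimulation MUNE sketch with made-up amplitudes:
    # MUNE = maximal CMAP amplitude / mean single motor unit potential (SMUP).
    # Not the neuromuscular model described in the abstract.
    import numpy as np

    rng = np.random.default_rng(3)
    n_units_true = 120
    smup = rng.lognormal(mean=0.0, sigma=0.5, size=n_units_true)  # SMUP amplitudes (a.u.)

    cmap_max = smup.sum()                                # maximal compound response
    sampled = rng.choice(smup, size=10, replace=False)   # units recruited at low stimulus
    mune = cmap_max / sampled.mean()

    print("true motor unit count:", n_units_true)
    print("MUNE estimate:        ", round(mune, 1))
    ```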

  5. Advanced sialoendoscopy techniques, rare findings, and complications.

    PubMed

    Nahlieli, Oded

    2009-12-01

    This article presents and discusses advanced minimally invasive sialoendoscopy and combined methods: endoscopy, endoscopic-assisted techniques, and external-lithotripsy combined procedures. It also presents rare situations and complications encountered during sialoendoscopic procedures. Sialoendoscopy is a relatively novel technique, which adds significant new dimensions to the surgeon's armamentarium for management of inflammatory salivary gland diseases. Because of the rapid development in minimally invasive surgical techniques, surgeons are capable of more facilely treating complicated inflammatory and obstructive conditions of the salivary glands.

  6. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  7. Hybrid mesh generation using advancing reduction technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study presents an extension of the application of the advancing reduction technique to the hybrid mesh generation. The proposed algorithm is based on a pre-generated rectangle mesh (RM) with a certain orientation. The intersection points between the two sets of perpendicular mesh lines in RM an...

  8. Recent advancement of turbulent flow measurement techniques

    NASA Technical Reports Server (NTRS)

    Battle, T.; Wang, P.; Cheng, D. Y.

    1974-01-01

    Advancements of the fluctuating density gradient cross beam laser Schlieren technique, the fluctuating line-reversal temperature measurement and the development of the two-dimensional drag-sensing probe to a three-dimensional drag-sensing probe are discussed. The three-dimensionality of the instantaneous momentum vector can shed some light on the nature of turbulence especially with swirling flow. All three measured fluctuating quantities (density, temperature, and momentum) can provide valuable information for theoreticians.

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  10. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  11. Advanced Tools and Techniques for Formal Techniques in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    2005-01-01

    This is the final technical report for grant number NAG-1-02101. The title of this grant was "Advanced Tools and Techniques for Formal Techniques In Aerospace Systems". The principal investigator on this grant was Dr. John C. Knight of the Computer Science Department, University of Virginia, Charlottesville, Virginia 22904-4740. This report summarizes activities under the grant during the period 7/01/2002 to 9/30/2004. This report is organized as follows. In section 2, the technical background of the grant is summarized. Section 3 lists accomplishments and section 4 lists students funded under the grant. In section 5, we present a list of presentations given at various academic and research institutions about the research conducted. Finally, a list of publications generated under this grant is included in section 6.

  12. Advanced decision aiding techniques applicable to space

    NASA Technical Reports Server (NTRS)

    Kruchten, Robert J.

    1987-01-01

    RADC has had an intensive program to show the feasibility of applying advanced technology to Air Force decision aiding situations. Some aspects of the program, such as Satellite Autonomy, are directly applicable to space systems. For example, RADC has shown the feasibility of decision aids that combine the advantages of laser disks and computer generated graphics; decision aids that interface object-oriented programs with expert systems; decision aids that solve path optimization problems; etc. Some of the key techniques that could be used in space applications are reviewed. Current applications are reviewed along with their advantages and disadvantages, and examples are given of possible space applications. The emphasis is to share RADC experience in decision aiding techniques.

  13. Multidirectional mobilities: Advanced measurement techniques and applications

    NASA Astrophysics Data System (ADS)

    Ivarsson, Lars Holger

    Today high noise-and-vibration comfort has become a quality sign of products in sectors such as the automotive industry, aircraft, components, households and manufacturing. Consequently, already in the design phase of products, tools are required to predict the final vibration and noise levels. These tools have to be applicable over a wide frequency range with sufficient accuracy. During recent decades a variety of tools have been developed such as transfer path analysis (TPA), input force estimation, substructuring, coupling by frequency response functions (FRF) and hybrid modelling. While these methods have a well-developed theoretical basis, their application combined with experimental data often suffers from a lack of information concerning rotational DOFs. In order to measure response in all 6 DOFs (including rotation), a sensor has been developed, whose special features are discussed in the thesis. This transducer simplifies the response measurements, although in practice the excitation of moments appears to be more difficult. Several excitation techniques have been developed to enable measurement of multidirectional mobilities. For rapid and simple measurement of the loaded mobility matrix, a MIMO (Multiple Input Multiple Output) technique is used. The technique has been tested and validated on several structures of different complexity. A second technique for measuring the loaded 6-by-6 mobility matrix has been developed. This technique employs a model of the excitation set-up, and with this model the mobility matrix is determined from sequential measurements. Measurements on "real" structures show that both techniques give results of similar quality, and both are recommended for practical use. As a further step, a technique for measuring the unloaded mobilities is presented. It employs the measured loaded mobility matrix in order to calculate compensation forces and moments, which are later applied in order to compensate for the loading of the
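
    The 6-by-6 mobility measurement set-up in the thesis is not reproducible from the abstract; as a single-input illustration only, the sketch below estimates a frequency response function (mobility) with the standard H1 estimator on a simulated single-degree-of-freedom system with assumed parameters.

    ```python
    # Single-input FRF (mobility) estimation sketch using the H1 estimator
    # H1(f) = S_fv(f) / S_ff(f) on a simulated SDOF system with assumed
    # parameters; not the 6-DOF measurement technique described above.
    import numpy as np
    from scipy.signal import welch, csd, lsim, TransferFunction

    fs = 1024
    rng = np.random.default_rng(7)
    force = rng.standard_normal(fs * 60)                    # 60 s of random excitation
    t = np.arange(force.size) / fs

    m, c, k = 1.0, 20.0, (2 * np.pi * 50.0) ** 2            # resonance near 50 Hz (assumed)
    mobility = TransferFunction([1.0, 0.0], [m, c, k])      # force -> velocity
    _, velocity, _ = lsim(mobility, U=force, T=t)
    velocity += 1e-4 * rng.standard_normal(velocity.size)   # measurement noise

    f, S_ff = welch(force, fs=fs, nperseg=4096)
    _, S_fv = csd(force, velocity, fs=fs, nperseg=4096)
    H1 = S_fv / S_ff
    print("peak mobility near", f[np.abs(H1).argmax()], "Hz")
    ```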

  14. Advanced AE Techniques in Composite Materials Research

    NASA Technical Reports Server (NTRS)

    Prosser, William H.

    1996-01-01

    Advanced, waveform based acoustic emission (AE) techniques have been successfully used to evaluate damage mechanisms in laboratory testing of composite coupons. An example is presented in which the initiation of transverse matrix cracking was monitored. In these tests, broad band, high fidelity acoustic sensors were used to detect signals which were then digitized and stored for analysis. Analysis techniques were based on plate mode wave propagation characteristics. This approach, more recently referred to as Modal AE, provides an enhanced capability to discriminate and eliminate noise signals from those generated by damage mechanisms. This technique also allows much more precise source location than conventional, threshold crossing arrival time determination techniques. To apply Modal AE concepts to the interpretation of AE on larger composite specimens or structures, the effects of modal wave propagation over larger distances and through structural complexities must be well characterized and understood. To demonstrate these effects, measurements of the far field, peak amplitude attenuation of the extensional and flexural plate mode components of broad band simulated AE signals in large composite panels are discussed. These measurements demonstrated that the flexural mode attenuation is dominated by dispersion effects. Thus, it is significantly affected by the thickness of the composite plate. Furthermore, the flexural mode attenuation can be significantly larger than that of the extensional mode even though its peak amplitude consists of much lower frequency components.

  15. Advanced flow MRI: emerging techniques and applications.

    PubMed

    Markl, M; Schnell, S; Wu, C; Bollache, E; Jarvis, K; Barker, A J; Robinson, J D; Rigsby, C K

    2016-08-01

    Magnetic resonance imaging (MRI) techniques provide non-invasive and non-ionising methods for the highly accurate anatomical depiction of the heart and vessels throughout the cardiac cycle. In addition, the intrinsic sensitivity of MRI to motion offers the unique ability to acquire spatially registered blood flow simultaneously with the morphological data, within a single measurement. In clinical routine, flow MRI is typically accomplished using methods that resolve two spatial dimensions in individual planes and encode the time-resolved velocity in one principal direction, typically oriented perpendicular to the two-dimensional (2D) section. This review describes recently developed advanced MRI flow techniques, which allow for more comprehensive evaluation of blood flow characteristics, such as real-time flow imaging, 2D multiple-venc phase contrast MRI, four-dimensional (4D) flow MRI, quantification of complex haemodynamic properties, and highly accelerated flow imaging. Emerging techniques and novel applications are explored. In addition, applications of these new techniques for the improved evaluation of cardiovascular (aorta, pulmonary arteries, congenital heart disease, atrial fibrillation, coronary arteries) as well as cerebrovascular disease (intra-cranial arteries and veins) are presented. PMID:26944696

  16. Advanced Bode Plot Techniques for Ultrasonic Transducers

    NASA Astrophysics Data System (ADS)

    DeAngelis, D. A.; Schulze, G. W.

    The Bode plot, displayed as either impedance or admittance versus frequency, is the most basic test used by ultrasonic transducer designers. With simplicity and ease-of-use, Bode plots are ideal for baseline comparisons such as spacing of parasitic modes or impedance, but quite often the subtleties that manifest as poor process control are hard to interpret or are nonexistent. In-process testing of transducers is time consuming for quantifying statistical aberrations, and assessments made indirectly via the workpiece are difficult. This research investigates the use of advanced Bode plot techniques to compare ultrasonic transducers with known "good" and known "bad" process performance, with the goal of a-priori process assessment. These advanced techniques expand from the basic constant voltage versus frequency sweep to include constant current and constant velocity interrogated locally on transducer or tool; they also include up and down directional frequency sweeps to quantify hysteresis effects like jumping and dropping phenomena. The investigation focuses solely on the common PZT8 piezoelectric material used with welding transducers for semiconductor wire bonding. Several metrics are investigated such as impedance, displacement/current gain, velocity/current gain, displacement/voltage gain and velocity/voltage gain. The experimental and theoretical research methods include Bode plots, admittance loops, laser vibrometry and coupled-field finite element analysis.
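
    The measurements in the paper are experimental; purely as an illustration of the underlying impedance Bode sweep, the sketch below evaluates a hypothetical Butterworth-Van Dyke equivalent circuit with assumed element values (not the PZT8 transducers studied) and reports its resonance and antiresonance.

    ```python
    # Impedance Bode sweep of a hypothetical Butterworth-Van Dyke transducer model:
    # motional branch (R1, L1, C1) in parallel with shunt capacitance C0.
    # Element values are assumed for illustration, not measured PZT8 data.
    import numpy as np

    R1, L1, C1, C0 = 20.0, 50e-3, 5e-10, 4e-9        # assumed equivalent-circuit values
    f = np.linspace(20e3, 40e3, 20_000)
    w = 2 * np.pi * f

    Z_motional = R1 + 1j * w * L1 + 1 / (1j * w * C1)
    Z = 1 / (1 / Z_motional + 1j * w * C0)           # motional branch parallel with C0

    f_res = f[np.abs(Z).argmin()]                    # series (resonance) frequency
    f_anti = f[np.abs(Z).argmax()]                   # parallel (antiresonance) frequency
    print(f"resonance ~{f_res / 1e3:.2f} kHz, antiresonance ~{f_anti / 1e3:.2f} kHz")
    ```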

  17. Advances in nanodiagnostic techniques for microbial agents.

    PubMed

    Syed, Muhammad Ali

    2014-01-15

    Infectious diseases account for millions of sufferings and deaths in both developing as well as developed countries, with a substantial economic loss. Massive increase in world population and international travel has facilitated their spread from one part of the world to other areas, making them one of the most significant global health risks. Furthermore, detection of bioterrorism agents in water, food and environmental samples, as well as in travelers' baggage, is a great challenge of the time for security purposes. Prevention strategies against infectious agents demand rapid and accurate detection and identification of the causative agents with the highest sensitivity, which should be equally available in different parts of the globe. Similarly, rapid and early diagnosis of infectious diseases has always been indispensable for their prompt cure and management, which has stimulated scientists to develop highly sophisticated techniques over centuries, and the efforts continue unabated. Conventional diagnostic techniques are time consuming, tedious, expensive, less sensitive, and unsuitable for field situations. Nanodiagnostic assays have been promising for early, sensitive, point-of-care and cost-effective detection of microbial agents. There has been explosive research in this area of science in the last two decades, yielding highly fascinating results. This review highlights some of the advancements made in the field of nanotechnology-based assays for microbial detection since 2005, along with providing the basic understanding. PMID:24012709

  18. Advanced techniques in current signature analysis

    SciTech Connect

    Smith, S.F.; Castleberry, K.N.

    1992-03-01

    In general, both ac and dc motors can be characterized as weakly nonlinear systems, in which both linear and nonlinear effects occur simultaneously. Fortunately, the nonlinearities are generally well behaved and understood and can be handled via several standard mathematical techniques already well developed in the systems modeling area; examples are piecewise linear approximations and Volterra series representations. Field measurements of numerous motors and motor-driven systems confirm the rather complex nature of motor current spectra and illustrate both linear and nonlinear effects (including line harmonics and modulation components). Although previous current signature analysis (CSA) work at Oak Ridge and other sites has principally focused on the modulation mechanisms and detection methods (AM, PM, and FM), more recent studies have been conducted on linear spectral components (those appearing in the electric current at their actual frequencies and not as modulation sidebands). For example, large axial-flow compressors (approximately 3300 hp) in the US gaseous diffusion uranium enrichment plants exhibit running-speed (approximately 20 Hz) and high-frequency vibrational information (>1 kHz) in their motor current spectra. Several signal-processing techniques developed to facilitate analysis of these components, including specialized filtering schemes, are presented. Finally, concepts for the designs of advanced digitally based CSA units are offered, which should serve to foster the development of much more computationally capable "smart" CSA instrumentation in the next several years.
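
    As a minimal numerical illustration of the modulation components mentioned above (a synthetic waveform with assumed line and running-speed frequencies, not plant data), the sketch below shows how a weak 20 Hz amplitude modulation of a 60 Hz line current appears as sidebands in the current spectrum.

    ```python
    # Synthetic current-signature example: a 60 Hz line current with weak 20 Hz
    # amplitude modulation produces sidebands at 40 Hz and 80 Hz.
    # Illustrative only; not the compressor measurements discussed above.
    import numpy as np

    fs, T = 2000, 10.0
    t = np.arange(0.0, T, 1 / fs)
    f_line, f_mod = 60.0, 20.0
    current = (1.0 + 0.02 * np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_line * t)

    spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for target in (f_line - f_mod, f_line, f_line + f_mod):
        k = np.argmin(np.abs(freqs - target))
        print(f"{target:5.1f} Hz  relative amplitude {spectrum[k] / spectrum.max():.3f}")
    ```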

  19. Inverse lithography technique for advanced CMOS nodes

    NASA Astrophysics Data System (ADS)

    Villaret, Alexandre; Tritchkov, Alexander; Entradas, Jorge; Yesilada, Emek

    2013-04-01

    Resolution Enhancement Techniques have continuously improved over the last decade, driven by the ever-growing constraints of the lithography process. Despite the large number of RETs applied, some hotspot configurations remain challenging for advanced nodes due to aggressive design rules. Inverse Lithography Technique (ILT) is evaluated here as a substitute to the dense OPC baseline. Indeed, ILT has been known for several years for its near-to-ideal mask quality, while also being potentially more time consuming in terms of OPC run and mask processing. We chose to evaluate Mentor Graphics' ILT engine "pxOPC" on both line and via hotspot configurations. These hotspots were extracted from real 28nm test cases where the dense OPC solution is not satisfactory. For both layer types, the reference OPC consists of a dense OPC engine coupled to a rule-based and/or model-based assist generation method. The same CM1 model is used for the reference and the ILT OPC. ILT quality improvement is presented through Optical Rule Check (ORC) results with various adequate detectors. Several mask manufacturing rule constraints (MRC) are considered for the ILT solution and their impact on process ability is checked after mask processing. A hybrid OPC approach allowing localized ILT usage is presented in order to optimize both quality and runtime. A real mask is prepared and fabricated with this method. Finally, results analyzed on silicon are presented to compare localized ILT to the reference dense OPC.

  20. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for by the software using the estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and ensuing software manipulations are analyzed and related numerical results are presented also.

  1. Recent advances in DNA sequencing techniques

    NASA Astrophysics Data System (ADS)

    Singh, Rama Shankar

    2013-06-01

    Successful mapping of the draft human genome in 2001 and more recent mapping of the human microbiome genome in 2012 have relied heavily on the parallel processing of the second generation/Next Generation Sequencing (NGS) DNA machines at a cost of several millions dollars and long computer processing times. These have been mainly biochemical approaches. Here a system analysis approach is used to review these techniques by identifying the requirements, specifications, test methods, error estimates, repeatability, reliability and trends in the cost reduction. The first generation, NGS and the Third Generation Single Molecule Real Time (SMART) detection sequencing methods are reviewed. Based on the National Human Genome Research Institute (NHGRI) data, the achieved cost reduction of 1.5 times per yr. from Sep. 2001 to July 2007; 7 times per yr., from Oct. 2007 to Apr. 2010; and 2.5 times per yr. from July 2010 to Jan 2012 are discussed.

  2. Impacts of advanced manufacturing technology on parametric estimating

    NASA Astrophysics Data System (ADS)

    Hough, Paul G.

    1989-12-01

    The introduction of advanced manufacturing technology in the aerospace industry poses serious challenges for government cost analysts. Traditionally, the analysts have relied on parametric estimating techniques for both planning and budgeting. Despite its problems, this approach has proven to be a remarkably useful and robust tool for estimating new weapon system costs. However, rapid improvements in both product and process technology could exacerbate current difficulties, and diminish the utility of the parametric approach. This paper reviews some weaknesses associated with parametrics, then proceeds to examine how specific aspects of the factory of the future may further impact parametric estimating, and suggests avenues of research for their resolution. This paper is an extended version of Cost Estimating for the Factory of the Future. Parametric estimating is a method by which aggregated costs are derived as a function of high-level product characteristics or parameters. The resulting equations are known as cost estimating relationships (CERs). Such equations are particularly useful when detailed technical specifications are not available.
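
    As a minimal sketch of the kind of CER described above, the example below fits a weight-based power-law relationship, cost = a * weight^b, by linear regression in log-log space. The data points are invented for illustration; they are not drawn from any historical database referenced in the paper.

    ```python
    # Fitting a weight-based cost estimating relationship (CER) of the form
    # cost = a * weight^b by linear regression in log-log space.
    # The data points below are invented for illustration only.
    import numpy as np

    weight = np.array([150.0, 320.0, 540.0, 900.0, 1500.0, 2600.0])  # kg (hypothetical)
    cost = np.array([12.0, 21.0, 30.0, 44.0, 65.0, 98.0])            # $M (hypothetical)

    b, log_a = np.polyfit(np.log(weight), np.log(cost), deg=1)
    a = np.exp(log_a)
    print(f"CER: cost = {a:.3f} * weight^{b:.3f}")
    print(f"prediction for a 1200 kg system: {a * 1200.0 ** b:.1f} $M")
    ```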

  3. Advances in procedural techniques--antegrade.

    PubMed

    Wilson, William; Spratt, James C

    2014-05-01

    There have been many technological advances in antegrade CTO PCI, but perhaps the most important has been the evolution of the "hybrid" approach, where ideally there exists a seamless interplay of antegrade wiring, antegrade dissection re-entry and retrograde approaches as dictated by procedural factors. Antegrade wire escalation with intimal tracking remains the preferred initial strategy in short CTOs without proximal cap ambiguity. More complex CTOs, however, usually require either a retrograde or an antegrade dissection re-entry approach, or both. Antegrade dissection re-entry is well suited to long occlusions where there is a healthy distal vessel and limited "interventional" collaterals. Early use of a dissection re-entry strategy will increase success rates, reduce complications, and minimise radiation exposure, contrast use as well as procedural times. Antegrade dissection can be achieved with a knuckle wire technique or the CrossBoss catheter, whilst re-entry will be achieved in the most reproducible and reliable fashion by the Stingray balloon/wire. It should be avoided where there is potential for loss of large side branches. It remains to be seen whether use of newer dissection re-entry strategies will be associated with lower restenosis rates compared with the more uncontrolled subintimal tracking strategies such as STAR, and whether stent insertion in the subintimal space is associated with higher rates of late stent malapposition and stent thrombosis. It is to be hoped that the algorithms which have been developed to guide CTO operators allow for a better transfer of knowledge and skills to increase uptake and acceptance of CTO PCI as a whole. PMID:24694104

  4. Removing baseline flame's spectrum by using advanced recovering spectrum techniques.

    PubMed

    Arias, Luis; Sbarbaro, Daniel; Torres, Sergio

    2012-09-01

    In this paper, a novel automated algorithm to estimate and remove the continuous baseline from measured flame spectra is proposed. The algorithm estimates the continuous background based on previous information obtained from a learning database of continuous flame spectra. Then, the discontinuous flame emission is calculated by subtracting the estimated continuous baseline from the measured spectrum. The key issue subtending the learning database is that the continuous flame emissions are predominant in the sooty regions, in absence of discontinuous radiation. The proposed algorithm was tested using natural gas and bio-oil flames spectra at different combustion conditions, and the goodness-of-fit coefficient (GFC) quality metric was used to quantify the performance in the estimation process. Additionally, the commonly used first derivative method (FDM) for baseline removing was applied to the same testing spectra in order to compare and to evaluate the proposed technique. The achieved results show that the proposed method is a very attractive tool for designing advanced combustion monitoring strategies of discontinuous emissions. PMID:22945158
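
    The learning-database estimator itself is not given in the abstract; as a simplified stand-in, the sketch below removes a smooth continuum from a synthetic spectrum with a low-order polynomial fit and scores the recovered discontinuous emission with the goodness-of-fit coefficient (GFC), taken here to be the normalized inner product of the reference and recovered spectra.

    ```python
    # Simplified baseline-removal sketch on a synthetic "flame" spectrum:
    # a smooth continuum plus two narrow emission peaks. The continuum is
    # approximated with a low-order polynomial (a crude stand-in for the
    # learning-database estimator in the paper) and scored with the GFC metric.
    import numpy as np

    wl = np.linspace(400.0, 800.0, 1000)                 # wavelength grid [nm]
    continuum = 1e-3 * (wl - 350.0) ** 1.5               # smooth baseline
    peaks = (2.0 * np.exp(-0.5 * ((wl - 589.0) / 2.0) ** 2)
             + 1.2 * np.exp(-0.5 * ((wl - 767.0) / 2.0) ** 2))
    measured = continuum + peaks + 0.02 * np.random.default_rng(4).standard_normal(wl.size)

    baseline_est = np.polyval(np.polyfit(wl, measured, deg=3), wl)
    recovered = measured - baseline_est

    gfc = abs(np.dot(peaks, recovered)) / (np.linalg.norm(peaks) * np.linalg.norm(recovered))
    print("GFC between true and recovered discontinuous emission:", round(gfc, 3))
    ```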

  5. Bringing Advanced Computational Techniques to Energy Research

    SciTech Connect

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  6. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  7. Two biased estimation techniques in linear regression: Application to aircraft

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav

    1988-01-01

    Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
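
    A minimal sketch of one of the two biased estimators discussed, principal components regression, applied to synthetic collinear data (the flight-test data are not reproduced here); the variable names and the choice of retained components are illustrative.

    ```python
    # Sketch: principal components regression (PCR) on collinear regressors.
    # Synthetic data stand in for the flight-test measurements in the paper.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)      # nearly collinear with x1
    x3 = rng.standard_normal(n)
    X = np.column_stack([x1, x2, x3])
    y = 2.0 * x1 + 2.0 * x2 - 1.0 * x3 + 0.1 * rng.standard_normal(n)

    # Center, then diagnose collinearity from the singular values of X.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    print("condition number:", s[0] / s[-1])     # a large value signals collinearity

    # Ordinary least squares (unstable when collinear).
    beta_ols, *_ = np.linalg.lstsq(Xc, y - y.mean(), rcond=None)

    # PCR: regress on the leading k principal components, then map back.
    k = 2                                        # drop the near-null direction
    Z = Xc @ Vt[:k].T                            # principal-component scores
    gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    beta_pcr = Vt[:k].T @ gamma                  # coefficients in original variables

    print("OLS coefficients:", np.round(beta_ols, 3))
    print("PCR coefficients:", np.round(beta_pcr, 3))
    ```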

  8. Advances in laparoscopic urologic surgery techniques

    PubMed Central

    Abdul-Muhsin, Haidar M.; Humphreys, Mitchell R.

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  9. Advances in laparoscopic urologic surgery techniques.

    PubMed

    Abdul-Muhsin, Haidar M; Humphreys, Mitchell R

    2016-01-01

    The last two decades witnessed the inception and exponential implementation of key technological advancements in laparoscopic urology. While some of these technologies thrived and became part of daily practice, others are still hindered by major challenges. This review was conducted through a comprehensive literature search in order to highlight some of the most promising technologies in laparoscopic visualization, augmented reality, and insufflation. Additionally, this review will provide an update regarding the current status of single-site and natural orifice surgery in urology. PMID:27134743

  10. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient.

    PubMed

    Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M

    2011-06-01

    Microvascular permeability to water is characterized by the microvascular filtration coefficient (K(f)). Conventional gravimetric techniques to estimate K(f) rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate K(f) estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce K(f) from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to K(f) and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of K(f) in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique.

  11. Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient.

    PubMed

    Dongaonkar, R M; Laine, G A; Stewart, R H; Quick, C M

    2011-06-01

    Microvascular permeability to water is characterized by the microvascular filtration coefficient (K(f)). Conventional gravimetric techniques to estimate K(f) rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate K(f) estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce K(f) from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to K(f) and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of K(f) in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245

  12. Feedback Techniques and Ecloud Instabilites - Design Estimates

    SciTech Connect

    Fox, J. D.; Mastorides, T.; Ndabashimiye, G.; Rivetta, C.; Van Winkle, D.; Byrd, J.; Vay, J.-L.; Hofle, W.; Rumolo, G.; De Maria, R.

    2009-05-18

    The SPS at high intensities exhibits transverse single-bunch instabilities with signatures consistent with an Ecloud driven instability. While the SPS has a coupled-bunch transverse feedback system, control of Ecloud-driven motion requires a much wider control bandwidth capable of sensing and controlling motion within each bunched beam. This paper draws beam dynamics data from the measurements and simulations of this SPS instability, and estimates system requirements for a feedback system with 2-4 GS/sec. sampling rates to damp Ecloud-driven transverse motion in the SPS at intensities desired for high-current LHC operation.

  13. Advanced Techniques for Power System Identification from Measured Data

    SciTech Connect

    Pierre, John W.; Wies, Richard; Trudnowski, Daniel

    2008-11-25

    Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data, focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field-measured data and through simulation. Experimental data from the western area power system were provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data were provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field-measured data. Subspace-based methods have been used to improve previous results from block processing
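
    As a minimal illustration of what a mode meter produces (not the project's specific block, adaptive, or subspace algorithms), the sketch below fits a single damped sinusoid to a simulated ringdown and converts the fitted pole into a mode frequency and damping ratio; the 0.4 Hz mode, 5% damping, and PMU rate are assumed values.

    ```python
    # Sketch: estimate an electromechanical mode's frequency and damping ratio
    # by fitting a damped sinusoid to a simulated ringdown (illustrative
    # parameters; not the project's field data or its specific algorithms).
    import numpy as np
    from scipy.optimize import curve_fit

    fs = 30.0                                    # PMU reporting rate, samples/s
    t = np.arange(0, 20, 1 / fs)

    def ringdown(t, A, sigma, f, phi):
        return A * np.exp(sigma * t) * np.cos(2 * np.pi * f * t + phi)

    # Simulated 0.4 Hz inter-area mode with a 5% damping ratio plus noise.
    f_true, zeta_true = 0.4, 0.05
    wn = 2 * np.pi * f_true / np.sqrt(1 - zeta_true**2)
    sigma_true = -zeta_true * wn
    rng = np.random.default_rng(2)
    y = ringdown(t, 1.0, sigma_true, f_true, 0.3) + 0.05 * rng.standard_normal(t.size)

    popt, pcov = curve_fit(ringdown, t, y, p0=[1.0, -0.1, 0.5, 0.0])
    A_hat, sigma_hat, f_hat, phi_hat = popt
    zeta_hat = -sigma_hat / np.hypot(sigma_hat, 2 * np.pi * f_hat)
    print(f"frequency ~ {f_hat:.3f} Hz, damping ratio ~ {100 * zeta_hat:.1f} %")
    # The diagonal of pcov gives parameter variances, i.e. a simple confidence measure.
    ```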

  14. [Advanced online search techniques and dedicated search engines for physicians].

    PubMed

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  15. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  16. Advanced optical imaging techniques for neurodevelopment.

    PubMed

    Wu, Yicong; Christensen, Ryan; Colón-Ramos, Daniel; Shroff, Hari

    2013-12-01

    Over the past decade, developmental neuroscience has been transformed by the widespread application of confocal and two-photon fluorescence microscopy. Even greater progress is imminent, as recent innovations in microscopy now enable imaging with increased depth, speed, and spatial resolution; reduced phototoxicity; and in some cases without external fluorescent probes. We discuss these new techniques and emphasize their dramatic impact on neurobiology, including the ability to image neurons at depths exceeding 1 mm, to observe neurodevelopment noninvasively throughout embryogenesis, and to visualize neuronal processes or structures that were previously too small or too difficult to target with conventional microscopy.

  17. Advanced Optical Imaging Techniques for Neurodevelopment

    PubMed Central

    Wu, Yicong; Christensen, Ryan; Colón-Ramos, Daniel; Shroff, Hari

    2013-01-01

    Over the past decade, developmental neuroscience has been transformed by the widespread application of confocal and two-photon fluorescence microscopy. Even greater progress is imminent, as recent innovations in microscopy now enable imaging with increased depth, speed, and spatial resolution; reduced phototoxicity; and in some cases without external fluorescent probes. We discuss these new techniques and emphasize their dramatic impact on neurobiology, including the ability to image neurons at depths exceeding 1 mm, to observe neurodevelopment noninvasively throughout embryogenesis, and to visualize neuronal processes or structures that were previously too small or too difficult to target with conventional microscopy. PMID:23831260

  18. Advanced ultrasonic techniques for local tumor hyperthermia.

    PubMed

    Lele, P P

    1989-05-01

    Scanned, intensity-modulated, focused ultrasound (SIMFU) presently is the modality of choice for localized, controlled heating of deep as well as superficial tumors noninvasively. With the present SIMFU system, it was possible to heat 88 per cent of deep tumors up to 12 cm in depth and 15 cm in diameter, to 43 degrees C in 3 to 4 minutes. The infiltrative tumor margins could be heated to the desired therapeutic temperature. The temperature outside the treatment field fell off sharply. Excellent objective responses were obtained without local or systemic toxicity. Multiinstitutional clinical trials of local hyperthermia by this promising technique are clearly warranted.

  19. Air pollution monitoring by advanced spectroscopic techniques.

    PubMed

    Hodgeson, J A; McClenny, W A; Hanst, P L

    1973-10-19

    The monitoring requirements related to air pollution are many and varied. The molecules of concern differ greatly in their chemical and physical properties, in the nature of their environment, and in their concentration ranges. Furthermore, the application may have specific requirements such as rapid response time, ultrasensitivity, multipollutant capability, or capability for remote measurements. For these reasons, no single spectroscopic technique appears to offer a panacea for all monitoring needs. Instead we have attempted to demonstrate in the above discussion that, regardless of the difficulty and complexity of the monitoring problems, spectroscopy offers many tools by which such problems may be solved.

  20. Techniques for estimating Space Station aerodynamic characteristics

    NASA Technical Reports Server (NTRS)

    Thomas, Richard E.

    1993-01-01

    A method was devised and calculations were performed to determine the effects of reflected molecules on the aerodynamic force and moment coefficients for a body in free molecule flow. A procedure was developed for determining the velocity and temperature distributions of molecules reflected from a surface of arbitrary momentum and energy accommodation. A system of equations, based on momentum and energy balances for the surface, incident, and reflected molecules, was solved by a numerical optimization technique. The minimization of a 'cost' function, developed from the set of equations, resulted in the determination of the defining properties of the flow reflected from the arbitrary surface. The properties used to define both the incident and reflected flows were: average temperature of the molecules in the flow, angle of the flow with respect to a vector normal to the surface, and the molecular speed ratio. The properties of the reflected flow were used to calculate the contribution of multiply reflected molecules to the force and moments on a test body in the flow. The test configuration consisted of two flat plates joined along one edge at a right angle to each other. When force and moment coefficients of this 90 deg concave wedge were compared to results that did not include multiple reflections, it was found that multiple reflections could nearly double lift and drag coefficients, with nearly a 50 percent increase in pitching moment for cases with specular or nearly specular accommodation. The cases of diffuse or nearly diffuse accommodation often had minor reductions in axial and normal forces when multiple reflections were included. There were several cases of intermediate accommodation where the addition of multiple reflection effects more than tripled the lift coefficient over the convex technique.

  1. COMPARISON OF RECURSIVE ESTIMATION TECHNIQUES FOR POSITION TRACKING RADIOACTIVE SOURCES

    SciTech Connect

    K. MUSKE; J. HOWSE

    2000-09-01

    This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that recursive least squares estimation significantly outperforms the other techniques due to the severe model nonlinearity.
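
    A minimal sketch in the spirit of the least-squares approach compared in the paper: detector count rates follow an inverse-square model and scipy.optimize.least_squares recovers the source position. The detector layout, source strength, and Poisson noise are illustrative assumptions, not the paper's setup.

    ```python
    # Sketch: locate a radioactive point source from detector count rates using
    # nonlinear least squares with an inverse-square intensity model (detector
    # layout, source strength, and noise are illustrative assumptions).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    detectors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # m
    S_true = 1.0e4                               # source strength (counts*m^2/s)
    p_true = np.array([3.2, 1.7])                # true source position (m)

    def predicted_rates(p, S):
        d2 = np.sum((detectors - p) ** 2, axis=1)
        return S / d2                            # inverse-square law

    measured = rng.poisson(predicted_rates(p_true, S_true)).astype(float)

    def residuals(theta):
        x, y, S = theta
        return predicted_rates(np.array([x, y]), S) - measured

    sol = least_squares(residuals, x0=[2.0, 2.0, 5.0e3])
    print("estimated position:", np.round(sol.x[:2], 2), "true:", p_true)
    ```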

  2. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication (f4) and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and also on empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.

  3. Advanced automated char image analysis techniques

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Char morphology is an important characteristic when attempting to understand coal behavior and coal burnout. In this study, an augmented algorithm has been proposed to identify char types using image analysis. On the basis of a series of image processing steps, a char image is singled out from the whole image, which then allows the important major features of the char particle to be measured, including size, porosity, and wall thickness. The techniques for automated char image analysis have been tested against char images taken from the ICCP Char Atlas as well as actual char particles derived from pyrolyzed char samples. Thirty different chars were prepared in a drop tube furnace operating at 1300°C, 1% oxygen, and 100 ms from 15 different world coals sieved into two size fractions (53-75 and 106-125 μm). The results from this automated technique are comparable with those from manual analysis, and the additional detail from the automated system has potential use in applications such as combustion modeling systems. Obtaining highly detailed char information with automated methods has traditionally been hampered by the difficulty of automatic recognition of individual char particles. 20 refs., 10 figs., 3 tabs.
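
    A minimal sketch of the kind of automated measurements named above (particle size, porosity, and a crude wall-thickness proxy), computed on a synthetic binary char image with scipy.ndimage; the image and the specific measures are assumptions for illustration, not the paper's algorithm.

    ```python
    # Sketch: automated char-particle measurements (area, porosity, a crude wall
    # thickness) on a synthetic binary image with scipy.ndimage; the image and
    # the measures are illustrative, not the paper's algorithm.
    import numpy as np
    from scipy import ndimage

    # Synthetic char particle: a filled disc with an internal void (a "pore").
    yy, xx = np.mgrid[0:200, 0:200]
    particle = (xx - 100) ** 2 + (yy - 100) ** 2 < 70 ** 2
    pore = (xx - 110) ** 2 + (yy - 95) ** 2 < 40 ** 2
    char = particle & ~pore                      # solid char material

    # Label connected solid regions and pick the largest particle.
    labels, n = ndimage.label(char)
    sizes = ndimage.sum(char, labels, index=range(1, n + 1))
    main = labels == (1 + int(np.argmax(sizes)))

    # Porosity: fraction of the particle envelope occupied by internal voids.
    envelope = ndimage.binary_fill_holes(main)
    porosity = 1.0 - main.sum() / envelope.sum()

    # Crude wall-thickness proxy: twice the maximum solid-to-background distance.
    wall = 2.0 * ndimage.distance_transform_edt(main).max()

    print(f"area = {int(envelope.sum())} px, porosity = {porosity:.2f}, wall ~ {wall:.1f} px")
    ```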

  4. Tumour dose estimation using automated TLD techniques.

    PubMed

    Ferguson, H M; Lambert, G D; Gustard, D; Harrison, R M

    1998-01-01

    Lithium fluoride (TLD-700) dosimeters were used to measure exit surface absorbed doses in external beam radiotherapy using an automated TLD reader. Delivered tumour absorbed doses were derived from these measurements for head and neck, pelvis and breast treatments. For the head and neck treatments (first fraction only), the mean percentage difference between prescribed and delivered tumour absorbed doses was -0.15 +/- 3.0% (+/- 1 SD), for the pelvic treatments -0.83 +/- 2.8% and for the breast treatments +0.26 +/- 2.9%. The spread of results is approximately +/- 3% (+/- 1 SD). This is comparable with the estimated uncertainty in a single TLD absorbed dose measurement in phantom (+/- 2%; +/- 1 SD). Thus, ICRU recommended tolerances for absorbed dose delivery of +/- 5% may not be unequivocally detectable using this method. An action level of +/- 10% is suggested, allowing investigation of possible gross errors in treatment delivery at an early stage, before the course of treatment has progressed to a point at which absorbed dose compensation is impossible.

  5. Major advances in genetic evaluation techniques.

    PubMed

    Powell, R L; Norman, H D

    2006-04-01

    The past quarter-century in genetic evaluation of dairy cattle has been marked by evolution in methodology and computer capacity, expansion in the array of evaluated traits, and globalization. Animal models replaced sire and sire-maternal grandsire models and, more recently, application of Bayesian theory has become standard. Individual test-day observations have been used more effectively in estimation of lactation yield or directly as input to evaluation models. Computer speed and storage are less limiting in choosing procedures. The increased capabilities have supported evaluation of additional traits that affect the net profitability of dairy cows. The importance of traits other than yield has increased, in a few cases due to an antagonistic relationship with yield. National evaluations combined internationally provide evaluations for bulls from all participating countries on each of the national scales, facilitating choices from among many more bulls. Selection within countries has increased inbreeding and the use of similar genetics across countries reduces the previously available outcross population. Concern about inbreeding has prompted changes in evaluation methodology and mating practices, and has promoted interest in crossbreeding. In just the past decade, distribution of genetic evaluations has gone from mailed paper or computer tapes for a limited audience to publicly accessible, request-driven distribution via the Internet. Among the distributed information is a choice of economic indices that combine an increasing array of traits into numbers reflecting breeding goals under different milk-pricing conditions. Considerable progress in genomics and the mapping of the bovine genome have identified markers for some deleterious recessive genes, but broader benefits of marker-assisted selection are still in the future. A possible exception is the proprietary use of DNA testing by semen producers to select among potential progeny test bulls. The collection

  6. Laparoscopic ureteral reimplantation: a simplified dome advancement technique.

    PubMed

    Lima, Guilherme C; Rais-Bahrami, Soroush; Link, Richard E; Kavoussi, Louis R

    2005-12-01

    Laparoscopic Boari flap reimplantation has been used to treat long distal ureteral strictures. This technique requires extensive bladder mobilization and complex intracorporeal suturing. This demonstrates a novel laparoscopic bladder dome advancement approach for ureteral reimplantation. This technique obviates the need for bladder pedicle dissection and simplifies the required suturing.

  7. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  8. Innovative Tools Advance Revolutionary Weld Technique

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The iconic, orange external tank of the space shuttle launch system not only contains the fuel used by the shuttle's main engines during liftoff but also comprises the shuttle's backbone, supporting the space shuttle orbiter and solid rocket boosters. Given the tank's structural importance and the extreme forces (7.8 million pounds of thrust load) and temperatures it encounters during launch, the welds used to construct the tank must be highly reliable. Variable polarity plasma arc welding, developed for manufacturing the external tank and later employed for building the International Space Station, was until 1994 the best process for joining the aluminum alloys used during construction. That year, Marshall Space Flight Center engineers began experimenting with a relatively new welding technique called friction stir welding (FSW), developed in 1991 by The Welding Institute, of Cambridge, England. FSW differs from traditional fusion welding in that it is a solid-state welding technique, using frictional heat and motion to join structural components without actually melting any of the material. The weld is created by a shouldered pin tool that is plunged into the seam of the materials to be joined. The tool traverses the line while rotating at high speeds, generating friction that heats and softens but does not melt the metal. (The heat produced approaches about 80 percent of the metal's melting temperature.) The pin tool's rotation crushes and stirs the plasticized metal, extruding it along the seam as the tool moves forward. The material cools and consolidates, resulting in a weld with superior mechanical properties compared to those of fusion welds. The innovative FSW technology promises a number of attractive benefits. Because the welded materials are not melted, many of the undesirables associated with fusion welding (porosity, cracking, shrinkage, and distortion of the weld) are minimized or avoided. The process is more energy efficient, safe

  9. Positional estimation techniques for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Aggarwal, J. K.

    1990-01-01

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

  10. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
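
    A minimal sketch of the kernel estimator with the scaling factor (bandwidth) chosen automatically from the sample by leave-one-out log-likelihood, one simple stand-in for the automatic selection discussed; the paper's specific algorithm and the discrete penalized-likelihood estimator are not reproduced, and the data and bandwidth grid are illustrative.

    ```python
    # Sketch: Gaussian kernel density estimate with the scaling factor chosen
    # automatically by leave-one-out log-likelihood (synthetic bimodal sample).
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1.5, 1.0, 150)])
    n = x.size

    def kde(points, sample, h):
        u = (points[:, None] - sample[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

    def loo_loglik(h):
        u = (x[:, None] - x[None, :]) / h
        k = np.exp(-0.5 * u**2) / (h * np.sqrt(2 * np.pi))
        np.fill_diagonal(k, 0.0)                 # leave each point out of its own estimate
        return np.sum(np.log(k.sum(axis=1) / (n - 1)))

    bandwidths = np.linspace(0.05, 1.0, 40)
    h_best = bandwidths[np.argmax([loo_loglik(h) for h in bandwidths])]

    grid = np.linspace(-4, 5, 300)
    density = kde(grid, x, h_best)
    dx = grid[1] - grid[0]
    print(f"selected bandwidth h = {h_best:.2f}, density integrates to ~ {density.sum() * dx:.3f}")
    ```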

  11. Advances in gamma titanium aluminides and their manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Kothari, Kunal; Radhakrishnan, Ramachandran; Wereley, Norman M.

    2012-11-01

    Gamma titanium aluminides display attractive properties for high temperature applications. For over a decade in the 1990s, the attractive properties of titanium aluminides were outweighed by difficulties encountered in processing and machining at room temperature. But advances in manufacturing technologies, a deeper understanding of titanium aluminide microstructure and deformation mechanisms, and advances in micro-alloying have led to the production of gamma titanium aluminide sheets. An in-depth review of key advances in gamma titanium aluminides is presented, including microstructure, deformation mechanisms, and alloy development. Traditional manufacturing techniques such as ingot metallurgy and investment casting are reviewed and advances via powder metallurgy based manufacturing techniques are discussed. Finally, manufacturing challenges facing gamma titanium aluminides, as well as avenues to overcome them, are discussed.

  12. 75 FR 44015 - Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... COMMISSION Certain Semiconductor Products Made by Advanced Lithography Techniques and Products Containing... importation of certain semiconductor products made by advanced lithography techniques and products containing... certain semiconductor products made by advanced lithography techniques or products containing same...

  13. Advanced liner-cooling techniques for gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Norgren, C. T.; Riddlebaugh, S. M.

    1985-01-01

    Component research for advanced small gas turbine engines is currently underway at the NASA Lewis Research Center. As part of this program, a basic reverse-flow combustor geometry was maintained while different advanced liner wall cooling techniques were investigated. Performance and liner cooling effectiveness of the experimental combustor configuration featuring counter-flow film-cooled panels are presented and compared with two previously reported combustors featuring splash film-cooled liner walls and transpiration-cooled liner walls (Lamilloy).

  14. Advanced regenerative-cooling techniques for future space transportation systems

    NASA Technical Reports Server (NTRS)

    Wagner, W. R.; Shoji, J. M.

    1975-01-01

    A review of regenerative-cooling techniques applicable to advanced planned engine designs for space booster and orbit transportation systems has developed the status of the key elements of this cooling mode. This work is presented in terms of gas side, coolant side, wall conduction heat transfer, and chamber life fatigue margin considerations. Described are preliminary heat transfer and trade analyses performed using developed techniques combining channel wall construction with advanced, high-strength, high-thermal-conductivity materials (NARloy-Z or Zr-Cu alloys) in high heat flux regions, combined with lightweight steel tubular nozzle wall construction. Advanced cooling techniques such as oxygen cooling and dual-mode hydrocarbon/hydrogen fuel operation and their limitations are indicated for the regenerative cooling approach.

  15. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
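
    A minimal illustration of the Kruskal-Wallis comparison mentioned above, applied to synthetic daily errors for three hydrographers using scipy.stats.kruskal; the error distributions are invented, not the Iowa data.

    ```python
    # Sketch: Kruskal-Wallis test for differences among three hydrographers'
    # daily streamflow-estimation errors (synthetic errors, illustrative only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    errors_a = rng.normal(0.0, 5.0, 60)          # daily errors, percent
    errors_b = rng.normal(1.0, 5.0, 60)
    errors_c = rng.normal(4.0, 5.0, 60)          # one hydrographer biased high

    h_stat, p_value = stats.kruskal(errors_a, errors_b, errors_c)
    print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("significant differences among hydrographers (less consistent subjective estimates)")
    ```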

  16. Bi-maxillary advancement surgery: Technique, indications and results.

    PubMed

    Olivi, Pierre; Garcia, Claude

    2014-06-01

    Esthetic analysis of the face in some patients presenting a dental Class II can reveal the need for maxillo-mandibular advancement surgery. In these cases, mandibular advancement alone would provide a result which was satisfactory from the occlusal viewpoint but esthetically displeasing. Using bi-maxillary advancement, the impact of nasal volume is reduced and the nasolabial relationship is corrected. The sub-mandibular length is increased, thus creating a better-defined cervico-mental angle. This treatment technique involving a prior mandibular procedure has the advantage of restoring patients' dental occlusion while optimizing their facial esthetics.

  17. A comparison of sampling techniques to estimate number of wetlands

    USGS Publications Warehouse

    Johnson, R.R.; Higgins, K.F.; Naugle, D.E.; Jenks, J.A.

    1999-01-01

    The Service uses annual estimates of the number of ponded wetlands to estimate duck production and establish duck hunting regulations. Sampling techniques that minimize bias may provide more reliable estimates of annual duck production. Using a wetland geographic information system (GIS), we estimated the number of wetlands using standard counting protocol with belt transects and samples of square plots. Estimates were compared to the known number of wetlands in the GIS to determine bias. Bias in transect-derived estimates ranged from +67-87% of the known number of wetlands, compared to bias of +3-6% in estimates from samples of 10.24-km2 plots. We recommend using samples of 10.24-km2 plots stratified by wetland density to decrease bias.
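
    A minimal sketch of how percent bias is computed against a known GIS total using samples of 10.24-km2 plots; the synthetic landscape and the simple expansion estimator are assumptions for illustration and do not reproduce the study's transect protocol or stratification.

    ```python
    # Sketch: percent bias of a plot-sample estimate of wetland numbers relative
    # to a known (GIS) total. The landscape is synthetic; the transect protocol
    # of the study is not reproduced here.
    import numpy as np

    rng = np.random.default_rng(6)
    n_cells = 1000                               # 10.24-km2 cells covering the study area
    wetlands_per_cell = rng.poisson(12.0, n_cells)
    true_total = wetlands_per_cell.sum()         # "GIS" truth

    n_sample = 80                                # sampled plots
    sample = rng.choice(wetlands_per_cell, size=n_sample, replace=False)
    estimate = sample.mean() * n_cells           # expand the sample mean to the area

    bias_pct = 100.0 * (estimate - true_total) / true_total
    print(f"estimated total = {estimate:.0f}, true total = {true_total}, bias = {bias_pct:+.1f}%")
    ```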

  18. Advanced composites sizing guide for preliminary weight estimates

    NASA Astrophysics Data System (ADS)

    Burns, J. W.

    During the preliminary design and proposal phases, the mass properties engineer must make preliminary rough weight estimates to improve or verify Level I and Level II estimates and to support trade studies for various types of construction, materials substitution, wing t/c, and design criteria changes. The purpose of this paper is to provide a simple and easy-to-understand preliminary sizing guide and to present some numeric examples that will aid the mass properties engineer who is inexperienced with advanced composites analysis.

  19. Comparative evaluation of workload estimation techniques in piloting tasks

    NASA Technical Reports Server (NTRS)

    Wierwille, W. W.

    1983-01-01

    Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, covering psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected. Techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading. High sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.

  20. Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.

    ERIC Educational Resources Information Center

    Smith, Clifton L.; And Others

    This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…

  1. Performance and Weight Estimates for an Advanced Open Rotor Engine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Tong, Michael T.

    2012-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.

  2. Technique for estimating depth of 100-year floods in Tennessee

    USGS Publications Warehouse

    Gamble, Charles R.; Lewis, James G.

    1977-01-01

    Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.
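
    A minimal sketch of the regression idea: relate 100-year flood depth at gauged sites to basin characteristics in log space, then apply the fitted equation to an ungauged basin. The synthetic stations, the choice of explanatory variables, and the coefficients are illustrative, not the Tennessee relations.

    ```python
    # Sketch: relate 100-year flood depth to basin characteristics by multiple
    # regression in log space (synthetic stations; not the Tennessee equations).
    import numpy as np

    rng = np.random.default_rng(7)
    n = 151                                      # stations
    drainage_area = 10 ** rng.uniform(0.5, 3.0, n)        # mi^2
    channel_slope = 10 ** rng.uniform(-3.0, -1.0, n)      # ft/ft
    depth_100 = 2.0 * drainage_area**0.3 * channel_slope**-0.1 * 10 ** (0.05 * rng.standard_normal(n))

    # log10(depth) = b0 + b1*log10(area) + b2*log10(slope)
    X = np.column_stack([np.ones(n), np.log10(drainage_area), np.log10(channel_slope)])
    b, *_ = np.linalg.lstsq(X, np.log10(depth_100), rcond=None)
    print("fitted coefficients:", np.round(b, 3))

    # Estimate depth at an ungauged site with a 250 mi^2 basin and slope 0.005.
    d_hat = 10 ** (b @ np.array([1.0, np.log10(250.0), np.log10(0.005)]))
    print(f"estimated 100-year depth ~ {d_hat:.1f} ft")
    ```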

  3. The Rayleigh-Ritz Technique for Estimating Eigenvalues

    NASA Astrophysics Data System (ADS)

    Schnack, Dalton D.

    The energy principle provides a powerful technique for determining the stability or instability of a magneto-fluid system without resorting to the solution of a differential equation. Instead, one makes an educated guess at the minimizing displacement and then examines the sign of the resulting eigenvalue. This approach is made even more powerful, and put on a solid theoretical footing, by application of the Rayleigh-Ritz technique for estimating the eigenvalues of a self-adjoint operator.
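
    A minimal numerical illustration of the Rayleigh-Ritz idea, assuming a 1-D discrete Laplacian stands in for the self-adjoint operator: the Rayleigh quotient of an educated-guess trial vector bounds the lowest eigenvalue from above.

    ```python
    # Sketch: the Rayleigh quotient of a trial vector bounds the lowest
    # eigenvalue of a symmetric (self-adjoint) operator from above. Here the
    # operator is the 1-D discrete Laplacian on (0, 1) with fixed ends and the
    # educated guess is a parabolic trial function satisfying the boundary conditions.
    import numpy as np

    n = 100
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2

    x = np.linspace(h, 1.0 - h, n)
    trial = x * (1.0 - x)                        # trial displacement, zero at both ends

    rayleigh = trial @ (A @ trial) / (trial @ trial)
    exact = np.linalg.eigvalsh(A)[0]             # true lowest eigenvalue (~ pi^2)
    print(f"Rayleigh-Ritz estimate = {rayleigh:.3f}, exact lowest eigenvalue = {exact:.3f}")
    ```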

  4. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  5. Quantitative study of single molecule location estimation techniques.

    PubMed

    Abraham, Anish V; Ram, Sripad; Chao, Jerry; Ward, E S; Ober, Raimund J

    2009-12-21

    Estimating the location of single molecules from microscopy images is a key step in many quantitative single molecule data analysis techniques. Different algorithms have been advocated for the fitting of single molecule data, particularly the nonlinear least squares and maximum likelihood estimators. Comparisons were carried out to assess the performance of these two algorithms in different scenarios. Our results show that both estimators, on average, are able to recover the true location of the single molecule in all scenarios we examined. However, in the absence of modeling inaccuracies and low noise levels, the maximum likelihood estimator is more accurate than the nonlinear least squares estimator, as measured by the standard deviations of its estimates, and attains the best possible accuracy achievable for the sets of imaging and experimental conditions that were tested. Although neither algorithm is consistently superior to the other in the presence of modeling inaccuracies or misspecifications, the maximum likelihood algorithm emerges as a robust estimator producing results with consistent accuracy across various model mismatches and misspecifications. At high noise levels, relative to the signal from the point source, neither algorithm has a clear accuracy advantage over the other. Comparisons were also carried out for two localization accuracy measures derived previously. Software packages with user-friendly graphical interfaces developed for single molecule location estimation (EstimationTool) and limit of the localization accuracy calculations (FandPLimitTool) are also discussed.
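
    A minimal sketch comparing the two estimators discussed: a 2-D Gaussian image model for a single molecule is fitted by nonlinear least squares and by Poisson maximum likelihood. The pixel grid, photon count, PSF width, and optimizer choices are illustrative assumptions, not the EstimationTool implementation.

    ```python
    # Sketch: locate a single molecule by fitting a 2-D Gaussian to a simulated
    # image with (a) nonlinear least squares and (b) Poisson maximum likelihood.
    import numpy as np
    from scipy.optimize import least_squares, minimize

    rng = np.random.default_rng(8)
    npix, sigma_psf, photons, bg = 15, 1.3, 500.0, 5.0
    yy, xx = np.mgrid[0:npix, 0:npix].astype(float)

    def model(x0, y0, N, b):
        g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma_psf**2))
        return N * g / (2 * np.pi * sigma_psf**2) + b

    x_true, y_true = 7.3, 6.8
    image = rng.poisson(model(x_true, y_true, photons, bg)).astype(float)

    p0 = [npix / 2, npix / 2, image.sum(), 1.0]

    # (a) nonlinear least squares on the pixel residuals
    ls = least_squares(lambda p: (model(*p) - image).ravel(), p0)

    # (b) Poisson maximum likelihood (minimize the negative log-likelihood)
    def nll(p):
        mu = np.clip(model(*p), 1e-9, None)      # guard against non-physical values
        return np.sum(mu - image * np.log(mu))

    ml = minimize(nll, p0, method="Nelder-Mead")

    print("true   :", (x_true, y_true))
    print("LSQ fit:", np.round(ls.x[:2], 3))
    print("MLE fit:", np.round(ml.x[:2], 3))
    ```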

  6. Cost estimate guidelines for advanced nuclear power technologies

    SciTech Connect

    Delene, J.G.; Hudson, C.R. II

    1993-05-01

    Several advanced power plant concepts are currently under development. These include the Modular High Temperature Gas Cooled Reactors, the Advanced Liquid Metal Reactor and the Advanced Light Water Reactors. One measure of the attractiveness of a new concept is its cost. Invariably, the cost of a new type of power plant will be compared with other alternative forms of electrical generation. This report provides a common starting point, whereby the cost estimates for the various power plants to be considered are developed with common assumptions and ground rules. Comparisons can then be made on a consistent basis. This is the second update of these cost estimate guidelines. Changes have been made to make the guidelines more current (January 1, 1992) and in response to suggestions made as a result of the use of the previous report. The principal changes are that the reference site has been changed from a generic Northeast (Middletown) site to a more central site (EPRI's East/West Central site) and that reference bulk commodity prices and labor productivity rates have been added. This report is designed to provide a framework for the preparation and reporting of costs. The cost estimates will consist of the overnight construction cost, the total plant capital cost, the operation and maintenance costs, the fuel costs, decommissioning costs and the power production or busbar generation cost.
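
    A minimal arithmetic sketch of combining the listed cost components into a busbar generation cost; all values, including the fixed charge rate, are placeholder assumptions, not figures from the guidelines.

    ```python
    # Sketch: levelized busbar generation cost from the cost components listed
    # in the guidelines (capital, O&M, fuel, decommissioning). All numbers
    # below are placeholders for illustration, not values from the report.
    capacity_mw = 1200.0
    capacity_factor = 0.85
    overnight_cost = 2.0e9            # $ (overnight construction cost)
    fixed_charge_rate = 0.10          # per year, stands in for financing of total capital
    om_cost = 8.0e7                   # $/yr operation and maintenance
    fuel_cost = 6.0e7                 # $/yr
    decommissioning = 5.0e6           # $/yr sinking-fund contribution

    annual_generation_mwh = capacity_mw * capacity_factor * 8760.0
    annual_cost = overnight_cost * fixed_charge_rate + om_cost + fuel_cost + decommissioning

    busbar_mills_per_kwh = annual_cost / (annual_generation_mwh * 1000.0) * 1000.0
    print(f"busbar cost ~ {busbar_mills_per_kwh:.1f} mills/kWh")
    ```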

  7. Cubic spline approximation techniques for parameter estimation in distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.; Kunisch, K.

    1983-01-01

    Approximation schemes employing cubic splines in the context of a linear semigroup framework are developed for both parabolic and hyperbolic second-order partial differential equation parameter estimation problems. Convergence results are established for problems with linear and nonlinear systems, and a summary of numerical experiments with the techniques proposed is given.

  8. A nonparametric clustering technique which estimates the number of clusters

    NASA Technical Reports Server (NTRS)

    Ramey, D. B.

    1983-01-01

    In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.

  9. Advanced Packaging Materials and Techniques for High Power TR Module: Standard Flight vs. Advanced Packaging

    NASA Technical Reports Server (NTRS)

    Hoffman, James Patrick; Del Castillo, Linda; Miller, Jennifer; Jenabi, Masud; Hunter, Donald; Birur, Gajanana

    2011-01-01

    The higher output power densities required of modern radar architectures, such as the proposed DESDynI [Deformation, Ecosystem Structure, and Dynamics of Ice] SAR [Synthetic Aperture Radar] Instrument (or DSI) require increasingly dense high power electronics. To enable these higher power densities, while maintaining or even improving hardware reliability, requires advances in integrating advanced thermal packaging technologies into radar transmit/receive (TR) modules. New materials and techniques have been studied and compared to standard technologies.

  10. Tracking closely spaced multiple sources via spectral-estimation techniques

    NASA Astrophysics Data System (ADS)

    Gabriel, W. F.

    1982-06-01

    Modern spectral-estimation techniques have achieved a level of performance that attracts interest in application areas such as the tracking of multiple spatial sources. In addition to the original "superresolution" capability, these techniques offer an apparent "absence of sidelobes" characteristic and some reasonable solutions to the difficult radar coherent-source problem that involves a phase-dependent SNR (signal-to-noise ratio) penalty. This report reviews the situation briefly, and it discusses a few of the techniques that have been found useful, including natural or synthetic doppler shifts, non-Toeplitz forward-backward subaperture-shift processing, and recent eigenvalue/eigenvector analysis algorithms. The techniques are applied to multiple-source situations that include mixtures of coherent and noncoherent sources of unequal strengths, with either an 8- or a 12-element linear-array sampling aperture. The first test case involves the estimation of six sources, two of which are 95% correlated. The second test case involves a tracking-simulation display example of four moving sources: three are -10 dB coherent sources 95% correlated, and the other is a strong 20-dB noncoherent source. These test cases demonstrate the remarkable improvements obtained with the recent estimation techniques, and they point to the possibilities for real-world applications.
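
    A minimal sketch of one eigenvalue/eigenvector technique (MUSIC) resolving two closely spaced noncoherent sources on an 8-element linear array; the geometry, SNR, and peak picking are illustrative, and the forward-backward subaperture-shift processing needed for the coherent-source cases in the report is not included.

    ```python
    # Sketch: MUSIC pseudospectrum for two closely spaced noncoherent sources
    # on an 8-element uniform linear array (eigenvector technique only; the
    # spatial smoothing needed for coherent sources is omitted).
    import numpy as np

    rng = np.random.default_rng(9)
    m, d, snapshots = 8, 0.5, 200                # elements, spacing (wavelengths), snapshots
    angles_true = np.array([8.0, 14.0])          # degrees, closer than the array beamwidth

    def steering(theta_deg):
        theta = np.deg2rad(theta_deg)
        return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta)[None, :])

    A = steering(angles_true)
    S = (rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
    N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots))) / np.sqrt(2)
    X = A @ S + N

    R = X @ X.conj().T / snapshots               # sample covariance
    w, V = np.linalg.eigh(R)
    En = V[:, :m - 2]                            # noise subspace (smallest eigenvalues)

    scan = np.arange(-30.0, 30.0, 0.1)
    a = steering(scan)
    p_music = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

    # Pick the two largest local maxima of the pseudospectrum.
    locs = np.where((p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:]))[0] + 1
    top = locs[np.argsort(p_music[locs])[-2:]]
    print("estimated directions (deg):", np.sort(np.round(scan[top], 1)))
    ```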

  11. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    NASA Astrophysics Data System (ADS)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.

  12. Recovering depth from focus using iterative image estimation techniques

    SciTech Connect

    Vitria, J.; Llacer, J.

    1993-09-01

    In this report we examine the possibility of using linear and nonlinear image estimation techniques to build a depth map of a three dimensional scene from a sequence of partially focused images. In particular, the techniques proposed to solve the problem of construction of a depth map are: (1) linear methods based on regularization procedures and (2) nonlinear methods based on statistical modeling. In the first case, we have implemented a matrix-oriented method to recover the point spread function (PSF) of a sequence of partially defocused images. In the second case, the chosen method has been a procedure based on image estimation by means of the EM algorithm, a well known technique in image reconstruction in medical applications. This method has been generalized to deal with optically defocused image sequences.

  13. Technology development of fabrication techniques for advanced solar dynamic concentrators

    NASA Technical Reports Server (NTRS)

    Richter, Scott W.

    1991-01-01

    The objective of the advanced concentrator program is to develop the technology that will lead to lightweight, highly reflective, accurate, scaleable, and long lived space solar dynamic concentrators. The advanced concentrator program encompasses new and innovative concepts, fabrication techniques, materials selection, and simulated space environmental testing. Fabrication techniques include methods of fabricating the substrates and coating substrate surfaces to produce high-quality optical surfaces, acceptable for further coating with vapor deposited optical films. The selected materials to obtain a high quality optical surface include microsheet glass and Eccocoat EP-3 epoxy, with DC-93-500 selected as a candidate silicone adhesive and levelizing layer. The following procedures are defined: cutting, cleaning, forming, and bonding microsheet glass. Procedures are also defined for surface cleaning, and EP-3 epoxy application. The results and analyses from atomic oxygen and thermal cycling tests are used to determine the effects of orbital conditions in a space environment.

  14. Technology development of fabrication techniques for advanced solar dynamic concentrators

    NASA Technical Reports Server (NTRS)

    Richter, Scott W.

    1991-01-01

    The objective of the advanced concentrator program is to develop the technology that will lead to lightweight, highly reflective, accurate, scaleable, and long lived space solar dynamic concentrators. The advanced concentrator program encompasses new and innovative concepts, fabrication techniques, materials selection, and simulated space environmental testing. Fabrication techniques include methods of fabricating the substrates and coating substrate surfaces to produce high quality optical surfaces, acceptable for further coating with vapor deposited optical films. The selected materials to obtain a high quality optical surface include microsheet glass and Eccocoat EP-3 epoxy, with DC-93-500 selected as a candidate silicone adhesive and levelizing layer. The following procedures are defined: cutting, cleaning, forming, and bonding microsheet glass. Procedures are also defined for surface cleaning, and EP-3 epoxy application. The results and analyses from atomic oxygen and thermal cycling tests are used to determine the effects of orbital conditions in a space environment.

  15. Advanced Morphological and Functional Magnetic Resonance Techniques in Glaucoma

    PubMed Central

    Mastropasqua, Rodolfo; Agnifili, Luca; Mattei, Peter A.; Caulo, Massimo; Fasanella, Vincenzo; Navarra, Riccardo; Mastropasqua, Leonardo; Marchini, Giorgio

    2015-01-01

    Glaucoma is a multifactorial disease that is the leading cause of irreversible blindness. Recent data documented that glaucoma is not limited to the retinal ganglion cells but that it also extends to the posterior visual pathway. The diagnosis is based on the presence of signs of glaucomatous optic neuropathy and consistent functional visual field alterations. Unfortunately these functional alterations often become evident when a significant amount of the nerve fibers that compose the optic nerve has been irreversibly lost. Advanced morphological and functional magnetic resonance (MR) techniques (morphometry, diffusion tensor imaging, arterial spin labeling, and functional connectivity) may provide a means for observing modifications induced by this fiber loss, within the optic nerve and the visual cortex, in an earlier stage. The aim of this systematic review was to determine if the use of these advanced MR techniques could offer the possibility of diagnosing glaucoma at an earlier stage than that currently possible. PMID:26167474

  16. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation gives an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
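
    A minimal sketch of the AABB slab intersection test that underlies this kind of LRF simulation; the scene, the ray, and the handling of zero direction components are simplified assumptions, and the CUDA variant is not shown.

    ```python
    # Sketch: slab-method intersection test between a simulated LRF ray and an
    # axis-aligned bounding box (AABB). Illustrative only; the CUDA-based
    # variant mentioned in the paper is beyond the scope of this sketch.
    import numpy as np

    def ray_aabb(origin, direction, box_min, box_max):
        """Return the hit distance along the ray, or None if the AABB is missed."""
        inv_dir = 1.0 / direction                # assumes no zero components for simplicity
        t1 = (box_min - origin) * inv_dir
        t2 = (box_max - origin) * inv_dir
        t_near = np.minimum(t1, t2).max()        # latest entry across the three slabs
        t_far = np.maximum(t1, t2).min()         # earliest exit
        if t_near <= t_far and t_far >= 0.0:
            return max(t_near, 0.0)              # range the simulated LRF would report
        return None

    origin = np.array([0.0, 0.0, 0.5])
    direction = np.array([1.0, 0.2, 0.1])
    direction /= np.linalg.norm(direction)
    hit = ray_aabb(origin, direction, np.array([2.0, -1.0, 0.0]), np.array([3.0, 1.0, 1.0]))
    print("range measurement:", None if hit is None else round(float(hit), 3))
    ```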

  17. Three-dimensional hybrid grid generation using advancing front techniques

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Noack, Ralph W.

    1995-01-01

    A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which will significantly reduce the generation time. Secondly, grid points near the body (presumably with high aspect ratio) will still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.

  18. Full Endoscopic Spinal Surgery Techniques: Advancements, Indications, and Outcomes

    PubMed Central

    Yue, James J.; Long, William

    2015-01-01

    Advancements in both surgical instrumentation and full endoscopic spine techniques have resulted in positive clinical outcomes in the treatment of cervical, thoracic, and lumbar spine pathologies. Endoscopic techniques impart minimal approach-related disruption of non-pathologic spinal anatomy and function while concurrently maximizing functional visualization and correction of pathological tissues. An advanced understanding of the applicable functional neuroanatomy, in particular the neuroforamen, is essential for successful outcomes. Additionally, an understanding of the varying types of disc prolapse pathology in relation to the neuroforamen will result in more optimal surgical outcomes. Indications for lumbar endoscopic spine surgery include disc herniations, spinal stenosis, infections, medial branch rhizotomy, and interbody fusion. Limitations are based on both non-spine- and spine-related findings. A high-riding iliac wing, a more posteriorly located retroperitoneal cavity, and an overly distal or proximally migrated herniated disc are all relative contraindications to lumbar endoscopic spinal surgery techniques. Modifications in scope size and visual field-of-view angulation have enabled both anterior and posterior cervical decompression. Endoscopic burrs, electrocautery, and focused laser technology allow for the least invasive spinal surgical techniques in all age groups and across varying body habitus. Complications include, among others, dural tears, dysesthesia, nerve injury, and infection. PMID:26114086

  19. Full Endoscopic Spinal Surgery Techniques: Advancements, Indications, and Outcomes.

    PubMed

    Yue, James J; Long, William

    2015-01-01

    Advancements in both surgical instrumentation and full endoscopic spine techniques have resulted in positive clinical outcomes in the treatment of cervical, thoracic, and lumbar spine pathologies. Endoscopic techniques impart minimal approach-related disruption of non-pathologic spinal anatomy and function while concurrently maximizing functional visualization and correction of pathological tissues. An advanced understanding of the applicable functional neuroanatomy, in particular the neuroforamen, is essential for successful outcomes. Additionally, an understanding of the varying types of disc prolapse pathology in relation to the neuroforamen will result in more optimal surgical outcomes. Indications for lumbar endoscopic spine surgery include disc herniations, spinal stenosis, infections, medial branch rhizotomy, and interbody fusion. Limitations are based on both non-spine- and spine-related findings. A high-riding iliac wing, a more posteriorly located retroperitoneal cavity, and an overly distal or proximally migrated herniated disc are all relative contraindications to lumbar endoscopic spinal surgery techniques. Modifications in scope size and visual field-of-view angulation have enabled both anterior and posterior cervical decompression. Endoscopic burrs, electrocautery, and focused laser technology allow for the least invasive spinal surgical techniques in all age groups and across varying body habitus. Complications include, among others, dural tears, dysesthesia, nerve injury, and infection. PMID:26114086

  20. Technique for estimating depths of 100-year floods in Pennsylvania

    USGS Publications Warehouse

    Flippo, Herbert N., Jr.

    1990-01-01

    Techniques are developed for estimating 100-year flood depths in natural channels of unregulated Pennsylvania streams that drain less than 2,200 square miles. Equations and graphs are presented relating the depth of the 100-year flood above median stage to drainage area in five defined hydrologic areas of the State. Another graph defines the relation between drainage area and median depth of flow over the low point of riffles. Thus, 100-year depths on riffles can be estimated by summing depth values derived from the two simple relations.
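
    As a purely illustrative sketch of the summation step described above: the report's actual relations are region-specific equations and graphs that are not reproduced here, so every coefficient in the Python below is a placeholder. The point is only that the riffle depth estimate is the sum of two drainage-area relations.

        def depth_above_median_stage(drainage_area_sqmi, a=2.1, b=0.32):
            """Depth (ft) of the 100-year flood above median stage; hypothetical power law."""
            return a * drainage_area_sqmi ** b

        def median_riffle_depth(drainage_area_sqmi, c=0.35, d=0.38):
            """Median depth of flow (ft) over the low point of riffles; hypothetical power law."""
            return c * drainage_area_sqmi ** d

        def flood_depth_on_riffle(drainage_area_sqmi):
            """100-year flood depth on a riffle = sum of the two relations."""
            return (depth_above_median_stage(drainage_area_sqmi)
                    + median_riffle_depth(drainage_area_sqmi))

        print(round(flood_depth_on_riffle(150.0), 1))  # depth in feet for a 150-square-mile basin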

  1. Using the Acoustic Emission Technique for Estimating Body Composition

    NASA Astrophysics Data System (ADS)

    González-Solís, J. L.; Sanchis-Sabater, A.; Sosa-Aquino, M.; Gutiérrez-Juárez, G.; Vargas-Luna, M.; Bernal-Alvarado, J.; Huerta-Franco, R.

    2003-09-01

    This work proposes a new technique for estimating body composition by using acoustic emission, along with a simple apparatus for generating the acoustic emission. The estimation of body composition is made by analyzing the correlation between a set of acoustic resonance measurements and skinfold measurements. One device was designed to measure the position and width of the acoustic resonances, and a caliper was used to measure the skinfolds. The results show the plausibility of applying the method to the measurement of human body fat.

  2. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.

    2002-01-01

    The development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale is presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  3. A TRMM-Calibrated Infrared Technique for Global Rainfall Estimation

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming

    2003-01-01

    This paper presents the development of a satellite infrared (IR) technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall on a global scale. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR), is applied over the global tropics during summer 2001. The technique is calibrated separately over land and ocean, making ingenious use of the IR data from the TRMM Visible/Infrared Scanner (VIRS) before application to global geosynchronous satellite data. The low sampling rate of TRMM PR imposes limitations on calibrating IR-based techniques; however, our research shows that PR observations can be applied to improve IR-based techniques significantly by selecting adequate calibration areas and calibration length. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, will be presented. The technique is validated using available data sets and compared to other global rainfall products such as the Global Precipitation Climatology Project (GPCP) IR product, calibrated with TRMM Microwave Imager (TMI) data. The calibrated CST technique has the advantages of high spatial resolution (4 km), filtering of non-raining cirrus clouds, and the stratification of the rainfall into its convective and stratiform components, the latter being important for the calculation of vertical profiles of latent heating.

  4. Techniques for estimating flood hydrographs for ungaged urban watersheds

    SciTech Connect

    Stricker, V.A.; Sauer, V.B.

    1982-04-01

    The Clark Method, modified slightly, was used to develop a synthetic dimensionless hydrograph that can be used to estimate flood hydrographs for ungaged urban watersheds. Application of the technique results in a typical (average) flood hydrograph for a given peak discharge. Input necessary to apply the technique is an estimate of basin lagtime and the recurrence interval peak discharge. Equations for this purpose were obtained from a recent nationwide study on flood frequency in urban watersheds. A regression equation was developed which relates flood volumes to drainage area size, basin lagtime, and peak discharge. This equation is useful where storage of floodwater may be a part of design or flood prevention. 6 refs., 17 figs., 5 tabs.

  5. A low tritium hydride bed inventory estimation technique

    SciTech Connect

    Klein, J.E.; Shanahan, K.L.; Baker, R.A.; Foster, P.J.

    2015-03-15

    Low tritium hydride beds were developed and deployed into tritium service at the Savannah River Site. Process beds to be used for low-concentration tritium gas were not fitted with instrumentation to perform the steady-state, flowing-gas calorimetric inventory measurement method. Low tritium beds contain less than the detection limit of the IBA (In-Bed Accountability) technique used for tritium inventory. This paper describes two techniques for estimating tritium content and uncertainty for low tritium content beds to be used in the facility's physical inventory (PI). PIs are performed periodically to assess the quantity of nuclear material used in a facility. The first approach (the mid-point approximation method, MPA) assumes the bed is half-full and uses a gas composition measurement to estimate the tritium inventory and uncertainty. The second approach utilizes the bed's hydride material pressure-composition-temperature (PCT) properties and a gas composition measurement to reduce the uncertainty in the calculated bed inventory.
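
    A minimal sketch of the mid-point approximation as summarized above: assume the bed holds half of its capacity and scale by the measured tritium fraction of the gas. The function, the numbers, and the simple uncertainty propagation are illustrative assumptions, not the facility's accountability calculation.

        def mpa_inventory(bed_capacity_g, tritium_fraction, fraction_uncertainty):
            """Mid-point approximation: assume the bed is loaded to half of its capacity.
            bed_capacity_g      : full-bed hydrogen-isotope capacity, grams (assumed known)
            tritium_fraction    : measured tritium fraction of the gas (0..1)
            fraction_uncertainty: 1-sigma uncertainty of that measurement
            Returns (inventory_g, uncertainty_g). The loading uncertainty is modelled here
            as uniform between empty and full, i.e. 0.5 * capacity / sqrt(3)."""
            loading = 0.5 * bed_capacity_g
            inventory = loading * tritium_fraction
            u_loading = (0.5 * bed_capacity_g) / 3 ** 0.5
            u = ((tritium_fraction * u_loading) ** 2
                 + (loading * fraction_uncertainty) ** 2) ** 0.5
            return inventory, u

        inv, unc = mpa_inventory(bed_capacity_g=100.0, tritium_fraction=0.03,
                                 fraction_uncertainty=0.002)
        print(f"estimated tritium inventory: {inv:.1f} +/- {unc:.1f} g")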

  6. The origins of bioethics: advances in resuscitations techniques.

    PubMed

    Niebroj, L

    2008-12-01

    In recent years there has been increasing interest in meta-bioethical issues. This turn in research focus is regarded as a sign of the maturation of bioethics as a distinct area of academic inquiry. The role of historic-philosophical reflection is often emphasized, and there is rather common agreement that the future of bioethics lies in critical reflection on its past, in particular on the very origins of the discipline. In line with Caplan's opinion, advances in medical technology, especially the introduction of respirators and artificial heart machines, are considered one of the main developments that started bioethics. Using methods of historical as well as meta-ethical research, this article aims at describing the role of advances in resuscitation techniques in the emergence of bioethics and at exploring how bioethical reflection has been shaped by technological developments. A brief historical analysis shows that there is a close bond between the emergence of bioethics and the introduction of sophisticated resuscitation technologies into medical practice. The meta-ethical reflection reveals that advances in resuscitation techniques not only initiated bioethics in the second half of the 20th century but influenced its evolution by (i) posing a question of justice in health care, (ii) altering commonly accepted ontological notions of human corporeality, and (iii) reconsidering the very purpose of medicine.

  7. Indications and general techniques for lasers in advanced operative laparoscopy.

    PubMed

    Dorsey, J H

    1991-09-01

    Lasers are but one of the several energy delivery systems used by the operative laparoscopist in the performance of advanced operative laparoscopy. Safety is a key factor in the selection of a laser because the tissue damage produced by this instrument is absolutely predictable. The surgeon must be totally familiar with the chosen wavelength and its tissue reaction if this safety factor is to be realized. Other instruments complement the use of lasers in advanced operative laparoscopy, and without thorough knowledge of all available techniques and instruments, the operative laparoscopist will not achieve the full potential of this specialty. It is beyond the scope of this issue on gynecologic laser surgery to present all of the useful nonlaser techniques. Suffice it to say that we often use laser, loop ligature, sutures, hemoclips, bipolar electricity, hydrodissection, and endocoagulation during the course of a day in the operating room and sometimes during one case. As enthusiasm for advanced operative laparoscopy grows and endoscopic capability increases, more complicated and prolonged surgical feats are reported. Radical hysterectomy and lymphadenectomy have been performed by the laparoscopic route, and endoscopic management of ovarian tumors also has been reported. At this moment, these must be viewed as "show and tell" procedures unsupported by statistics to demonstrate any advantage (or disadvantage) when compared with conventional surgical methods. The time required of advanced operative laparoscopy for any given procedure is certainly an important factor. Prolonged operative and anesthesia time certainly can negate the supposed benefit of small incisions and minimally invasive surgery. What goes on inside the abdomen is certainly the most important part of advanced operative laparoscopy. Good surgeons must recognize their own limitations and the limitations of available technology. The operative laparoscopist must know when to quit and institute a

  8. Simplified thermal estimation techniques for large space structures

    NASA Technical Reports Server (NTRS)

    Brogren, E. W.; Barclay, D. L.; Straayer, J. W.

    1977-01-01

    A tool for making rapid estimates of the response of space structures to thermal environments encountered in earth orbits is provided for the designer of these structures. Charts giving heating rates and temperatures for certain typical large spacecraft structural elements are provided. Background information for spacecraft thermal design considerations is presented. Environments, requirements, thermal control techniques, design guidelines, and approaches available for more detailed thermal response analysis are discussed.

  9. Linear Frequency Estimation Technique for Reducing Frequency Based Signals

    PubMed Central

    Woodbridge, Jonathan; Bui, Alex; Sarrafzadeh, Majid

    2016-01-01

    This paper presents a linear frequency estimation (LFE) technique for data reduction of frequency-based signals. LFE converts a signal to the frequency domain by utilizing the Fourier transform and estimates both the real and imaginary parts with a series of vectors much smaller than the original signal size. The estimation is accomplished by selecting optimal points from the frequency domain and interpolating data between these points with a first-order approximation. The difficulty of such a problem lies in determining which points are most significant. LFE is unique in that it is generic to a wide variety of frequency-based signals such as electromyography (EMG), voice, and electrocardiography (ECG). The only requirement is that spectral coefficients are spatially correlated. This paper presents the algorithm and results from both EMG and voice data. We complete the paper with a description of how this method can be applied to pattern recognition, signal indexing, and compression.
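
    The abstract outlines the essential LFE steps: transform to the frequency domain, select a small set of significant spectral points, and interpolate linearly between them. The Python sketch below follows that outline, with a naive largest-magnitude point selection standing in for the paper's optimal selection; the function names and parameters are assumptions.

        import numpy as np

        def lfe_compress(signal, n_points):
            """Keep n_points spectral samples (plus the band edges) and their indices.
            Point selection here is naive (largest magnitude); the published method
            chooses points to minimise the interpolation error."""
            spectrum = np.fft.rfft(signal)
            idx = set(np.argsort(np.abs(spectrum))[-n_points:]) | {0, len(spectrum) - 1}
            idx = np.array(sorted(idx))
            return idx, spectrum[idx]

        def lfe_reconstruct(idx, values, n_bins, n_samples):
            """Rebuild the spectrum by first-order (linear) interpolation and invert it."""
            bins = np.arange(n_bins)
            real = np.interp(bins, idx, values.real)
            imag = np.interp(bins, idx, values.imag)
            return np.fft.irfft(real + 1j * imag, n=n_samples)

        # Toy example: a two-tone signal reduced to roughly 20 spectral points
        t = np.linspace(0.0, 1.0, 1000, endpoint=False)
        x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        idx, vals = lfe_compress(x, 20)
        x_hat = lfe_reconstruct(idx, vals, n_bins=len(x) // 2 + 1, n_samples=len(x))
        print("RMS reconstruction error:", np.sqrt(np.mean((x - x_hat) ** 2)))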

  10. Estimation of base station position using timing advance measurements

    NASA Astrophysics Data System (ADS)

    Raitoharju, Matti; Ali-Löytty, Simo; Wirola, Lauri

    2011-10-01

    Timing Advance (TA) is used in TDMA (Time Division Multiple Access) systems, such as GSM and LTE, to synchronize the mobile phone to the cellular BS (Base Station). Mobile phone positioning can use TA measurements if BS positions are known, but in many cases BS positions are not in the public domain. In this work we study how to use a set of TA measurements taken by mobile phones at known positions to estimate the position of a BS. This paper describes two methods -- the GMF (Gaussian Mixture Filter) and the PMF (Point Mass Filter) -- for estimation of the BS position. Positioning performance is evaluated using simulated and real measurements. In suburban field tests, TA measurements suffice to determine the BS position with an error comparable to the TA granularity (550 m). The GMF computes the BS position much faster than the PMF and is only slightly less accurate.
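
    To illustrate the point-mass-filter idea described above (this is not the authors' implementation), the sketch below grids candidate base-station positions and down-weights any candidate whose distance to a phone is inconsistent with the corresponding 550 m TA interval; the grid extent, cell size, slack term, and function names are assumptions.

        import numpy as np

        TA_STEP = 550.0  # metres per timing-advance step in GSM

        def pmf_bs_position(phone_xy, ta_values, grid_half_size=5000.0, cell=50.0, slack=1.5):
            """Point-mass estimate of a base-station position from TA measurements.
            phone_xy  : (N, 2) known phone positions [m]
            ta_values : (N,) integer TA values measured at those positions
            A candidate keeps full weight only if its distance to every phone position
            lies in the TA interval, loosened by `slack` * TA_STEP to absorb noise."""
            xs = np.arange(-grid_half_size, grid_half_size, cell)
            gx, gy = np.meshgrid(xs, xs)
            centre = phone_xy.mean(axis=0)
            cand = np.stack([gx.ravel() + centre[0], gy.ravel() + centre[1]], axis=1)
            log_w = np.zeros(len(cand))
            for (px, py), ta in zip(phone_xy, ta_values):
                dist = np.hypot(cand[:, 0] - px, cand[:, 1] - py)
                lo = ta * TA_STEP - slack * TA_STEP
                hi = (ta + 1) * TA_STEP + slack * TA_STEP
                log_w += np.where((dist >= lo) & (dist <= hi), 0.0, -50.0)  # soft indicator
            w = np.exp(log_w - log_w.max())
            w /= w.sum()
            return (w[:, None] * cand).sum(axis=0)  # weighted-mean position estimate

        # Toy example: true BS at (1200, -800) m, phones scattered over a few km
        rng = np.random.default_rng(0)
        bs = np.array([1200.0, -800.0])
        phones = rng.uniform(-3000, 3000, size=(25, 2))
        ta = np.floor(np.hypot(*(phones - bs).T) / TA_STEP).astype(int)
        print(pmf_bs_position(phones, ta))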

  11. Advanced aeroservoelastic stabilization techniques for hypersonic flight vehicles

    NASA Technical Reports Server (NTRS)

    Chan, Samuel Y.; Cheng, Peter Y.; Myers, Thomas T.; Klyde, David H.; Magdaleno, Raymond E.; Mcruer, Duane T.

    1992-01-01

    Advanced high-performance vehicles, including Single-Stage-To-Orbit (SSTO) hypersonic flight vehicles, that are statically unstable require higher-bandwidth flight control systems to compensate for the instability, resulting in interactions between the flight control system, the engine/propulsion dynamics, and the low-frequency structural modes. Military specifications, such as MIL-F-9490D and MIL-F-87242, tend to limit treatment of structural modes to conventional gain stabilization techniques. The conventional gain stabilization techniques, however, introduce low-frequency effective time delays which can be troublesome from a flying qualities standpoint. These time delays can be alleviated by appropriate blending of gain and phase stabilization techniques (referred to as Hybrid Phase Stabilization or HPS) for the low-frequency structural modes. The potential of using HPS for compensating structural mode interaction was previously explored. It was shown that effective time delay was significantly reduced with the use of HPS; however, the HPS design was seen to have greater residual response than a conventional gain-stabilized design. Additional work performed to advance and refine the HPS design procedure, to further develop residual response metrics as a basis for alternative structural stability specifications, and to develop strategies for validating HPS design and specification concepts in manned simulation is presented. Stabilization design sensitivity to structural uncertainties and aircraft-centered requirements are also assessed.

  12. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  13. Testing aspects of advanced coherent electron cooling technique

    SciTech Connect

    Litvinenko, V.; Jing, Y.; Pinayev, I.; Wang, G.; Samulyak, R.; Ratner, D.

    2015-05-03

    An advanced version of the Coherent-electron Cooling (CeC) based on the micro-bunching instability was proposed. This approach promises significant increase in the bandwidth of the CeC system and, therefore, significant shortening of cooling time in high-energy hadron colliders. In this paper we present our plans of simulating and testing the key aspects of this proposed technique using the set-up of the coherent-electron-cooling proof-of-principle experiment at BNL.

  14. Age estimation based on Kvaal's technique using digital panoramic radiographs

    PubMed Central

    Mittal, Samta; Nagendrareddy, Suma Gundareddy; Sharma, Manisha Lakhanpal; Agnihotri, Poornapragna; Chaudhary, Sunil; Dhillon, Manu

    2016-01-01

    Introduction: Age estimation is important for administrative and ethical reasons and also because of legal consequences. Dental pulp undergoes regression in size with increasing age due to secondary dentin deposition and can be used as a parameter of age estimation even beyond 25 years of age. Kvaal et al. developed a method for chronological age estimation based on pulp size using periapical dental radiographs. There is a need for testing this method of age estimation in the Indian population using simple tools like digital imaging on living individuals, without requiring extraction of teeth. Aims and Objectives: Estimation of the chronological age of subjects by Kvaal's method using digital panoramic radiographs, and testing of the validity of the regression equations given by Kvaal et al. Materials and Methods: The study sample included a total of 152 subjects in the age group of 14-60 years. Measurements were performed on standardized digital panoramic radiographs based on Kvaal's method. Different regression formulae were derived and the age was assessed. The assessed age was then compared to the actual age of the patient using Student's t-test. Results: No significant difference between the mean of the chronological age and the estimated age was observed. However, the mean ages estimated by using the regression equations given previously by Kvaal et al. significantly underestimated the chronological age in the present study sample. Conclusion: The results support the feasibility of this technique when regression equations are calculated on digital panoramic radiographs. However, they negate the applicability of the same regression equations as given by Kvaal et al. to the study population. PMID:27555738
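
    The study's two findings (previously published coefficients under-estimate age on panoramic images; sample-specific regressions are feasible) can be mimicked with a small numerical sketch. All ratio values, ages, and coefficients below are invented placeholders, not Kvaal's equations or the study's data.

        import numpy as np
        from scipy import stats

        # Hypothetical data: one row per subject, columns are Kvaal-type pulp/tooth
        # ratios (mean ratio M, width-length difference W-L); every value is invented.
        ratios = np.array([[0.38, 0.12], [0.31, 0.09], [0.27, 0.07], [0.35, 0.11],
                           [0.24, 0.06], [0.29, 0.08], [0.33, 0.10], [0.22, 0.05]])
        chronological_age = np.array([19.0, 31.0, 44.0, 25.0, 52.0, 38.0, 28.0, 57.0])
        X = np.column_stack([np.ones(len(ratios)), ratios])

        # Step 1: apply a "previously published" formula (placeholder coefficients,
        # not Kvaal's) and test whether it systematically mis-estimates age here.
        published = np.array([95.0, -180.0, -120.0])        # age = b0 + b1*M + b2*(W-L)
        age_published = X @ published
        t_stat, p_value = stats.ttest_rel(age_published, chronological_age)
        print("mean bias of published formula:",
              round(float(np.mean(age_published - chronological_age)), 1),
              "years, p =", round(float(p_value), 3))

        # Step 2: derive sample-specific coefficients by least squares, which is what
        # the study recommends when the population or imaging modality changes.
        coeffs, *_ = np.linalg.lstsq(X, chronological_age, rcond=None)
        print("re-derived coefficients:", np.round(coeffs, 1))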

  15. Recent advances in UHV techniques for particle accelerators

    SciTech Connect

    M. G. Rao

    1995-01-01

    The ultrahigh vacuum (UHV) requirements for storage rings and accelerators, and the development of the science and technology of UHV for particle accelerators and magnetic fusion devices, have recently been reviewed by N.B. Mistry and H.F. Dylla, respectively. In this paper, the latest developments in the advancement of UHV techniques for the vacuum integrity of the Continuous Electron Beam Accelerator Facility (CEBAF), and for successfully dealing with the synchrotron-radiation-related beam line vacuum problem encountered in the design of the SSC, are reviewed. The review includes developments in an extreme-sensitivity He leak detection technique based on the dynamic adsorption and desorption of He, operation of ionization gauges at LHe temperatures, metal sponges for the effective cryopumping of H2 and He to pressures better than 10^-14 torr, and low-cost, high-He-sensitivity RGAs. The details of a new extreme-sensitivity He leak detector system are also discussed here.

  16. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; Marconcini, Mattia; Tilton, James C.; Trianni, Giovanna

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state of the art in these areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  17. Advanced bronchoscopic techniques in diagnosis and staging of lung cancer.

    PubMed

    Zaric, Bojan; Stojsic, Vladimir; Sarcev, Tatjana; Stojanovic, Goran; Carapic, Vladimir; Perin, Branislav; Zarogoulidis, Paul; Darwiche, Kaid; Tsakiridis, Kosmas; Karapantzos, Ilias; Kesisis, Georgios; Kougioumtzi, Ioanna; Katsikogiannis, Nikolaos; Machairiotis, Nikolaos; Stylianaki, Aikaterini; Foroulis, Christophoros N; Zarogoulidis, Konstantinos

    2013-09-01

    The role of advanced bronchoscopic diagnostic techniques in the detection and staging of lung cancer has increased steeply in recent years. Bronchoscopic imaging techniques have become widely available and easy to use. Technical improvements have led to the merging of technologies, so that autofluorescence or narrow band imaging can be incorporated into one bronchoscope. New tools, such as autofluorescence imaging (AFI), narrow band imaging (NBI), and Fuji intelligent chromoendoscopy (FICE), have found their place in respiratory endoscopy suites. Development of endobronchial ultrasound (EBUS) has improved minimally invasive mediastinal staging and the diagnosis of peripheral lung lesions. Linear EBUS has proven to be complementary to mediastinoscopy. This technique is now available in almost all high-volume centers performing bronchoscopy. Radial EBUS with mini-probes and guiding sheaths provides accurate diagnosis of peripheral pulmonary lesions. Combining EBUS-guided procedures with rapid on-site cytology (ROSE) increases the diagnostic yield even more. Electromagnetic navigation (EMN) technology is also widely used for the diagnosis of peripheral lesions. Future development will certainly lead to new improvements in technology and the creation of new sophisticated tools for research in respiratory endoscopy. Bronchomicroscopy, alveoloscopy, and optical coherence tomography are some of the new research techniques emerging from rapid technological development.

  18. Mass estimating techniques for earth-to-orbit transports with various configuration factors and technologies applied

    NASA Technical Reports Server (NTRS)

    Klich, P. J.; Macconochie, I. O.

    1979-01-01

    A study of an array of advanced earth-to-orbit space transportation systems, with a focus on mass properties and technology requirements, is presented. Methods of estimating the weights of these vehicles differ from those used for commercial and military aircraft; the new techniques, which emphasize winged horizontal- and vertical-takeoff advanced systems, are described using the Space Shuttle subsystem data base for the weight-estimating equations. The weight equations require information on the mission profile, the structural materials, the thermal protection system, and the ascent propulsion system, allowing for the type of construction and various propellant tank shapes. The overall system weights are calculated using this information and incorporated into the Systems Engineering Mass Properties Computer Program.

  19. Real time estimation of ship motions using Kalman filtering techniques

    NASA Technical Reports Server (NTRS)

    Triantafyllou, M. S.; Bodson, M.; Athans, M.

    1983-01-01

    The estimation of the heave, pitch, roll, sway, and yaw motions of a DD-963 destroyer is studied, using Kalman filtering techniques, for application in VTOL aircraft landing. The governing equations are obtained from hydrodynamic considerations in the form of linear differential equations with frequency dependent coefficients. In addition, nonminimum phase characteristics are obtained due to the spatial integration of the water wave forces. The resulting transfer matrix function is irrational and nonminimum phase. The conditions for a finite-dimensional approximation are considered and the impact of the various parameters is assessed. A detailed numerical application for a DD-963 destroyer is presented and simulations of the estimations obtained from Kalman filters are discussed.
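
    The paper's contribution is a finite-dimensional approximation of a frequency-dependent, nonminimum-phase hydrodynamic model, which is not reproduced here. The sketch below shows only the generic discrete-time Kalman recursion it relies on, applied to a toy heave oscillator with invented parameters.

        import numpy as np

        def kalman_filter(z, F, H, Q, R, x0, P0):
            """Generic discrete-time Kalman filter; returns the filtered state history."""
            x, P, out = x0.copy(), P0.copy(), []
            for zk in z:
                x = F @ x                              # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + R                    # update
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (zk - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
                out.append(x.copy())
            return np.array(out)

        # Toy heave model: lightly damped oscillator (position, velocity) observed
        # through noisy position measurements; parameters are illustrative only.
        dt, wn, zeta = 0.25, 0.8, 0.1
        A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
        F = np.eye(2) + dt * A                          # simple Euler discretisation
        H = np.array([[1.0, 0.0]])
        Q = np.diag([1e-4, 1e-2])
        R = np.array([[0.2 ** 2]])

        rng = np.random.default_rng(1)
        t = np.arange(0, 60, dt)
        true_heave = 1.5 * np.sin(wn * t)
        z = true_heave[:, None] + rng.normal(0, 0.2, size=(len(t), 1))
        est = kalman_filter(z, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
        print("RMS measurement error:", round(float(np.std(z[:, 0] - true_heave)), 3))
        print("RMS filtered error:   ", round(float(np.std(est[:, 0] - true_heave)), 3))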

  20. Aerodynamic parameter estimation via Fourier modulating function techniques

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1995-01-01

    Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.

  1. Multichannel SAR Interferometry via Classical and Bayesian Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Budillon, Alessandra; Ferraiuolo, Giancarlo; Pascazio, Vito; Schirinzi, Gilda

    2005-12-01

    Some multichannel synthetic aperture radar interferometric configurations are analyzed. Both across-track and along-track interferometric systems, which allow recovery of the height profile of the ground or of the moving-target radial velocities, respectively, are considered. The joint use of multichannel configurations, which can be either multifrequency or multi-baseline, and of classical or Bayesian statistical estimation techniques makes it possible to obtain very accurate solutions and to overcome the limitations due to the presence of ambiguous solutions, intrinsic in the single-channel configurations. The improved performance of the multichannel-based methods with respect to the corresponding single-channel ones has been tested with numerical experiments on simulated data.

  2. Estimation and filtering techniques for high-accuracy GPS applications

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1989-01-01

    Techniques for determination of very precise orbits for satellites of the Global Positioning System (GPS) are currently being studied and demonstrated. These techniques can be used to make cm-accurate measurements of station locations relative to the geocenter, monitor earth orientation over timescales of hours, and provide tropospheric and clock delay calibrations during observations made with deep space radio antennas at sites where the GPS receivers have been collocated. For high-earth orbiters, meter-level knowledge of position will be available from GPS, while at low altitudes, sub-decimeter accuracy will be possible. Estimation of satellite orbits and other parameters such as ground station positions is carried out with a multi-satellite batch sequential pseudo-epoch state process noise filter. Both square-root information filtering (SRIF) and UD-factorized covariance filtering formulations are implemented in the software.

  3. Tools and techniques for estimating high intensity RF effects

    NASA Technical Reports Server (NTRS)

    Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.

    1992-01-01

    Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, used to predict coupling is described. This code can quickly generate a mesh model to represent the test object. Some recent applications as well as the advantages and limitations of using such a code are described. Facilities and techniques for making low-power coupling measurements and for making direct-injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented. A method for extrapolating these low-power test results to high-power full-system effects is presented.

  4. Precise estimation of tropospheric path delays with GPS techniques

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1990-01-01

    Tropospheric path delays are a major source of error in deep space tracking. However, the tropospheric-induced delay at tracking sites can be calibrated using measurements of Global Positioning System (GPS) satellites. A series of experiments has demonstrated the high sensitivity of GPS to tropospheric delays. A variety of tests and comparisons indicates that current accuracy of the GPS zenith tropospheric delay estimates is better than 1-cm root-mean-square over many hours, sampled continuously at intervals of six minutes. These results are consistent with expectations from covariance analyses. The covariance analyses also indicate that by the mid-1990s, when the GPS constellation is complete and the Deep Space Network is equipped with advanced GPS receivers, zenith tropospheric delay accuracy with GPS will improve further to 0.5 cm or better.

  5. Improved Battery State Estimation Using Novel Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Abdul Samad, Nassim

    Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density capabilities, among other advantages. However, these types of energy storage devices are still not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery, away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health, or the capacity fading of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence on the battery performance (e

  6. Estimation of Insulator Contaminations by Means of Remote Sensing Technique

    NASA Astrophysics Data System (ADS)

    Han, Ge; Gong, Wei; Cui, Xiaohui; Zhang, Miao; Chen, Jun

    2016-06-01

    The accurate estimation of deposits adhering to insulators is critical to prevent pollution flashovers, which cause huge costs worldwide. The traditional evaluation method for insulator contaminations (IC) is based on sparse manual in-situ measurements, resulting in insufficient spatial representativeness and poor timeliness. Filling that gap, we proposed a novel evaluation framework for IC based on remote sensing and data mining. A variety of products derived from satellite data, such as aerosol optical depth (AOD), digital elevation model (DEM), land use/land cover, and normalized difference vegetation index, were obtained to estimate the severity of IC, along with the necessary field investigation inventory (pollution sources, ambient atmosphere, and meteorological data). Rough set theory was utilized to minimize the input sets under the prerequisite that the resultant set is equivalent to the full sets in terms of the decision ability to distinguish severity levels of IC. We found that AOD, the strength of pollution sources, and precipitation are the top 3 decisive factors for estimating insulator contaminations. On that basis, different classification algorithms such as Mahalanobis minimum distance, support vector machine (SVM), and maximum likelihood were utilized to estimate severity levels of IC. 10-fold cross-validation was carried out to evaluate the performances of the different methods. SVM yielded the best overall accuracy among the three algorithms; an overall accuracy of more than 70% was achieved, suggesting a promising application of remote sensing in power maintenance. To our knowledge, this is the first trial to introduce remote sensing and relevant data analysis techniques into the estimation of electrical insulator contaminations.
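
    As an illustration of the classification-with-cross-validation step described above (not the paper's data or model), a scikit-learn pipeline can score an RBF support vector machine with 10-fold cross-validation. The synthetic predictors and the labelling rule below are invented; real severity labels would come from field surveys.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-ins for the paper's predictors: AOD, pollution-source
        # strength, precipitation (the three decisive factors), plus DEM and NDVI.
        rng = np.random.default_rng(42)
        n = 300
        X = np.column_stack([
            rng.gamma(2.0, 0.3, n),      # aerosol optical depth
            rng.uniform(0, 1, n),        # pollution-source strength (normalised)
            rng.gamma(2.0, 40.0, n),     # precipitation [mm]
            rng.uniform(0, 2000, n),     # elevation [m]
            rng.uniform(0, 1, n),        # NDVI
        ])
        # Invented rule: contamination rises with AOD and source strength, falls with rain.
        score = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.3, n)
        y = np.digitize(score, np.quantile(score, [0.33, 0.66]))  # three severity levels

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"10-fold CV accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")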

  7. Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters

    SciTech Connect

    Iliescu, Bogdan; Haskal, Ziv J.

    2012-08-15

    Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters than can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or cava during IVC retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches as well as illustrates their specific application.

  8. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    PubMed

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated.

  9. Effective wind speed estimation: Comparison between Kalman Filter and Takagi-Sugeno observer techniques.

    PubMed

    Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst

    2016-05-01

    Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. PMID:26725505

  10. Investigation of Models and Estimation Techniques for GPS Attitude Determination

    NASA Technical Reports Server (NTRS)

    Garrick, J.

    1996-01-01

    Much work has been done in the Flight Dynamics Analysis Branch (FDAB) in developing algorithms to meet the new and growing field of attitude determination using the Global Positioning System (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that will produce the adaptation needed to allow the Flight Dynamics Support System (FDSS) to incorporate GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the measurement data needed to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current modeling capabilities of the simulator, the models and algorithms for the adaptation of GPS measurement data, and results from each of the estimation techniques. Future analysis efforts to evaluate the simulator and models against in-flight GPS measurement data are also outlined.

  11. Estimation of Alpine Skier Posture Using Machine Learning Techniques

    PubMed Central

    Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej

    2014-01-01

    High precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and skis trajectories based on a more faithful approximation of the skier's body with nine degrees-of-freedom. The first method utilizes a well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using the reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform the results of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492

  12. Techniques for developing approximate optimal advanced launch system guidance

    NASA Technical Reports Server (NTRS)

    Feeley, Timothy S.; Speyer, Jason L.

    1991-01-01

    An extension to the authors' previous technique used to develop a real-time guidance scheme for the Advanced Launch System is presented. The approach is to construct an optimal guidance law based upon an asymptotic expansion associated with small physical parameters, epsilon. The trajectory of a rocket modeled as a point mass is considered with the flight restricted to an equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of this problem can be separated into primary effects due to thrust and gravitational forces, and perturbation effects which include the aerodynamic forces and the remaining inertial forces. An analytic solution to the reduced-order problem represented by the primary dynamics is possible. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form.

  13. Neurocysticercosis: evaluation with advanced magnetic resonance techniques and atypical forms.

    PubMed

    do Amaral, Lázaro Luís Faria; Ferreira, Rafael Martins; da Rocha, Antônio José; Ferreira, Nelson Paes Diniz Fortes

    2005-04-01

    Neurocysticercosis (NCC) is the most common helminthic infection of the central nervous system, but its diagnosis remains difficult. The purpose of this article is to perform a critical analysis of the literature and show our experience in the evaluation of NCC. We discuss the advanced MR technique applications such as diffusion and perfusion-weighted imaging, spectroscopy, cisternography with FLAIR, and supplemental O2 and 3D-CISS. The typical manifestations of NCC are described; emphasis is given to the unusual presentations. The atypical forms of neurocysticercosis were divided into: intraventricular, subarachnoid, spinal, orbital, and intraparenchymatous. Special attention was also given to reactivation of previously calcified lesions and neurocysticercosis associated with mesial temporal sclerosis.

  14. COAL AND CHAR STUDIES BY ADVANCED EMR TECHNIQUES

    SciTech Connect

    R. Linn Belford; Robert B. Clarkson; Mark J. Nilges; Boris M. Odintsov; Alex I. Smirnov

    2001-04-30

    Advanced electron magnetic resonance (EMR) as well as nuclear magnetic resonance (NMR) methods have been used to examine properties of coals, chars, and molecular species related to constituents of coal. During the span of this grant, progress was made on the construction and application to coals and chars of two high-frequency EMR systems particularly appropriate for such studies -- 48 GHz and 95 GHz electron magnetic resonance spectrometers -- on new low-frequency dynamic nuclear polarization (DNP) experiments to examine the interaction between water and the surfaces of suspended char particulates in slurries, and on a variety of proton nuclear magnetic resonance (NMR) techniques to measure characteristics of the water directly in contact with the surfaces and pore spaces of carbonaceous particulates.

  15. Advanced Fibre Bragg Grating and Microfibre Bragg Grating Fabrication Techniques

    NASA Astrophysics Data System (ADS)

    Chung, Kit Man

    Fibre Bragg gratings (FBGs) have become a very important technology for communication systems and fibre optic sensing. Typically, FBGs are less than 10 mm long and are fabricated using fused silica uniform phase masks, which become more expensive for longer lengths or non-uniform pitch. Generally, interfering UV laser beams are employed to make long or complex FBGs, and this technique introduces critical precision and control issues. In this work, we demonstrate an advanced FBG fabrication system that enables the writing of long and complex gratings in optical fibres with virtually any apodisation profile, local phase, and Bragg wavelength, using a novel optical design in which the incident angles of two UV beams onto an optical fibre can be adjusted simultaneously by moving just one optical component, instead of the two optics employed in earlier configurations, to vary the grating pitch. The key advantage of the grating fabrication system is that complex gratings can be fabricated by controlling the linear movements of two translation stages. In addition to the study of the advanced grating fabrication technique, we also focus on the inscription of FBGs written in optical fibres with a cladding diameter of several tens of microns. Fabrication of microfibres was investigated using a sophisticated tapering method. We also propose a simple but practical technique to filter out the higher-order modes reflected from the FBG written in microfibres via a linear taper region, while the fundamental mode re-couples to the core. By using this technique, reflection from the microfibre Bragg grating (MFBG) can be effectively single mode, simplifying the demultiplexing and demodulation processes. The MFBG exhibits high sensitivity to contact force, and an MFBG-based force sensor was constructed and tested to investigate its suitability for use as an invasive surgery device. Performance of the contact force sensor packaged in a conforming elastomer material compares favourably to one

  16. Multiple advanced surgical techniques to treat acquired seminal duct obstruction

    PubMed Central

    Jiang, Hong-Tao; Yuan, Qian; Liu, Yu; Liu, Zeng-Qin; Zhou, Zhen-Yu; Xiao, Ke-Feng; Yang, Jiang-Gen

    2014-01-01

    The aim of this study was to evaluate the outcomes of multiple advanced surgical treatments (i.e. microsurgery, laparoscopic surgery and endoscopic surgery) for acquired obstructive azoospermia. We analyzed the surgical outcomes of 51 patients with suspected acquired obstructive azoospermia consecutively who enrolled at our center between January 2009 and May 2013. Modified vasoepididymostomy, laparoscopically assisted vasovasostomy and transurethral incision of the ejaculatory duct with holmium laser were chosen and performed based on the different obstruction sites. The mean postoperative follow-up time was 22 months (range: 9 months to 52 months). Semen analyses were initiated at four postoperative weeks, followed by trimonthly (months 3, 6, 9 and 12) semen analyses, until no sperm was found at 12 months or until pregnancy was achieved. Patency was defined as >10,000 sperm ml−1 of semen. The obstruction sites, postoperative patency and natural pregnancy rate were recorded. Of 51 patients, 47 underwent bilateral or unilateral surgical reconstruction; the other four patients were unable to be treated with surgical reconstruction because of pelvic vas or intratesticular tubules obstruction. The reconstruction rate was 92.2% (47/51), and the patency rate and natural pregnancy rate were 89.4% (42/47) and 38.1% (16/42), respectively. No severe complications were observed. Using multiple advanced surgical techniques, more extensive range of seminal duct obstruction was accessible and correctable; thus, a favorable patency and pregnancy rate can be achieved. PMID:25337841

  17. Using support vector machines in the multivariate state estimation technique

    SciTech Connect

    Zavaljevski, N.; Gross, K.C.

    1999-07-01

    One approach to validate nuclear power plant (NPP) signals makes use of pattern recognition techniques. This approach often assumes that there is a set of signal prototypes that are continuously compared with the actual sensor signals. These signal prototypes are often computed based on empirical models with little or no knowledge about physical processes. A common problem of all data-based models is their limited ability to make predictions on the basis of available training data. Another problem is related to suboptimal training algorithms. Both of these potential shortcomings with conventional approaches to signal validation and sensor operability validation are successfully resolved by adopting a recently proposed learning paradigm called the support vector machine (SVM). The work presented here is a novel application of SVM for data-based modeling of system state variables in an NPP, integrated with a nonlinear, nonparametric technique called the multivariate state estimation technique (MSET), an algorithm developed at Argonne National Laboratory for a wide range of nuclear plant applications.

  18. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, etc. deviations from referenced values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  19. Advances in the Rising Bubble Technique for discharge measurement

    NASA Astrophysics Data System (ADS)

    Hilgersom, Koen; Luxemburg, Willem; Willemsen, Geert; Bussmann, Luuk

    2014-05-01

    Already in the 19th century, d'Auria described a discharge measurement technique that applies floats to find the depth-integrated velocity (d'Auria, 1882). The basis of this technique was that the horizontal distance that the float travels on its way to the surface is the image of the velocity profile integrated over depth. Viol and Semenov (1964) improved this method by using air bubbles as floats, but distances were still measured manually until Sargent (1981) introduced a technique that could derive the distances from two photographs taken simultaneously from each side of the river bank. Recently, modern image processing techniques proved to further improve the applicability of the method (Hilgersom and Luxemburg, 2012). In the 2012 article, controlling and determining the rising velocity of an air bubble still appeared to be a major challenge for the application of this method. Since then, laboratory experiments with different nozzle and tube sizes have led to advances in our self-made equipment, enabling us to produce individual air bubbles with a more constant rising velocity. We also introduced an underwater camera to determine the rising velocity on site; this velocity depends on the water temperature and contamination and is therefore site-specific. Camera measurements of the rising velocity proved successful in laboratory and field settings, although some improvements to the setup are necessary to capture the air bubbles at depths where little daylight penetrates. References: D'Auria, L.: Velocity of streams; A new method to determine correctly the mean velocity of any perpendicular in rivers and canals, (The) American Engineers, 3, 1882. Hilgersom, K.P. and Luxemburg, W.M.J.: Technical Note: How image processing facilitates the rising bubble technique for discharge measurement, Hydrology and Earth System Sciences, 16(2), 345-356, 2012. Sargent, D.: Development of a viable method of stream flow measurement using the integrating float technique, Proceedings of
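
    The principle stated above (the downstream distance at which the bubble surfaces images the depth-integrated velocity) reduces to a short calculation: the unit discharge at each vertical is the rising velocity times the surfacing distance, and the total discharge follows by lateral integration. All numbers in the sketch below are invented.

        import numpy as np

        def rising_bubble_discharge(positions_across, displacements, v_rise):
            """Total discharge [m^3/s] from rising-bubble observations.
            positions_across : lateral positions of the nozzles across the section [m]
            displacements    : downstream surfacing distance of the bubble at each nozzle [m]
            v_rise           : bubble rising velocity [m/s], determined on site
            At each vertical, unit discharge = v_rise * displacement; integrate laterally
            with the trapezoidal rule."""
            q = v_rise * np.asarray(displacements, dtype=float)
            yv = np.asarray(positions_across, dtype=float)
            return float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(yv)))

        # Illustrative numbers only: a 12 m wide channel, rise velocity 0.22 m/s,
        # surfacing distances read from the processed images.
        y = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
        L = [0.0, 1.8, 2.9, 3.4, 3.1, 2.0, 0.0]
        print(f"Q = {rising_bubble_discharge(y, L, 0.22):.2f} m^3/s")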

  20. Carrier Estimation Using Classic Spectral Estimation Techniques for the Proposed Demand Assignment Multiple Access Service

    NASA Technical Reports Server (NTRS)

    Scaife, Bradley James

    1999-01-01

    In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown, then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary phase-shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectrally matched-filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we describe an all-digital hardware design based around a Motorola DSP56303 and a high-speed A/D converter which implements this technique in real time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
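
    The core of such an estimator, averaging periodogram blocks and picking the dominant bin, can be sketched in a few lines. The snippet below is a generic averaged-DFT peak picker, not the thesis' real-time DSP56303 implementation, and it omits the spectral matched-filtering step.

      import numpy as np

      def estimate_carrier(x, fs, n_fft=1024):
          """Estimate a dominant carrier frequency by averaging DFT magnitudes
          over successive blocks and locating the spectral peak.
          x  : complex or real baseband samples
          fs : sample rate [Hz]
          """
          n_blocks = max(1, len(x) // n_fft)
          psd = np.zeros(n_fft)
          for k in range(n_blocks):
              block = x[k * n_fft:(k + 1) * n_fft]
              psd += np.abs(np.fft.fft(block, n_fft)) ** 2   # accumulate periodograms
          psd /= n_blocks
          freqs = np.fft.fftfreq(n_fft, d=1.0 / fs)
          return freqs[np.argmax(psd)]                       # frequency of the peak bin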

  1. Soil Moisture Estimation under Vegetation Applying Polarimetric Decomposition Techniques

    NASA Astrophysics Data System (ADS)

    Jagdhuber, T.; Schön, H.; Hajnsek, I.; Papathanassiou, K. P.

    2009-04-01

    Polarimetric decomposition techniques and inversion algorithms are developed and applied on the OPAQUE data set acquired in spring 2007 to investigate their potential and limitations for soil moisture estimation. A three-component model-based decomposition is used together with an eigenvalue decomposition in a combined approach to invert for soil moisture over bare and vegetated soils at L-band. The applied approach indicates a feasible capability to invert soil moisture after decomposing volume and ground scattering components over agricultural land surfaces. But there are still deficiencies in modeling the volume disturbance. The results show a root mean square error below 8.5 vol.% for the winter crop fields (winter wheat, winter triticale and winter barley) and below 11.5 vol.% for the summer crop field (summer barley), whereas all fields have a distinct volume layer of 55-85 cm height.

  2. Tools and techniques for estimating high intensity RF effects

    SciTech Connect

    Zacharias, R.; Pennock, S.; Poggio, A.; Ray, S.

    1991-07-01

    With the ever-increasing dependence of modern aircraft on sophisticated avionics and electronic controls, the need to assure aircraft survivability when exposed to High Intensity RF (HIRF) signals has become of great interest. Advisory regulation is currently being proposed which would require testing and/or analysis to assure RF hardness of installed flight-critical and flight-essential equipment. While full-aircraft, full-threat testing may be the most thorough manner to assure survivability, it is not generally practical in terms of cost. Various combinations of limited full-aircraft testing, box-level testing, modeling, and analysis are also being considered as methods to achieve compliance. Modeling, analysis, and low-power measurements may hold the key to making full-system survivability estimates at reasonable cost. In this paper we describe some of the tools and techniques we use for estimating and measuring coupling and component disturbance. A finite-difference time-domain modeling code, TSAR, used to predict coupling will be described. This code has the capability to quickly generate a mesh model to represent the test object. Some recent applications as well as the advantages and limitations of using such a code will be described. We will also describe some of the facilities and techniques we have developed for making low-power coupling measurements and for making direct-injection test measurements of device disturbance. Some scaling laws for coupling and device effects will be presented. A method to extrapolate these low-power test results to high-power full-system effects will be presented.

  3. Robust quantitative parameter estimation by advanced CMP measurements for vadose zone hydrological studies

    NASA Astrophysics Data System (ADS)

    Koyama, C.; Wang, H.; Khuut, T.; Kawai, T.; Sato, M.

    2015-12-01

    Soil moisture plays a crucial role in the understanding of processes in vadose zone hydrology. In the last two decades, ground penetrating radar (GPR) has been widely discussed as a nondestructive measurement technique for soil moisture data. In particular the common mid-point (CMP) technique, which has been used in both seismic and GPR surveys to investigate vertical velocity profiles, has a very high potential for quantitative observations from the root zone to the groundwater aquifer. However, its use is still rather limited today, and algorithms for robust quantitative parameter estimation are lacking. In this study we develop an advanced processing scheme for operational soil moisture retrieval at various depths. Using improved signal processing together with a semblance and non-normalized cross-correlation sum combined stacking approach and the Dix formula, the interval velocities for multiple soil layers are obtained from the RMS velocities, allowing for more accurate estimation of the permittivity at the reflecting point. The presence of a water-saturated layer, such as a groundwater aquifer, can easily be identified by its RMS velocity due to the high contrast compared to the unsaturated zone. By using a new semi-automated measurement technique, the acquisition time for a full CMP gather with 1 cm intervals along a 10 m profile can be reduced significantly to under 2 minutes. The method is tested and validated under laboratory conditions in a sand pit as well as on agricultural fields and beach sand in the Sendai city area. Comparison between CMP estimates and TDR measurements yields a very good agreement with an RMSE of 1.5 vol.%. The accuracy of depth estimation is validated with errors smaller than 2%. Finally, we demonstrate application of the method in a test site in semi-arid Mongolia, namely the Orkhon River catchment in Bulgan, using commercial 100 MHz and 500 MHz RAMAC GPR antennas. The results demonstrate the suitability of the proposed method for
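
    The velocity-to-permittivity step is compact enough to sketch. Given two-way travel times and RMS velocities picked from the CMP semblance analysis, the Dix formula yields interval velocities, and a low-loss approximation converts these to relative permittivity (from which soil moisture can then be estimated with a petrophysical relation). The snippet below is a generic illustration, not the authors' processing code.

      import numpy as np

      C = 0.2998  # speed of light in vacuum [m/ns]

      def dix_interval_velocities(t, v_rms):
          """Dix conversion from RMS to interval velocities, plus permittivity.
          t     : two-way travel times of the picked reflectors [ns]
          v_rms : corresponding RMS velocities [m/ns]
          """
          t = np.asarray(t, dtype=float)
          v = np.asarray(v_rms, dtype=float)
          # Dix formula for layers 2..N
          num = v[1:] ** 2 * t[1:] - v[:-1] ** 2 * t[:-1]
          v_int = np.sqrt(num / (t[1:] - t[:-1]))
          v_int = np.concatenate(([v[0]], v_int))   # first layer: interval = RMS
          eps_r = (C / v_int) ** 2                  # low-loss approximation
          return v_int, eps_r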

  4. Advanced fabrication techniques for hydrogen-cooled engine structures

    NASA Technical Reports Server (NTRS)

    Buchmann, O. A.; Arefian, V. V.; Warren, H. A.; Vuigner, A. A.; Pohlman, M. J.

    1985-01-01

    Described is a program for development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples were constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19000 cycles for the channels and 16000 cycles for pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.

  5. A review of hemorheology: Measuring techniques and recent advances

    NASA Astrophysics Data System (ADS)

    Sousa, Patrícia C.; Pinho, Fernando T.; Alves, Manuel A.; Oliveira, Mónica S. N.

    2016-02-01

    Significant progress has been made over the years on the topic of hemorheology, not only in terms of the development of more accurate and sophisticated techniques, but also in terms of understanding the phenomena associated with blood components, their interactions and impact upon blood properties. The rheological properties of blood are strongly dependent on the interactions and mechanical properties of red blood cells, and a variation of these properties can bring further insight into the human health state and can be an important parameter in clinical diagnosis. In this article, we provide both a reference for hemorheological research and a resource regarding the fundamental concepts in hemorheology. This review is aimed at those starting in the field of hemodynamics, where blood rheology plays a significant role, but also at those in search of the most up-to-date findings (both qualitative and quantitative) in hemorheological measurements and novel techniques used in this context, including technical advances under more extreme conditions such as in large amplitude oscillatory shear flow or under extensional flow, which impose large deformations comparable to those found in the microcirculatory system and in diseased vessels. Given the impressive rate of increase in the available knowledge on blood flow, this review is also intended to identify areas where current knowledge is still incomplete, and which have the potential for new, exciting and useful research. We also discuss the most important parameters that can lead to an alteration of blood rheology, and which as a consequence can have a significant impact on the normal physiological behavior of blood.

  6. Advances in Poly(4-aminodiphenylaniline) Nanofibers Preparation by Electrospinning Technique.

    PubMed

    Della Pina, C; Busacca, C; Frontera, P; Antonucci, P L; Scarpino, L A; Sironi, A; Falletta, E

    2016-05-01

    Polyaniline (PANI) nanofibers are drawing a great deal of interest from academia and industry due to their multiple applications, especially in the biomedical field. PANI nanofibers were successfully electrospun for the first time by MacDiarmid and co-workers at the beginning of the millennium, and since then many efforts have been addressed to improving their quality. However, traditional PANI prepared from aniline monomer shows some drawbacks, such as the presence of toxic (i.e., benzidine) and inorganic (salts and metals) co-products, which complicate polymer post-treatment, and low solubility in common organic solvents, which makes its processing by the electrospinning technique difficult. Some industrial sectors, such as the medical and biomedical ones, need to employ materials free from toxic and polluting species. In this regard, the oxidative polymerization of N-(4-aminophenyl)aniline, the aniline dimer, to produce poly(4-aminodiphenylaniline), P4ADA, a kind of PANI, represents an innovative alternative to the traditional synthesis because the obtained polymer is free from carcinogenic and/or polluting co-products and, moreover, is more soluble than traditional PANI. This latter feature can be exploited to obtain P4ADA nanofibers by the electrospinning technique. In this paper we report the advances obtained in P4ADA nanofiber electrospinning. A comparison among polyethylene oxide (PEO), polymethyl methacrylate (PMMA) and polystyrene (PS) as the second polymer used to facilitate the electrospinning process is shown. In order to increase the conductivity of P4ADA nanofibers, two strategies were adopted and compared: selectively removing the insulating binder from the electrospun nanofibers with a rinsing treatment, and optimizing the minimum amount of binder necessary for the electrospinning process. Moreover, the effect of the PEO/P4ADA weight ratio on the fiber morphology and conductivity was highlighted. PMID:27483933

  7. Advances in Poly(4-aminodiphenylaniline) Nanofibers Preparation by Electrospinning Technique.

    PubMed

    Della Pina, C; Busacca, C; Frontera, P; Antonucci, P L; Scarpino, L A; Sironi, A; Falletta, E

    2016-05-01

    Polyaniline (PANI) nanofibers are drawing a great deal of interest from academia and industry due to their multiple applications, especially in the biomedical field. PANI nanofibers were successfully electrospun for the first time by MacDiarmid and co-workers at the beginning of the millennium, and since then many efforts have been addressed to improving their quality. However, traditional PANI prepared from aniline monomer shows some drawbacks, such as the presence of toxic (i.e., benzidine) and inorganic (salts and metals) co-products, which complicate polymer post-treatment, and low solubility in common organic solvents, which makes its processing by the electrospinning technique difficult. Some industrial sectors, such as the medical and biomedical ones, need to employ materials free from toxic and polluting species. In this regard, the oxidative polymerization of N-(4-aminophenyl)aniline, the aniline dimer, to produce poly(4-aminodiphenylaniline), P4ADA, a kind of PANI, represents an innovative alternative to the traditional synthesis because the obtained polymer is free from carcinogenic and/or polluting co-products and, moreover, is more soluble than traditional PANI. This latter feature can be exploited to obtain P4ADA nanofibers by the electrospinning technique. In this paper we report the advances obtained in P4ADA nanofiber electrospinning. A comparison among polyethylene oxide (PEO), polymethyl methacrylate (PMMA) and polystyrene (PS) as the second polymer used to facilitate the electrospinning process is shown. In order to increase the conductivity of P4ADA nanofibers, two strategies were adopted and compared: selectively removing the insulating binder from the electrospun nanofibers with a rinsing treatment, and optimizing the minimum amount of binder necessary for the electrospinning process. Moreover, the effect of the PEO/P4ADA weight ratio on the fiber morphology and conductivity was highlighted.

  8. Nanocrystalline materials: recent advances in crystallographic characterization techniques.

    PubMed

    Ringe, Emilie

    2014-11-01

    Most properties of nanocrystalline materials are shape-dependent, providing their exquisite tunability in optical, mechanical, electronic and catalytic properties. An example of the former is localized surface plasmon resonance (LSPR), the coherent oscillation of conduction electrons in metals that can be excited by the electric field of light; this resonance frequency is highly dependent on both the size and shape of a nanocrystal. An example of the latter is the marked difference in catalytic activity observed for different Pd nanoparticles. Such examples highlight the importance of particle shape in nanocrystalline materials and their practical applications. However, one may ask 'how are nanoshapes created?', 'how does the shape relate to the atomic packing and crystallography of the material?', 'how can we control and characterize the external shape and crystal structure of such small nanocrystals?'. This feature article aims to give the reader an overview of important techniques, concepts and recent advances related to these questions. Nucleation, growth and how seed crystallography influences the final synthesis product are discussed, followed by shape prediction models based on seed crystallography and thermodynamic or kinetic parameters. The crystallographic implications of epitaxy and orientation in multilayered, core-shell nanoparticles are overviewed, and, finally, the development and implications of novel, spatially resolved analysis tools are discussed.

  9. Achieving miniature sensor systems via advanced packaging techniques

    NASA Astrophysics Data System (ADS)

    Hartup, David C.; Bobier, Kevin; Demmin, Jeffrey

    2005-05-01

    Demands for miniaturized networked sensors that can be deployed in large quantities dictate that the packages be small and cost effective. In order to accomplish these objectives, system developers generally apply advanced packaging techniques to proven systems. A partnership of Nova Engineering and Tessera begins with a baseline of Nova's Unattended Ground Sensors (UGS) technology and utilizes Tessera's three-dimensional (3D) Chip-Scale Packaging (CSP), Multi-Chip Packaging (MCP), and System-in-Package (SIP) innovations to enable novel methods for fabricating compact, vertically integrated sensors utilizing digital, RF, and micro-electromechanical systems (MEMS) devices. These technologies, applied to a variety of sensors and integrated radio architectures, enable diverse multi-modal sensing networks with wireless communication capabilities. Sensors including imaging, accelerometers, acoustical, inertial measurement units, and gas and pressure sensors can be utilized. The greatest challenge to high density, multi-modal sensor networks is the ability to test each component prior to integration, commonly called Known Good Die (KGD) testing. In addition, the mix of multi-sourcing and high technology magnifies the challenge of testing at the die level. Utilizing Tessera proprietary CSP, MCP, and SIP interconnection methods enables fully testable, low profile stacking to create multi-modal sensor radios with high yield.

  10. Development of advanced strain diagnostic techniques for reactor environments.

    SciTech Connect

    Fleming, Darryn D.; Holschuh, Thomas Vernon,; Miller, Timothy J.; Hall, Aaron Christopher; Urrea, David Anthony,; Parma, Edward J.,

    2013-02-01

    The following research was conducted as a Laboratory Directed Research and Development (LDRD) initiative at Sandia National Laboratories. The long-term goals of the program include sophisticated diagnostics of advanced fuels testing for nuclear reactors for the Department of Energy (DOE) Gen IV program, with the future capability to provide real-time measurement of strain in fuel rod cladding during operation in situ at any research or power reactor in the United States. By quantifying the stress and strain in fuel rods, it is possible to significantly improve fuel rod design and, consequently, to improve the performance and lifetime of the cladding. During the past year of this program, two sets of experiments were performed: small-scale tests to ensure reliability of the gages, and reactor pulse experiments involving the most viable samples in the Annular Core Research Reactor (ACRR), located onsite at Sandia. Strain measurement techniques that can provide useful data in the extreme environment of a nuclear reactor core are needed to characterize nuclear fuel rods. This report documents the progression of solutions to this issue that were explored for feasibility in FY12 at Sandia National Laboratories, Albuquerque, NM.

  11. Nanocrystalline materials: recent advances in crystallographic characterization techniques

    PubMed Central

    Ringe, Emilie

    2014-01-01

    Most properties of nanocrystalline materials are shape-dependent, providing their exquisite tunability in optical, mechanical, electronic and catalytic properties. An example of the former is localized surface plasmon resonance (LSPR), the coherent oscillation of conduction electrons in metals that can be excited by the electric field of light; this resonance frequency is highly dependent on both the size and shape of a nanocrystal. An example of the latter is the marked difference in catalytic activity observed for different Pd nanoparticles. Such examples highlight the importance of particle shape in nanocrystalline materials and their practical applications. However, one may ask ‘how are nanoshapes created?’, ‘how does the shape relate to the atomic packing and crystallography of the material?’, ‘how can we control and characterize the external shape and crystal structure of such small nanocrystals?’. This feature article aims to give the reader an overview of important techniques, concepts and recent advances related to these questions. Nucleation, growth and how seed crystallography influences the final synthesis product are discussed, followed by shape prediction models based on seed crystallography and thermodynamic or kinetic parameters. The crystallographic implications of epitaxy and orientation in multilayered, core-shell nanoparticles are overviewed, and, finally, the development and implications of novel, spatially resolved analysis tools are discussed. PMID:25485133

  12. Hybrid inverse lithography techniques for advanced hierarchical memories

    NASA Astrophysics Data System (ADS)

    Xiao, Guangming; Hooker, Kevin; Irby, Dave; Zhang, Yunqiang; Ward, Brian; Cecil, Tom; Hall, Brett; Lee, Mindy; Kim, Dave; Lucas, Kevin

    2014-03-01

    Traditional segment-based model-based OPC methods have been the mainstream mask layout optimization techniques in volume production for memory and embedded memory devices for many device generations. These techniques have been continually optimized over time to meet the ever increasing difficulties of memory and memory periphery patterning. There are a range of difficult issues for patterning embedded memories successfully. These difficulties include the need for a very high level of symmetry and consistency (both within memory cells themselves and between cells) due to circuit effects such as noise margin requirements in SRAMs. Memory cells and access structures consume a large percentage of area in embedded devices so there is a very high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control and process window requirements. Additionally, the range of interactions between mask synthesis corrections of neighboring areas can extend well beyond the size of the memory cell, making it difficult to fully take advantage of the inherent designed cell hierarchy in mask pattern optimization. This is especially true for non-traditional (i.e., less dependent on geometric rule) OPC/RET methods such as inverse lithography techniques (ILT) which inherently have more model-based decisions in their optimizations. New inverse methods such as model-based SRAF placement and ILT are, however, well known to have considerable benefits in finding flexible mask pattern solutions to improve process window, improve 2D CD control, and improve resolution in ultra-dense memory patterns. They also are known to reduce recipe complexity and provide native MRC compliant mask pattern solutions. Unfortunately, ILT is also known to be several times slower than traditional OPC methods due to the increased computational lithographic optimizations it performs. In this paper, we describe and present results for a methodology to

  13. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.
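
    Conceptually, the property estimation is an inverse problem: simulate the SAR pattern for candidate acoustic property values and keep the pair that minimizes the mismatch with the SAR pattern derived from the MR temperature maps. The sketch below expresses this idea as a simple grid search; simulate_sar is a hypothetical stand-in for a forward model such as the Hybrid Angular Spectrum method, and the study itself uses an iterative rather than an exhaustive search.

      import numpy as np

      def estimate_acoustic_properties(sar_measured, simulate_sar, alpha_grid, c_grid):
          """Find the (absorption, speed of sound) pair whose simulated SAR
          pattern best matches the measured one (RMS error criterion).
          simulate_sar(alpha, c) must return an array shaped like sar_measured.
          """
          best_pair, best_err = None, np.inf
          for alpha in alpha_grid:          # candidate acoustic absorption values
              for c in c_grid:              # candidate speed-of-sound values
                  err = np.sqrt(np.mean((simulate_sar(alpha, c) - sar_measured) ** 2))
                  if err < best_err:
                      best_pair, best_err = (alpha, c), err
          return best_pair, best_err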

  14. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment. PMID:27441427

  15. Implementation of MASW and waveform inversion techniques for new seismic hazard estimation technique

    NASA Astrophysics Data System (ADS)

    el-aziz abd el-aal, abd; Kamal, heba

    2016-04-01

    In this contribution, an integrated Multi-channel Analysis of Surface Waves (MASW) technique is applied to explore the geotechnical parameters of the subsurface layers at the Zafarana Wind Farm site. The study area includes many active fault systems along the Gulf of Suez that cause many moderate and large earthquakes. Overall, the seismic activity of the area has recently become better understood following the use of waveform inversion methods and software to develop accurate focal mechanism solutions for recent earthquakes recorded around the studied area. These earthquakes resulted in major stress drops in the Eastern Desert and the Gulf of Suez area. These findings have helped to reshape the understanding of the seismotectonic environment of the Gulf of Suez area, which is a perplexing tectonic domain. Based on the newly collected information and data, this study uses an extended stochastic technique to re-examine the seismic hazard for the Gulf of Suez region, particularly the wind turbine tower sites at the Zafarana Wind Farm and its vicinity. The essential aim of the extended stochastic technique is to simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. Key words: MASW, waveform inversion, extended stochastic technique, Zafarana Wind Farm
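
    The last two steps can be illustrated with a generic empirical attenuation form of the kind ln(PGA) = c0 + c1*M + c2*ln(R + c3), scaled by a site amplification factor. The coefficients in the sketch below are illustrative placeholders only; they are not the relationship used in the study.

      import numpy as np

      def pga_estimate(magnitude, distance_km, site_amp=1.0,
                       c0=-1.5, c1=0.5, c2=-1.0, c3=10.0):
          """Generic empirical attenuation relation (placeholder coefficients):
          ln(PGA) = c0 + c1*M + c2*ln(R + c3), multiplied by a site
          amplification factor; units of PGA depend on the coefficients.
          """
          ln_pga = c0 + c1 * magnitude + c2 * np.log(distance_km + c3)
          return site_amp * np.exp(ln_pga)

      # Hypothetical example: magnitude 6.0 event at 30 km with a site factor of 1.4
      pga = pga_estimate(6.0, 30.0, site_amp=1.4)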

  16. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  17. Advanced Techniques for Simulating the Behavior of Sand

    NASA Astrophysics Data System (ADS)

    Clothier, M.; Bailey, M.

    2009-12-01

    The goal of this research is to simulate the look and behavior of sand, and this work will go beyond simple particle collision. In particular, we can continue to use our parallel algorithms not only on single particles but on particle “clumps” that consist of multiple combined particles. Since sand is typically not spherical in nature, these particle “clumps” help to simulate the coarse nature of sand. In a simulation environment, multiple combined particles could be used to simulate the polygonal and granular nature of sand grains. Thus, a diversity of sand particles can be generated. The interaction between these particles can then be parallelized using GPU hardware. As such, this research will investigate different graphics and physics techniques and determine the tradeoffs in performance and visual quality for sand simulation. An enhanced sand model through the use of high performance computing and GPUs has great potential to impact research for both earth and space scientists. Interaction with JPL has provided an opportunity for us to refine our simulation techniques that can ultimately be used for their vehicle simulator. As an added benefit of this work, advancements in simulating sand can also benefit scientists here on Earth, especially in regard to understanding landslides and debris flows.

  18. Weldability and joining techniques for advanced fossil energy system alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Liu, W.; Yang, D.; Zhou, G.; Morrison, M.

    1998-05-01

    The efforts address the basic understanding of the weldability and fabricability of the advanced high-temperature alloys necessary to effect increases in the efficiency of the next generation of Fossil Energy power plants. The effort was divided into three tasks: the first dealt with the welding and fabrication behavior of 310HCbN (HR3C); the second detailed studies aimed at understanding the weldability of a newly developed 310TaN high-temperature stainless steel (a modification of 310 stainless); and the third addressed the cladding of austenitic tubing with iron aluminide using the GTAW process. Task 1 consisted of microstructural studies on 310HCbN and the development of a tube weldability test which has applications to production welding techniques as well as laboratory weldability assessments. In addition, an evaluation of ex-service 310HCbN which showed fireside erosion and cracking at the attachment weld locations was conducted. Task 2 addressed the behavior of the newly developed 310TaN modification of standard 310 stainless steel and showed that the weldability was excellent and that the sensitization potential was minimal for normal welding and fabrication conditions. The microstructural evolution during elevated-temperature testing was characterized, and the second-phase particles that evolved upon aging were identified. Task 3 details the investigation undertaken to clad 310HCbN tubing with iron aluminide and developed the welding conditions necessary to provide a crack-free cladding. The work showed that both a preheat and a post-heat were necessary for crack-free deposits, and the effect of a third element on the cracking potential was defined together with the effect of the aluminum level for optimum weldability.

  19. Investigation of joining techniques for advanced austenitic alloys

    SciTech Connect

    Lundin, C.D.; Qiao, C.Y.P.; Kikuchi, Y.; Shi, C.; Gill, T.P.S.

    1991-05-01

    Modified Alloys 316 and 800H, designed for high temperature service, have been developed at Oak Ridge National Laboratory. Assessment of the weldability of the advanced austenitic alloys has been conducted at the University of Tennessee. Four aspects of weldability of the advanced austenitic alloys were included in the investigation.

  20. Estimation of detection thresholds for redirected walking techniques.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Jerald, Jason; Frenz, Harald; Lappe, Markus

    2010-01-01

    In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimated the path curvature when walking a curved path in the real world while the visual display showed a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight.
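
    As a minimal illustration of how such gains enter the rendering loop (a generic sketch, not the authors' system), a rotation gain simply scales the physical head rotation before it is applied to the virtual camera; the percentages reported above correspond to rotation gains of roughly 0.67 to 1.25 going unnoticed.

      def virtual_rotation(delta_physical_deg, rotation_gain):
          """Apply a rotation gain: the virtual camera turns by
          rotation_gain * (physical head rotation in degrees).
          Gains within the detection thresholds implied above
          (about 0.67-1.25) are typically not noticed by the user."""
          return rotation_gain * delta_physical_deg

      # Example: a 90 degree physical turn rendered as a 108 degree virtual turn
      print(virtual_rotation(90.0, 1.2))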

  1. Basic parameter estimation of binary neutron star systems by the advanced LIGO/Virgo network

    SciTech Connect

    Rodriguez, Carl L.; Farr, Benjamin; Raymond, Vivien; Farr, Will M.; Littenberg, Tyson B.; Fazi, Diego; Kalogera, Vicky

    2014-04-01

    Within the next five years, it is expected that the Advanced LIGO/Virgo network will have reached a sensitivity sufficient to enable the routine detection of gravitational waves. Beyond the initial detection, the scientific promise of these instruments relies on the effectiveness of our physical parameter estimation capabilities. A major part of this effort has been toward the detection and characterization of gravitational waves from compact binary coalescence, e.g., the coalescence of binary neutron stars. While several previous studies have investigated the accuracy of parameter estimation with advanced detectors, the majority have relied on approximation techniques such as the Fisher Matrix which are insensitive to the non-Gaussian nature of the gravitational wave posterior distribution function. Here we report average statistical uncertainties that will be achievable for strong detection candidates (S/N = 20) over a comprehensive sample of source parameters. We use the Markov Chain Monte Carlo based parameter estimation software developed by the LIGO/Virgo Collaboration with the goal of updating the previously quoted Fisher Matrix bounds. We find the recovery of the individual masses to be fractionally within 9% (15%) at the 68% (95%) credible intervals for equal-mass systems, and within 1.9% (3.7%) for unequal-mass systems. We also find that the Advanced LIGO/Virgo network will constrain the locations of binary neutron star mergers to a median uncertainty of 5.1 deg² (13.5 deg²) on the sky. This region is improved to 2.3 deg² (6 deg²) with the addition of the proposed LIGO India detector to the network. We also report the average uncertainties on the luminosity distances and orbital inclinations of strong detections that can be achieved by different network configurations.
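
    At its core, the stochastic sampling behind such parameter estimation can be illustrated with a bare-bones random-walk Metropolis-Hastings sampler. The sketch below is a toy stand-in for the Collaboration's far more sophisticated MCMC codes; log_post is any user-supplied log-posterior function.

      import numpy as np

      def metropolis_hastings(log_post, theta0, n_steps, step_sizes, seed=0):
          """Random-walk Metropolis-Hastings sampler.
          log_post   : function returning the log posterior of a parameter vector
          theta0     : starting parameter vector
          step_sizes : per-parameter proposal standard deviations
          """
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          logp = log_post(theta)
          chain = [theta.copy()]
          for _ in range(n_steps):
              proposal = theta + step_sizes * rng.standard_normal(theta.size)
              logp_prop = log_post(proposal)
              # accept with probability min(1, posterior ratio)
              if np.log(rng.uniform()) < logp_prop - logp:
                  theta, logp = proposal, logp_prop
              chain.append(theta.copy())
          return np.array(chain)   # credible intervals come from chain quantiles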

  2. Modified Multilook Cross Correlation technique for Doppler centroid estimation in SAR image signal processing

    NASA Astrophysics Data System (ADS)

    Bee Cheng, Sew

    Synthetic Aperture Radar (SAR) is one of the most widely used remote sensing sensors; it produces high-resolution images by using advanced signal processing techniques. SAR can operate in all weather conditions and cover wide areas. To produce a high-quality image, accurate parameters such as the Doppler centroid are required for precise SAR signal processing. In the azimuth matched filtering of SAR signal processing, the Doppler centroid is an important azimuth parameter that helps to focus the image pixels. The Doppler centroid has often been overlooked during SAR signal processing because its estimation involves complicated calculations and increases the computational load. Therefore, researchers often apply only an approximate Doppler value, which is imprecise and causes defocusing effects in the generated SAR image. In this study, several conventional Doppler centroid estimation algorithms are reviewed and implemented in the Matlab software to extract the Doppler parameter from received SAR data, namely the Spectrum Fit Algorithm, the Wavelength Diversity Algorithm (WDA), the Multilook Cross Correlation Algorithm (MLCC), and the Multilook Beat Frequency Algorithm (MLBF). Two sets of SAR data are employed to evaluate the performance of each estimator, i.e. simulated point target data and RADARSAT-1 Vancouver scene raw data. These experiments give a sense of the accuracy of the estimated results together with the computation time consumed. A point target is simulated to generate ideal-case SAR data with pre-defined SAR system parameters.
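
    A compact illustration of the underlying idea (a generic sketch, not the study's Matlab code): the fractional part of the Doppler centroid can be estimated from the average phase of the azimuth cross-correlation at lag one, the quantity that estimators such as MLCC and MLBF build on when they also resolve the PRF ambiguity.

      import numpy as np

      def doppler_centroid_fraction(data, prf):
          """Estimate the baseband (fractional) Doppler centroid from the
          average phase of the azimuth cross-correlation at lag one.
          data : 2-D complex SAR data array (azimuth x range)
          prf  : pulse repetition frequency [Hz]
          Note: this resolves the centroid only within +/- PRF/2; the
          ambiguity number is what MLCC/MLBF are designed to recover.
          """
          accc = np.sum(np.conj(data[:-1, :]) * data[1:, :])   # lag-1 correlation
          return prf * np.angle(accc) / (2.0 * np.pi)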

  3. Technique for estimating depth of floods in Tennessee

    USGS Publications Warehouse

    Gamble, C.R.

    1983-01-01

    Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
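
    Relations of this kind are typically simple power laws in drainage area, fitted separately for each hydrologic area and recurrence interval. The coefficients in the sketch below are purely illustrative placeholders, not the report's values.

      def flood_depth(drainage_area_sq_mi, a, b):
          """Power-law regional depth relation: D = a * A**b, with drainage
          area A in square miles; a and b are fitted per hydrologic area
          and recurrence interval (placeholder values used below)."""
          return a * drainage_area_sq_mi ** b

      # Hypothetical example: 100-year flood depth for a 50 square-mile basin
      d100 = flood_depth(50.0, a=3.1, b=0.28)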

  4. Recent advances in biosensor techniques for environmental monitoring.

    PubMed

    Rogers, K R

    2006-05-24

    Biosensors for environmental applications continue to show advances and improvements in areas such as sensitivity, selectivity and simplicity. In addition to detecting and measuring specific compounds or compound classes such as pesticides, hazardous industrial chemicals, toxic metals, and pathogenic bacteria, biosensors and bioanalytical assays have been designed to measure biological effects such as cytotoxicity, genotoxicity, biological oxygen demand, pathogenic bacteria, and endocrine disruption effects. This article is intended to discuss recent advances in the area of biosensors for environmental applications.

  5. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model via probabilistic-based estimates, how risk is addressed, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  6. A Simple Technique for Estimating Latent Trait Mental Test Parameters

    ERIC Educational Resources Information Center

    Jensema, Carl

    1976-01-01

    A simple and economical method for estimating initial parameter values for the normal ogive or logistic latent trait mental test model is outlined. The accuracy of the method in comparison with maximum likelihood estimation is investigated through the use of Monte-Carlo data. (Author)

  7. Hybrid estimation technique for predicting butene concentration in polyethylene reactor

    NASA Astrophysics Data System (ADS)

    Mohd Ali, Jarinah; Hussain, M. A.

    2016-03-01

    Fuzzy logic, a component of artificial intelligence (AI), is combined with a conventional sliding mode observer (SMO) to establish a hybrid estimator for predicting the butene concentration in the polyethylene production reactor. Butene, or co-monomer, concentration is another significant parameter in the polymerization process since it affects the molecular weight distribution of the polymer produced. The hybrid estimator offers a straightforward formulation of the SMO and its combination with the fuzzy logic rules. The error resulting from the SMO estimation is manipulated using the fuzzy rules to enhance performance and thus improve the convergence rate. The hybrid estimator is able to estimate the butene concentration satisfactorily despite the presence of noise in the process.

  8. Advanced Skills for Chapter 1 Mathematics: Estimation. Workshop Leader's Guide.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., Indianapolis, IN.

    This Workshop Leader's Guide contains step-by-step procedures for preparing, organizing, and presenting 1-hour and 3-hour workshops on estimation in mathematics. It was designed to assist Technical Assistance Center staff members and other inservice providers in conducting successful workshops on estimation in mathematics for administrators,…

  9. Advanced techniques for array processing. Final report, 1 Mar 89-30 Apr 91

    SciTech Connect

    Friedlander, B.

    1991-05-30

    Array processing technology is expected to be a key element in communication systems designed for the crowded and hostile environment of the future battlefield. While advanced array processing techniques have been under development for some time, their practical use has been very limited. This project addressed some of the issues which need to be resolved for a successful transition of these promising techniques from theory into practice. The main problem which was studied was that of finding the directions of multiple co-channel transmitters from measurements collected by an antenna array. Two key issues related to high-resolution direction finding were addressed: effects of system calibration errors, and effects of correlation between the received signals due to multipath propagation. A number of useful theoretical performance analysis results were derived, and computationally efficient direction estimation algorithms were developed. These results include: self-calibration techniques for antenna arrays, sensitivity analysis for high-resolution direction finding, extensions of the root-MUSIC algorithm to arbitrary arrays and to arrays with polarization diversity, and new techniques for direction finding in the presence of multipath based on array interpolation. (Author)
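
    As a concrete example of the class of high-resolution direction-finding methods discussed (the report's root-MUSIC and array-interpolation extensions go beyond it), the sketch below implements basic spectral MUSIC for a uniform linear array; all names and parameters are illustrative.

      import numpy as np

      def music_doa(snapshots, n_sources, d_over_lambda=0.5):
          """Spectral MUSIC direction finding for a uniform linear array.
          snapshots : (n_sensors, n_snapshots) complex array of sensor data
          n_sources : assumed number of co-channel sources
          Returns the angles (degrees) with the largest pseudo-spectrum values.
          """
          n_sensors = snapshots.shape[0]
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
          _, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
          En = vecs[:, :n_sensors - n_sources]       # noise subspace
          grid = np.linspace(-90.0, 90.0, 361)
          spectrum = np.empty_like(grid)
          for i, theta in enumerate(np.deg2rad(grid)):
              a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
              spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
          # a production implementation would pick local maxima of the spectrum
          return grid[np.argsort(spectrum)[-n_sources:]]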

  10. Advanced Millimeter-Wave Security Portal Imaging Techniques

    SciTech Connect

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-04-01

    Millimeter-wave imaging is rapidly gaining acceptance for passenger screening at airports and other secured facilities. This paper details a number of techniques developed over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, as well as high frequency high bandwidth techniques. Implementation of some of these methods will increase the cost and complexity of the mm-wave security portal imaging systems. RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems.

  11. A technique for estimating life expectancy with crude vital rates.

    PubMed

    McCann, J C

    1976-05-01

    This paper describes a method of estimating life expectancy at birth on the basis of crude vital rates. The method is derived from stable population theory and it furnishes good estimates insofar as the current crude vital rates of a population are close to its intrinsic rates. This condition is generally met in closed populations which have not experienced sharp movements in fertility. The method is useful for estimating life expectancy in developing nations with good sample registration systems but for which information on age is of poor quality. It is also useful for estimating the movement of life expectancy in certain European nations in the period prior to regular census taking. There are a number of nations and regions in Europe for which long series of birth and death rates are available but for which census age counts are widely spaced.

  12. Weight estimation techniques for composite airplanes in general aviation industry

    NASA Technical Reports Server (NTRS)

    Paramasivam, T.; Horn, W. J.; Ritter, J.

    1986-01-01

    Currently available weight estimation methods for general aviation airplanes were investigated. New equations with explicit material properties were developed for the weight estimation of aircraft components such as wing, fuselage and empennage. Regression analysis was applied to the basic equations for a data base of twelve airplanes to determine the coefficients. The resulting equations can be used to predict the component weights of either metallic or composite airplanes.

  13. Recent advances in microscopic techniques for visualizing leukocytes in vivo

    PubMed Central

    Jain, Rohit; Tikoo, Shweta; Weninger, Wolfgang

    2016-01-01

    Leukocytes are inherently motile and interactive cells. Recent advances in intravital microscopy approaches have enabled a new vista of their behavior within intact tissues in real time. This brief review summarizes the developments enabling the tracking of immune responses in vivo. PMID:27239292

  14. Bricklaying Curriculum: Advanced Bricklaying Techniques. Instructional Materials. Revised.

    ERIC Educational Resources Information Center

    Turcotte, Raymond J.; Hendrix, Laborn J.

    This curriculum guide is designed to assist bricklaying instructors in providing performance-based instruction in advanced bricklaying. Included in the first section of the guide are units on customized or architectural masonry units; glass block; sills, lintels, and copings; and control (expansion) joints. The next two units deal with cut,…

  15. Advanced NDE techniques for quantitative characterization of aircraft

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S.; Winfree, William P.

    1990-01-01

    Recent advances in nondestructive evaluation (NDE) at NASA Langley Research Center and their applications that have resulted in quantitative assessment of material properties based on thermal and ultrasonic measurements are reviewed. Specific applications include ultrasonic determination of bolt tension, ultrasonic and thermal characterization of bonded layered structures, characterization of composite materials, and disbonds in aircraft skins.

  16. A review of the most commonly used dental age estimation techniques.

    PubMed

    Willems, G

    2001-06-01

    This review of literature provides an overview of the most commonly used dental age estimation techniques and focuses on dental age estimation scoring systems in children and adults. In order to obtain a more reliable and reproducible age estimation the forensic odontologist should use several of these available methods whenever an age estimation in the living or dead is required. PMID:11494678

  17. Recent advances on techniques and theories of feedforward networks with supervised learning

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Klasa, Stan

    1992-07-01

    The rediscovery and popularization of the back propagation training technique for multilayer perceptrons, as well as the invention of the Boltzmann Machine learning algorithm, have given a new boost to the study of supervised learning networks. In recent years, besides the widespread applications and the various further improvements of the classical back propagation technique, many new supervised learning models, techniques and theories have also been proposed in a vast number of publications. This paper tries to give a rather systematic review of the recent advances in supervised learning techniques and theories for static feedforward networks. We summarize a great number of developments into five aspects: (1) Various improvements and variants made on the classical back propagation techniques for multilayer (static) perceptron nets, for speeding up training, avoiding local minima, increasing the generalization ability, as well as for many other interesting purposes. (2) A number of other learning methods for training multilayer (static) perceptrons, such as derivative estimation by perturbation, direct weight update by perturbation, genetic algorithms, recursive least squares estimation and the extended Kalman filter, linear programming, the policy of fixing one layer while updating another, constructing networks by converting decision tree classifiers, and others. (3) Various other feedforward models which are also able to implement function approximation, probability density estimation and classification, including various models of basis function expansion (e.g., radial basis functions, restricted Coulomb energy, multivariate adaptive regression splines, trigonometric and polynomial bases, projection pursuit, basis function trees, and many others), and several other supervised learning models. (4) Models with complex structures, e.g., modular architecture, hierarchy architecture, and others. (5) A number of theoretical issues involving the universal
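
    For readers new to the baseline technique the review starts from, the sketch below trains a one-hidden-layer sigmoid network with classical back propagation (squared-error loss, plain gradient descent); it is a minimal illustration, not any of the specific improvements surveyed.

      import numpy as np

      def train_mlp(X, y, n_hidden=8, lr=0.1, epochs=1000, seed=0):
          """Classical back propagation for a one-hidden-layer sigmoid network.
          X : (n_samples, n_inputs), y : (n_samples, n_outputs), targets in [0, 1].
          """
          rng = np.random.default_rng(seed)
          W1 = 0.1 * rng.standard_normal((X.shape[1], n_hidden))
          W2 = 0.1 * rng.standard_normal((n_hidden, y.shape[1]))
          sig = lambda z: 1.0 / (1.0 + np.exp(-z))
          for _ in range(epochs):
              h = sig(X @ W1)                        # forward pass, hidden layer
              out = sig(h @ W2)                      # forward pass, output layer
              d_out = (out - y) * out * (1 - out)    # backprop through output sigmoid
              d_h = (d_out @ W2.T) * h * (1 - h)     # backprop through hidden sigmoid
              W2 -= lr * h.T @ d_out                 # gradient-descent weight updates
              W1 -= lr * X.T @ d_h
          return W1, W2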

  18. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances.

  19. Asteroid mass estimation using Markov-Chain Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Siltala, Lauri; Granvik, Mikael

    2016-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.

  20. Estimation of the elastic Earth parameters from the SLR technique

    NASA Astrophysics Data System (ADS)

    Rutkowska, Milena

    The global elastic parameters (Love and Shida numbers) associated with the tide variations for the satellite and stations are estimated from Satellite Laser Ranging (SLR) data. The study is based on satellite observations taken by the global network of ground stations during the period from January 1, 2005 until January 1, 2007 for monthly orbital arcs of the Lageos 1 satellite. The observation equations contain unknowns for the orbital arcs, some constants, and the elastic Earth parameters which describe the tide variations. The adjusted values are discussed and compared with geophysical estimates of the Love numbers. All computations were performed employing the NASA software GEODYN II (Eddy et al., 1990).

  1. Backscattered Electron Microscopy as an Advanced Technique in Petrography.

    ERIC Educational Resources Information Center

    Krinsley, David Henry; Manley, Curtis Robert

    1989-01-01

    Three uses of this method with sandstone, desert varnish, and granite weathering are described. Background information on this technique is provided. Advantages of this type of microscopy are stressed. (CW)

  2. Electroextraction and electromembrane extraction: Advances in hyphenation to analytical techniques

    PubMed Central

    Oedit, Amar; Ramautar, Rawi; Hankemeier, Thomas

    2016-01-01

    Electroextraction (EE) and electromembrane extraction (EME) are sample preparation techniques that both require an electric field that is applied over a liquid‐liquid system, which enables the migration of charged analytes. Furthermore, both techniques are often used to pre‐concentrate analytes prior to analysis. In this review an overview is provided of the body of literature spanning April 2012–November 2015 concerning EE and EME, focused on hyphenation to analytical techniques. First, the theoretical aspects of concentration enhancement in EE and EME are discussed to explain extraction recovery and enrichment factor. Next, overviews are provided of the techniques based on their hyphenation to LC, GC, CE, and direct detection. These overviews cover the compounds and matrices, experimental aspects (i.e. donor volume, acceptor volume, extraction time, extraction voltage, and separation time) and the analytical aspects (i.e. limit of detection, enrichment factor, and extraction recovery). Techniques that were either hyphenated online to analytical techniques or show high potential with respect to online hyphenation are highlighted. Finally, the potential future directions of EE and EME are discussed. PMID:26864699

  3. Advanced millimeter-wave security portal imaging techniques

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Bernacki, Bruce E.; McMakin, Douglas L.

    2012-03-01

    Millimeter-wave (mm-wave) imaging is rapidly gaining acceptance as a security tool to augment conventional metal detectors and baggage x-ray systems for passenger screening at airports and other secured facilities. This acceptance indicates that the technology has matured; however, many potential improvements can yet be realized. The authors have developed a number of techniques over the last several years including novel image reconstruction and display techniques, polarimetric imaging techniques, array switching schemes, and high-frequency high-bandwidth techniques. All of these may improve the performance of new systems; however, some of these techniques will increase the cost and complexity of the mm-wave security portal imaging systems. Reducing this cost may require the development of novel array designs. In particular, RF photonic methods may provide new solutions to the design and development of the sequentially switched linear mm-wave arrays that are the key element in the mm-wave portal imaging systems. High-frequency, high-bandwidth designs are difficult to achieve with conventional mm-wave electronic devices, and RF photonic devices may be a practical alternative. In this paper, the mm-wave imaging techniques developed at PNNL are reviewed and the potential for implementing RF photonic mm-wave array designs is explored.

  4. Metamodels for Ozone: Comparison of Three Estimation Techniques

    EPA Science Inventory

    A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techn...

  5. Estimating Returns to Education Using Different Natural Experiment Techniques

    ERIC Educational Resources Information Center

    Leigh, Andrew; Ryan, Chris

    2008-01-01

    How much do returns to education differ across different natural experiment methods? To test this, we estimate the rate of return to schooling in Australia using two different instruments for schooling: month of birth and changes in compulsory schooling laws. With annual pre-tax income as our measure of income, we find that the naive ordinary…

  6. The estimation technique of the airframe design for manufacturability

    NASA Astrophysics Data System (ADS)

    Govorkov, A.; Zhilyaev, A.

    2016-04-01

    This paper discusses a method for the quantitative estimation of the design for manufacturability of airframe parts. The method is based on combining individual indicators, each considered with its weighting factor. The authors introduce an algorithm that evaluates the design for manufacturability of a part from its 3D model.
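
    The paper's actual indicators and weighting factors are not given here. The sketch below only illustrates the general idea of folding several normalized indicators, each with a weighting factor, into a single manufacturability index; the indicator names, scores, and weights are hypothetical.

```python
# Minimal sketch of a weighted manufacturability index for an airframe part.
# Indicator names, scores, and weights are hypothetical illustrations only.
indicators = {
    # name: (normalized score in [0, 1], weighting factor)
    "material_utilization": (0.72, 0.30),
    "machining_accessibility": (0.85, 0.25),
    "number_of_setups": (0.60, 0.20),
    "tooling_complexity": (0.55, 0.25),
}

def manufacturability_index(indicators):
    """Weighted average of the normalized indicator scores."""
    total_weight = sum(weight for _, weight in indicators.values())
    return sum(score * weight for score, weight in indicators.values()) / total_weight

print(f"design-for-manufacturability index: {manufacturability_index(indicators):.3f}")
```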

  7. DEVELOPING SEASONAL AMMONIA EMISSION ESTIMATES WITH AN INVERSE MODELING TECHNIQUE

    EPA Science Inventory

    Significant uncertainty exists in magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural non-point sources. We sus...

  8. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. PMID:25693632

  9. Parameter estimation techniques and application in aircraft flight testing

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical papers presented at the symposium by selected representatives from industry, universities, and various Air Force, Navy, and NASA installations are given. The topics covered include the newest developments in identification techniques, the most recent flight-test experience, and the projected potential for the near future.

  10. Nondestructive Evaluation of Thick Concrete Using Advanced Signal Processing Techniques

    SciTech Connect

    Clayton, Dwight A; Barker, Alan M; Santos-Villalobos, Hector J; Albright, Austin P; Hoegh, Kyle; Khazanovich, Lev

    2015-09-01

    The purpose of the U.S. Department of Energy Office of Nuclear Energy’s Light Water Reactor Sustainability (LWRS) Program is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the operating lifetimes of nuclear power plants (NPPs) beyond 60 years [1]. Since many important safety structures in an NPP are constructed of concrete, inspection techniques must be developed and tested to evaluate the internal condition. In-service containment structures generally do not allow for the destructive measures necessary to validate the accuracy of these inspection techniques. This creates a need for comparative testing of the various nondestructive evaluation (NDE) measurement techniques on concrete specimens with known material properties, voids, internal microstructure flaws, and reinforcement locations.

  11. Brain development in preterm infants assessed using advanced MRI techniques.

    PubMed

    Tusor, Nora; Arichi, Tomoki; Counsell, Serena J; Edwards, A David

    2014-03-01

    Infants who are born preterm have a high incidence of neurocognitive and neurobehavioral abnormalities, which may be associated with impaired brain development. Advanced magnetic resonance imaging (MRI) approaches, such as diffusion MRI (d-MRI) and functional MRI (fMRI), provide objective and reproducible measures of brain development. Indices derived from d-MRI can be used to provide quantitative measures of preterm brain injury. Although fMRI of the neonatal brain is currently a research tool, future studies combining d-MRI and fMRI have the potential to assess the structural and functional properties of the developing brain and its response to injury.

  12. Application of advanced coating techniques to rocket engine components

    NASA Technical Reports Server (NTRS)

    Verma, S. K.

    1988-01-01

    The materials problem in the space shuttle main engine (SSME) is reviewed. Potential coatings and the method of their application for improved life of SSME components are discussed. A number of advanced coatings for turbine blade components and disks are being developed and tested in a multispecimen thermal fatigue fluidized bed facility at IIT Research Institute. This facility is capable of producing severe strains of the degree present in blades and disk components of the SSME. The potential coating systems and current efforts at IITRI being taken for life extension of the SSME components are summarized.

  13. Transcranial Doppler: Techniques and advanced applications: Part 2

    PubMed Central

    Sharma, Arvind K.; Bathala, Lokesh; Batra, Amit; Mehndiratta, Man Mohan; Sharma, Vijay K.

    2016-01-01

    Transcranial Doppler (TCD) is the only diagnostic tool that can provide continuous information about cerebral hemodynamics in real time and over extended periods. In the previous paper (Part 1), we have already presented the basic ultrasound physics pertaining to TCD, insonation methods, and various flow patterns. This article describes various advanced applications of TCD such as detection of right-to-left shunt, emboli monitoring, vasomotor reactivity (VMR), monitoring of vasospasm in subarachnoid hemorrhage (SAH), monitoring of intracranial pressure, its role in stroke prevention in sickle cell disease, and as a supplementary test for confirmation of brain death. PMID:27011639

  14. In Situ Techniques for Monitoring Electrochromism: An Advanced Laboratory Experiment

    ERIC Educational Resources Information Center

    Saricayir, Hakan; Uce, Musa; Koca, Atif

    2010-01-01

    This experiment employs current technology to enhance and extend existing lab content. The basic principles of spectroscopic and electroanalytical techniques and their use in determining material properties are covered in some detail in many undergraduate chemistry programs. However, there are limited examples of laboratory experiments with in…

  15. Advances in reduction techniques for tire contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
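
    The paper's mixed variational, perturbed-Lagrangian formulation is not reproduced here. As a minimal illustration of eliminating the degrees of freedom in the region away from the contact zone, the sketch below applies classical static (Guyan) condensation to a toy partitioned stiffness system; the 4-DOF spring chain and load are assumptions for demonstration only.

```python
import numpy as np

def guyan_condense(K, f, retained):
    """Statically condense a stiffness system K u = f onto the retained DOFs.

    K        : (n, n) symmetric stiffness matrix
    f        : (n,) load vector
    retained : indices of DOFs kept explicitly (e.g. the contact region)
    Returns the reduced stiffness and load (K_r, f_r).
    """
    n = K.shape[0]
    omitted = np.setdiff1d(np.arange(n), retained)
    Krr = K[np.ix_(retained, retained)]
    Kro = K[np.ix_(retained, omitted)]
    Koo = K[np.ix_(omitted, omitted)]
    K_r = Krr - Kro @ np.linalg.solve(Koo, Kro.T)
    f_r = f[retained] - Kro @ np.linalg.solve(Koo, f[omitted])
    return K_r, f_r

# Toy 4-DOF spring chain; keep the two "contact side" DOFs explicitly.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
f = np.array([0., 0., 0., 1.])
K_r, f_r = guyan_condense(K, f, retained=np.array([2, 3]))
u_retained = np.linalg.solve(K_r, f_r)
print(u_retained)          # matches the full solution at DOFs 2 and 3
```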

  16. Benefits of advanced software techniques for mission planning systems

    NASA Technical Reports Server (NTRS)

    Gasquet, A.; Parrod, Y.; Desaintvincent, A.

    1994-01-01

    The increasing complexity of modern spacecraft, and the stringent requirement for maximizing their mission return, call for a new generation of Mission Planning Systems (MPS). In this paper, we discuss the requirements for the Space Mission Planning and the benefits which can be expected from Artificial Intelligence techniques through examples of applications developed by Matra Marconi Space.

  17. A new technique for direction of arrival estimation for ionospheric multipath channels

    NASA Astrophysics Data System (ADS)

    Guldogan, Mehmet B.; Arıkan, Orhan; Arıkan, Feza

    2009-09-01

    A novel array signal processing technique is proposed to estimate HF channel parameters including the number of paths and their respective direction of arrivals (DOA), delays, Doppler shifts, and amplitudes. The proposed technique utilizes the Cross Ambiguity Function (CAF) and is hence called the CAF-DF technique. The CAF-DF technique iteratively processes the array output data and provides reliable estimates of the DOA, delay, Doppler shift, and amplitude corresponding to each HF-propagated wave impinging on the antenna array. Results obtained for both real and simulated data at different signal-to-noise ratio (SNR) values indicate the superior performance of the proposed technique over the well-known MUltiple SIgnal Classification (MUSIC) technique.
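
    The CAF-DF algorithm itself is not reproduced here. The sketch below only evaluates a cross ambiguity surface over a delay-Doppler grid for a single synthetic path and reads off the peak; the sampling rate, path delay, Doppler shift, and noise level are assumed values.

```python
import numpy as np

def cross_ambiguity(ref, rec, delays, dopplers, fs):
    """Evaluate |CAF(delay, Doppler)| between a reference and a received signal.

    ref, rec : complex baseband signals sampled at fs (same length)
    delays   : candidate delays in samples (integers)
    dopplers : candidate Doppler shifts in Hz
    """
    n = len(ref)
    t = np.arange(n) / fs
    caf = np.zeros((len(delays), len(dopplers)))
    for i, d in enumerate(delays):
        shifted = np.roll(rec, -d)          # crude integer-sample delay removal
        for j, fd in enumerate(dopplers):
            caf[i, j] = np.abs(np.vdot(ref, shifted * np.exp(-2j * np.pi * fd * t)))
    return caf

# Toy example: one path with a 30-sample delay and 40 Hz Doppler (values made up).
fs, n = 8000.0, 2048
rng = np.random.default_rng(1)
ref = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t = np.arange(n) / fs
rec = np.roll(ref, 30) * np.exp(2j * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(n)

delays = np.arange(0, 64)
dopplers = np.arange(-100.0, 101.0, 5.0)
caf = cross_ambiguity(ref, rec, delays, dopplers, fs)
i, j = np.unravel_index(np.argmax(caf), caf.shape)
print(f"estimated delay = {delays[i]} samples, Doppler = {dopplers[j]:.0f} Hz")
```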

  18. Single Molecule Techniques for Advanced in situ Hybridization

    SciTech Connect

    Hollars, C W; Stubbs, L; Carlson, K; Lu, X; Wehri, E

    2003-02-03

    One of the most significant achievements of modern science is the completion of the human genome sequence in the year 2000. Despite this monumental accomplishment, researchers have only begun to understand the relationships between this three-billion-nucleotide genetic code and the regulation and control of gene and protein expression within each of the millions of different types of highly specialized cells. Several methodologies have been developed for the analysis of gene and protein expression in situ, yet despite these advancements, the pace of such analyses is extremely limited. Because information regarding the precise timing and location of gene expression is a crucial component in the discovery of new pharmacological agents for the treatment of disease, there is an enormous incentive to develop technologies that accelerate the analytical process. Here we report on the use of plasmon resonant particles as advanced probes for in situ hybridization. These probes are used for the detection of low levels of gene-probe response and demonstrate a detection method that enables precise, simultaneous localization within a cell of the points of expression of multiple genes or proteins in a single sample.

  19. Developments and advances concerning the hyperpolarisation technique SABRE.

    PubMed

    Mewis, Ryan E

    2015-10-01

    To overcome the inherent sensitivity issue in NMR and MRI, hyperpolarisation techniques are used. Signal Amplification By Reversible Exchange (SABRE) is a hyperpolarisation technique that utilises parahydrogen, a molecule that possesses a nuclear singlet state, as the source of polarisation. A metal complex is required to break the singlet order of parahydrogen and, by doing so, facilitates polarisation transfer to analyte molecules ligated to the same complex through the J-coupled network that exists. The increased signal intensities that the analyte molecules possess as a result of this process have led to investigations whereby their potential as MRI contrast agents has been probed and to understand the fundamental processes underpinning the polarisation transfer mechanism. As well as discussing literature relevant to both of these areas, the chemical structure of the complex, the physical constraints of the polarisation transfer process and the successes of implementing SABRE at low and high magnetic fields are discussed. PMID:26264565

  20. Advanced techniques for characterization of ion beam modified materials

    SciTech Connect

    Zhang, Yanwen; Debelle, Aurélien; Boulle, Alexandre; Kluth, Patrick; Tuomisto, Filip

    2014-10-30

    Understanding the mechanisms of damage formation in materials irradiated with energetic ions is essential for the field of ion-beam materials modification and engineering. Utilizing incident ions, electrons, photons, and positrons, various analysis techniques, including Rutherford backscattering spectrometry (RBS), electron RBS, Raman spectroscopy, high-resolution X-ray diffraction, small-angle X-ray scattering, and positron annihilation spectroscopy, are routinely used or gaining increasing attention in characterizing ion beam modified materials. The distinctive information, recent developments, and some perspectives in these techniques are reviewed in this paper. Applications of these techniques are discussed to demonstrate their unique ability for studying ion-solid interactions and the corresponding radiation effects in modified depths ranging from a few nm to a few tens of μm, and to provide information on electronic and atomic structure of the materials, defect configuration and concentration, as well as phase stability, amorphization and recrystallization processes. Finally, such knowledge contributes to our fundamental understanding over a wide range of extreme conditions essential for enhancing material performance and also for design and synthesis of new materials to address a broad variety of future energy applications.

  1. Advanced techniques for characterization of ion beam modified materials

    DOE PAGES

    Zhang, Yanwen; Debelle, Aurélien; Boulle, Alexandre; Kluth, Patrick; Tuomisto, Filip

    2014-10-30

    Understanding the mechanisms of damage formation in materials irradiated with energetic ions is essential for the field of ion-beam materials modification and engineering. Utilizing incident ions, electrons, photons, and positrons, various analysis techniques, including Rutherford backscattering spectrometry (RBS), electron RBS, Raman spectroscopy, high-resolution X-ray diffraction, small-angle X-ray scattering, and positron annihilation spectroscopy, are routinely used or gaining increasing attention in characterizing ion beam modified materials. The distinctive information, recent developments, and some perspectives in these techniques are reviewed in this paper. Applications of these techniques are discussed to demonstrate their unique ability for studying ion-solid interactions and the corresponding radiation effects in modified depths ranging from a few nm to a few tens of μm, and to provide information on electronic and atomic structure of the materials, defect configuration and concentration, as well as phase stability, amorphization and recrystallization processes. Finally, such knowledge contributes to our fundamental understanding over a wide range of extreme conditions essential for enhancing material performance and also for design and synthesis of new materials to address a broad variety of future energy applications.

  2. Advanced materials and techniques for fibre-optic sensing

    NASA Astrophysics Data System (ADS)

    Henderson, Philip J.

    2014-06-01

    Fibre-optic monitoring systems came of age in about 1999 upon the emergence of the world's first significant commercialising company - a spin-out from the UK's collaborative MAST project. By using embedded fibre-optic technology, the MAST project successfully measured transient strain within high-performance composite yacht masts. Since then, applications have extended from smart composites into civil engineering, energy, military, aerospace, medicine and other sectors. Fibre-optic sensors come in various forms, and may be subject to embedment, retrofitting, and remote interrogation. The unique challenges presented by each implementation require careful scrutiny before widespread adoption can take place. Accordingly, various aspects of design and reliability are discussed spanning a range of representative technologies that include resonant microsilicon structures, MEMS, Bragg gratings, advanced forms of spectroscopy, and modern trends in nanotechnology. Keywords: Fibre-optic sensors, fibre Bragg gratings, MEMS, MOEMS, nanotechnology, plasmon.

  3. Recent advances in bioprinting techniques: approaches, applications and future prospects.

    PubMed

    Li, Jipeng; Chen, Mingjiao; Fan, Xianqun; Zhou, Huifang

    2016-01-01

    Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a "bioink" to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions. PMID:27645770

  4. Development of processing techniques for advanced thermal protection materials

    NASA Technical Reports Server (NTRS)

    Selvaduray, Guna S.

    1994-01-01

    The effort, which was focused on the research and development of advanced materials for use in Thermal Protection Systems (TPS), has involved chemical and physical testing of refractory ceramic tiles, fabrics, threads and fibers. This testing has included determination of the optical properties, thermal shock resistance, high temperature dimensional stability, and tolerance to environmental stresses. Materials have also been tested in the Arc Jet 2 x 9 Turbulent Duct Facility (TDF), the 1 atmosphere Radiant Heat Cycler, and the Mini-Wind Tunnel Facility (MWTF). A significant part of the effort hitherto has gone towards modifying and upgrading the test facilities so that meaningful tests can be carried out. Another important effort during this period has been the creation of a materials database. Computer systems administration and support have also been provided. These are described in greater detail below.

  5. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class, N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class, 4-feature discrimination problem with previously reported results, and to 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
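
    The combined analytical and numerical integration of the paper is not reproduced here. As a simpler point of reference, the sketch below estimates the Bayes (minimum) probability of error for two hypothetical Gaussian classes by Monte Carlo sampling of the optimal decision rule; the class means, covariances, and priors are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Two hypothetical Gaussian classes with equal priors (all parameters made up).
means = [np.array([0.0, 0.0]), np.array([1.5, 1.0])]
covs = [np.eye(2), np.array([[1.0, 0.3], [0.3, 1.5]])]
priors = [0.5, 0.5]

def bayes_error_mc(means, covs, priors, n=200_000):
    """Monte Carlo estimate of the Bayes (minimum) probability of error."""
    err = 0.0
    for c, (m, cov, prior) in enumerate(zip(means, covs, priors)):
        x = rng.multivariate_normal(m, cov, size=n)
        # Posterior-proportional scores for every class at the sampled points.
        scores = np.column_stack([
            pk * multivariate_normal.pdf(x, mk, ck)
            for mk, ck, pk in zip(means, covs, priors)
        ])
        err += prior * np.mean(np.argmax(scores, axis=1) != c)
    return err

print(f"estimated Bayes error: {bayes_error_mc(means, covs, priors):.4f}")
```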

  6. Advanced techniques for constrained internal coordinate molecular dynamics.

    PubMed

    Wagner, Jeffrey R; Balaraman, Gouthaman S; Niesen, Michiel J M; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan

    2013-04-30

    Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle, and torsional coordinates instead of a Cartesian coordinate representation. Freezing high-frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed to make the CICMD method robust and widely usable. In this article, we have designed a new framework for (1) initializing velocities for nonindependent CICMD coordinates, (2) efficient computation of center of mass velocity during CICMD simulations, (3) using advanced integrators such as Runge-Kutta, Lobatto, and adaptive CVODE for CICMD simulations, and (4) cancelling out the "flying ice cube effect" that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this article, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse-graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided "freezing and thawing" of degrees of freedom in the molecule on the fly during molecular dynamics simulations and is shown to fold four proteins to their native topologies. With these advancements, we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion.
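
    The GNEIMO internal-coordinate machinery is not shown here. As a minimal, generic illustration of one of the fixed-step integrators mentioned, the sketch below implements a classical fourth-order Runge-Kutta step and checks it on a simple harmonic oscillator; it makes no attempt to represent the constrained internal-coordinate equations of motion.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy check: simple harmonic oscillator with state y = (position, velocity).
f = lambda t, y: np.array([y[1], -y[0]])
y, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(int(round(2 * np.pi / h))):
    y = rk4_step(f, t, y, h)
    t += h
print(y)   # close to the initial state after one full period
```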

  7. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
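
    The MIT transformer model and the in-service data are not reproduced here. The sketch below only illustrates the output-error idea on an assumed first-order top-oil temperature-rise model: the output is simulated from the inputs alone and the simulated-versus-measured residual is minimized over the parameters. The model form, sample interval, and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Hypothetical first-order top-oil temperature-rise model (not MIT's model):
#   d(theta)/dt = (K * load**2 - theta) / tau
# with steady-state gain K and thermal time constant tau as the parameters.
dt, n = 60.0, 600                             # 60 s samples, 10 h record (made up)
load = 0.6 + 0.4 * (np.arange(n) > n // 3)    # step change in per-unit load

def simulate(params, load):
    """Forward-Euler simulation of the oil temperature rise from the load input."""
    K, tau = params
    theta = np.zeros(len(load))
    for k in range(1, len(load)):
        theta[k] = theta[k - 1] + dt * (K * load[k - 1] ** 2 - theta[k - 1]) / tau
    return theta

true = (40.0, 3600.0)                         # illustrative gain and time constant
measured = simulate(true, load) + 0.5 * rng.standard_normal(n)

# Output-error estimation: simulate the model output from the inputs alone and
# minimize the simulated-versus-measured output residual over the parameters.
fit = least_squares(lambda p: simulate(p, load) - measured,
                    x0=(20.0, 1000.0), bounds=([1.0, 100.0], [200.0, 20000.0]))
print(f"estimated K = {fit.x[0]:.1f}, tau = {fit.x[1]:.0f} s")
```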

  8. Some Bayesian statistical techniques useful in estimating frequency and density

    USGS Publications Warehouse

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
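
    As a minimal illustration of a Bayesian confidence (credible) limit for frequency of occurrence, the sketch below uses a Beta-Binomial model with a uniform prior; the detection counts are hypothetical, and the paper's specific priors and density results are not reproduced.

```python
from scipy.stats import beta

# Hypothetical survey: a species detected on 12 of 50 sample plots.
detections, plots = 12, 50

# Posterior for the frequency of occurrence under a uniform Beta(1, 1) prior.
posterior = beta(1 + detections, 1 + plots - detections)
lower, upper = posterior.ppf([0.025, 0.975])
print(f"posterior mean frequency: {posterior.mean():.3f}")
print(f"95% Bayesian credible interval: ({lower:.3f}, {upper:.3f})")
```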

  9. Estimation of soil hydraulic properties with microwave techniques

    NASA Technical Reports Server (NTRS)

    Oneill, P. E.; Gurney, R. J.; Camillo, P. J.

    1985-01-01

    Useful quantitative information about soil properties may be obtained by calibrating energy and moisture balance models with remotely sensed data. A soil physics model solves heat and moisture flux equations in the soil profile and is driven by the surface energy balance. Model generated surface temperature and soil moisture and temperature profiles are then used in a microwave emission model to predict the soil brightness temperature. The model hydraulic parameters are varied until the predicted temperatures agree with the remotely sensed values. This method is used to estimate values for saturated hydraulic conductivity, saturated matrix potential, and a soil texture parameter. The conductivity agreed well with a value measured with an infiltration ring and the other parameters agreed with values in the literature.

  10. Confidence region estimation techniques for nonlinear regression: three case studies.

    SciTech Connect

    Swiler, Laura Painton (Sandia National Laboratories, Albuquerque, NM); Sullivan, Sean P. (University of Texas, Austin, TX); Stucky-Mack, Nicholas J. (Harvard University, Cambridge, MA); Roberts, Randall Mark; Vugrin, Kay White

    2005-10-01

    This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a Log-Likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and Log-Likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.
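
    The groundwater case studies are not reproduced here. The sketch below illustrates the F-test construction on an assumed two-parameter exponential-rise model, flagging the grid points whose residual sum of squares falls under the standard F-based threshold; all data and parameter ranges are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import f as f_dist

rng = np.random.default_rng(4)

# Hypothetical nonlinear model y = a * (1 - exp(-b * t)) with parameters (a, b).
t = np.linspace(0.1, 10.0, 40)
true = (2.0, 0.7)
y = true[0] * (1.0 - np.exp(-true[1] * t)) + 0.05 * rng.standard_normal(t.size)

resid = lambda theta: theta[0] * (1.0 - np.exp(-theta[1] * t)) - y
fit = least_squares(resid, x0=(1.0, 1.0))
sse_min, n, p = np.sum(fit.fun ** 2), t.size, 2

# F-test confidence region: keep parameter pairs whose SSE satisfies
#   SSE(theta) <= SSE_min * (1 + p / (n - p) * F_{p, n-p, 0.95})
threshold = sse_min * (1.0 + p / (n - p) * f_dist.ppf(0.95, p, n - p))
a_grid, b_grid = np.meshgrid(np.linspace(1.8, 2.2, 120), np.linspace(0.5, 0.9, 120))
sse = np.array([[np.sum(resid((a, b)) ** 2) for a, b in zip(row_a, row_b)]
                for row_a, row_b in zip(a_grid, b_grid)])
inside = sse <= threshold
print(f"grid points inside the 95% F-test confidence region: {inside.sum()}")
```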

  11. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  12. A comparison of 2 techniques for estimating deer density

    USGS Publications Warehouse

    Robbins, C.S.

    1977-01-01

    We applied mark-resight and area-conversion methods to estimate deer abundance at a 2,862-ha area in and surrounding the Gettysburg National Military Park and Eisenhower National Historic Site during 1987-1991. One observer in each of 11 compartments counted marked and unmarked deer during 65-75 minutes at dusk during 3 counts in each of April and November. Use of radio-collars and vinyl collars provided a complete inventory of marked deer in the population prior to the counts. We sighted 54% of the marked deer during April 1987 and 1988, and 43% of the marked deer during November 1987 and 1988. Mean number of deer counted increased from 427 in April 1987 to 582 in April 1991, and increased from 467 in November 1987 to 662 in November 1990. Herd size during April, based on the mark-resight method, increased from approximately 700-1,400 from 1987-1991, whereas the estimates for November indicated an increase from 983 for 1987 to 1,592 for 1990. Given the large proportion of open area and the extensive road system throughout the study area, we concluded that the sighting probability for marked and unmarked deer was fairly similar. We believe that the mark-resight method was better suited to our study than the area-conversion method because deer were not evenly distributed between areas suitable and unsuitable for sighting within open and forested areas. The assumption of equal distribution is required by the area-conversion method. Deer marked for the mark-resight method also helped reduce double counting during the dusk surveys.
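
    The study's data are not reproduced here. As a minimal illustration of the mark-resight calculation, the sketch below applies Chapman's bias-corrected Lincoln-Petersen estimator to hypothetical counts of marked and unmarked deer.

```python
# Minimal mark-resight sketch; the numbers are hypothetical, not the study's data.
# M marked deer are known to be present; a dusk count sights C deer in total,
# R of which carry collars.
M, C, R = 150, 560, 72

# Chapman's bias-corrected Lincoln-Petersen estimator of herd size.
N_hat = (M + 1) * (C + 1) / (R + 1) - 1
print(f"estimated herd size: {N_hat:.0f}")
```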

  13. Development of a technique for estimating noise covariances using multiple observers

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1988-01-01

    Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.

  14. Advances in dental veneers: materials, applications, and techniques.

    PubMed

    Pini, Núbia Pavesi; Aguiar, Flávio Henrique Baggio; Lima, Débora Alves Nunes Leite; Lovadino, José Roberto; Terada, Raquel Sano Suga; Pascotto, Renata Corrêa

    2012-01-01

    Laminate veneers are a conservative treatment of unaesthetic anterior teeth. The continued development of dental ceramics offers clinicians many options for creating highly aesthetic and functional porcelain veneers. This evolution of materials, ceramics, and adhesive systems permits improvement of the aesthetic of the smile and the self-esteem of the patient. Clinicians should understand the latest ceramic materials in order to be able to recommend them and their applications and techniques, and to ensure the success of the clinical case. The current literature was reviewed to search for the most important parameters determining the long-term success, correct application, and clinical limitations of porcelain veneers.

  15. Advances in dental veneers: materials, applications, and techniques

    PubMed Central

    Pini, Núbia Pavesi; Aguiar, Flávio Henrique Baggio; Lima, Débora Alves Nunes Leite; Lovadino, José Roberto; Terada, Raquel Sano Suga; Pascotto, Renata Corrêa

    2012-01-01

    Laminate veneers are a conservative treatment of unaesthetic anterior teeth. The continued development of dental ceramics offers clinicians many options for creating highly aesthetic and functional porcelain veneers. This evolution of materials, ceramics, and adhesive systems permits improvement of the aesthetic of the smile and the self-esteem of the patient. Clinicians should understand the latest ceramic materials in order to be able to recommend them and their applications and techniques, and to ensure the success of the clinical case. The current literature was reviewed to search for the most important parameters determining the long-term success, correct application, and clinical limitations of porcelain veneers. PMID:23674920

  16. The emerging role of advanced neuroimaging techniques for brain metastases.

    PubMed

    Nowosielski, Martha; Radbruch, Alexander

    2015-06-01

    Brain metastases are an increasingly encountered and frightening manifestation of systemic cancer. More effective therapeutic strategies for the primary tumor are resulting in longer patient survival on the one hand while on the other, better brain tumor detection has resulted from increased availability and development of more precise brain imaging methods. This review focuses on the emerging role of functional neuroimaging techniques; magnetic resonance imaging (MRI) as well as positron emission tomography (PET), in establishing diagnosis, for monitoring treatment response with an emphasis on new targeted as well as immunomodulatory therapies and for predicting prognosis in patients with brain metastases.

  17. Cost estimation model for advanced planetary programs, fourth edition

    NASA Technical Reports Server (NTRS)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve upon the confidence in the model's capabilities with an expected accuracy of 20%. The model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.
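
    The model's actual estimating relationships and the 13-program data base are not reproduced here. The sketch below only illustrates the general form of a cost estimating relationship by fitting a power law to hypothetical mass-versus-cost data in log-log space.

```python
import numpy as np

# Hypothetical historical data base: spacecraft dry mass (kg) versus development
# cost ($M); the values are illustrative only, not the model's actual data.
mass = np.array([260.0, 410.0, 530.0, 760.0, 950.0, 1300.0, 1800.0])
cost = np.array([90.0, 130.0, 155.0, 210.0, 250.0, 320.0, 410.0])

# Fit a power-law cost estimating relationship cost = a * mass**b by
# ordinary linear regression in log-log space.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost ~ {a:.2f} * mass^{b:.2f}")
print(f"predicted cost for an 1100 kg spacecraft: {a * 1100.0 ** b:.0f} $M")
```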

  18. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A.; Bobrek, M.

    2016-11-01

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as those envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  19. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  20. Advanced terahertz techniques for quality control and counterfeit detection

    NASA Astrophysics Data System (ADS)

    Ahi, Kiarash; Anwar, Mehdi

    2016-04-01

    This paper reports our methods for the detection of counterfeit electronics. These versatile techniques are also useful in quality control applications. Terahertz pulsed laser systems are capable of giving the material characteristics and thus make it possible to distinguish between the materials used in authentic components and their counterfeit clones. Components with material defects can also be distinguished in this manner. In this work, different refractive indices and absorption coefficients were observed for counterfeit components compared to their authentic counterparts. The existence of unexpected ingredient materials was detected in counterfeit components by Fourier Transform analysis of the transmitted terahertz pulse. Thicknesses of different layers are obtainable by analyzing the reflected terahertz pulse, and the existence of unexpected layers is also detectable in this manner. Recycled, sanded, and blacktopped counterfeit electronic components were detected as a result of these analyses. Counterfeit ICs with die dislocations were detected by plotting the terahertz raster-scanning data in a coordinate plane, which yields terahertz images. In the same manner, raster scanning of the reflected pulse gives terahertz images of the surfaces of the components, which were used to investigate contaminant materials and sanded points on the surfaces. The results of the latter technique reveal the recycled counterfeit components.

  1. Advanced microscopy techniques resolving complex precipitates in steels

    NASA Astrophysics Data System (ADS)

    Saikaly, W.; Soto, R.; Bano, X.; Issartel, C.; Rigaut, G.; Charaï, A.

    1999-06-01

    Scanning electron microscopy as well as analytical transmission electron microscopy techniques such as high resolution, electron diffraction, energy dispersive X-ray spectrometry (EDX), parallel electron energy loss spectroscopy (PEELS) and elemental mapping via a Gatan Imaging Filter (GIF) have been used to study complex precipitation in commercial dual phase steels microalloyed with titanium. Titanium nitrides, titanium carbosulfides, titanium carbonitrides and titanium carbides were characterized in this study. Both carbon extraction replicas and thin foils were used as sample preparation techniques. On both the microscopic and nanometric scales, it was found that a large amount of precipitation occurred heterogeneously on already existing inclusions/precipitates. CaS inclusions (1 to 2 μm), already present in liquid steel, acted as nucleation sites for TiN precipitating upon the steel's solidification. In addition, TiC nucleated on existing smaller TiN (around 30 to 50 nm). Despite the complexity of such alloys, the statistical analysis conducted on the non-equilibrium samples was found to be in rather good agreement with the theoretical equilibrium calculations. Heterogeneous precipitation must have played a role in bringing these results closer together.

  2. Comparison of three advanced chromatographic techniques for cannabis identification.

    PubMed

    Debruyne, D; Albessard, F; Bigot, M C; Moulin, M

    1994-01-01

    The development of chromatography technology has made the identification of cannabis samples, which standard analytical laboratories are occasionally required to undertake in the effort to combat drug addiction, easier, quicker, and more positive. Contributing advances include the increasing availability of easier-to-use mass spectrometers combined with gas chromatography (GC), the use of diode-array or programmable variable-wavelength ultraviolet absorption detectors in conjunction with high-performance liquid chromatography (HPLC), and the availability of scanners capable of reading thin-layer chromatography (TLC) plates in the ultraviolet and visible regions. At laboratories that do not possess GC combined with mass spectrometry, which provides an irrefutable identification, the following procedure involving HPLC or TLC techniques may be used: identification of the chromatographic peaks corresponding to each of the three main cannabis constituents, cannabidiol (CBD), delta-9-tetrahydrocannabinol (delta-9-THC) and cannabinol (CBN), by comparison with published data in conjunction with a specific absorption spectrum for each of those constituents obtained between 200 and 300 nm. The collection of the fractions corresponding to the three major cannabinoids at the HPLC system outlet and the cross-checking of their identity in the GC process with flame ionization detection can further corroborate the identification and minimize possible errors due to interference.

  3. Development of Advanced In-Situ Techniques for Chemistry Monitoring and Corrosion Mitigation in SCWO Environments

    SciTech Connect

    Macdonald, D. D.; Lvov, S. N.

    2000-03-31

    This project is developing sensing technologies and corrosion monitoring techniques for use in supercritical water oxidation (SCWO) systems, which reduce the volume of mixed low-level nuclear waste by oxidizing organic components in a closed-cycle system where CO2 and other gaseous oxides are produced, leaving the radioactive elements concentrated in ash. The technique uses water at supercritical temperatures under highly oxidizing conditions, maintained by a high fugacity of molecular oxygen in the system, which causes high corrosion rates of even the most corrosion-resistant reactor materials. This project significantly addresses the high-corrosion shortcoming through development of (a) advanced electrodes and sensors for in situ potentiometric monitoring of pH in high subcritical and supercritical aqueous solutions; (b) an approach for evaluating the association constants for 1-1 aqueous electrolytes using a flow-through electrochemical thermocell; (c) an electrochemical noise sensor for the in situ measurement of corrosion rate in subcritical and supercritical aqueous systems; and (d) a model for estimating the effect of pressure on reaction rates, including corrosion reactions, in high subcritical and supercritical aqueous systems. The project achieved all objectives, except for installing some of the sensors into a fully operating SCWO system.

  4. Recent Advances in Spaceborne Precipitation Radar Measurement Techniques and Technology

    NASA Technical Reports Server (NTRS)

    Im, Eastwood; Durden, Stephen L.; Tanelli, Simone

    2006-01-01

    NASA is currently developing advanced instrument concepts and technologies for future spaceborne atmospheric radars, with an over-arching objective of making such instruments more capable in supporting future science needs and more cost effective. Two such examples are the Second-Generation Precipitation Radar (PR-2) and the Nexrad-In-Space (NIS). PR-2 is a 14/35-GHz dual-frequency rain radar with a deployable 5-meter, wide-swath scanned membrane antenna, a dual-polarized/dual-frequency receiver, and a realtime digital signal processor. It is intended for Low Earth Orbit (LEO) operations to provide greatly enhanced rainfall profile retrieval accuracy while consuming only a fraction of the mass of the current TRMM Precipitation Radar (PR). NIS is designed to be a 35-GHz Geostationary Earth Orbiting (GEO) radar for providing hourly monitoring of the life cycle of hurricanes and tropical storms. It uses a 35-m, spherical, lightweight membrane antenna and Doppler processing to acquire 3-dimensional information on the intensity and vertical motion of hurricane rainfall.

  5. Coal and Coal Constituent Studies by Advanced EMR Techniques

    SciTech Connect

    Alex I. Smirnov; Mark J. Nilges; R. Linn Belford; Robert B. Clarkson

    1998-03-31

    Advanced electronic magnetic resonance (EMR) methods are used to examine properties of coals, chars, and molecular species related to constituents of coal. We have achieved substantial progress on upgrading the high field (HF) EMR (W-band, 95 GHz) spectrometers that are especially advantageous for such studies. Particularly, we have built a new second W-band instrument (Mark II) in addition to our Mark I. Briefly, Mark II features: (i) an Oxford custom-built 7 T superconducting magnet which is scannable from 0 to 7 T at up to 0.5 T/min; (ii) water-cooled coaxial solenoid with up to ±550 G scan under digital (15 bits resolution) computer control; (iii) custom-engineered precision feed-back circuit, which is used to drive this solenoid, is based on an Ultrastab 860R sensor that has linearity better than 5 ppm and resolution of 0.05 ppm; (iv) an Oxford CF 1200 cryostat for variable temperature studies from 1.8 to 340 K. During this grant period we have completed several key upgrades of both Mark I and II, particularly microwave bridge, W-band probehead, and computer interfaces. We utilize these improved instruments for HF EMR studies of spin-spin interaction and existence of different paramagnetic species in carbonaceous solids.

  6. Advanced Cell Culture Techniques for Cancer Drug Discovery

    PubMed Central

    Lovitt, Carrie J.; Shelper, Todd B.; Avery, Vicky M.

    2014-01-01

    Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterize the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggest that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor. PMID:24887773

  7. Advanced coding techniques for few mode transmission systems.

    PubMed

    Okonkwo, Chigo; van Uden, Roy; Chen, Haoshuo; de Waardt, Huug; Koonen, Ton

    2015-01-26

    We experimentally verify the advantage of employing advanced coding schemes such as space-time coding and 4-dimensional modulation formats to enhance the transmission performance of a 3-mode transmission system. The performance gains of space-time block codes for extending the optical signal-to-noise ratio tolerance in multiple-input multiple-output optical coherent spatial division multiplexing transmission systems are evaluated with respect to single-mode transmission performance. By exploiting the spatial diversity that few-mode fibers offer, significant OSNR gains of 3.2, 4.1, 4.9, and 6.8 dB at the hard-decision forward error correction limit are demonstrated, relative to single-mode-fiber back-to-back performance, for DP-QPSK, 8, 16, and 32 QAM, respectively. Furthermore, by employing 4D constellations, 6 × 28 Gbaud 128 set-partitioned quadrature amplitude modulation is shown to outperform conventional 8 QAM transmission performance, whilst carrying an additional 0.5 bit/symbol.

  8. Multi-sensor merging techniques for improving burned area estimates

    NASA Astrophysics Data System (ADS)

    Bradley, A.; Tansey, K.; Chuvieco, E.

    2012-04-01

    The ESA Climate Change Initiative (CCI) aims to create a set of Essential Climate Variables (ECV) to assist climate modellers. One of these is the fire ECV, a product in line with the typical requirements of the climate, vegetation, and ecological modellers investigated by the fire ECV project and documented in the fire product specification document. The product is derived from burned area estimates of three sensors: SPOT VEGETATION (SPOT-VGT), the Along-Track Scanning Radiometer (ATSR) series, and the MEdium Resolution Imaging Spectrometer at Full Resolution (MERIS FRS). This abstract is concerned with the final stage in the production of the fire product, the merging of the burned area estimates from the three sensors into two products. The two products, the pixel product (1 km) and the aggregated grid product (0.5° and 0.25°), are created at monthly time steps. The pixel product contains information on the sensors detecting the burn, the date of burn detection, the confidence of the burn, and land cover statistics. The grid product contains aggregated information on burned area totals and proportion, major land cover burned, heterogeneity of burning in the grid cell, confidence, and cloud cover levels. The method used to create these products needs to allow for time series gaps due to multiple sensor combinations and different orbital and swath characteristics, and comprises a combination of statistical, selective, stratification, and fusion methods common to the satellite remote sensing community. The method is in three stages: first, a combined merge of the sensors at the same 1 km resolution. The earliest date of detection is recorded, and the sensor that performs the best over a particular vegetation type is taken as providing the most reliable confidence level. The second part involves fusion of the 300 m MERIS FRS data, allowing confidence levels and burn dates to be reported at a finer resolution. To allow for MERIS FRS pixels that cross adjacent 1 km pixels from the first step, the fusion is carried out at 100 m

  9. Techniques for estimating blood pressure variation using video images.

    PubMed

    Sugita, Norihiro; Obara, Kazuma; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Homma, Noriyasu

    2015-01-01

    It is important to know about a sudden blood pressure change that occurs in everyday life and may pose a danger to human health. However, monitoring the blood pressure variation in daily life is difficult because a bulky and expensive sensor is needed to measure the blood pressure continuously. In this study, a new non-contact method is proposed to estimate the blood pressure variation using video images. In this method, the pulse propagation time difference or instantaneous phase difference is calculated between two pulse waves obtained from different parts of a subject's body captured by a video camera. The forehead, left cheek, and right hand are selected as regions to obtain pulse waves. Both the pulse propagation time difference and instantaneous phase difference were calculated from the video images of 20 healthy subjects performing the Valsalva maneuver. These indices are considered to have a negative correlation with the blood pressure variation because they approximate the pulse transit time obtained from a photoplethysmograph. However, the experimental results showed that the correlation coefficients between the blood pressure and the proposed indices were approximately 0.6 for the pulse wave obtained from the right hand. This result is considered to be due to the difference in the transmission depth into the skin between the green and infrared light used as light sources for the video image and conventional photoplethysmogram, respectively. In addition, the difference in the innervation of the face and hand may be related to the results.
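
    The authors' processing pipeline is not reproduced here. The sketch below illustrates one way to obtain an instantaneous phase difference between two pulse waves using a band-pass filter and the Hilbert transform; the frame rate, pulse rate, lag, and noise level are assumed values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs = 30.0                                  # assumed video frame rate (Hz)
t = np.arange(0, 30.0, 1.0 / fs)

# Synthetic pulse waves from two body sites; the hand signal lags the forehead
# signal by 0.15 s in this toy example (all values are illustrative only).
pulse_rate = 1.2                           # 72 beats per minute
forehead = np.sin(2 * np.pi * pulse_rate * t) + 0.2 * rng.standard_normal(t.size)
hand = np.sin(2 * np.pi * pulse_rate * (t - 0.15)) + 0.2 * rng.standard_normal(t.size)

# Band-pass around the pulse band before extracting the analytic signal.
b, a = butter(3, [0.7, 3.0], btype="band", fs=fs)
forehead_f, hand_f = filtfilt(b, a, forehead), filtfilt(b, a, hand)

# Instantaneous phase difference between the two sites via the Hilbert transform.
phase_diff = (np.unwrap(np.angle(hilbert(forehead_f)))
              - np.unwrap(np.angle(hilbert(hand_f))))
edge = int(2 * fs)                         # drop filter edge effects
mean_diff = phase_diff[edge:-edge].mean()
print(f"mean phase difference: {mean_diff:.2f} rad "
      f"(~{mean_diff / (2 * np.pi * pulse_rate):.3f} s at the pulse rate)")
```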

  10. A Fast Goal Recognition Technique Based on Interaction Estimates

    NASA Technical Reports Server (NTRS)

    E-Martin, Yolanda; R-Moreno, Maria D.; Smith, David E.

    2015-01-01

    Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring user's needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.
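
    The cost-comparison idea can be made concrete with a small sketch: for each goal, the difference between the cost of a plan that embeds the observations and the cost of an unconstrained plan is mapped to a probability with a Boltzmann-style weighting. The softmax form and the `beta` parameter below follow a common Ramirez-and-Geffner-style treatment and are illustrative assumptions, not the interaction-estimate method introduced in the paper.

```python
import math

def goal_probabilities(plan_costs, beta=1.0):
    """plan_costs: {goal: (cost_with_observations, cost_without_observations)}.
    Returns a normalized probability for each candidate goal."""
    # Goals whose plans are barely penalized by having to include the observations
    # (small cost difference) receive higher weight.
    weights = {g: math.exp(-beta * (c_obs - c_free))
               for g, (c_obs, c_free) in plan_costs.items()}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Example: the observations fit goal "A" much better than goal "B".
print(goal_probabilities({"A": (10.0, 10.0), "B": (16.0, 11.0)}))
```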

  11. Advanced Process Monitoring Techniques for Safeguarding Reprocessing Facilities

    SciTech Connect

    Orton, Christopher R.; Bryan, Samuel A.; Schwantes, Jon M.; Levitskaia, Tatiana G.; Fraga, Carlos G.; Peper, Shane M.

    2010-11-30

    The International Atomic Energy Agency (IAEA) has established international safeguards standards for fissionable material at spent fuel reprocessing plants to ensure that significant quantities of weapons-grade nuclear material are not diverted from these facilities. For large throughput nuclear facilities, it is difficult to satisfy the IAEA safeguards accountancy goal for detection of abrupt diversion. Currently, methods to verify material control and accountancy (MC&A) at these facilities require time-consuming and resource-intensive destructive assay (DA). Leveraging new on-line non destructive assay (NDA) process monitoring techniques in conjunction with the traditional and highly precise DA methods may provide an additional measure to nuclear material accountancy which would potentially result in a more timely, cost-effective and resource efficient means for safeguards verification at such facilities. By monitoring process control measurements (e.g. flowrates, temperatures, or concentrations of reagents, products or wastes), abnormal plant operations can be detected. Pacific Northwest National Laboratory (PNNL) is developing on-line NDA process monitoring technologies, including both the Multi-Isotope Process (MIP) Monitor and a spectroscopy-based monitoring system, to potentially reduce the time and resource burden associated with current techniques. The MIP Monitor uses gamma spectroscopy and multivariate analysis to identify off-normal conditions in process streams. The spectroscopic monitor continuously measures chemical compositions of the process streams including actinide metal ions (U, Pu, Np), selected fission products, and major cold flowsheet chemicals using UV-Vis, Near IR and Raman spectroscopy. This paper will provide an overview of our methods and report our on-going efforts to develop and demonstrate the technologies.
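
    The multivariate analysis used by such monitors is not detailed in this abstract; as a generic illustration of flagging off-normal conditions in spectral process data, the sketch below fits a principal-component model to spectra from normal operation and flags new spectra whose Hotelling T² statistic exceeds an empirical limit. The class name, number of components and threshold rule are assumptions, not the MIP Monitor algorithm.

```python
import numpy as np

class SpectralAnomalyMonitor:
    """Generic multivariate monitor: PCA on normal-operation spectra plus a Hotelling T^2 limit."""

    def fit(self, normal_spectra, n_components=5, t2_limit_quantile=0.99):
        X = np.asarray(normal_spectra, dtype=float)
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # Principal components of the training spectra.
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.components_ = Vt[:n_components]
        self.var_ = (s[:n_components] ** 2) / (len(X) - 1)
        # Empirical T^2 control limit from the training data.
        self.t2_limit_ = np.quantile(self._t2(Xc), t2_limit_quantile)
        return self

    def _t2(self, Xc):
        scores = Xc @ self.components_.T
        return np.sum(scores ** 2 / self.var_, axis=1)

    def is_off_normal(self, spectrum):
        t2 = self._t2(np.atleast_2d(np.asarray(spectrum, dtype=float) - self.mean_))[0]
        return t2 > self.t2_limit_
```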

  12. Automated angiogenesis quantification through advanced image processing techniques.

    PubMed

    Doukas, Charlampos N; Maglogiannis, Ilias; Chatziioannou, Aristotle; Papapetropoulos, Andreas

    2006-01-01

    Angiogenesis, the formation of blood vessels in tumors, is an interactive process between tumor, endothelial and stromal cells that creates a network for the supply of oxygen and nutrients necessary for tumor growth. Accordingly, angiogenic activity is considered a suitable indicator for detecting both tumor growth and its inhibition. The angiogenic potential is usually estimated by counting the number of blood vessels in particular sections. One of the most popular assay tissues for studying the angiogenesis phenomenon is the developing chick embryo and its chorioallantoic membrane (CAM), which is a highly vascular structure lining the inner surface of the egg shell. The aim of this study was to develop and validate an automated image analysis method that would give an unbiased quantification of the micro-vessel density and growth in angiogenic CAM images. The presented method has been validated by comparing automated results to manual counts over a series of digital chick embryo photos. The results indicate the high accuracy of the tool, which has thus been used extensively for tumor growth detection at different stages of embryonic development. PMID:17946107
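
    A minimal sketch of this kind of automated micro-vessel quantification is shown below, using a generic vesselness filter (Frangi) followed by thresholding and connected-component measurement. The filter choice, threshold value and scikit-image pipeline are assumptions for illustration, not the authors' validated method.

```python
import numpy as np
from skimage import color, filters, measure, morphology

def vessel_density(rgb_image, vessel_threshold=0.05, min_size=50):
    """Estimate micro-vessel density as the fraction of image area covered by vessel-like structures."""
    gray = color.rgb2gray(rgb_image)
    # Frangi vesselness enhances elongated, tube-like structures such as CAM micro-vessels.
    vesselness = filters.frangi(gray)
    mask = vesselness > vessel_threshold
    # Remove small speckle before measuring.
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    labels = measure.label(mask)
    n_segments = labels.max()
    density = mask.mean()
    return density, n_segments
```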

  13. Silicon and germanium crystallization techniques for advanced device applications

    NASA Astrophysics Data System (ADS)

    Liu, Yaocheng

    Three-dimensional architectures are believed to be one of the possible approaches to reduce interconnect delay in integrated circuits. Metal-induced crystallization (MIC) can produce reasonably high-quality Si crystals with low-temperature processing, enabling the monolithic integration of multilevel devices and circuits. A two-step MIC process was developed to make single-crystal Si pillars on insulator by forming a single-grain NiSi2 template in the first step and crystallizing the amorphous Si by NiSi2-mediated solid-phase epitaxy (SPE) in the second step. A transmission electron microscopy study clearly showed the quality improvement over the traditional MIC process. Another crystallization technique developed is rapid melt growth (RMG) for the fabrication of Ge crystals and Ge-on-insulator (GeOI) substrates. Ge is an important semiconductor with high carrier mobility and excellent optoelectronic properties. GeOI substrates are particularly desired to achieve high device performances and to solve the process problems traditionally associated with bulk Ge wafers. High-quality Ge crystals and GeOI structures were grown on Si substrates using the novel rapid melt growth technique that integrates the key elements in Czochralski growth---seeding, melting, epitaxy and defect necking. Growth velocity and nucleation rate were calculated to determine the RMG process window. Self-aligned microcrucibles were created to hold the Ge liquid during the RMG annealing. Material characterization showed a very low defect density in the RMG GeOI structures. The Ge films are relaxed, with their orientations controlled by the Si substrates. P-channel MOSFETs and p-i-n photodetectors were fabricated with the GeOI substrates. The device properties are comparable to those obtained with bulk Ge wafers, indicating that the RMG GeOI substrates are well suited for device fabrication. A new theory, growth-induced barrier lowering (GIBL), is proposed to understand the defect generation in

  14. Advanced Manufacturing Techniques Demonstrated for Fabricating Developmental Hardware

    NASA Technical Reports Server (NTRS)

    Redding, Chip

    2004-01-01

    NASA Glenn Research Center's Engineering Development Division has been working in support of innovative gas turbine engine systems under development by Glenn's Combustion Branch. These one-of-a-kind components require operation under extreme conditions. High-temperature ceramics were chosen for fabrication because of the hostile operating environment. During the design process, it became apparent that traditional machining techniques would not be adequate to produce the small, intricate features for the conceptual design, which was to be produced by stacking over a dozen thin layers with many small features that would then be aligned and bonded together into a one-piece unit. Instead of using traditional machining, we produced computer models in Pro/ENGINEER (Parametric Technology Corporation (PTC), Needham, MA) to the specifications of the research engineer. The computer models were exported in stereolithography standard (STL) format and used to produce full-size rapid prototype polymer models. These semi-opaque plastic models were used for visualization and design verification. The computer models also were exported in International Graphics Exchange Specification (IGES) format and sent to Glenn's Thermal/Fluids Design & Analysis Branch and Applied Structural Mechanics Branch for profiling heat transfer and mechanical strength analysis.

  15. Simulation of an advanced techniques of ion propulsion Rocket system

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, R.

    2016-07-01

    The ion propulsion rocket system is expected to become popular with the development of Deuterium and Argon gas and hexagonal-shape magnetohydrodynamic (MHD) techniques, because the power is generated indirectly from the ionization chamber; the design thrust range is 1.2 N with 40 kW of electric power and high efficiency. The proposed work studies MHD power generation through the ionization level of Deuterium gas and the combination of two gaseous ions (Deuterium gas ions + Argon gas ions) at the acceleration stage. The IPR system consists of three parts: (1) a hexagonal-shape MHD-based power generator fed by the ionization chamber, (2) an ion accelerator, and (3) an exhaust nozzle. Initially, the required energy of around 1312 kJ/mol is used to bring the Deuterium gas to the ionization level. The ionized Deuterium gas passes from the RF ionization chamber to the nozzle through the MHD generator with enhanced velocity, and a voltage is then generated across the two pairs of electrodes in the MHD section. Thrust is produced by mixing the Deuterium and Argon ions at the acceleration stage. The simulation of the IPR system has been carried out in MATLAB. Comparison of the simulation results with theoretical and previous results shows that the proposed method achieves the target thrust value with 40 kW of power for the simulated IPR system.

  16. Advances in Current Rating Techniques for Flexible Printed Circuits

    NASA Technical Reports Server (NTRS)

    Hayes, Ron

    2014-01-01

    Twist Capsule Assemblies are power transfer devices commonly used in spacecraft mechanisms that require electrical signals to be passed across a rotating interface. Flexible printed circuits (flex tapes, see Figure 2) are used to carry the electrical signals in these devices. Determining the current rating for a given trace (conductor) size can be challenging. Because of the thermal conditions present in this environment, the most appropriate approach is to assume that the only means by which heat is removed from the trace is through the conductor itself, so that when the flex tape is long the temperature rise in the trace can be extreme. While this technique represents a worst-case thermal situation that yields conservative current ratings, this conservatism may lead to overly cautious designs when not all traces are used at their full rated capacity. A better understanding of how individual traces behave when they are not all in use is the goal of this research. In the testing done in support of this paper, a representative flex tape used for a flight Solar Array Drive Assembly (SADA) application was tested by energizing individual traces (conductors in the tape) in a vacuum chamber and measuring the temperatures of the tape using both fine-gauge thermocouples and infrared thermographic imaging. We find that traditional derating schemes used for bundles of wires do not apply for the configuration tested. We also determine that single active traces located in the center of a flex tape operate at lower temperatures than those on the outside edges.

  17. Recent advances in techniques for tsetse-fly control*

    PubMed Central

    MacLennan, K. J. R.

    1967-01-01

    With the advent of modern persistent insecticides, it has become possible to utilize some of the knowledge that has accumulated on the ecology and bionomics of Glossina and to devise more effective techniques for the control and eventual extermination of these species. The present article, based on experience of the tsetse fly problem in Northern Nigeria, points out that the disadvantages of control techniques—heavy expenditure of money and manpower and undue damage to the biosystem—can now largely be overcome by basing the application of insecticides on knowledge of the habits of the particular species of Glossina in a particular environment. Two factors are essential to the success of a control project: the proper selection of sites for spraying (the concept of restricted application) and the degree of persistence of the insecticide used. Reinfestation from within or outside the project area must also be taken into account. These and other aspects are discussed in relation to experience gained from a successful extermination project carried out in the Sudan vegetation zone and from present control activities in the Northern Guinea vegetation zone. PMID:5301739

  18. Advanced pattern-matching techniques for autonomous acquisition

    NASA Astrophysics Data System (ADS)

    Narendra, P. M.; Westover, B. L.

    1981-01-01

    The key objective of this effort is the development of pattern-matching algorithms which can impart autonomous acquisition capability to precision-guided munitions such as Copperhead and Hellfire. Autonomous acquisition through pattern matching holds the promise of eliminating laser designation and enhancing firepower by multiple target prioritization. The pattern-matching approach being developed under this program is based on a symbolic pattern-matching framework, which is suited for the autonomous acquisition scenario. It is based on matching a symbolic representation derived from the two images, and it can accommodate the stringent pattern-matching criteria established by the scenario: enormous differences in the scene perspective, aspect and range between the two sensors, differences in sensor characteristics and illumination, and scene changes such as target motion and obscuration from one viewpoint to the other. This report contains a description of an efficient branch-and-bound technique for symbolic pattern matching. Also presented are the results of applying a simulation of the algorithm to pairs of FLIR images of military vehicles in cluttered environments as well as pairs of images from different sensors (FLIR and silicon TV). The computational requirements are analyzed toward real-time implementation, and avenues of future work are recommended.

  19. Advanced signal processing technique for damage detection in steel tubes

    NASA Astrophysics Data System (ADS)

    Amjad, Umar; Yadav, Susheel Kumar; Dao, Cac Minh; Dao, Kiet; Kundu, Tribikram

    2016-04-01

    In recent years, ultrasonic guided waves have gained attention for reliable testing and characterization of metals and composites. Guided wave modes are excited and detected by PZT (Lead Zirconate Titanate) transducers in either transmission or reflection mode. In this study, guided waves are excited and detected in the transmission mode and the phase change of the propagating wave modes is recorded. In most of the other studies reported in the literature, the change in the received signal strength (amplitude) is investigated with varying degrees of damage, while in this study the change in phase is correlated with the extent of damage. Feature extraction techniques are used for extracting phase and time-frequency information. The main advantage of this approach is that the bonding condition between the transducer and the specimen does not affect the phase while it can affect the strength of the recorded signal. Therefore, if the specimen is not damaged but the transducer-specimen bonding is deteriorated, then the received signal strength is altered but the phase remains the same, and thus false positive predictions of damage can be avoided.

  20. Geostatistical characterization of the soil of Aguascalientes, México, by using spatial estimation techniques.

    PubMed

    Magdaleno-Márquez, Ricardo; de la Luz Pérez-Rea, María; Castaño, Víctor M

    2016-01-01

    Four spatial estimation techniques available in commercial computational packages are evaluated and compared, namely: regularized splines interpolation, tension splines interpolation, inverse distance weighted interpolation, and ordinary Kriging estimation, in order to establish the best representation of the shallow stratigraphic configuration in the city of Aguascalientes, in Central Mexico. Data from 478 sample points were used, along with the software ArcGIS (Environmental Systems Research Institute, Inc. (ESRI), ArcGIS, ver. 9.3, Redlands, California 2008), to calculate the spatial estimates. Each technique was evaluated based on the root mean square error, calculated from a validation between the generated estimates and measured data from 64 sample points which were not used in the spatial estimation process. The present study shows that, for the estimation of the hard-soil layer, ordinary Kriging offered the best performance among the evaluated techniques.
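
    As a simple illustration of how such techniques are compared, the sketch below implements inverse distance weighted interpolation and scores it with the root mean square error on held-out validation points. The power parameter and function names are assumptions, and the commercial Kriging and spline implementations used in the study are not reproduced here.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighted interpolation at query locations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at coincident points
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def rmse(predicted, observed):
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Hypothetical usage: 478 sample points for fitting, 64 held-out points for validation.
# z_hat = idw_predict(xy_train, z_train, xy_valid)
# print("IDW RMSE:", rmse(z_hat, z_valid))
```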

  1. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    SciTech Connect

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  2. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss "small-group apprenticeships (SGAs)" as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments…

  3. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research.

    ERIC Educational Resources Information Center

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    Discusses small-group apprenticeships (SGAs) as a method for introducing cell culture techniques to high school participants. Teaches cell culture practices and introduces advance imaging techniques to solve various biomedical engineering problems. Clarifies and illuminates the value of small-group laboratory apprenticeships. (Author/KHR)

  4. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.; Stebbins, J. P.; Smith, A. W.; Pullen, K. E.

    1973-01-01

    A method for the prediction of propellant-material compatibility for periods of time up to ten years is presented. Advanced sensitive measurement techniques used in the prediction method are described. These include: neutron activation analysis, radioactive tracer technique, and atomic absorption spectroscopy with a graphite tube furnace sampler. The results of laboratory tests performed to verify the prediction method are presented.

  5. Endoscopic therapy for early gastric cancer: Standard techniques and recent advances in ESD

    PubMed Central

    Kume, Keiichiro

    2014-01-01

    The technique of endoscopic submucosal dissection (ESD) is now a well-known endoscopic therapy for early gastric cancer. ESD was introduced to resect large specimens of early gastric cancer in a single piece. ESD can provide precision of histologic diagnosis and can also reduce the recurrence rate. However, the drawback of ESD is its technical difficulty, and, consequently, it is associated with a high rate of complications, the need for advanced endoscopic techniques, and a lengthy procedure time. Various advances in the devices and techniques used for ESD have contributed to overcoming these drawbacks. PMID:24914364

  6. Use of environmental isotope tracer and GIS techniques to estimate basin recharge

    NASA Astrophysics Data System (ADS)

    Odunmbaku, Abdulganiu A. A.

    The extensive use of ground water only began with advances in pumping technology in the early portion of the 20th century. Groundwater provides the majority of the fresh water supply for municipal, agricultural and industrial uses, primarily because it requires little to no treatment. Estimating the volume of groundwater available in a basin is a daunting task, and no accurate measurements can be made. Usually, water budgets and simulation models are used to estimate the volume of water in a basin. Precipitation, land surface cover and subsurface geology are factors that affect recharge; these factors affect percolation, which invariably affects groundwater recharge. Depending on precipitation, soil chemistry, groundwater chemical composition, gradient and depth, the age and rate of recharge can be estimated. The present research proposes to estimate the recharge in the Mimbres, Tularosa and Diablo Basins using the chloride environmental isotope, the chloride mass-balance approach and GIS. It also proposes to determine the effect of elevation on recharge rate. The Mimbres and Tularosa Basins are located in southern New Mexico and extend southward into Mexico. The Diablo Basin is located in Texas and extends southward. This research utilizes the chloride mass balance approach to estimate the recharge rate through collection of groundwater data from wells, and precipitation. The data were analysed statistically to eliminate duplication, outliers, and incomplete data. Cluster analysis, piper diagrams and statistical significance tests were performed on the groundwater parameters; the infiltration rate was determined using the chloride mass balance technique. The data were then analysed spatially using ArcGIS 10. Regions of active recharge were identified in the Mimbres and Diablo Basins, but this could not be clearly identified in the Tularosa Basin. CMB recharge for the Tularosa Basin yields 0.04037 mm/yr (0.0016 in/yr), the Diablo Basin was 0.047 mm/yr (0.0016 in/yr), and 0.2153 mm/yr (0.00848 in
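
    The chloride mass balance (CMB) approach referred to above rests on the standard relation R = P * Cl_p / Cl_gw, where P is mean annual precipitation, Cl_p is the chloride concentration in precipitation (plus any dry deposition) and Cl_gw is the chloride concentration in groundwater. A minimal sketch with hypothetical input values follows; the numbers are illustrative only.

```python
def cmb_recharge(precip_mm_per_yr, cl_precip_mg_per_l, cl_groundwater_mg_per_l):
    """Chloride mass balance recharge estimate (mm/yr): R = P * Cl_p / Cl_gw."""
    return precip_mm_per_yr * cl_precip_mg_per_l / cl_groundwater_mg_per_l

# Hypothetical example: 300 mm/yr precipitation, 0.4 mg/L chloride in rainfall,
# 900 mg/L chloride in groundwater -> roughly 0.13 mm/yr of recharge.
print(cmb_recharge(300.0, 0.4, 900.0))
```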

  7. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, R.J.; Helsel, D.R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores.
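
    The log-probability regression idea can be sketched as follows: logarithms of the uncensored observations are regressed on their normal quantiles (z scores), the fitted line is used to impute values for the censored portion of the distribution, and summary statistics are then computed from the combined data. The plotting-position formula and helper names below are illustrative assumptions rather than the exact procedure evaluated in the study.

```python
import numpy as np
from scipy import stats

def log_probability_regression(uncensored, n_censored):
    """Estimate mean and standard deviation of a left-censored, roughly lognormal sample.

    uncensored : detected concentrations; n_censored : count of values below the detection limit.
    """
    x = np.sort(np.asarray(uncensored, dtype=float))
    n = len(x) + n_censored
    # Plotting positions for the uncensored observations (they occupy the upper ranks).
    ranks = np.arange(n_censored + 1, n + 1)
    z = stats.norm.ppf((ranks - 0.375) / (n + 0.25))   # Blom-type plotting positions (assumed)
    slope, intercept, *_ = stats.linregress(z, np.log(x))
    # Impute the censored observations from the fitted line at the lower plotting positions.
    z_cens = stats.norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
    imputed = np.exp(intercept + slope * z_cens)
    full = np.concatenate([imputed, x])
    return full.mean(), full.std(ddof=1)
```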

  8. [Our experience with the treatment of high perianal fistulas with the mucosal flap advancement technique].

    PubMed

    Marino, Giuseppe; Greco, Ettore; Gasparrini, Marcello; Romanzi, Aldo; Ottaviani, Maurizio; Nasi, Stefano; Pasquini, Giorgio

    2004-01-01

    The authors present their experience with the treatment of high transsphincteric anal fistulas with the mucosal flap advancement technique. This technique, though by no means easy to perform, allows fistulas to be treated in a single surgical session, in comparison to the technique in which a seton is used or to the less well known transposition techniques, with the same long-term results in terms of continence and recurrence rate. After a brief overview of the problem, from the points of view of both aetiopathogenesis and classification, the principal surgical treatment techniques are described, presenting the results and complications observed in the authors' own case series. PMID:15038659

  9. Advanced remote sensing techniques for forestry applications: an application case in Sarawak, Malaysia

    NASA Astrophysics Data System (ADS)

    Nezry, Edmond; Yakam-Simen, Francis; Romeijn, Paul P.; Supit, Iwan; Demargne, Louis

    2001-02-01

    This paper reports the operational implementation of new techniques for the exploitation of remote sensing data (SAR and optical) in the framework of forestry applications. In particular, we present a new technique for standing timber volume estimation. This technique, based on remote sensing knowledge (SAR and optical synergy) and forestry knowledge (forest structure models), has proved fairly accurate. To illustrate the application of these techniques, an operational commercial case study regarding forest concessions in Sarawak is presented. Validation of this technique by comparison of the remote sensing results with the customer's database has confirmed that it is fairly accurate.

  10. A Time Series Separation and Reconstruction (TSSR) Technique to Estimate Daily Suspended Sediment Concentrations

    EPA Science Inventory

    High suspended sediment concentrations (SSCs) from natural and anthropogenic sources are responsible for biological impairments of many streams, rivers, lakes, and estuaries, but techniques to estimate sediment concentrations or loads accurately at the daily temporal resolution a...

  11. A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media

    EPA Science Inventory

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...

  12. A technique for the radar cross-section estimation of axisymmetric plasmoid

    NASA Astrophysics Data System (ADS)

    Naumov, N. D.; Petrovskiy, V. P.; Sasinovskiy, Yu K.; Shkatov, O. Yu

    2015-11-01

    A model for the radio waves backscattering from both penetrable plasma and reflecting plasma is developed. The technique proposed is based on Huygens's principle and reduces the radar cross-section estimation to numerical integrations.

  13. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
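
    For concreteness, the sketch below estimates mixing proportions by maximum likelihood via a few EM iterations, assuming the normal component densities are known; this mirrors only the ML side of the comparison, and the parameter values and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def ml_mixing_proportions(x, means, sds, n_iter=200):
    """EM estimate of mixing proportions p_i for a mixture of known normal components."""
    x = np.asarray(x, dtype=float)
    m = len(means)
    p = np.full(m, 1.0 / m)
    # Component likelihoods f_i(x) are fixed because the components are assumed known.
    f = np.stack([stats.norm.pdf(x, mu, sd) for mu, sd in zip(means, sds)], axis=1)
    for _ in range(n_iter):
        w = p * f                          # unnormalized posterior responsibilities
        w /= w.sum(axis=1, keepdims=True)
        p = w.mean(axis=0)                 # M-step: update the proportions
    return p

# Example: two crops with known spectral-band distributions, true proportions 0.7 / 0.3.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
print(ml_mixing_proportions(x, means=[0, 3], sds=[1, 1]))
```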

  14. Recent Advances in Stable Isotope Techniques for N2O Source Partitioning in Soils

    NASA Astrophysics Data System (ADS)

    Baggs, E.; Mair, L.; Mahmood, S.

    2007-12-01

    The use of 13C, 15N and 18O enables us to overcome uncertainties associated with soil C and N processes and to assess the links between species diversity and ecosystem function. Recent advances in stable isotope techniques enable determination of process rates, and are fundamental for examining interactions between C and N cycles. Here we will introduce the 15N-, 18O- and 13C-enrichment techniques we have developed to distinguish between different N2O-producing processes in situ in soils, presenting selected results, and will critically assess their potential, alone and in combination with molecular techniques, to help address key research questions for soil biogeochemistry and microbial ecology. We have developed 15N- 18O-enrichment techniques to distinguish between, and to quantify, N2O production during ammonia oxidation, nitrifier denitrification and denitrification. This provides a great advantage over natural abundance approaches as it enables quantification of N2O from each microbial source, which can be coupled with quantification of N2 production, and used to examine interactions between different processes and cycles. These approaches have also provided new insights into the N cycle and how it interacts with the C cycle. For example, we now know that ammonia oxidising bacteria significantly contribute to N2O emissions from soils, both via the traditionally accepted ammonia oxidation pathway, and also via denitrification (nitrifier denitrification) which can proceed even under aerobic conditions. We are also linking emissions from each source to diversity and activity of relevant microbial functional groups, for example through the development and application of a specific nirK primer for the nitrite reductase in ammonia oxidising bacteria. Recently, isotopomers have been proposed as an alternative for source partitioning N2O at natural abundance levels, and offer the potential to investigate N2O production from nitrate ammonification, and overcome the

  15. Parameter estimation and tests of General Relativity with GW transients in Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Vitale, Salvatore

    2016-03-01

    The Advanced LIGO observatories have successfully completed their first observation run. Data were collected from September 2015 to January 2016, with a sensitivity a few times better than initial instruments in the hundreds of Hertz band. Bayesian parameter estimation and model selection algorithms can be used to estimate the astrophysical parameters of gravitational-wave sources, as well as to perform tests of General Relativity in its strong-field dynamical regime. In this talk we will describe the methods devised to characterize transient gravitational wave sources and their applications in the advanced gravitational-wave detector era.

  16. Evaluation of small area crop estimation techniques using LANDSAT- and ground-derived data. [South Dakota

    NASA Technical Reports Server (NTRS)

    Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)

    1982-01-01

    This report describes studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.

  17. Sinogram smoothing techniques for myocardial blood flow estimation from dose-reduced dynamic computed tomography

    PubMed Central

    Modgil, Dimple; Alessio, Adam M.; Bindschadler, Michael D.; La Rivière, Patrick J.

    2014-01-01

    Dynamic contrast-enhanced computed tomography (CT) could provide an accurate and widely available technique for myocardial blood flow (MBF) estimation to aid in the diagnosis and treatment of coronary artery disease. However, one of its primary limitations is the radiation dose imparted to the patient. We are exploring techniques to reduce the patient dose by either reducing the tube current or by reducing the number of temporal frames in the dynamic CT sequence. Both of these dose reduction techniques result in noisy data. In order to extract the MBF information from the noisy acquisitions, we have explored several data-domain smoothing techniques. In this work, we investigate two specific smoothing techniques: the sinogram restoration technique in both the spatial and temporal domains and the use of the Karhunen-Loeve (KL) transform to provide temporal smoothing in the sinogram domain. The KL transform smoothing technique has been previously applied to dynamic image sequences in positron emission tomography. We apply a quantitative two-compartment blood flow model to estimate MBF from the time-attenuation curves and determine which smoothing method provides the most accurate MBF estimates in a series of simulated dynamic contrast-enhanced cardiac CT acquisitions at different dose levels. As measured by root mean square percentage error (% RMSE) in MBF estimates, sinogram smoothing generally provides the best MBF estimates except for the cases of the lowest simulated dose levels (tube current=25 mAs, 2 or 3 s temporal spacing), where the KL transform method provides the best MBF estimates. The KL transform technique provides improved MBF estimates compared to conventional processing only at very low doses (<7 mSv). Results suggest that the proposed smoothing techniques could provide high fidelity MBF information and allow for substantial radiation dose savings. PMID:25642441
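
    The KL-transform temporal smoothing can be sketched as follows: treat the dynamic sequence of sinograms as a matrix with one row per time frame, compute the principal temporal components, and reconstruct the sequence from only the leading components so that low-variance (mostly noise) components are suppressed. The number of retained components and the function name are assumptions for illustration.

```python
import numpy as np

def kl_temporal_smoothing(sinograms, n_keep=4):
    """sinograms: array of shape (n_frames, n_detector_bins); returns a temporally smoothed copy."""
    X = np.asarray(sinograms, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # The temporal covariance is (n_frames x n_frames); its eigenvectors form the KL basis in time.
    cov = Xc @ Xc.T / X.shape[1]
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_keep]      # keep the highest-variance temporal components
    basis = evecs[:, order]
    # Project onto the retained KL components and reconstruct.
    return basis @ (basis.T @ Xc) + mean
```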

  18. A Novel Microcharacterization Technique in the Measurement of Strain and Orientation Gradient in Advanced Materials

    NASA Technical Reports Server (NTRS)

    Garmestai, H.; Harris, K.; Lourenco, L.

    1997-01-01

    Representation of the morphology and evolution of the microstructure during processing, and their relation to properties, requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and in the matrix of a layered structure or a functionally gradient material, and their variation, are among the parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials, including the stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. These techniques are compared with the more conventional x-ray diffraction and indentation techniques.

  19. Advances in high-resolution imaging – techniques for three-dimensional imaging of cellular structures

    PubMed Central

    Lidke, Diane S.; Lidke, Keith A.

    2012-01-01

    A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques. PMID:22685332

  20. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. The performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.

  1. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    NASA Technical Reports Server (NTRS)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

    Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.

  2. Dissipation estimates in turbulent flows using the zero-wire-length technique

    NASA Astrophysics Data System (ADS)

    Browne, L. W. B.; Zhu, Y.; Antonia, R. A.

    1991-05-01

    The validity of the zero-wire-length technique of Azad and Kassab (1989) for estimating dissipation in turbulent flows was examined experimentally for the flow in the far wake of a circular cylinder. It was found that, for the far wake of the cylinder, the zero-wire-length dissipation technique using wire lengths no longer than about 5 times the Kolmogorov microscale yields only isotropic estimates of dissipation rather than correct estimates of the actual dissipation. It is suggested that the results reported by Turan and Azad (1989) and by Azad and Kassab (1989) for a boundary layer and a fully developed pipe flow cannot be regarded as accurate.

  3. POC-Scale Testing of an Advanced Fine Coal Dewatering Equipment/Technique

    SciTech Connect

    Karekh, B K; Tao, D; Groppo, J G

    1998-08-28

    The froth flotation technique is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a moisture level of 20% will be an important step in the successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal could be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 45 months, beginning September 30, 1994. This report discusses technical progress made during the quarter from January 1 - March 31, 1998.

  4. Modulation/demodulation techniques for satellite communications. Part 3: Advanced techniques. The nonlinear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the nonlinear satellite channel is presented. The underlying principle used throughout is the development of receiver structures based on the maximum likelihood decision rule and approximations to it. The bit error probability transfer function bounds developed in great detail in Part 4 are applied to these modulation/demodulation techniques. The effects of various degrees of receiver mismatch are considered both theoretically and by numerous illustrative examples.

  5. Improvement of color reproduction in color digital holography by using spectral estimation technique.

    PubMed

    Xia, Peng; Shimozato, Yuki; Ito, Yasunori; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2011-12-01

    We propose a color digital holography method that uses a spectral estimation technique to improve the color reproduction of objects. In conventional color digital holography, there is insufficient spectral information in the holograms, and the color of the reconstructed images depends only on the reflectances at the three discrete wavelengths used in the recording of the holograms. Therefore the color-composite image of the three reconstructed images is not accurate in color reproduction. In our proposed method, however, a spectral estimation technique of the kind reported in multispectral imaging is applied. With the spectral estimation technique, the continuous spectrum of the object can be estimated and the color reproduction is improved. The effectiveness of the proposed method was confirmed by a numerical simulation and an experiment; in the results, the average color differences decreased from 35.81 to 7.88 and from 43.60 to 25.28, respectively. PMID:22193005
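
    Spectral estimation from a few discrete wavelength samples is commonly done with a linear estimator trained on a set of reference spectra; the sketch below is one such generic least-squares formulation and is an assumption for illustration rather than the specific estimator used by the authors.

```python
import numpy as np

def fit_spectral_estimator(training_spectra, sampling_matrix):
    """Least-squares linear estimator mapping three recorded responses to a full spectrum.

    training_spectra : (n_samples, n_wavelengths) reference reflectance spectra
    sampling_matrix  : (3, n_wavelengths) spectral responses of the three recording wavelengths
    """
    R = np.asarray(training_spectra, dtype=float)
    A = np.asarray(sampling_matrix, dtype=float)
    C = R @ A.T                      # simulated 3-channel responses of the training spectra
    # Solve for W such that W @ c approximates r in the least-squares sense over the training set.
    W, *_ = np.linalg.lstsq(C, R, rcond=None)
    return W.T                       # shape (n_wavelengths, 3)

def estimate_spectrum(W, responses):
    """Reconstruct a continuous reflectance spectrum from three measured responses."""
    return W @ np.asarray(responses, dtype=float)
```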

  6. The Pilot Training Study: A Cost-Estimating Model for Advanced Pilot Training (APT).

    ERIC Educational Resources Information Center

    Knollmeyer, L. E.

    The Advanced Pilot Training Cost Model is a statement of relationships that may be used, given the necessary inputs, for estimating the resources required and the costs to train pilots in the Air Force formal flying training schools. Resources and costs are computed by weapon system on an annual basis for use in long-range planning or sensitivity…

  7. Image enhancement and advanced information extraction techniques for ERTS-1 data

    NASA Technical Reports Server (NTRS)

    Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.

    1975-01-01

    The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm for overcoming problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high contrast targets such as lakes and ponds.

  8. Feasibility Studies of Applying Kalman Filter Techniques to Power System Dynamic State Estimation

    SciTech Connect

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jarek

    2007-08-01

    The lack of dynamic information in power system operations is mainly attributable to the static modeling of traditional state estimation, and state estimation is the basis driving many other operations functions. This paper investigates the feasibility of applying Kalman filter techniques to enable the inclusion of dynamic modeling in the state estimation process and the estimation of power system dynamic states. The proposed Kalman-filter-based dynamic state estimation is tested on a multi-machine system with both large and small disturbances. Sensitivity studies of the dynamic state estimation performance with respect to measurement characteristics (sampling rate and noise level) are presented as well. The study results show that there is a promising path forward to implementing Kalman-filter-based dynamic state estimation with the emerging phasor measurement technologies.
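
    A minimal discrete-time Kalman filter, of the kind such dynamic state estimators build on, is sketched below; the linear model matrices, noise covariances and variable names are generic assumptions rather than the multi-machine power system model used in the paper.

```python
import numpy as np

def kalman_filter(z_seq, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter: returns filtered state estimates for a measurement sequence."""
    x, P = np.asarray(x0, dtype=float), np.asarray(P0, dtype=float)
    estimates = []
    for z in z_seq:
        # Predict step: propagate the state and its covariance through the dynamic model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct the prediction with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```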

  9. Application of Advanced Magnetic Resonance Imaging Techniques in Evaluation of the Lower Extremity

    PubMed Central

    Braun, Hillary J.; Dragoo, Jason L.; Hargreaves, Brian A.; Levenston, Marc E.; Gold, Garry E.

    2012-01-01

    Synopsis This article reviews current magnetic resonance imaging techniques for imaging the lower extremity, focusing on imaging of the knee, ankle, and hip joints. Recent advancements in MRI include imaging at 7 Tesla, using multiple receiver channels, T2* imaging, and metal suppression techniques, allowing more detailed visualization of complex anatomy, evaluation of morphological changes within articular cartilage, and imaging around orthopedic hardware. PMID:23622097

  10. A time series deformation estimation in the NW Himalayas using SBAS InSAR technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Venkataraman, G.

    2012-12-01

    A time series land deformation study in the north-western Himalayan region is presented. Synthetic aperture radar (SAR) interferometry (InSAR) is an important tool for measuring the land displacement caused by different geological processes [1]. Frequent spatial and temporal decorrelation in the Himalayan region is a strong impediment to precise deformation estimation using the conventional interferometric SAR approach. In such cases, advanced DInSAR approaches such as PSInSAR and the Small Baseline Subset (SBAS) can be used to estimate earth surface deformation. The SBAS technique [2] is a DInSAR approach which uses twelve or more repeat SAR acquisitions in different combinations of properly chosen data subsets for the generation of DInSAR interferograms using a two-pass interferometric approach. Finally, it leads to the generation of mean deformation velocity maps and displacement time series. Herein, the SBAS algorithm has been used for time series deformation estimation in the NW Himalayan region. ENVISAT ASAR IS2 swath data from 2003 to 2008 have been used for quantifying slow deformation. The Himalayan region is a very active tectonic belt and active orogeny plays a significant role in the land deformation process [3]. Geomorphology in the region is unique and reacts to climate change adversely, bringing with it landslides and subsidence. Settlements on the hill slopes are prone to landslides, landslips, rockslides and soil creep. These hazardous features have hampered the overall progress of the region as they obstruct roads and the flow of traffic, break communication, block flowing water in streams and create temporary reservoirs, and also bring down a lot of soil cover and thus add enormous silt and gravel to the streams. It has been observed that average deformation varies from -30.0 mm/year to 10 mm/year in the NW Himalayan region. References [1] Massonnet, D., Feigl, K.L., Rossi, M. and Adragna, F. (1994) Radar interferometry mapping of
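
    At its core, SBAS inverts, pixel by pixel, a linear system that links each small-baseline interferogram to the phase (deformation) at its two acquisition dates. A minimal least-squares version of that inversion is sketched below, with the pair list and phase units as illustrative assumptions; the full algorithm additionally handles disconnected subsets via SVD, atmospheric filtering and conversion to deformation velocity.

```python
import numpy as np

def sbas_invert(pairs, igram_phase, n_dates):
    """Least-squares time series from small-baseline interferograms for one pixel.

    pairs       : list of (master_index, slave_index) date indices, master < slave
    igram_phase : unwrapped phase of each interferogram (same order as pairs)
    n_dates     : number of SAR acquisitions; returns phase at each date relative to the first
    """
    A = np.zeros((len(pairs), n_dates - 1))
    for k, (m, s) in enumerate(pairs):
        # Each interferogram observes phase(slave) - phase(master); date 0 is the reference.
        if s > 0:
            A[k, s - 1] = 1.0
        if m > 0:
            A[k, m - 1] = -1.0
    x, *_ = np.linalg.lstsq(A, np.asarray(igram_phase, dtype=float), rcond=None)
    return np.concatenate([[0.0], x])   # phase history, first acquisition as reference
```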

  11. Clinical decision support systems for brain tumor characterization using advanced magnetic resonance imaging techniques.

    PubMed

    Tsolaki, Evangelia; Kousi, Evanthia; Svolos, Patricia; Kapsalaki, Efthychia; Theodorou, Kyriaki; Kappas, Constastine; Tsougos, Ioannis

    2014-04-28

    In recent years, advanced magnetic resonance imaging (MRI) techniques, such as magnetic resonance spectroscopy, diffusion weighted imaging, diffusion tensor imaging and perfusion weighted imaging have been used in order to resolve demanding diagnostic problems such as brain tumor characterization and grading, as these techniques offer a more detailed and non-invasive evaluation of the area under study. In the last decade a great effort has been made to import and utilize intelligent systems in the so-called clinical decision support systems (CDSS) for automatic processing, classification, evaluation and representation of MRI data in order for advanced MRI techniques to become a part of the clinical routine, since the amount of data from the aforementioned techniques has gradually increased. Hence, the purpose of the current review article is two-fold. The first is to review and evaluate the progress that has been made towards the utilization of CDSS based on data from advanced MRI techniques. The second is to analyze and propose the future work that has to be done, based on the existing problems and challenges, especially taking into account the new imaging techniques and parameters that can be introduced into intelligent systems to significantly improve their diagnostic specificity and clinical application.

  12. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    PubMed

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore, the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs, the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique showed the highest R² (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but the differences with other techniques were small in magnitude. The number of stages used in the explored staging techniques slightly influenced the age predictions.

  13. Softform for facial rejuvenation: historical review, operative techniques, and recent advances.

    PubMed

    Miller, P J; Levine, J; Ahn, M S; Maas, C S; Constantinides, M

    2000-01-01

    The deep nasolabial fold and other facial furrows and wrinkles have challenged the facial plastic surgeon. A variety of techniques have been used in the past to correct these troublesome defects. Advances in the last five years in new materials and design have created a subcutaneous implant that has excellent properties. This article reviews the development and use of Softform facial implant.

  14. Traditional Materials and Techniques Used as Instructional Devices in an Advanced Business Spanish Conversation Class.

    ERIC Educational Resources Information Center

    Valdivieso, Jorge

    Spanish language training at the Thunderbird Graduate School of International Management is discussed, focusing on the instructional materials and classroom techniques used in advanced Spanish conversation classes. While traditional materials (dialogues, dictation, literature, mass media, video- and audiotapes) and learning activities (recitation,…

  15. Recognizing and Managing Complexity: Teaching Advanced Programming Concepts and Techniques Using the Zebra Puzzle

    ERIC Educational Resources Information Center

    Crabtree, John; Zhang, Xihui

    2015-01-01

    Teaching advanced programming can be a challenge, especially when the students are pursuing different majors with diverse analytical and problem-solving capabilities. The purpose of this paper is to explore the efficacy of using a particular problem as a vehicle for imparting a broad set of programming concepts and problem-solving techniques. We…

  16. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation

    NASA Technical Reports Server (NTRS)

    Davis, Steven B.

    1990-01-01

    Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.

  17. Fabrication of advanced electrochemical energy materials using sol-gel processing techniques

    NASA Technical Reports Server (NTRS)

    Chu, C. T.; Chu, Jay; Zheng, Haixing

    1995-01-01

    Advanced materials play an important role in electrochemical energy devices such as batteries, fuel cells, and electrochemical capacitors. They are being used as both electrodes and electrolytes. Sol-gel processing is a versatile solution technique used in the fabrication of ceramic materials with tailored stoichiometry, microstructure, and properties. The application of sol-gel processing in the fabrication of advanced electrochemical energy materials will be presented. The potential of sol-gel derived materials for electrochemical energy applications will be discussed, along with some examples of successful applications. Sol-gel derived metal oxide electrode materials such as V2O5 cathodes have been demonstrated in solid-state thin film batteries; solid electrolyte materials such as beta-alumina for advanced secondary batteries were prepared by the sol-gel technique long ago; and high surface area transition metal compounds for capacitive energy storage applications can also be synthesized with this method.

  18. An estimation technique of Rayleigh wave phase velocities using arrays with arbitrary geometry

    NASA Astrophysics Data System (ADS)

    Shiraishi, H.; Asanuma, H.

    2008-12-01

    The microtremor survey method (MSM) is one of the most practical techniques for estimating the shear-wave velocity structure of sedimentary layers. In the MSM, the velocity models are determined by inversion analysis of the Rayleigh wave phase velocity dispersion curve observed from microtremors. In most cases, the phase velocity dispersion curve is obtained by either the spatial autocorrelation (SPAC) technique or the frequency-wavenumber (F-K) technique applied to array measurements of microtremors. These techniques place significant restrictions on the array geometry and number of stations required, which limits the applicability of the MSM, especially in urban areas. We have derived a new technique for estimating phase velocities of Rayleigh waves. This new technique (the direct estimation method: DEM) enables the use of flexible array configurations and a minimal number of stations. Moreover, the DEM can be applied to records from existing station arrays, such as those in an earthquake monitoring network. In the DEM, microtremors detected by arrays with arbitrary geometry are represented by complex coherence functions (CCFs: Shiraishi et al. 2006) of the Rayleigh wave. The CCF is derived from the analytic solution of Lamb's problem, and it consists of the Bessel function of the first kind J0(ωr/c) (ω: angular frequency, r: distance between the stations, c: phase velocity), which is a well-known function and is used in the SPAC technique to estimate phase velocity. The phase velocities can be estimated by solving the equations with a least squares approach to minimize the residual error between the observed and theoretical values, as sketched below. A field experiment has been carried out to verify the effectiveness of the DEM, and the phase velocities obtained by the DEM with an array of arbitrary geometry are in excellent agreement with those obtained using the SPAC technique.
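
    The final fitting step can be illustrated with a minimal sketch: at each frequency, the phase velocity c is chosen so that the theoretical J0(ωr/c) values best match the observed coherences over all station pairs. The search bounds and function names are assumptions, and practical implementations must also guard against local minima of the residual.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import j0

def estimate_phase_velocity(freq_hz, distances_m, observed_coherence, c_min=50.0, c_max=3000.0):
    """Least-squares phase velocity at one frequency from coherences of arbitrary station pairs."""
    omega = 2.0 * np.pi * freq_hz
    r = np.asarray(distances_m, dtype=float)
    obs = np.asarray(observed_coherence, dtype=float)

    def residual(c):
        # Theoretical coherence of fundamental-mode Rayleigh waves for station separation r.
        return np.sum((obs - j0(omega * r / c)) ** 2)

    result = minimize_scalar(residual, bounds=(c_min, c_max), method="bounded")
    return result.x
```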

  19. An Automated Technique for Estimating Daily Precipitation over the State of Virginia

    NASA Technical Reports Server (NTRS)

    Follansbee, W. A.; Chamberlain, L. W., III

    1981-01-01

    Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24 hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. To present reliable precipitation estimates for every cell in Virginia in near real time on a daily ongoing basis, the techniques require on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
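
    The two-step logic described above can be sketched as follows; the linear coefficients, candidate estimates, and gauge values are hypothetical placeholders, not those derived in the study.

```python
# Illustrative sketch of the two-step logic: (1) map IR cloud-top temperature to an
# hourly rain-rate estimate with a linear relation, and (2) pick, for each day,
# whichever candidate 24-h estimate agrees best with the control gauges.
# The coefficients and candidate estimates below are hypothetical placeholders.
import numpy as np

def rain_rate_from_ir(cloud_top_temp_k, a=-0.1, b=22.0):
    """Hypothetical linear relation: colder (higher) cloud tops -> more rain (mm/h)."""
    return np.maximum(a * cloud_top_temp_k + b, 0.0)

def pick_best_method(candidates, control_gauges):
    """candidates: dict name -> estimated 24-h totals at the control-gauge locations."""
    rmse = {name: np.sqrt(np.mean((est - control_gauges) ** 2))
            for name, est in candidates.items()}
    return min(rmse, key=rmse.get)

# Fabricated numbers purely to exercise the functions.
print(rain_rate_from_ir(np.array([210.0, 230.0])))          # mm/h for two cloud-top temps
control = np.array([5.0, 12.0, 0.0, 3.5])                    # control gauge totals (mm)
candidates = {"ir_only": np.array([4.0, 15.0, 1.0, 2.0]),
              "ir_plus_visible": np.array([5.5, 11.0, 0.5, 3.0])}
print(pick_best_method(candidates, control))
```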

  20. A comparison of frequency estimation techniques for high-dynamic trajectories

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Hinedi, S.; Kumar, R.

    1988-01-01

    A comparison is presented for four different estimation techniques applied to the problem of continuously estimating the parameters of a sinusoidal Global Positioning System (GPS) signal, observed in the presence of additive noise, under extremely high-dynamic conditions. Frequency estimates are emphasized, although phase and/or frequency rate are also estimated by some of the algorithms. These parameters are related to the velocity, position, and acceleration of the maneuvering transmitter. Estimator performance at low carrier-to-noise ratios and high dynamics is investigated for the purpose of determining the useful operating range of an approximate Maximum Likelihood (ML) estimator, an Extended Kalman Filter (EKF), a Cross-Product Automatic Frequency Control (CPAFC) loop, and a digital phase-locked loop (DPLL). Numerical simulations are used to evaluate performance while tracking a common trajectory exhibiting high dynamics.
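
    To make the cross-product idea concrete, the sketch below forms an instantaneous frequency estimate from the phase rotation between successive complex samples and smooths it with a first-order loop; it illustrates the principle only and is not the CPAFC loop (or the ML, EKF, or DPLL algorithms) evaluated in the paper.

```python
# Minimal sketch of a cross-product style frequency discriminator: the phase
# increment between successive complex baseband samples gives an instantaneous
# frequency estimate, smoothed here with a simple first-order filter.
import numpy as np

fs = 10_000.0                        # sample rate (Hz), illustrative
t = np.arange(2048) / fs
f_true = 1234.5                      # Hz, illustrative
rng = np.random.default_rng(1)
z = np.exp(2j * np.pi * f_true * t) + 0.1 * (rng.standard_normal(t.size)
                                             + 1j * rng.standard_normal(t.size))

f_hat = 0.0
alpha = 0.05                         # loop smoothing gain
for k in range(1, z.size):
    cross = z[k] * np.conj(z[k - 1])             # phase rotation between samples
    f_inst = np.angle(cross) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
    f_hat += alpha * (f_inst - f_hat)            # first-order smoothing
print(f"estimated frequency: {f_hat:.1f} Hz (true {f_true} Hz)")
```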

  1. Detection and Sizing of Fatigue Cracks in Steel Welds with Advanced Eddy Current Techniques

    NASA Astrophysics Data System (ADS)

    Todorov, E. I.; Mohr, W. C.; Lozev, M. G.

    2008-02-01

    Butt-welded specimens were fatigued to produce cracks in the weld heat-affected zone. Advanced eddy current (AEC) techniques were used to detect and size the cracks through a coating. AEC results were compared with magnetic particle and phased-array ultrasonic techniques. Validation through destructive crack measurements was also conducted. Factors such as geometry, surface treatment, and crack tightness interfered with depth sizing. AEC inspection techniques have the potential to provide more accurate and complete flaw sizing data for manufacturing and in-service inspections.

  2. Evaluation of age estimation technique: testing traits of the acetabulum to estimate age at death in adult males.

    PubMed

    Calce, Stephanie E; Rogers, Tracy L

    2011-03-01

    This study evaluates the accuracy and precision of a skeletal age estimation method, using the acetabulum of 100 male ossa coxae from the Grant Collection (GRO) at the University of Toronto, Canada. Age at death was obtained using Bayesian inference and a computational application (IDADE2) that requires a reference population, close in geographic and temporal distribution to the target case, to calibrate age ranges from scores generated by the technique. The inaccuracy of this method is 8 years. The direction of bias indicates the acetabulum technique tends to underestimate age. The categories 46-65 and 76-90 years exhibit the smallest inaccuracy (0.2), suggesting that this method may be appropriate for individuals over 40 years. Eighty-three percent of age estimates were ±12 years of known age; 79% were ±10 years of known age; and 62% were ±5 years of known age. Identifying a suitable reference population is the most significant limitation of this technique for forensic applications.

  3. Estimation of convective rain volumes utilizing the area-time-integral technique

    NASA Technical Reports Server (NTRS)

    Johnson, L. Ronald; Smith, Paul L.

    1990-01-01

    Interest in the possibility of developing useful estimates of convective rainfall with Area-Time Integral (ATI) methods is increasing. The basis of the ATI technique is the observed strong correlation between rainfall volumes and ATI values. This means that rainfall can be estimated by just determining the ATI values, if previous knowledge of the relationship to rain volume is available to calibrate the technique. Examples are provided of the application of the ATI approach to gage, radar, and satellite measurements. For radar data, the degree of transferability in time and among geographical areas is examined. Recent results on transferability of the satellite ATI calculations are presented.
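
    The calibration idea can be written compactly: the area exceeding a rain-rate threshold, integrated over time, is multiplied by a previously calibrated factor to give rain volume. The threshold, calibration factor, and data in the sketch below are hypothetical placeholders.

```python
# Sketch of the ATI idea: the area exceeding a rain-rate threshold, integrated
# over time, is multiplied by a previously calibrated factor to estimate rain
# volume. Threshold and calibration factor below are hypothetical.
import numpy as np

def area_time_integral(rain_rate_maps, cell_area_km2, dt_h, threshold_mm_h=1.0):
    """rain_rate_maps: array (n_times, ny, nx) of rain rate in mm/h."""
    exceed_area = (rain_rate_maps >= threshold_mm_h).sum(axis=(1, 2)) * cell_area_km2
    return np.sum(exceed_area * dt_h)             # km^2 * h

def rain_volume(ati_km2_h, calib_factor_mm_h=3.7):
    # volume in mm * km^2 (equivalently 10^3 m^3); factor comes from prior calibration data
    return calib_factor_mm_h * ati_km2_h

maps = np.random.default_rng(2).gamma(shape=0.5, scale=2.0, size=(6, 20, 20))  # fake radar maps
ati = area_time_integral(maps, cell_area_km2=4.0, dt_h=1.0)
print(rain_volume(ati))
```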

  4. Advanced imaging techniques for assessment of structure, composition and function in biofilm systems.

    PubMed

    Neu, Thomas R; Manz, Bertram; Volke, Frank; Dynes, James J; Hitchcock, Adam P; Lawrence, John R

    2010-04-01

    Scientific imaging represents an important and accepted research tool for the analysis and understanding of complex natural systems. Apart from traditional microscopic techniques such as light and electron microscopy, new advanced techniques have been established including laser scanning microscopy (LSM), magnetic resonance imaging (MRI) and scanning transmission X-ray microscopy (STXM). These new techniques allow in situ analysis of the structure, composition, processes and dynamics of microbial communities. The three techniques open up quantitative analytical imaging possibilities that were, until a few years ago, impossible. The microscopic techniques represent powerful tools for examination of mixed environmental microbial communities usually encountered in the form of aggregates and films. As a consequence, LSM, MRI and STXM are being used in order to study complex microbial biofilm systems. This mini review provides a short outline of the more recent applications with the intention to stimulate new research and imaging approaches in microbiology.

  5. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.

    1972-01-01

    The search for advanced measurement techniques for determining long term compatibility of materials with propellants was conducted in several parts. A comprehensive survey of the existing measurement and testing technology for determining material-propellant interactions was performed. Selections were made from those existing techniques that could meet, or could be adapted to meet, the requirements, and areas of refinement or change were recommended to improve others. Investigations were also performed to determine the feasibility and advantages of developing and using new techniques to achieve significant improvements over existing ones. The most interesting demonstration was that of a new technique, the volatile metal chelate analysis. Rivaling neutron activation analysis in terms of sensitivity and specificity, the volatile metal chelate technique was fully demonstrated.

  6. Comparison of techniques for estimating evaporation from an irrigation water storage

    NASA Astrophysics Data System (ADS)

    McJannet, D. L.; Cook, F. J.; Burn, S.

    2013-03-01

    With the emergence of water supply and food security issues as a result of increasing population and climate change pressures, the need for efficient use of available water supplies is paramount. Management of available resources and improved efficiency require accurate specification of evaporation, which is a major water loss pathway, yet evaporation remains difficult to quantify accurately. This study uses scintillometry-derived measurements of evaporation to test the performance of water balance, pan coefficient, and combination modeling techniques, which might commonly be used by resource managers. Both pan coefficient and water balance techniques performed poorly, but the Penman-Monteith model with local site data and a site-specific wind function produced estimates within 2% of those measured. Recognizing that such a model parameterization would rarely be a possibility in most environments, further testing involving the range of data sets that might be available for a location was undertaken. Modeling using over-water measurements and generally applicable wind functions from the literature produced estimates 26% greater than those measured. Estimates within 12% of those measured were made for the equivalent model setup using over-land meteorological data; however, when data from the nearest meteorological station were used, this difference increased to 27%. The different evaporation estimation techniques tested were shown to produce a range of estimates of water availability, which varied by nearly 30%. The large differences between measured and predicted evaporation highlight the uncertainty that still exists in evaporation estimation and the sensitivity of predictions to the source of input data.
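
    As a point of reference for the combination-model results above, one common open-water form of the Penman combination estimate (a close relative of the Penman-Monteith model used in the study) is shown below. This is the standard textbook expression, with the empirical wind function f(u) left symbolic because, as the study emphasizes, its coefficients are site-specific and were calibrated locally rather than taken from the literature.

$$
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \gamma\,\lambda\, f(u)\,(e_s - e_a)}{\Delta + \gamma},
$$

    where E is the evaporation rate, λ the latent heat of vaporization, Δ the slope of the saturation vapour pressure curve at air temperature, R_n the net radiation, G the change in heat storage of the water body, γ the psychrometric constant, (e_s - e_a) the vapour pressure deficit, and f(u) the empirical wind function.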

  7. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystems Concepts. Volume 3; Program Cost Estimates

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.

    2000-01-01

    The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data to describe performance, weight, dimensions, etc. were provided along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and the program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.

  8. Nondestructive Characterization by Advanced Synchrotron Light Techniques: Spectromicroscopy and Coherent Radiology

    PubMed Central

    Margaritondo, Giorgio; Hwu, Yeukuang; Je, Jung Ho

    2008-01-01

    The advanced characteristics of synchrotron light have led in recent years to the development of a series of new experimental techniques to investigate chemical and physical properties on a microscopic scale. Although originally developed for materials science and biomedical research, such techniques find increasing applications in other domains and could be quite useful for the study and conservation of cultural heritage. Specifically, they can nondestructively provide detailed chemical composition information that can be useful for the identification of specimens, for the discovery of historical links based on the sources of chemical raw materials and on chemical processes, for the analysis of damage, its causes and remedies, and for many other issues. Likewise, morphological and structural information on a microscopic scale is useful for the identification, study and preservation of many different cultural and historical specimens. We concentrate here on two classes of techniques. The first is photoemission spectromicroscopy, the result of the advanced evolution of photoemission techniques such as ESCA (Electron Spectroscopy for Chemical Analysis). By combining high lateral resolution with spectroscopy, photoemission spectromicroscopy can deliver fine chemical information on a microscopic scale in a nondestructive fashion. The second class of techniques exploits the high lateral coherence of modern synchrotron sources, a byproduct of the quest for high brightness or brilliance. We will see that such techniques now push radiology into the submicron scale and the submillisecond time domain. Furthermore, they can be implemented in a tomographic mode, increasing the information and becoming potentially quite useful for the analysis of cultural heritage specimens.

  9. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    PubMed

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is poor, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

  10. Estimating numbers of greater prairie-chickens using mark-resight techniques

    USGS Publications Warehouse

    Clifton, A.M.; Krementz, D.G.

    2006-01-01

    Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
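
    The study's IEJHE estimator accounts for immigration and emigration and is more involved than can be sketched here. Purely to illustrate how resightings of marked birds translate into an abundance estimate with a confidence interval, the snippet below implements the simpler Chapman-corrected Lincoln-Petersen mark-resight estimator with made-up counts.

```python
# Illustration of the basic mark-resight logic using the Chapman-corrected
# Lincoln-Petersen estimator (NOT the IEJHE estimator used in the study).
import math

def chapman_estimate(n_marked, n_sighted, n_marked_resighted):
    """Abundance estimate and approximate 95% CI from a single resighting survey."""
    M, C, R = n_marked, n_sighted, n_marked_resighted
    n_hat = (M + 1) * (C + 1) / (R + 1) - 1
    var = (M + 1) * (C + 1) * (M - R) * (C - R) / ((R + 1) ** 2 * (R + 2))
    half = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Made-up counts: 20 radio-marked birds, 45 birds sighted, 14 of them marked.
print(chapman_estimate(n_marked=20, n_sighted=45, n_marked_resighted=14))
```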

  11. ESTIMATING CHLOROFORM BIOTRANSFORMATION IN F-344 RAT LIVER USING IN VITRO TECHNIQUES AND PHARMACOKINETIC MODELING

    EPA Science Inventory

    Linskey, C.F.1, Harrison, R.A.2., Zhao, G.3., Barton, H.A., Lipscomb, J.C4., and Evans, M.V2., 1UNC, ESE, Chapel Hill, NC ; 2USEPA, ORD, NHEERL, RTP, NC; 3 UN...

  12. A solar energy estimation procedure using remote sensing techniques. [watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    The objective of this investigation is to design a remote sensing-aided procedure for daily location-specific estimation of solar radiation components over the watershed(s) of interest. This technique has been tested on the Spanish Creek Watershed, Northern California, with successful results.

  13. Comparison of seed bank estimation techniques using six weed species in two soil types

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Tests of three different seed bank estimation techniques were performed on six different weed species. Petri plate germination was compared to two emergence methods, each on two different soil types (stony loam vs. silt loam). Soil types produced equal emergence proportions; however, both emergence...

  14. Using the Randomized Response Technique to Estimate the Extent of Delinquent Behavior in Schools.

    ERIC Educational Resources Information Center

    Gottfredson, Gary D.

    The Randomized Response Technique (RRT) appears to hold promise for future work studying the relation of school variables to disruption or delinquent behavior. The RRT is especially useful in situations where it is difficult or undesirable to ask stigmatizing questions directly. The proportions of students in this study estimated to have used…
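
    For concreteness, Warner's original randomized-response estimator shows how a population proportion is recovered from deliberately randomized answers; the survey counts below are made up and are not from this study.

```python
# Warner's randomized-response estimator: each respondent answers the sensitive
# question with probability p and its complement with probability 1-p, so the
# observed "yes" proportion lam relates to the true proportion pi by
#   lam = p*pi + (1-p)*(1-pi)  =>  pi = (lam + p - 1) / (2p - 1),  p != 0.5.
import math

def warner_estimate(n_yes, n_total, p):
    lam = n_yes / n_total
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)
    # sampling variance of pi_hat under Warner's model
    var = lam * (1.0 - lam) / (n_total * (2.0 * p - 1.0) ** 2)
    return pi_hat, 1.96 * math.sqrt(var)

pi_hat, half_ci = warner_estimate(n_yes=264, n_total=600, p=0.7)   # made-up survey counts
print(f"estimated proportion: {pi_hat:.3f} +/- {half_ci:.3f}")
```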

  15. Development of a surface insolation estimation technique suitable for application of polar orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Davis, P. A.; Penn, L. M. (Principal Investigator)

    1981-01-01

    A technique is developed for the estimation of total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm is constrained in application by the infrequent daytime polar-orbiter coverage.

  16. A review of sex estimation techniques during examination of skeletal remains in forensic anthropology casework.

    PubMed

    Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K

    2016-04-01

    Sex estimation is considered one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain very important in the identification process in spite of the advent and accomplishments of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive as well as revise the available population data. These methods, however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches, namely morphological, metric, molecular and radiographic methods, in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered to be more reliable than the other methods. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results.

  17. Study on estimating the evapotranspiration cover coefficient for stream flow simulation through remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Wu, Chihda; Cheng, Chichuan; Lo, Hannchung; Chen, Yeongkeung

    2010-08-01

    This study focuses on using remote sensing techniques to estimate the evapotranspiration cover coefficient (CV), an important parameter for stream flow. The objective is to derive more accurate stream flow from the estimated CV. The study area is located in the Dan-Shuei watershed in northern Taiwan. The processes include land-use classification using a hybrid classification scheme and four Landsat-5 TM images; CV estimation based on remote sensing and traditional approaches; and comparison of stream flow simulations based on the two resulting CV values. The results indicated that the study area was classified into seven land-use types with 88.3% classification accuracy. The stream flow simulated using the remote sensing approach represented the hydrological characteristics more accurately than the traditional approach. Integrating remote sensing techniques with the SEBAL model is clearly a useful approach to estimating the CV, and the CV estimated by remote sensing did improve the accuracy of the stream flow simulation. Therefore, the results can be extended to further studies such as forest water management.

  18. Development of low-cost test techniques for advancing film cooling technology

    NASA Astrophysics Data System (ADS)

    Soechting, F. O.; Landis, K. K.; Dobrowolski, R.

    1987-06-01

    A program for studying advanced film hole geometries that will provide improved film effectiveness levels relative to those reported in the literature is described. A planar wind tunnel was used to conduct flow visualization studies on different film hole shapes, followed by film effectiveness measurements. The most promising geometries were then tested in a two-dimensional cascade to define the film effectiveness distributions, while duplicating a turbine airfoil curvature, Mach number, and acceleration characteristics. The test techniques are assessed and typical results are presented. It was shown that smoke flow visualization is an excellent low-cost technique for observing film coolant-to-mainstream characteristics and that reusable liquid crystal sheets provide an accurate low-cost technique for measuring near-hole film effectiveness contours. Cascade airfoils constructed using specially developed precision fabrication techniques provided high-quality film effectiveness data.

  19. Advances in the surface modification techniques of bone-related implants for last 10 years

    PubMed Central

    Qiu, Zhi-Ye; Chen, Cen; Wang, Xiu-Mei; Lee, In-Seop

    2014-01-01

    At the time of implanting bone-related implants into the human body, a variety of biological responses to the material surface occur with respect to surface chemistry and physical state. The commonly used biomaterials (e.g. titanium and its alloys, Co-Cr alloy, stainless steel, polyetheretherketone, ultra-high molecular weight polyethylene and various calcium phosphates) have drawbacks such as a lack of biocompatibility and improper mechanical properties. As surface modification is a very promising technology for overcoming such problems, a variety of surface modification techniques have been investigated. This review paper covers recent advances in surface modification techniques for bone-related materials, including physicochemical coating, radiation grafting, plasma surface engineering, ion beam processing and surface patterning techniques. The contents are organized by technique type and applicable materials, and typical examples are also described. PMID:26816626

  20. Ultra-small time-delay estimation via a weak measurement technique with post-selection

    NASA Astrophysics Data System (ADS)

    Fang, Chen; Huang, Jing-Zheng; Yu, Yang; Li, Qinzheng; Zeng, Guihua

    2016-09-01

    Weak measurement is a novel technique for parameter estimation with higher precision. In this paper we develop a general theory for the parameter estimation based on a weak measurement technique with arbitrary post-selection. The weak-value amplification model and the joint weak measurement model are two special cases in our theory. Applying the developed theory, time-delay estimation is investigated in both theory and experiments. The experimental results show that when the time delay is ultra-small, the joint weak measurement scheme outperforms the weak-value amplification scheme, and is robust against not only misalignment errors but also the wavelength dependence of the optical components. These results are consistent with theoretical predictions that have not been previously verified by any experiment.
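
    For orientation, the quantity at the heart of the weak-value amplification scheme mentioned above is the standard Aharonov-Albert-Vaidman weak value; the expression below is the textbook first-order result for a weak coupling of strength g (here the ultra-small time delay) followed by post-selection, not the paper's general arbitrary-post-selection theory.

$$
A_w = \frac{\langle \psi_f \,|\, \hat{A} \,|\, \psi_i \rangle}{\langle \psi_f \,|\, \psi_i \rangle},
\qquad
\langle \hat{x} \rangle_{\text{post}} \approx g\,\mathrm{Re}\,A_w ,
$$

    so that choosing nearly orthogonal pre- and post-selected states makes |A_w| large and amplifies the pointer shift produced by the small coupling g.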

  1. A novel technique for estimating aerosol optical thickness trends using meteorological parameters

    NASA Astrophysics Data System (ADS)

    Emetere, Moses E.; Akinyemi, M. L.; Akin-Ojo, O.

    2016-02-01

    Estimating aerosol optical thickness (AOT) over a region can be difficult if the satellite data set over that region is very scanty. A technique whose application captures real-time events is therefore most appropriate for adequate monitoring of risk indicators. A new technique, the arithmetic translation of pictorial model (ATOPM), was developed. The ATOPM uses mathematical expressions to relate AOT to other meteorological parameters obtained from satellite or ground data sets. Six locations within a 335 × 230 km2 area of a selected portion of Nigeria were chosen and analyzed using the meteorological data set (1999-2012) and MATLAB. The research affirms the use of parameters such as minimum temperature, cloud cover, relative humidity and rainfall to estimate the aerosol optical thickness. The objective of the paper, estimating AOT from other meteorological parameters when the satellite data set over an area is scanty, was thus satisfied.

  2. Advanced semiconductor diagnosis by multidimensional electron-beam-induced current technique.

    PubMed

    Chen, J; Yuan, X; Sekiguchi, T

    2008-01-01

    We present advanced semiconductor diagnosis using the electron-beam-induced current (EBIC) technique. By varying parameters such as temperature, accelerating voltage (V(acc)), bias voltage, and stressing time, it is possible to extend EBIC application from conventional defect characterization to advanced device diagnosis. As an electron beam can excite a certain volume even beneath the surface passivation layer, EBIC can be effectively employed to diagnose complicated devices with hybrid structures. Three topics were selected to demonstrate EBIC applications. First, the recombination activities of grain boundaries and their interaction with Fe impurities in photovoltaic multicrystalline Si (mc-Si) are clarified by temperature-dependent EBIC. Second, the detection of dislocations between strained Si and the SiGe virtual substrate is shown to overcome the limitation of the depletion region. Third, the observation of leakage sites in high-k gate dielectrics is demonstrated for the characterization of advanced hybrid device structures.

  3. A comparative study of shear wave speed estimation techniques in optical coherence elastography applications

    NASA Astrophysics Data System (ADS)

    Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.

    2016-03-01

    Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of a single piezoelectric actuator to generate either a continuous shear wave propagation (SWP) or a tone-burst propagation (TBP) of 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the actuators, a crawling wave (CrW) pattern is generated and propagates with less speed than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
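
    Two standard post-processing steps behind such speed estimates can be sketched as follows: the phase gradient of the propagating wave along the lateral coordinate gives the speed, and under the usual incompressible linear-elastic assumption the Young's modulus follows as E = 3ρc². This is a generic illustration, not the specific SWP/TBP/SW/CrW pipelines compared in the study.

```python
# Sketch of two standard post-processing steps behind vibrational OCE estimates:
# (1) shear wave speed from the slope of phase versus propagation distance,
# (2) Young's modulus from E = 3*rho*c^2 (incompressible, linear-elastic tissue).
import numpy as np

def shear_speed_from_phase(x_m, phase_rad, freq_hz):
    """Phase-gradient speed estimate: phi(x) = -omega*x/c  =>  c = -omega/slope."""
    slope, _ = np.polyfit(x_m, phase_rad, 1)
    return -2.0 * np.pi * freq_hz / slope

def youngs_modulus(c_m_s, rho_kg_m3=1000.0):
    return 3.0 * rho_kg_m3 * c_m_s ** 2           # Pa

x = np.linspace(0.0, 4e-3, 20)                    # lateral positions (m), illustrative
c_true = 2.5                                      # m/s, typical soft-phantom value (assumed)
phase = -2.0 * np.pi * 400.0 * x / c_true         # 400 Hz excitation, as in the study
print(shear_speed_from_phase(x, phase, 400.0), youngs_modulus(2.5))
```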

  4. Recent advancements in nanoelectrodes and nanopipettes used in combined scanning electrochemical microscopy techniques.

    PubMed

    Kranz, Christine

    2014-01-21

    In recent years, major developments in scanning electrochemical microscopy (SECM) have significantly broadened the application range of this electroanalytical technique, from high-resolution electrochemical imaging via nanoscale probes to large-scale mapping using arrays of microelectrodes. A major driving force in advancing the SECM methodology has been the development of more sophisticated probes beyond conventional micro-disc electrodes, which are usually based on noble metals or carbon microwires. This critical review focuses on the design and development of advanced electrochemical probes, particularly those enabling combinations of SECM with other analytical measurement techniques to provide information beyond exclusively measuring electrochemical sample properties. Consequently, this critical review will focus on recent progress and new developments towards multifunctional imaging.

  5. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    1998-09-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean-coal product to a 20% moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal could be effectively dewatered to 20% or lower moisture using either conventional or advanced dewatering techniques. The cost-sharing contract effort is for 36 months beginning September 30, 1994. This report discusses technical progress made during the quarter from July 1 - September 30, 1997.

  6. The investigation of advanced remote sensing techniques for the measurement of aerosol characteristics

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Becher, J.

    1979-01-01

    Advanced remote sensing techniques and inversion methods for the measurement of characteristics of aerosol and gaseous species in the atmosphere were investigated. Of particular interest were the physical and chemical properties of aerosols, such as their size distribution, number concentration, and complex refractive index, and the vertical distribution of these properties on a local as well as global scale. Remote sensing techniques for monitoring of tropospheric aerosols were developed as well as satellite monitoring of upper tropospheric and stratospheric aerosols. Computer programs were developed for solving multiple scattering and radiative transfer problems, as well as inversion/retrieval problems. A necessary aspect of these efforts was to develop models of aerosol properties.

  7. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  8. Combined preputial advancement and phallopexy as a revision technique for treating paraphimosis in a dog.

    PubMed

    Wasik, S M; Wallace, A M

    2014-11-01

    A 7-year-old neutered male Jack Russell terrier-cross was presented for signs of recurrent paraphimosis, despite previous surgical enlargement of the preputial ostium. Revision surgery was performed using a combination of preputial advancement and phallopexy, which resulted in complete and permanent coverage of the glans penis by the prepuce, and at 1 year postoperatively, no recurrence of paraphimosis had been observed. The combined techniques allow preservation of the normal penile anatomy, are relatively simple to perform and provide a cosmetic result. We recommend this combination for the treatment of paraphimosis in the dog, particularly when other techniques have failed. PMID:25348145

  9. Development of advanced electron holographic techniques and application to industrial materials and devices.

    PubMed

    Yamamoto, Kazuo; Hirayama, Tsukasa; Tanji, Takayoshi

    2013-06-01

    The development of a transmission electron microscope equipped with a field emission gun paved the way for electron holography to be put to practical use in various fields. In this paper, we review three advanced electron holography techniques: on-line real-time electron holography, three-dimensional (3D) tomographic holography and phase-shifting electron holography, which are becoming important techniques for materials science and device engineering. We also describe some applications of electron holography to the analysis of industrial materials and devices: GaAs compound semiconductors, solid oxide fuel cells and all-solid-state lithium ion batteries.

  10. An example of requirements for Advanced Subsonic Civil Transport (ASCT) flight control system using structured techniques

    NASA Technical Reports Server (NTRS)

    Mclees, Robert E.; Cohen, Gerald C.

    1991-01-01

    The requirements are presented for an Advanced Subsonic Civil Transport (ASCT) flight control system generated using structured techniques. The requirements definition starts by performing a mission analysis to identify the high-level control system requirements and functions necessary to satisfy the mission. The result of the study is an example set of control system requirements partially represented using a derivative of Yourdon's structured techniques. Also provided is a research focus for studying structured design methodologies and, in particular, design-for-validation philosophies.

  11. Satellite angular velocity estimation based on star images and optical flow techniques.

    PubMed

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-09-25

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than the other two components.
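
    A minimal sketch of the final least-squares step is given below, assuming the standard rigid-body kinematics du/dt = -ω × u for star unit vectors expressed in the rotating sensor frame; the calibration and optical-flow stages of the paper are omitted and the inputs are synthetic.

```python
# Sketch of the least-squares step only: given star unit vectors u_i in the sensor
# frame and their measured time derivatives du_i (from optical flow), rigid-body
# kinematics gives du_i = -omega x u_i = [u_i]_x omega, a linear system in omega.
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_omega(units, d_units):
    A = np.vstack([skew(u) for u in units])       # (3N, 3) stacked skew matrices
    b = np.hstack(d_units)                        # (3N,) stacked derivatives
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

rng = np.random.default_rng(3)
omega_true = np.array([0.01, -0.02, 0.005])       # rad/s, synthetic
units = [u / np.linalg.norm(u) for u in rng.standard_normal((5, 3))]
d_units = [-np.cross(omega_true, u) + 1e-4 * rng.standard_normal(3) for u in units]
print(estimate_omega(units, d_units))
```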

  12. An evaluation of population index and estimation techniques for tadpoles in desert pools

    USGS Publications Warehouse

    Jung, Robin E.; Dayton, Gage H.; Williamson, Stephen J.; Sauer, John R.; Droege, Sam

    2002-01-01

    Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) therefore was essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.

  13. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    PubMed Central

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than the other two components. PMID:24072023

  14. An evaluation of population index and estimation techniques for tadpoles in desert pools

    USGS Publications Warehouse

    Jung, R.E.; Dayton, G.H.; Williamson, S.J.; Sauer, J.R.; Droege, S.

    2002-01-01

    Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) therefore was essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.

  15. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-μm wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar), and its signal processing program estimates and displays various wind parameters in real time as data acquisition occurs. The goal is to improve the quality of the current estimates, such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose, and its performance is analyzed using wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimates from conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for this deterioration by adaptively utilizing the statistics of Doppler shift estimates in the strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is established by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
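
    As a point of reference, the conventional Fourier-transform-based estimate that NADSET refines can be sketched as a periodogram-peak search per range bin. The snippet below is that baseline only, with illustrative sample rate and Doppler shift, and is not NADSET itself.

```python
# Minimal version of the conventional Fourier-based estimate that NADSET refines:
# the Doppler shift of a range bin is taken as the periodogram peak of its
# complex return samples. This is the baseline method, not NADSET itself.
import numpy as np

def periodogram_doppler(samples, fs):
    n = samples.size
    spec = np.abs(np.fft.fft(samples * np.hanning(n))) ** 2
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 100e6                                        # illustrative digitizer rate (Hz), assumed
t = np.arange(512) / fs
f_dopp = 2.4e6                                    # illustrative Doppler shift (Hz), assumed
rng = np.random.default_rng(4)
x = np.exp(2j * np.pi * f_dopp * t) + 0.5 * (rng.standard_normal(512)
                                             + 1j * rng.standard_normal(512))
print(f"{periodogram_doppler(x, fs) / 1e6:.2f} MHz")
```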

  16. Study of advanced techniques for determining the long term performance of components

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The application of existing and new technology to the problem of determining the long-term performance capability of liquid rocket propulsion feed systems is discussed. The long-term performance of metal-to-metal valve seats in a liquid propellant fuel system is stressed. The approaches taken in conducting the analysis are: (1) advancing the technology of characterizing components through the development of new or more sensitive techniques, and (2) improving the understanding of the physical mechanisms of degradation.

  17. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    SciTech Connect

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this project are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  18. Integrating Organic Matter Structure with Ecosystem Function using Advanced Analytical Chemistry Techniques

    NASA Astrophysics Data System (ADS)

    Boot, C. M.

    2012-12-01

    Microorganisms are the primary transformers of organic matter in terrestrial and aquatic ecosystems. The structure of organic matter controls its bioavailability, and researchers have long sought to link the chemical characteristics of the organic matter pool to its lability. To date this effort has relied primarily on low-resolution descriptive characteristics (e.g. organic matter content, carbon-to-nitrogen ratio, aromaticity, etc.). However, the linking of these two important ecosystem components has recently been advanced using high-resolution tools (e.g. nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS)-based techniques). A series of experiments will be presented that highlight the application of high-resolution techniques in a variety of terrestrial and aquatic ecosystems, with a focus on how these data explicitly provide the foundation for integrating organic matter structure into our concept of ecosystem function. The talk will highlight results from a series of experiments including: an MS-based metabolomics and fluorescence excitation-emission matrix approach evaluating seasonal and vegetation-based changes in dissolved organic matter (DOM) composition from arctic soils; Fourier transform ion cyclotron resonance (FTICR) MS and MS metabolomics analysis of DOM from three lakes in an alpine watershed; and the transformation of 13C-labeled glucose tracked with NMR during a rewetting experiment in Colorado grassland soils. These data will be synthesized to illustrate how the application of advanced analytical techniques provides novel insight into our understanding of organic matter processing in a wide range of ecosystems.

  19. POC-scale testing of an advanced fine coal dewatering equipment/technique

    SciTech Connect

    Groppo, J.G.; Parekh, B.K.; Rawls, P.

    1995-11-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a 20 percent moisture level will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal could be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is on proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use the advanced fine coal recovery technology due to the non-availability of an economical dewatering process. In fact, in a recent survey conducted by U.S. DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economic fine clean coal slurry dewatering process.

  20. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding of the compressive strength of concrete is important for activities such as construction planning, prestressing operations, proportioning of new mixtures, and quality assurance. Regression techniques are the most widely used methods for prediction tasks in which the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster-regression technique is applied to estimate the compressive strength of concrete, and a novel approach is proposed for predicting concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors when estimating concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and in the second stage regression techniques are applied to these clusters (groups) to predict the compressive strength within each cluster. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy C-means clustering algorithm performs better than the K-means algorithm. PMID:25374939
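
    A minimal sketch of the cluster-then-regress idea is shown below using scikit-learn: K-means grouping followed by one regression per cluster, with prediction routed through the nearest cluster. The paper's fuzzy C-means variant would require an additional library (e.g. scikit-fuzzy), and the features and data here are random placeholders, not the paper's dataset.

```python
# Minimal sketch of cluster-then-regress with scikit-learn: group samples by
# K-means, fit one regression per cluster, and predict with the model of the
# nearest cluster. Data below are random placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.random((300, 8))                                   # mix proportions, age, etc. (placeholders)
y = X @ rng.random(8) + 0.1 * rng.standard_normal(300)     # stand-in for compressive strength

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(k)}

def predict(x_new):
    c = km.predict(x_new.reshape(1, -1))[0]                # assign to nearest cluster
    return models[c].predict(x_new.reshape(1, -1))[0]

print(predict(X[0]), y[0])
```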

  1. Advanced Time-Resolved Fluorescence Microscopy Techniques for the Investigation of Peptide Self-Assembly

    NASA Astrophysics Data System (ADS)

    Anthony, Neil R.

    The ubiquitous cross-beta sheet peptide motif is implicated in numerous neurodegenerative diseases while at the same time offering remarkable potential for constructing isomorphic high-performance bionanomaterials. Despite an emerging understanding of the complex folding landscape of cross-beta structures in determining disease etiology and final structure, we lack knowledge of the critical initial stages of nucleation and growth. In this dissertation, I advance our understanding of these key stages in the cross-beta nucleation and growth pathways using cutting-edge microscopy techniques. In addition, I present a new combined time-resolved fluorescence analysis technique with the potential to advance our current understanding of subtle molecular-level interactions that play a pivotal role in peptide self-assembly. Using the central nucleating core of Alzheimer's amyloid-beta protein, Abeta(16-22), as a model system, and utilizing electron, time-resolved, and non-linear microscopy, I capture the initial and transient nucleation stages of peptide assembly into the cross-beta motif. In addition, I have characterized the nucleation pathway, from monomer to paracrystalline nanotubes, in terms of morphology and fluorescence lifetime, corroborating the predicted desolvation process that occurs prior to cross-beta nucleation. Concurrently, I have identified unique heterogeneous cross-beta domains contained within individual nanotube structures, which have potential bionanomaterials applications. Finally, I describe a combined fluorescence theory and analysis technique that dramatically increases the sensitivity of current time-resolved techniques. Together these studies demonstrate the potential of advanced microscopy techniques for the identification and characterization of the cross-beta folding pathway, which will further our understanding of both amyloidogenesis and bionanomaterials.

  2. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique

    PubMed Central

    O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.

    2016-01-01

    The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of dodo have been found to differ considerably from those of their modern relatives, particularly with regards to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT) -based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = − 2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a
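
    The pipeline described above can be sketched generically: compute convex-hull volumes for calibration skeletons, regress mass on hull volume on log-log axes, and apply the fitted model to the target skeleton. The snippet below uses synthetic point clouds and masses, so its fitted coefficients are illustrative and are not the values reported in the paper.

```python
# Generic convex-hull mass-estimation pipeline: compute hull volumes for
# calibration skeletons, regress log(mass) on log(volume), then predict the
# target's mass from its hull volume. Point clouds and masses are synthetic;
# the fitted coefficients here are illustrative, not the paper's values.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)

def hull_volume(points):
    return ConvexHull(points).volume

# Fake "segmented skeleton" point clouds for 20 calibration birds of varying size.
scales = np.linspace(0.05, 0.20, 20)                                      # metres
volumes = np.array([hull_volume(s * rng.standard_normal((200, 3))) for s in scales])
masses = 800.0 * volumes ** 0.9 * np.exp(0.05 * rng.standard_normal(20))  # kg, synthetic

b, a = np.polyfit(np.log10(volumes), np.log10(masses), 1)  # log10(m) = a + b*log10(V)
target_volume = hull_volume(0.30 * rng.standard_normal((400, 3)))
print(10 ** (a + b * np.log10(target_volume)), "kg (synthetic example)")
```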

  3. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique.

    PubMed

    Brassey, Charlotte A; O'Mahoney, Thomas G; Kitchener, Andrew C; Manning, Phillip L; Sellers, William I

    2016-01-01

    The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of dodo have been found to differ considerably from those of their modern relatives, particularly with regards to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit 'convex hulls' around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = -2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8-10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6-14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a means of

  4. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique.

    PubMed

    Brassey, Charlotte A; O'Mahoney, Thomas G; Kitchener, Andrew C; Manning, Phillip L; Sellers, William I

    2016-01-01

    The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of dodo have been found to differ considerably from those of their modern relatives, particularly with regards to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT) -based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit 'convex hulls' around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = - 2.31, b = 0.90, r (2) = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8-10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6-14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo. CT-based reconstructions provide a means of

  5. A technique for estimating complicated power spectra from time series with gaps

    NASA Astrophysics Data System (ADS)

    Brown, Timothy M.; Christensen-Dalsgaard, Jorgen

    1990-02-01

    Fahlman and Ulrych (1982) describe a method for estimating the power and phase spectra of gapped time series, using a maximum-entropy reconstruction of the data in the gaps. It has proved difficult to apply this technique to solar oscillations data, because of the great complexity of the solar oscillations spectrum. A means for avoiding this difficulty is described, and the results of a series of blind tests of the modified technique are reported. The main results of these tests are: (1) gap filling gives good results, provided that the signal-to-noise ratio in the original data is large enough, and provided the gaps are short enough. For low-noise data, the duty cycle of the observations should not be less than about 50 percent. (2) the frequencies and widths of narrow spectrum features are well reproduced by the technique. (3) The technique systematically reduces the apparent amplitudes of small features in the spectrum relative to large ones.
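
    As a minimal sketch of the gap-filling idea, the snippet below fills the gaps by plain linear interpolation (a crude stand-in for the maximum-entropy reconstruction of Fahlman and Ulrych) before taking an FFT periodogram, and reports the duty cycle that the tests above identify as the limiting factor. The signal, noise level and sampling are invented for illustration.

```python
import numpy as np

def gap_filled_power_spectrum(t, y, mask, dt):
    """Estimate a power spectrum from a gapped, evenly sampled series.
    mask is True where data exist; gaps are filled by linear interpolation
    (a simple stand-in for the maximum-entropy reconstruction) before an
    FFT periodogram is computed."""
    duty_cycle = mask.mean()
    y_filled = np.interp(t, t[mask], y[mask])       # fill the gaps
    spec = np.abs(np.fft.rfft(y_filled - y_filled.mean())) ** 2
    freqs = np.fft.rfftfreq(len(t), d=dt)
    return freqs, spec, duty_cycle

# toy usage: a noisy oscillation with ~40% of samples missing
dt = 60.0                                           # s
t = np.arange(4096) * dt
y = np.sin(2 * np.pi * 3.3e-3 * t) + 0.3 * np.random.randn(t.size)
mask = np.random.rand(t.size) > 0.4
freqs, spec, duty = gap_filled_power_spectrum(t, y, mask, dt)
print(f"duty cycle = {duty:.2f}, spectral peak at {freqs[spec.argmax()]*1e3:.2f} mHz")
```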

  6. A technique for optimal temperature estimation for modeling sunrise/sunset thermal snap disturbance torque

    NASA Technical Reports Server (NTRS)

    Zimbelman, D. F.; Dennehy, C. J.; Welch, R. V.; Born, G. H.

    1990-01-01

    A predictive temperature estimation technique which can be used to drive a model of the Sunrise/Sunset thermal 'snap' disturbance torque experienced by low Earth orbiting spacecraft is described. The twice per orbit impulsive disturbance torque is attributed to vehicle passage in and out of the Earth's shadow cone (umbra), during which large flexible appendages undergo rapidly changing thermal conditions. Flexible members, in particular solar arrays, experience rapid cooling during umbra entrance (Sunset) and rapid heating during exit (Sunrise). The thermal 'snap' phenomenon has been observed during normal on-orbit operations of both the LANDSAT-4 satellite and the Communications Technology Satellite (CTS). Thermal 'snap' has also been predicted to be a dominant source of error for the TOPEX satellite. The fundamental equations used to model the Sunrise/Sunset thermal 'snap' disturbance torque for a typical solar-array-like structure will be described. For this derivation, the array is assumed to be a thin, cantilevered beam. The time-varying thermal gradient is shown to be the driving force behind predicting the thermal 'snap' disturbance torque and therefore motivates the need for accurate estimates of temperature. The development of a technique to optimally estimate appendage surface temperature is highlighted. The objective analysis method used is structured on the Gauss-Markov Theorem and provides an optimal temperature estimate at a prescribed location given data from a distributed thermal sensor network. The optimally estimated surface temperatures could then be used to compute the thermal gradient across the body. The estimation technique is demonstrated using a typical satellite solar array.
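
    The Gauss-Markov (objective analysis) step can be sketched as follows: given an assumed spatial covariance model and readings from a distributed sensor network, the optimal estimate at a prescribed location is a covariance-weighted combination of the observations. The covariance form and all numbers below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def gauss_markov_estimate(x_sensors, t_obs, x_target,
                          sigma2=4.0, length=0.5, noise2=0.1):
    """Objective-analysis (Gauss-Markov) estimate of temperature at x_target
    from a distributed sensor network.  A squared-exponential covariance with
    assumed variance, correlation length and sensor noise is used purely for
    illustration."""
    x = np.asarray(x_sensors, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    C = sigma2 * np.exp(-(d / length) ** 2) + noise2 * np.eye(len(x))
    c0 = sigma2 * np.exp(-((x - x_target) / length) ** 2)
    mean = t_obs.mean()                       # simple background state
    weights = np.linalg.solve(C, c0)          # Gauss-Markov weights
    return mean + weights @ (t_obs - mean)

# toy usage: sensors along a solar-array boom (positions in metres)
x_sensors = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
t_obs = np.array([250.0, 255.0, 262.0, 268.0, 273.0])   # kelvin
print(f"estimated T at 1.2 m: {gauss_markov_estimate(x_sensors, t_obs, 1.2):.1f} K")
```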

  7. Recursive estimation techniques for detection of small objects in infrared image data

    NASA Astrophysics Data System (ADS)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
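
    The two ingredients named above can be sketched under assumed parameters: an exponentially weighted (finite-memory) update of the background autocorrelation matrix, and a matched-filter statistic for colored noise built from that estimate. The signature, weighting factor and threshold below are illustrative choices, not the paper's values.

```python
import numpy as np

def update_autocorrelation(R, x, lam=0.95):
    """Exponentially weighted update of the background autocorrelation matrix,
    giving the estimator finite memory so it can track nonstationary clutter."""
    x = x.reshape(-1, 1)
    return lam * R + (1.0 - lam) * (x @ x.T)

def matched_filter_statistic(R, x, s):
    """Matched-filter detection statistic for a known target signature s in
    colored noise with estimated autocorrelation R."""
    w = np.linalg.solve(R, s)                 # whitening + matching weights
    return float(w @ x) / np.sqrt(float(w @ s))

# toy usage on a 1-D scan window with a point target injected late in the run
rng = np.random.default_rng(1)
n = 16
s = np.zeros(n); s[n // 2] = 1.0              # point-target signature
R = np.eye(n)                                 # initial background estimate
for frame in range(200):
    x = rng.normal(size=n) + 0.5 * rng.normal()   # spatially correlated clutter
    if frame == 150:
        x += 8.0 * s                          # inject a point target
    stat = matched_filter_statistic(R, x, s)
    R = update_autocorrelation(R, x)
    if stat > 4.0:
        print(f"detection at frame {frame}, statistic = {stat:.1f}")
```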

  8. Regressions by leaps and bounds and biased estimation techniques in yield modeling

    NASA Technical Reports Server (NTRS)

    Marquina, N. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. It was observed that OLS was not adequate as an estimation procedure when the independent or regressor variables were involved in multicollinearities. This was shown to cause the presence of small eigenvalues of the extended correlation matrix A'A. It was demonstrated that the biased estimation techniques and the all-possible subset regression could help in finding a suitable model for predicting yield. Latent root regression was an excellent tool that found how many predictive and nonpredictive multicollinearities there were.
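
    The diagnosis and remedy described above can be illustrated briefly: near-zero eigenvalues of the regressor correlation matrix flag multicollinearities, and a biased estimator (ridge regression is used here as one representative biased-estimation technique) stabilises the coefficients. The data and ridge constant below are synthetic.

```python
import numpy as np

def small_eigenvalues(X, tol=1e-2):
    """Eigenvalues of the regressor correlation matrix; near-zero values flag
    the multicollinearities that destabilise ordinary least squares."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return eigvals, eigvals < tol

def ridge(X, y, k=0.1):
    """Biased (ridge) estimator beta = (X'X + kI)^-1 X'y on standardised data;
    a small bias buys a large variance reduction when X'X is ill-conditioned."""
    Xs = (X - X.mean(0)) / X.std(0)
    yc = y - y.mean()
    p = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + k * np.eye(p), Xs.T @ yc)

# toy usage: two nearly collinear regressors plus one independent one
rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = 2 * x1 + 0.5 * X[:, 2] + rng.normal(size=n)
eigvals, flags = small_eigenvalues(X)
print("correlation-matrix eigenvalues:", np.round(eigvals, 4))
print("ridge coefficients:", np.round(ridge(X, y), 3))
```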

  9. Accelerated Testing Methodology in Constant Stress-Rate Testing for Advanced Structural Ceramics: A Preloading Technique

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.; Huebert, Dean; Bartlett, Allen; Choi, Han-Ho

    2001-01-01

    A preloading technique was used as an accelerated testing methodology in constant stress-rate ('dynamic fatigue') testing of two different brittle materials. The theory developed previously for fatigue strength as a function of preload was further verified through extensive constant stress-rate testing of glass-ceramic and CRT glass in room-temperature distilled water. The preloading technique was also used in this study to identify the prevailing failure mechanisms at elevated temperatures, particularly at lower test rates, at which a series of mechanisms would be associated simultaneously with material failure, resulting in significant strength increase or decrease. Two different advanced ceramics, a SiC whisker-reinforced silicon nitride composite and a 96 wt% alumina, were used at elevated temperatures. It was found that the preloading technique can be used as an additional tool to pinpoint the dominant failure mechanism associated with such a considerable strength increase or decrease.

  11. Applications of Advanced Nondestructive Measurement Techniques to Address Safety of Flight Issues on NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Prosser, Bill

    2016-01-01

    Advanced nondestructive measurement techniques are critical for ensuring the reliability and safety of NASA spacecraft. Techniques such as infrared thermography, THz imaging, X-ray computed tomography and backscatter X-ray are used to detect indications of damage in spacecraft components and structures. Additionally, sensor and measurement systems are integrated into spacecraft to provide structural health monitoring to detect damaging events that occur during flight, such as debris impacts during launch and ascent or from micrometeoroid and orbital debris, or excessive loading due to anomalous flight conditions. A number of examples will be provided of how these nondestructive measurement techniques have been applied to resolve safety-critical inspection concerns for the Space Shuttle, International Space Station (ISS), and a variety of launch vehicles and unmanned spacecraft.

  12. Recent advances in applying Free Vortex Sheet theory to the estimation of vortex flow aerodynamics

    NASA Technical Reports Server (NTRS)

    Luckring, J. M.; Schoonover, W. E., Jr.; Frink, N. T.

    1982-01-01

    Free Vortex Sheet theory has been applied to a variety of configurations for the estimation of three-dimensional pressure distributions for wings developing separation-induced leading-edge vortex flows. Correlations with experiment show reasonable estimates for the effects of compressibility, side-slip, side edges, swept-wing blast-induced loads, and leading-edge vortex flaps. Theoretical studies expand upon these correlations to show general aerodynamic trends. Consideration is also given to simple, yet effective techniques which expedite convergence and therefore reduce computational expense.

  13. Variance reduction techniques for estimation of integrals over a set of branching trajectories

    NASA Astrophysics Data System (ADS)

    Tsvetkov, E. A.

    2014-02-01

    Monte Carlo variance reduction techniques within the supertrack approach are justified as applied to estimating non-Boltzmann tallies equal to the mean of a random variable defined on the set of all branching trajectories. For this purpose, a probability space is constructed on the set of all branching trajectories, and the unbiasedness of this method is proved by averaging over all trajectories. Variance reduction techniques, such as importance sampling, splitting, and Russian roulette, are discussed. A method is described for extending available codes based on the von Neumann-Ulam scheme in order to cover the supertrack approach.

  14. Advanced spatio-temporal filtering techniques for photogrammetric image sequence analysis in civil engineering material testing

    NASA Astrophysics Data System (ADS)

    Liebold, F.; Maas, H.-G.

    2016-01-01

    The paper shows advanced spatial, temporal and spatio-temporal filtering techniques which may be used to reduce noise effects in photogrammetric image sequence analysis tasks and tools. As a practical example, the techniques are validated in a photogrammetric spatio-temporal crack detection and analysis tool applied in load tests in civil engineering material testing. The load test technique is based on monocular image sequences of a test object under varying load conditions. The first image of a sequence is defined as a reference image under zero load, wherein interest points are determined and connected in a triangular irregular network structure. For each epoch, these triangles are compared to the reference image triangles to search for deformations. The result of the feature point tracking and triangle comparison process is a spatio-temporally resolved strain value field, wherein cracks can be detected, located and measured via local discrepancies. The strains can be visualized as a color-coded map. In order to improve the measuring system and to reduce noise, the strain values of each triangle must be treated in a filtering process. The paper shows the results of various filter techniques in the spatial and in the temporal domain as well as spatio-temporal filtering techniques applied to these data. The best results were obtained by a bilateral filter in the spatial domain and by a spatio-temporal EOF (empirical orthogonal function) filtering technique.
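
    A compact version of the spatial bilateral filter that performed best above can be sketched on a synthetic strain map with a step discontinuity standing in for a crack; the kernel widths and noise level are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(strain, sigma_s=2.0, sigma_r=0.05, radius=3):
    """Bilateral filter for a 2-D strain map: each value becomes a weighted
    mean of its neighbours, with weights falling off both with spatial
    distance (sigma_s) and with difference in strain value (sigma_r), so noise
    is suppressed while crack-induced discontinuities are preserved."""
    out = np.empty_like(strain)
    rows, cols = strain.shape
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - radius), min(rows, i + radius + 1)
            j0, j1 = max(0, j - radius), min(cols, j + radius + 1)
            patch = strain[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_spatial = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_range = np.exp(-((patch - strain[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = w_spatial * w_range
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# toy usage: a noisy strain field with a sharp "crack" step at one column
field = np.zeros((40, 40)); field[:, 20:] = 0.2
noisy = field + 0.02 * np.random.randn(40, 40)
print("noise std before/after filtering:",
      round(float((noisy - field)[:, :15].std()), 4),
      round(float((bilateral_filter(noisy) - field)[:, :15].std()), 4))
```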

  15. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
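
    For illustration only, the sketch below recovers a coarse relative pose from a handful of 2-D/3-D correspondences with OpenCV's EPnP solver; the model geometry, camera intrinsics and simulated detections are assumptions, and the feature-matching stage that would supply the correspondences is not shown. This is a generic PnP example, not one of the three algorithms characterized in the paper.

```python
import numpy as np
import cv2

# Illustrative 3-D feature points on the target spacecraft model (body frame, m).
model_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])

# Assumed pinhole intrinsics and zero lens distortion.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesise "measured" image points by projecting the model with a known pose;
# in a real system these 2-D detections come from the feature-matching stage.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [8.0]])              # ~8 m range
image_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)

# Coarse initial pose from the 2-D/3-D correspondences, with no fiducial
# markers, range measurements or prior relative-motion information.
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
print("converged:", ok)
print("recovered relative position [m]:", tvec.ravel())
print("true relative position      [m]:", tvec_true.ravel())
```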

  16. Review of recent advances in analytical techniques for the determination of neurotransmitters

    PubMed Central

    Perry, Maura; Li, Qiang; Kennedy, Robert T.

    2009-01-01

    Methods and advances for monitoring neurotransmitters in vivo or for tissue analysis of neurotransmitters over the last five years are reviewed. The review is organized primarily by neurotransmitter type. Transmitter and related compounds may be monitored by either in vivo sampling coupled to analytical methods or implanted sensors. Sampling is primarily performed using microdialysis, but low-flow push-pull perfusion may offer advantages of spatial resolution while minimizing the tissue disruption associated with higher flow rates. Analytical techniques coupled to these sampling methods include liquid chromatography, capillary electrophoresis, enzyme assays, sensors, and mass spectrometry. Methods for the detection of amino acid, monoamine, neuropeptide, acetylcholine, nucleoside, and soluble gas neurotransmitters have been developed and improved upon. Advances in the speed and sensitivity of these methods have enabled improvements in temporal resolution and increased the number of compounds detectable. Similar advances have enabled improved detection in tissue samples, with a substantial emphasis on single cell and other small samples. Sensors provide excellent temporal and spatial resolution for in vivo monitoring. Advances in application to catecholamines, indoleamines, and amino acids have been prominent. Improvements in stability, sensitivity, and selectivity of the sensors have been of paramount interest. PMID:19800472

  17. Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.

    1990-01-01

    Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.

  18. Estimating heat fluxes by merging profile formulae and the energy budget with a variational technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shuwen; Qiu, Chongjian; Zhang, Weidong

    2004-08-01

    A variational technique (VT) is applied to estimate surface sensible and latent heat fluxes based on observations of air temperature, wind speed, and humidity, respectively, at three heights (1 m, 4 m, and 10 m), and the surface energy and radiation budgets by the surface energy and radiation system (SERBS). The method fully uses all information provided by the measurements of air temperature, wind, and humidity profiles, the surface energy budget, and the similarity profile formulae as well. Data collected at the Feixi experiment station installed by the China Heavy Rain Experiment and Study (HeRES) Program are used to test the method. Results show that the proposed technique can overcome the well-known instability problem that occurs when the Bowen method becomes singular; in comparison with the profile method, it reduces both the sensitivities of latent heat fluxes to observational errors in humidity and those of sensible heat fluxes to observational errors in temperature, while the estimated heat fluxes approximately satisfy the surface energy budget. Therefore, the variational technique is more reliable and stable than the two conventional methods in estimating surface sensible and latent heat fluxes.
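
    The core of the variational idea can be sketched as a small weighted least-squares problem: profile-formula estimates of the sensible (H) and latent (LE) heat fluxes are adjusted so that they also approximately satisfy the surface energy budget H + LE = Rn - G. The error weights and flux values below are invented for illustration; the paper's full formulation works directly from the multi-level profiles.

```python
import numpy as np
from scipy.optimize import minimize

def variational_fluxes(h_prof, le_prof, rn, g,
                       sig_h=30.0, sig_le=40.0, sig_budget=10.0):
    """Blend profile-formula flux estimates with the surface energy budget by
    minimising a weighted least-squares cost.  The error standard deviations
    (W m^-2) are illustrative assumptions."""
    def cost(x):
        h, le = x
        return ((h - h_prof) / sig_h) ** 2 + \
               ((le - le_prof) / sig_le) ** 2 + \
               ((h + le - (rn - g)) / sig_budget) ** 2
    res = minimize(cost, x0=np.array([h_prof, le_prof]), method="Nelder-Mead")
    return res.x                                   # adjusted (H, LE)

# toy usage: profile estimates that do not close the energy budget
h, le = variational_fluxes(h_prof=120.0, le_prof=260.0, rn=450.0, g=40.0)
print(f"adjusted H = {h:.1f} W/m^2, LE = {le:.1f} W/m^2, "
      f"closure residual = {h + le - 410.0:.1f} W/m^2")
```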

  19. Controlled random search technique for estimation of convective heat transfer coefficient

    NASA Astrophysics Data System (ADS)

    Mehta, R. C.; Tiwari, S. B.

    2007-09-01

    This paper is concerned with a method for solving the inverse heat conduction problem. The method is based on the controlled random search (CRS) technique in conjunction with a modified Newton-Raphson method. The random search procedure does not need the computation of the derivative of the function to be evaluated. Therefore, it is independent of the calculation of the sensitivity coefficient for nonlinear parameter estimation. The algorithm does not depend on future-temperature information and can predict the convective heat transfer coefficient with random errors in the input temperature data. The technique is first validated against an analytical solution of the heat conduction equation for a typical rocket nozzle. Comparison with an earlier analysis of the inverse heat conduction problem of a similar experiment shows that the present method provides solutions which are fully consistent with the earlier results. Once validated, the technique is used to investigate another estimation of the heat transfer coefficient for an experiment of short duration and high heating rate, employing in-depth temperature measurement. The CRS procedure, in conjunction with the modified Newton-Raphson method, is quite useful in estimating the value of the convective heat-transfer coefficient from transient temperature data measured on the outer surface or by a thermocouple embedded inside the rocket nozzle. Some practical examples are illustrated, which demonstrate the stability and accuracy of the method in predicting the surface heat flux.
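
    A controlled-random-search-style, derivative-free estimation of the convective heat-transfer coefficient can be sketched as below, using a lumped-capacitance cooling model as a stand-in for the full inverse heat-conduction solution; the model parameters, population rules and synthetic data are all illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def forward_temperature(h, t, T0=1200.0, T_inf=300.0, m_cp_A=8000.0):
    """Lumped-capacitance forward model T(t) for a wall cooled by convection
    with coefficient h (W m^-2 K^-1)."""
    return T_inf + (T0 - T_inf) * np.exp(-h * t / m_cp_A)

def crs_estimate_h(t, T_meas, bounds=(10.0, 2000.0), n_pop=20, n_iter=300,
                   rng=np.random.default_rng(0)):
    """Controlled-random-search-style estimator: keep a population of candidate
    h values and repeatedly replace the worst member by a reflected trial
    point.  No derivatives (sensitivity coefficients) are ever evaluated."""
    cost = lambda h: np.sum((forward_temperature(h, t) - T_meas) ** 2)
    pop = rng.uniform(*bounds, size=n_pop)
    costs = np.array([cost(h) for h in pop])
    for _ in range(n_iter):
        worst = costs.argmax()
        a, b = rng.choice(np.delete(pop, worst), size=2, replace=False)
        # reflect the worst point through the midpoint of two random members
        trial = np.clip((a + b) - pop[worst], *bounds)
        c_trial = cost(trial)
        if c_trial < costs[worst]:
            pop[worst], costs[worst] = trial, c_trial
    return pop[costs.argmin()]

# toy usage: synthetic cooling data generated with h_true = 450 plus noise
t = np.linspace(0.0, 30.0, 40)
T_meas = forward_temperature(450.0, t) + np.random.randn(t.size) * 2.0
print(f"estimated h = {crs_estimate_h(t, T_meas):.0f} W/(m^2 K)")
```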

  20. Preliminary development of a technique for estimating municipal-solid-waste generation

    NASA Astrophysics Data System (ADS)

    1981-06-01

    The data obtained revealed detailed generation quantities by collection route for defined areas of the cities. These data were then used to test various predictive factors. The end result of the analysis is a provisional method for estimating residential solid-waste generation by relating it to income data readily available from government documents, and a provisional method for estimating commercial solid waste generation by relating it to readily available retail sales data. The analysis of the data obtained has resulted in a residential solid-waste estimation technique that can be applied to virtually any city or region of interest. The general approach consisted of: data collection from selected solid waste jurisdictions, the determination of demographic and socioeconomic data for each solid waste jurisdiction; and data analysis. Each stage is discussed.

  1. Advanced methods for time-varying effective connectivity estimation in memory processes.

    PubMed

    Astolfi, L; Toppi, J; Wood, G; Kober, S; Risetti, M; Macchiusi, L; Salinari, S; Babiloni, F; Mattia, D

    2013-01-01

    Memory processes are based on large cortical networks characterized by non-stationary properties and time scales which represent a limitation to the traditional connectivity estimation methods. The recent development of connectivity approaches able to consistently describe the temporal evolution of large-dimension connectivity networks, in a fully multivariate way, represents a tool that can be used to extract novel information about the processes at the basis of memory functions. In this paper, we applied such an advanced approach in combination with the use of state-of-the-art graph theory indexes, computed on the connectivity networks estimated from high-density electroencephalographic (EEG) data recorded in a group of healthy adults during the Sternberg Task. The results show how this approach is able to return a characterization of the main phases of the investigated memory task which is also sensitive to the increased length of the numerical string to be memorized. PMID:24110342

  2. Reliability and Efficacy of Water Use Estimation Techniques and their Impact on Water Management and Policy

    NASA Astrophysics Data System (ADS)

    Singh, A.; Deeds, N.; Kelley, V.

    2012-12-01

    Estimating how much water is being used by various water users is key to effective management and optimal utilization of groundwater resources. This is especially true for aquifers like the Ogallala that are severely stressed and have shown declining trends over many years. The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Water users within the 16 counties that comprise the HPWD draw from the Ogallala extensively. The HPWD has recently proposed flow-meters as well as various 'alternative methods' for water users to report water usage. Alternative methods include using a) site-specific energy conversion factors to convert the total energy used by pumping stations into water pumped, b) reporting nozzle package (on center pivot irrigation systems) specifications and hours of usage, and c) reporting concentrated animal feeding operations (CAFOs). The focus of this project was to evaluate the reliability and effectiveness of each of these water use estimation techniques for regulatory purposes. Reliability and effectiveness of direct flow-metering devices were also addressed. Findings indicate that due to site-specific variability and hydrogeologic heterogeneity, alternative methods for estimating water use can have significant uncertainties associated with water use estimates. The impact of these uncertainties on overall water usage, conservation, and management was also evaluated. The findings were communicated to the Stakeholder Advisory Group and the Water Conservation District with guidelines and recommendations on how best to implement the various techniques.

  3. Random sets technique for information fusion applied to estimation of brain functional images

    NASA Astrophysics Data System (ADS)

    Smith, Therese M.; Kelly, Patrick A.

    1999-05-01

    A new mathematical technique for information fusion based on random sets, developed and described by Goodman, Mahler and Nguyen (The Mathematics of Data Fusion, Kluwer, 1997), can be useful for estimation of functional brain images. Many image estimation algorithms employ prior models that incorporate general knowledge about sizes, shapes and locations of brain regions. Recently, algorithms have been proposed using specific prior knowledge obtained from other imaging modalities (for example, Bowsher, et al., IEEE Trans. Medical Imaging, 1996). However, there is more relevant information than is presently used. A technique that permits use of additional prior information about activity levels would improve the quality of prior models, and hence, of the resulting image estimate. The use of random sets provides this capability because it allows seemingly non-statistical (or ambiguous) information such as that contained in inference rules to be represented and combined with observations in a single statistical model, corresponding to a global joint density. This paper illustrates the use of this approach by constructing an example global joint density function for brain functional activity from measurements of functional activity, anatomical information, clinical observations and inference rules. The estimation procedure is tested on a data phantom with Poisson noise.

  4. Innovative techniques for estimating illegal activities in a human-wildlife-management conflict.

    PubMed

    Cross, Paul; St John, Freya A V; Khan, Saira; Petroczi, Andrea

    2013-01-01

    Effective management of biological resources is contingent upon stakeholder compliance with rules. With respect to disease management, partial compliance can undermine attempts to control diseases within human and wildlife populations. Estimating non-compliance is notoriously problematic as rule-breakers may be disinclined to admit to transgressions. However, reliable estimates of rule-breaking are critical to policy design. The European badger (Meles meles) is considered an important vector in the transmission and maintenance of bovine tuberculosis (bTB) in cattle herds. Land managers in high bTB prevalence areas of the UK can cull badgers under license. However, badgers are also known to be killed illegally. The extent of illegal badger killing is currently unknown. Herein we report on the application of three innovative techniques (Randomized Response Technique (RRT); projective questioning (PQ); brief implicit association test (BIAT)) for investigating illegal badger killing by livestock farmers across Wales. RRT estimated that 10.4% of farmers killed badgers in the 12 months preceding the study. Projective questioning responses and implicit associations relate to farmers' badger killing behavior reported via RRT. Studies evaluating the efficacy of mammal vector culling and vaccination programs should incorporate estimates of non-compliance. Mitigating the conflict concerning badgers as a vector of bTB requires cross-disciplinary scientific research, departure from deep-rooted positions, and the political will to implement evidence-based management.
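
    As an illustration of how a randomized response design yields a prevalence estimate, the sketch below assumes a forced-response design with illustrative randomising-device probabilities; the abstract does not specify which RRT design the study actually used, so both the design and the counts are assumptions.

```python
import numpy as np

def forced_response_estimate(n_yes, n_total, p_truth=0.75,
                             p_forced_yes=0.125, p_forced_no=0.125):
    """Prevalence of a sensitive behaviour under a forced-response randomized
    response design: with probability p_truth the respondent answers honestly,
    otherwise the randomising device forces a 'yes' or a 'no'.  Then
    P(yes) = p_truth * pi + p_forced_yes, which is inverted for pi."""
    lam = n_yes / n_total                         # observed proportion of 'yes'
    pi_hat = (lam - p_forced_yes) / p_truth
    # binomial sampling variance propagated through the linear estimator
    se = np.sqrt(lam * (1.0 - lam) / n_total) / p_truth
    return pi_hat, se

# toy usage: 120 'yes' answers from 600 respondents (hypothetical counts)
pi_hat, se = forced_response_estimate(120, 600)
print(f"estimated prevalence = {pi_hat:.1%} (SE {se:.1%})")
```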

  5. Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis

    PubMed Central

    Smith, Jeremy D.; Ferris, Abbie E.; Heise, Gary D.; Hinrichs, Richard N.; Martin, Philip E.

    2014-01-01

    The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment. PMID:24837164
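
    The two measurements named in the title reduce to short formulae: a reaction board locates the centre of mass from the change in scale reading, and the period of small oscillations about a pivot gives the moment of inertia through the compound-pendulum relation and the parallel-axis theorem. The numerical values below are assumed, plausible prosthesis values, not data from the study.

```python
import numpy as np

G = 9.81  # m s^-2

def com_from_reaction_board(m, board_length, scale_force, scale_force_empty):
    """Reaction-board estimate of the centre-of-mass position (from the pivot
    edge): the change in scale reading when the prosthesis is laid on the board
    gives the moment of its weight about the pivot."""
    return (scale_force - scale_force_empty) * board_length / (m * G)

def inertia_from_oscillation(m, period, d):
    """Compound-pendulum estimate: time the small-amplitude period T about a
    fixed pivot, then I_pivot = m g d T^2 / (4 pi^2) and, by the parallel-axis
    theorem, I_cm = I_pivot - m d^2, with d the pivot-to-CM distance."""
    i_pivot = m * G * d * period ** 2 / (4.0 * np.pi ** 2)
    return i_pivot - m * d ** 2

# toy usage with assumed below-knee prosthesis values
m = 1.3                                       # kg
d = com_from_reaction_board(m, board_length=1.0,
                            scale_force=9.2, scale_force_empty=5.5)
print(f"centre of mass at {d:.3f} m from the pivot")
print(f"I about the CM = {inertia_from_oscillation(m, period=1.3, d=d):.4f} kg m^2")
```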

  6. New Generation of High Resolution Ultrasonic Imaging Technique for Advanced Material Characterization: Review

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.

    The role of non-destructive material characterization and NDT is changing at a rapid rate, continuing to evolve alongside the dramatic development of novel techniques based on the principles of high-resolution imaging. The modern use of advanced optical, thermal, ultrasonic, laser-ultrasound, acoustic emission, vibration, electro-magnetic, and X-ray techniques, etc., as well as refined measurement and signal/data processing devices, allows for continuous generation of on-line information. As a result, real-time process monitoring can be achieved, leading to the more effective and efficient control of numerous processes, greatly improving manufacturing as a whole. Indeed, concurrent quality inspection has become an attainable reality. With the advent of new materials for use in various structures, joints, and parts, however, innovative applications of modern NDT imaging techniques are necessary to monitor as many stages of manufacturing as possible. Simply put, intelligent advanced manufacturing is impossible without actively integrating modern non-destructive evaluation into the production system.

  7. Application of Advanced Atomic Force Microscopy Techniques to Study Quantum Dots and Bio-materials

    NASA Astrophysics Data System (ADS)

    Guz, Nataliia

    In recent years, there has been an increase in research towards micro- and nanoscale devices as they have proliferated into diverse areas of scientific exploration. Many of the general fields of study that have greatly affected the advancement of these devices include the investigation of their properties. The sensitivity of Atomic Force Microscopy (AFM) allows the detection of charges down to the single-electron level in quantum dots under ambient conditions, the measurement of steric forces on the surface of the human cell brush, and the determination of cell mechanics, magnetic forces, and other important properties. Utilizing AFM methods, the fast screening of quantum dot efficiency and the differences between cancer, normal (healthy) and precancer (immortalized) human cells have been investigated. The current research using AFM techniques can help to identify biophysical differences of cancer cells to advance our understanding of the resistance of the cells against the existing medicine.

  8. Impact of advanced microstructural characterization techniques on modeling and analysis of radiation damage

    SciTech Connect

    Garner, F.A.; Odette, G.R.

    1980-01-01

    The evolution of radiation-induced alterations of dimensional and mechanical properties has been shown to be a direct and often predictable consequence of radiation-induced microstructural changes. Recent advances in understanding the nature and role of each microstructural component in determining the property of interest have led to a reappraisal of the type and priority of data needed for further model development. This paper presents an overview of the types of modeling and analysis activities in progress, the insights that prompted these activities, and specific examples of successful and ongoing efforts. A review is presented of some problem areas that in the authors' opinion are not yet receiving sufficient attention and which may benefit from the application of advanced techniques of microstructural characterization. Guidelines based on experience gained in previous studies are also provided for the acquisition of data in a form most applicable to modeling needs.

  9. CMB EB and TB cross-spectrum estimation via pseudospectrum techniques

    NASA Astrophysics Data System (ADS)

    Grain, J.; Tristram, M.; Stompor, R.

    2012-10-01

    We discuss methods for estimating EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles leads to the smallest error bars for both EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ~30%. Moreover, for the experimental configurations considered here, the statistical uncertainties (due to sampling variance and instrumental noise) of the pseudo-spectrum estimates are at most a factor of ~1.4 for TT, EE, and TE spectra, and a factor of ~2 for BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.
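
    A minimal illustration of a standard (not pure) pseudo-spectrum estimate on a cut sky with healpy is sketched below, corrected only by the observed sky fraction; the estimators in the paper instead deconvolve the full mode-mixing kernels and use pure B-mode multipoles, which is not attempted here. The maps, mask and resolution are synthetic placeholders.

```python
import numpy as np
import healpy as hp

nside = 64
npix = hp.nside2npix(nside)

# toy (T, Q, U) maps: plain Gaussian noise, since the point is only to show
# the pseudo-spectrum machinery, not realistic CMB statistics
rng = np.random.default_rng(3)
maps = rng.normal(size=(3, npix)) * 1e-6

# simple binary mask removing the polar caps (fsky ~ 0.85)
theta, _ = hp.pix2ang(nside, np.arange(npix))
mask = (np.abs(np.cos(theta)) < 0.85).astype(float)
fsky = mask.mean()

# pseudo-spectra of the masked maps; with pol=True anafast returns
# TT, EE, BB, TE, EB, TB.  Dividing by fsky is the crudest mode-coupling
# correction, used here only for illustration.
cls = hp.anafast(maps * mask, pol=True, lmax=2 * nside)
tt, ee, bb, te, eb, tb = [cl / fsky for cl in cls]
print("mean EB over ell:", eb.mean(), " mean TB over ell:", tb.mean())
```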

  10. A rapid screening technique for estimating nanoparticle transport in porous media.

    PubMed

    Bouchard, Dermont; Zhang, Wei; Chang, Xiaojun

    2013-08-01

    Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of engineered nanoparticles in the subsurface environment, but this technique is often time-consuming, labor-intensive, and of low sample throughput. Thus, the traditional column experiment is unlikely to be a viable tool for processing the large numbers of engineered nanomaterials in various types of porous media that will be needed for environmental impact assessment and regulatory activities. Here we present a high throughput screening technique for nanoparticle transport using 96 deep well plate columns packed with porous media. The technique was tested for the transport of 60-nm polystyrene microspheres, fullerene C60 nanoparticles (aq/nC60), and surfactant-wrapped single-walled carbon nanotubes (SWNTs) in 0.001-0.1% sodium dodecyl sulfate (SDS) through Iota quartz sand and Calls Creek sediment. Our results showed that this screening technique produced highly reproducible column hydrodynamic properties as revealed by conservative tracer tests and precise measurements of nanoparticle transport parameters. Additionally, all nanoparticles exhibited greater retention in the sediment than in Iota quartz, and the retention of SDS-SWNTs decreased with increasing SDS concentrations, which is consistent with the existing literature. We conclude that this technique is well suited for rapidly screening the mobility of engineered nanomaterials in porous media.

  11. A biomechanical review of the techniques used to estimate or measure resistive forces in swimming.

    PubMed

    Sacilotto, Gina B D; Ball, Nick; Mason, Bruce R

    2014-02-01

    Resistive or drag forces encountered during free swimming greatly influence the swim performance of elite competitive swimmers. The benefits in understanding the factors which affect the drag encountered will enhance performance within the sport. However, the current techniques used to experimentally measure or estimate drag values are questioned for their consistency, therefore limiting investigations in these factors. This paper aims to further understand how the resistive forces in swimming are measured and calculated. All techniques outlined demonstrate both strengths and weaknesses in the overall assessment of free swimming. By reviewing all techniques in this area, the reader should be able to select which one is best depending on what researchers want to gain from the testing.

  12. Extrusion based rapid prototyping technique: an advanced platform for tissue engineering scaffold fabrication.

    PubMed

    Hoque, M Enamul; Chuan, Y Leng; Pashby, Ian

    2012-02-01

    Advances in scaffold design and fabrication technology have brought the tissue engineering field into a new era. Conventional techniques used to develop scaffolds have inherent limitations, such as a lack of control over pore morphology and architecture as well as reproducibility. Rapid prototyping (RP) technology, a layer-by-layer additive approach, offers a unique opportunity to build complex 3D architectures that overcome those limitations and could ultimately be tailored to cater for patient-specific applications. Using RP methods, researchers have been able to customize scaffolds to mimic the biomechanical properties (in terms of structural integrity, strength, and microenvironment) of the organ or tissue to be repaired/replaced quite closely. This article provides an intensive description of various extrusion-based scaffold fabrication techniques and reviews their potential utility for TE applications. The extrusion-based technique extrudes the molten polymer as a thin filament through a nozzle onto a platform layer-by-layer, thus building a 3D scaffold. The technique allows full control over pore architecture and dimension in the x- and y-planes. However, the pore height in the z-direction is predetermined by the extruding nozzle diameter rather than the technique itself. This review attempts to assess the current state and future prospects of this technology.

  13. Advanced techniques and technology for efficient data storage, access, and transfer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Miller, Warner

    1991-01-01

    Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
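
    The two-stage structure described above (predictive preprocessing into a 'standard form' source, followed by adaptive coding) can be sketched with a fixed-parameter Rice code; real implementations, such as the CCSDS standard that grew out of this work, select the code parameter adaptively per block rather than fixing it as done here.

```python
def preprocess(samples):
    """Predictive preprocessing: predict each sample by its predecessor and map
    the signed residuals onto non-negative integers (the 'standard form'
    source).  The first sample is kept as a raw reference value."""
    mapped = []
    for prev, s in zip(samples, samples[1:]):
        d = s - prev
        mapped.append(2 * d if d >= 0 else -2 * d - 1)   # zig-zag mapping
    return samples[0], mapped

def rice_encode(mapped, k):
    """Rice (Golomb power-of-two) code: value v is sent as v >> k zeros, a
    terminating '1', then the k low-order bits of v.  k is fixed here for
    brevity instead of being chosen adaptively per block."""
    bits = []
    for v in mapped:
        bits.append("0" * (v >> k) + "1")
        bits.append(format(v & ((1 << k) - 1), f"0{k}b") if k else "")
    return "".join(bits)

# toy usage: a slowly varying sampled signal compresses well
samples = [100, 101, 103, 102, 104, 107, 106, 106, 108, 110]
ref, mapped = preprocess(samples)
code = rice_encode(mapped, k=2)
print(f"{len(samples) * 8} raw bits -> {8 + len(code)} coded bits "
      f"(including an 8-bit reference sample)")
```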

  14. Cost estimates for near-term deployment of advanced traffic management systems. Final report

    SciTech Connect

    Stevens, S.S.; Chin, S.M.

    1993-02-15

    The objective of this study is to provide cost estimates for the engineering, design, installation, operation and maintenance of Advanced Traffic Management Systems (ATMS) in the largest 75 metropolitan areas in the United States. This report gives estimates for deployment costs for ATMS in the next five years, subject to the qualifications and caveats set out in the following paragraphs. The report considers infrastructure components required to realize fully a functional ATMS over each of two highway networks (as discussed in the Section describing our general assumptions) under each of the four architectures identified in the MITRE Intelligent Vehicle Highway Systems (IVHS) Architecture studies. The architectures are summarized in this report in Table 2. Estimates are given for eight combinations of highway networks and architectures. We estimate that it will cost between $8.5 Billion (minimal network) and $26 Billion (augmented network) to proceed immediately with deployment of ATMS in the largest 75 metropolitan areas. Costs are given in 1992 dollars, and are not adjusted for future inflation. Our estimates are based partially on completed project costs, which have been adjusted to 1992 dollars. We assume that a particular architecture will be chosen; projected costs are broken down by architecture.

  15. Techniques for measurement of the thermal expansion of advanced composite materials

    NASA Technical Reports Server (NTRS)

    Tompkins, Stephen S.

    1989-01-01

    Techniques available to measure small thermal displacements in flat laminates and structural tubular elements of advanced composite materials are described. Emphasis is placed on laser interferometry and the laser interferometric dilatometer system used at the National Aeronautics and Space Administration (NASA) Langley Research Center. Thermal expansion data are presented for graphite-fiber reinforced 6061 and 2024 aluminum laminates and for graphite fiber reinforced AZ91 C and QH21 A magnesium laminates before and after processing to minimize or eliminate thermal strain hysteresis. Data are also presented on the effects of reinforcement volume content on thermal expansion of silicon-carbide whisker and particulate reinforced aluminum.

  16. Measuring the microbiome: perspectives on advances in DNA-based techniques for exploring microbial life

    PubMed Central

    Bunge, John; Gilbert, Jack A.; Moore, Jason H.

    2012-01-01

    This article reviews recent advances in ‘microbiome studies’: molecular, statistical and graphical techniques to explore and quantify how microbial organisms affect our environments and ourselves given recent increases in sequencing technology. Microbiome studies are moving beyond mere inventories of specific ecosystems to quantifications of community diversity and descriptions of their ecological function. We review the last 24 months of progress in this sort of research, and anticipate where the next 2 years will take us. We hope that bioinformaticians will find this a helpful springboard for new collaborations with microbiologists. PMID:22308073

  17. Advanced techniques in IR thermography as a tool for the pest management professional

    NASA Astrophysics Data System (ADS)

    Grossman, Jon L.

    2006-04-01

    Within the past five years, the Pest Management industry has become aware that IR thermography can aid in the detection of pest infestations and locate other conditions that are within the purview of the industry. This paper will review the applications that can be utilized by the pest management professional and discuss the advanced techniques that may be required in conjunction with thermal imaging to locate insect and other pest infestations, moisture within structures, the verification of data and the special challenges associated with the inspection process.

  18. Advanced SuperDARN meteor wind observations based on raw time series analysis technique

    NASA Astrophysics Data System (ADS)

    Tsutsumi, M.; Yukimatu, A. S.; Holdsworth, D. A.; Lester, M.

    2009-04-01

    The meteor observation technique based on SuperDARN raw time series analysis has been upgraded. This technique extracts meteor information as by-products and does not degrade the quality of normal SuperDARN operations. In the upgrade, the radar operating system (RADOPS) has been modified so that it can oversample every 15 km during the normal operations, which have a range resolution of 45 km. As an alternative method for better range determination, a frequency-domain interferometry (FDI) capability was also coded in RADOPS, where the operating radio frequency can be changed every pulse sequence. Test observations were conducted using the CUTLASS Iceland East and Finland radars, where oversampling and FDI operation (two frequencies separated by 3 kHz) were simultaneously carried out. Meteor ranges obtained with both ranging techniques agreed very well. The ranges were then combined with the interferometer data to estimate meteor echo reflection heights. Although there were still some ambiguities in the arrival angles of echoes because of the rather long antenna spacing of the interferometers, the heights and arrival angles of most meteor echoes were more accurately determined than previously. Wind velocities were successfully estimated over the height range of 84 to 110 km. The FDI technique developed here can be further applied to the common SuperDARN operation, and study of fine horizontal structures of F region plasma irregularities is expected in the future.
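
    The FDI range refinement rests on a single relation: over the two-way path, an echo at range r acquires an inter-frequency phase difference of 4*pi*delta_f*r/c, so the phase measured between the two closely spaced frequencies maps to range, unambiguously over c/(2*delta_f) (about 50 km for the 3 kHz separation quoted above). The sketch below applies this relation to assumed complex voltages; it is an illustration of the principle, not the RADOPS implementation.

```python
import numpy as np

C = 299_792_458.0          # m/s

def fdi_range(phase_diff_rad, delta_f_hz=3.0e3):
    """Range of an echo from the phase difference between returns at two radio
    frequencies separated by delta_f (frequency-domain interferometry):
    dphi = 4*pi*delta_f*r/c  =>  r = c*dphi/(4*pi*delta_f)."""
    return C * np.mod(phase_diff_rad, 2 * np.pi) / (4 * np.pi * delta_f_hz)

def unambiguous_range(delta_f_hz=3.0e3):
    """Range span over which the FDI phase is unambiguous."""
    return C / (2 * delta_f_hz)

# toy usage: complex voltages of the same meteor echo at the two frequencies
v1 = 1.0 * np.exp(1j * 0.40)
v2 = 1.0 * np.exp(1j * 2.95)
dphi = np.angle(v2 * np.conj(v1))
print(f"unambiguous range span = {unambiguous_range()/1e3:.1f} km")
print(f"estimated range within span = {fdi_range(dphi)/1e3:.1f} km")
```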

  19. Estimating the sources of global sea level rise with data assimilation techniques

    PubMed Central

    Hay, Carling C.; Morrow, Eric; Kopp, Robert E.; Mitrovica, Jerry X.

    2013-01-01

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world. PMID:22543163
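
    A pared-down sketch of the filtering idea is given below: a scalar melt-rate state observed through assumed sea-level 'fingerprint' coefficients at a few tide gauges, estimated with a Kalman filter followed by a Rauch-Tung-Striebel backward pass. The multimodel GIA/ocean-dynamics machinery, the real fingerprints and the handling of data gaps are omitted; all numbers are synthetic.

```python
import numpy as np

def kalman_smoother(obs, H, q=0.01, r=4.0):
    """Kalman filter plus RTS smoother for a scalar random-walk melt-rate state
    observed through a vector of fingerprint coefficients H (one per gauge).
    Process/observation variances q and r are illustrative assumptions."""
    n, m = obs.shape
    x_f = np.zeros(n); p_f = np.zeros(n)          # filtered mean / variance
    x, p = 0.0, 10.0
    R = r * np.eye(m)
    for t in range(n):
        p += q                                    # random-walk prediction
        S = H[:, None] * p @ H[None, :] + R       # innovation covariance
        K = p * H @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (obs[t] - H * x)
        p = p * (1.0 - K @ H)
        x_f[t], p_f[t] = x, p
    x_s = x_f.copy()
    for t in range(n - 2, -1, -1):                # RTS backward pass
        g = p_f[t] / (p_f[t] + q)
        x_s[t] = x_f[t] + g * (x_s[t + 1] - x_f[t])
    return x_s

# toy usage: 50 years of annual SL-rate observations at 6 gauges, generated
# from a melt rate ramping from 0.2 to 0.6 mm/yr equivalent eustatic SL rise
rng = np.random.default_rng(4)
H = rng.uniform(0.6, 1.3, size=6)                 # assumed fingerprint weights
truth = np.linspace(0.2, 0.6, 50)
obs = truth[:, None] * H + rng.normal(scale=2.0, size=(50, 6))
est = kalman_smoother(obs, H)
print("smoothed melt-rate estimate, last decade:", np.round(est[-10:], 2))
```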

  20. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods are proven problematic when applied in highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques particularly when used in legal settings (Crowder and Rosella, 2007). This study tested Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a Modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systemic underestimation of the individuals (to a maximum of 43 years) although a general improvement in accuracy levels was observed when applying the Stout et al. (1994) formula. There is an increase of error rates with increasing age with the oldest individual showing extreme differences between real age and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results.

  1. The twitch interpolation technique for the estimation of true quadriceps muscle strength.

    PubMed

    Nørregaard, J; Lykkegaard, J J; Bülow, P M; Danneskiold-Samsøe, B

    1997-09-01

    The aim of this study was to examine the reliability of the twitch interpolation technique when used to estimate the true isometric knee extensor muscle strength. This included an examination of whether submaximal activation causes any bias in the estimation of the true muscle strength and an examination of the precision of the method. Twenty healthy subjects completed three contraction series, in which the subjects were told to perform as if their voluntary strength was 60%, 80% or 100% of that determined by a maximal voluntary contraction (MVC). Electrical muscle stimulations were given at each of five different contraction levels in each series. At torque levels above 25% of MVC the relationship between torque and twitch size could be approximated to be linear. The true muscle strength (TMS) could therefore be estimated using linear regression of the twitch-torque relationship to the torque point of no twitch in each of the three series, termed TMS60, TMS80 and TMS100. The TMS80 was slightly lower (P < 0.01), median 94% (IQ range 87-101%) of the TMS100. The TMS60 was median 99% (IQ range 83-125%) (NS) of TMS100, but a few severe outliers were observed. In conclusion, we found the reliability of the method acceptable for many research purposes, if series with estimated central activation of below 40-50% were excluded. The only moderate precision and the slightly lower estimates in subjects applying submaximal effort do, however, limit its usefulness.
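
    The estimation step reduces to a linear extrapolation: regress the superimposed twitch amplitude on voluntary torque and find the torque at which the twitch would vanish. The sketch below uses invented data standing in for one contraction series.

```python
import numpy as np

def true_muscle_strength(torque, twitch):
    """Twitch-interpolation estimate of true muscle strength: fit a straight
    line to the twitch-torque relationship (approximately linear above ~25%
    MVC) and extrapolate to the torque at which the twitch equals zero."""
    slope, intercept = np.polyfit(torque, twitch, 1)
    return -intercept / slope                     # torque where twitch = 0

# toy usage: five stimulation levels from one contraction series
torque = np.array([60.0, 90.0, 120.0, 150.0, 180.0])   # N m, assumed
twitch = np.array([14.0, 11.2, 7.9, 5.1, 2.2])          # N m, assumed
tms = true_muscle_strength(torque, twitch)
voluntary_mvc = 180.0
print(f"estimated true muscle strength = {tms:.0f} N m")
print(f"estimated central activation at MVC = {voluntary_mvc / tms:.0%}")
```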

  2. Estimating the sources of global sea level rise with data assimilation techniques.

    PubMed

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2013-02-26

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world.

  3. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods are proven problematic when applied in highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques particularly when used in legal settings (Crowder and Rosella, 2007). This study tested Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a Modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systemic underestimation of the individuals (to a maximum of 43 years) although a general improvement in accuracy levels was observed when applying the Stout et al. (1994) formula. There is an increase of error rates with increasing age with the oldest individual showing extreme differences between real age and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. PMID:26698389

  4. Techniques and Methods used to determine the Best Estimate of Radiation Fluxes at SGP Central Facility

    SciTech Connect

    Shi, Yan; Long, Charles N.

    2002-07-30

    The DOE ARM Program operates three independent surface radiation measurement systems co-located within a few meters at the Southern Great Plains Central Facility (SGP CF) site. This redundancy affords a unique opportunity for producing a high quality estimate of the actual continuous irradiance record. The Best Estimate Radiation Flux Value Added Product (VAP) currently being developed for ARM (beflux1long VAP) is attempting to determine the best estimate value for each radiation field from these multiple measurements as an operational product. In the development of this VAP, it is necessary to assess the nominal long-term unattended operational accuracy (as opposed to accuracy assessments based on calibrations or short term attended operation) to screen the data for quality assessment. We will present statistical results of this assessment, including our estimates of nominal operational accuracies, and the amount of data that pass the resultant data quality testing. Central to data quality assessment is the notion that having three pieces of information allows one not only to detect measurement problems, but to identify which of the three similar measurements is likely to be in error. We will discuss the techniques we have developed to use similar, but often differing, measurement data as comparison tools for operationally detecting measurement errors. We will also present statistical analyses of the resultant best estimate radiation climatology for the SGP CF.
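
    The three-way redundancy argument can be illustrated with a small sketch: if one of three co-located irradiance series disagrees with the other two beyond a tolerance while those two agree, that sensor is flagged and the best estimate is formed from the agreeing pair. This is only an illustration of the idea; the tolerance, variable names, and fallback to a three-way mean are assumptions, not the ARM beflux1long algorithm.

    ```python
    # Illustrative three-way comparison (not the ARM beflux1long VAP): flag the
    # radiometer that disagrees with the other two and average the agreeing pair.
    import numpy as np

    def best_estimate(a, b, c, tol=10.0):
        """Per-sample best estimate (W/m^2) and index of the suspect sensor (-1 if none)."""
        vals = np.vstack([a, b, c])
        d_ab = np.abs(a - b); d_ac = np.abs(a - c); d_bc = np.abs(b - c)
        best = np.mean(vals, axis=0)               # default: average of all three
        suspect = np.full(a.shape, -1, dtype=int)
        # A sensor is suspect when it disagrees with both others while they agree.
        bad0 = (d_ab > tol) & (d_ac > tol) & (d_bc <= tol)
        bad1 = (d_ab > tol) & (d_bc > tol) & (d_ac <= tol)
        bad2 = (d_ac > tol) & (d_bc > tol) & (d_ab <= tol)
        best[bad0] = 0.5 * (b[bad0] + c[bad0]); suspect[bad0] = 0
        best[bad1] = 0.5 * (a[bad1] + c[bad1]); suspect[bad1] = 1
        best[bad2] = 0.5 * (a[bad2] + b[bad2]); suspect[bad2] = 2
        return best, suspect
    ```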

  5. Comparison of different automatic adaptive threshold selection techniques for estimating discharge from river width

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico

    2015-04-01

    River discharge monitoring is critical for water resource planning, climate change studies, and hazard monitoring. River discharge has been measured at in situ gauges for more than a century; despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task for defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images, so low values in the image histogram usually represent water. Applying a threshold to the image histogram and separating the pixels into two classes is therefore one of the most efficient techniques for building a water mask. Despite its simple definition, finding the appropriate threshold value for each image is the most critical issue: the threshold varies with changes in water level, river extent, atmosphere, sunlight radiation, and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm for estimating river discharge. To do this, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
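
    Since the abstract does not name the individual threshold selectors, the sketch below uses Otsu's method, one widely used automatic selector, as a stand-in: it picks the histogram threshold that maximizes the between-class variance, after which dark (near-infrared) pixels below the threshold are labeled water. The bin count and the water-is-darker convention are assumptions.

    ```python
    # One common automatic threshold selector, Otsu's method, applied to an image
    # histogram to split water (dark NIR values) from land. Shown only as an
    # example of the family of selectors the study compares.
    import numpy as np

    def otsu_threshold(image, bins=256):
        """Return the threshold that maximizes between-class variance of the histogram."""
        hist, edges = np.histogram(image.ravel(), bins=bins)
        p = hist.astype(float) / hist.sum()          # normalized histogram
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                            # class-0 (dark/water) weight
        w1 = 1.0 - w0
        mu0 = np.cumsum(p * centers) / np.clip(w0, 1e-12, None)
        mu_total = np.sum(p * centers)
        mu1 = (mu_total - np.cumsum(p * centers)) / np.clip(w1, 1e-12, None)
        between = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        return centers[np.argmax(between)]

    # water_mask = image < otsu_threshold(image)    # dark pixels classified as water
    ```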

  6. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    SciTech Connect

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher-order statistical features computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two scatterer-property parameters, effective scatterer diameter and mean scatterer spacing, were found to differentiate statistically among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor
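
    A minimal sketch of the linear-fit spectral parameters mentioned above: fit a straight line to the power spectrum (in dB) of a gated RF segment over the analysis band; the slope, its extrapolated 0-MHz intercept, and the midband fit (the fitted value at the band center) follow directly. The analysis bandwidth, windowing, and the omission of reference-phantom calibration are simplifying assumptions, not the authors' exact processing.

    ```python
    # Sketch of the linear-fit spectral parameters of a gated RF segment.
    # Bandwidth and calibration details are assumptions.
    import numpy as np

    def spectral_fit_params(rf_segment, fs, band=(4e6, 9e6)):
        """Return (midband_fit_dB, slope_dB_per_MHz, intercept_dB) of an RF segment."""
        spec = np.fft.rfft(rf_segment * np.hanning(len(rf_segment)))
        freqs = np.fft.rfftfreq(len(rf_segment), d=1.0 / fs)
        power_db = 10 * np.log10(np.abs(spec) ** 2 + 1e-20)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        f_mhz = freqs[in_band] / 1e6
        slope, intercept = np.polyfit(f_mhz, power_db[in_band], 1)
        midband_fit = slope * f_mhz.mean() + intercept   # fitted value at mid-band
        return midband_fit, slope, intercept
    ```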

  7. A Technique for Estimating Distinctive Asperity Source Models by Waveform Fitting

    NASA Astrophysics Data System (ADS)

    Matsushima, S.; Kawase, H.; Sato, T.; Graves, R. W.

    2001-12-01

    For predicting near-fault strong motion, it is important to adequately evaluate the heterogeneity of the slip distribution of the source rupture process as well as the effects of the complex subsurface geology. Since the characteristics of pulse waves derived from forward rupture directivity effects are significantly affected by the size and the slip velocity function of the asperities, it is necessary to evaluate these parameters accurately (Matsushima and Kawase, 1999). In this study, we developed a technique for estimating the rupture process, assuming distinctive asperities, by waveform fitting. In order to take the 3-D subsurface geology into account in the Green's functions, we used 3-D reciprocal Green's functions (RGFs) calculated using the methodology of Graves and Wald (2001). We assumed that the fault geometry and the hypocenter were given, and that the asperity to be estimated was rectangular and on the fault plane. We also assumed that the slip is concentrated only on the asperity. The idea of this technique was as follows. First we calculated strong motions at observation sites using the RGFs for a given range of parameters. Then we searched for the best-fitting case by a grid-search technique (Sato et al., 1998). There were eight parameters: the location of the asperity on the fault plane (X0, Y0), the size of the asperity (L, W), the amplitude (Vd), duration (td), and decay shape parameter (α) of the slip velocity function, and the rake angle (λ). We assumed that the rise time of the slip velocity function was 0.06 seconds and that the function decays in proportion to exp(-αt). The initiation point of the asperity was the closest point to the hypocenter. Numerical experiments showed that we can resolve the asperity model fairly well, with good stability. We are planning to extend this technique to multiple asperities and to estimate asperity models for actual earthquakes.
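
    The grid-search step can be sketched as follows: for every candidate combination of asperity parameters, synthesize waveforms through a forward model (in the record above, the precomputed reciprocal Green's functions) and keep the combination that minimizes the waveform misfit. The toy forward model and the normalized least-squares misfit below are placeholders, not the authors' implementation.

    ```python
    # Sketch of grid-search waveform fitting with a placeholder forward model.
    import itertools
    import numpy as np

    def grid_search(observed, forward_model, param_grid):
        """observed: dict station -> waveform; param_grid: dict name -> candidate values."""
        names = list(param_grid)
        best_params, best_misfit = None, np.inf
        for values in itertools.product(*(param_grid[n] for n in names)):
            params = dict(zip(names, values))
            misfit = 0.0
            for station, obs in observed.items():
                syn = forward_model(station, **params)          # synthetic waveform
                misfit += np.sum((obs - syn) ** 2) / np.sum(obs ** 2)
            if misfit < best_misfit:
                best_params, best_misfit = params, misfit
        return best_params, best_misfit

    # Toy demonstration: a fake forward model whose pulse arrival and amplitude
    # depend on two parameters standing in for asperity location and slip amplitude.
    def toy_model(station, X0, Vd):
        t = np.arange(200)
        return Vd * np.exp(-0.5 * ((t - 50 - 5 * X0 - station) / 4.0) ** 2)

    obs = {s: toy_model(s, X0=3, Vd=1.2) for s in range(4)}
    grid = {"X0": range(0, 6), "Vd": [0.8, 1.0, 1.2, 1.4]}
    print(grid_search(obs, toy_model, grid))
    ```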

  8. Arthroscopically assisted Sauvé-Kapandji procedure: an advanced technique for distal radioulnar joint arthritis.

    PubMed

    Luchetti, Riccardo; Khanchandani, Prakash; Da Rin, Ferdinando; Borelli, Pierpaolo P; Mathoulin, Christophe; Atzei, Andrea

    2008-12-01

    Osteoarthritis of the distal radioulnar joint (DRUJ) leads to chronic wrist pain, weakness of grip strength, and limitation of motion, all of which affect the patient's quality of life. Over the years, several procedures have been used for the treatment of this condition; however, it remains a therapeutic challenge for hand surgeons. Many procedures such as the Darrach procedure, Bower procedure, Sauvé-Kapandji procedure, and ulnar head replacement have been used. Despite many advances in wrist arthroscopy, arthroscopy has not been used for the treatment of arthritis of the DRUJ. We describe a novel technique of arthroscopically assisted Sauvé-Kapandji procedure for arthritis of the DRUJ. The advantages of this technique are its less invasive nature, preservation of the extensor retinaculum, a more anatomical position of the DRUJ, faster rehabilitation, and better cosmesis.

  9. A comparison of conventional and advanced ultrasonic inspection techniques in the characterization of TMC materials

    NASA Technical Reports Server (NTRS)

    Holland, Mark R.; Handley, Scott M.; Miller, James G.; Reighard, Mark K.

    1992-01-01

    Results obtained with a conventional ultrasonic inspection technique as well as those obtained with more advanced ultrasonic NDE methods in the characterization of an 8-ply quasi-isotropic titanium matrix composite (TMC) specimen are presented. Images obtained from a conventional ultrasonic inspection of TMC material are compared with those obtained using more sophisticated ultrasonic inspection methods. It is suggested that the latter techniques are able to provide quantitative images of TMC material. They are able to reveal the same potential defect indications while simultaneously providing more quantitative information concerning the material's inherent properties. Band-limited signal loss and slope-of-attenuation images provide quantitative data on the inherent material characteristics and defects in TMC.

  10. Chemistry of Metal-organic Frameworks Monitored by Advanced X-ray Diffraction and Scattering Techniques.

    PubMed

    Mazaj, Matjaž; Kaučič, Venčeslav; Zabukovec Logar, Nataša

    2016-01-01

    Research on metal-organic frameworks (MOFs) has experienced rapid progress in recent years due to their structural diversity and wide range of application opportunities. Continuous progress in X-ray and neutron diffraction methods enables increasingly detailed insight into MOF structural features and contributes significantly to the understanding of their chemistry. Improved instrumentation and data processing in high-resolution X-ray diffraction methods enable the determination of new complex MOF crystal structures in powdered form. Through neutron diffraction techniques, much knowledge about the interaction of guest molecules with the crystalline framework has been gained in the past few years. Moreover, in-situ time-resolved studies by various diffraction and scattering techniques have provided comprehensive information about crystallization kinetics, crystal growth mechanisms, and structural dynamics triggered by external physical or chemical stimuli. The review emphasizes the most relevant advanced structural studies of MOFs based on powder X-ray and neutron scattering. PMID:27640372

  11. Chemistry of Metal-organic Frameworks Monitored by Advanced X-ray Diffraction and Scattering Techniques.

    PubMed

    Mazaj, Matjaž; Kaučič, Venčeslav; Zabukovec Logar, Nataša

    2016-01-01

    Research on metal-organic frameworks (MOFs) has experienced rapid progress in recent years due to their structural diversity and wide range of application opportunities. Continuous progress in X-ray and neutron diffraction methods enables increasingly detailed insight into MOF structural features and contributes significantly to the understanding of their chemistry. Improved instrumentation and data processing in high-resolution X-ray diffraction methods enable the determination of new complex MOF crystal structures in powdered form. Through neutron diffraction techniques, much knowledge about the interaction of guest molecules with the crystalline framework has been gained in the past few years. Moreover, in-situ time-resolved studies by various diffraction and scattering techniques have provided comprehensive information about crystallization kinetics, crystal growth mechanisms, and structural dynamics triggered by external physical or chemical stimuli. The review emphasizes the most relevant advanced structural studies of MOFs based on powder X-ray and neutron scattering.

  12. A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections

    SciTech Connect

    Zhang, You; Yin, Fang-Fang; Ren, Lei; Segars, W. Paul

    2013-12-15

    Purpose: To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Methods: Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and “ground-truth” onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. Results: For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and “ground-truth” onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion
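
    A brief sketch of the PCA motion-model ingredient: deformation field maps from the planning 4D-CT are decomposed into a mean field plus a few principal modes, so any new onboard deformation is parameterized by a short weight vector. Matching those weights to the limited-angle projections (the data fidelity constraint) and the free-form fine-tuning are not shown; the array sizes and data are invented.

    ```python
    # Sketch of the PCA motion-model step: decompose planning-4D-CT deformation
    # field maps (DFMs) into a mean field plus principal modes, then reconstruct
    # a candidate DFM from a small weight vector.
    import numpy as np

    def build_pca_motion_model(dfms, n_modes=3):
        """dfms: array (n_phases, n_voxels*3) of flattened deformation fields."""
        mean_dfm = dfms.mean(axis=0)
        centered = dfms - mean_dfm
        # SVD of the centered data gives the principal deformation modes.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        modes = vt[:n_modes]                      # (n_modes, n_voxels*3)
        return mean_dfm, modes

    def dfm_from_weights(mean_dfm, modes, weights):
        """Reconstruct a deformation field from PCA weights."""
        return mean_dfm + weights @ modes

    # Toy usage with random "deformation fields" standing in for 4D-CT phases.
    rng = np.random.default_rng(1)
    dfms = rng.normal(size=(10, 3000))
    mean_dfm, modes = build_pca_motion_model(dfms, n_modes=3)
    candidate = dfm_from_weights(mean_dfm, modes, np.array([0.5, -0.2, 0.1]))
    ```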

  13. Individual Particle Analysis of Ambient PM 2.5 Using Advanced Electron Microscopy Techniques

    SciTech Connect

    Gerald J. Keeler; Masako Morishita

    2006-12-31

    The overall goal of this project was to demonstrate a combination of advanced electron microscopy techniques that can be effectively used to identify and characterize individual particles and their sources. Specific techniques to be used include high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM), STEM energy dispersive X-ray spectrometry (EDX), and energy-filtered TEM (EFTEM). A series of ambient PM2.5 samples were collected in communities in southwestern Detroit, MI (close to multiple combustion sources) and Steubenville, OH (close to several coal fired utility boilers). High-resolution TEM (HRTEM) imaging showed a series of nano-metal particles including transition metals and elemental composition of individual particles in detail. Submicron and nano-particles with Al, Fe, Ti, Ca, U, V, Cr, Si, Ba, Mn, Ni, K and S were observed and characterized from the samples. Among the identified nano-particles, combinations of Al, Fe, Si, Ca and Ti nano-particles embedded in carbonaceous particles were observed most frequently. These particles showed very similar characteristics of ultrafine coal fly ash particles that were previously reported. By utilizing HAADF-STEM, STEM-EDX, and EF-TEM, this investigation was able to gain information on the size, morphology, structure, and elemental composition of individual nano-particles collected in Detroit and Steubenville. The results showed that the contributions of local combustion sources - including coal fired utilities - to ultrafine particle levels were significant. Although this combination of advanced electron microscopy techniques by itself cannot identify source categories, these techniques can be utilized as complementary analytical tools that are capable of providing detailed information on individual particles.

  14. Recent Advances and New Techniques in Visualization of Ultra-short Relativistic Electron Bunches

    SciTech Connect

    Xiang, Dao; /SLAC

    2012-06-05

    Ultrashort electron bunches with an rms length of ~1 femtosecond (fs) can be used to generate ultrashort x-ray pulses in FELs that may open up many new regimes in ultrafast sciences. It is also envisioned that ultrashort electron bunches may excite ~TeV/m wake fields for plasma wakefield acceleration and high-field physics studies. The recent success of using a 20 pC electron beam to drive an x-ray FEL at LCLS has stimulated worldwide interest in using low-charge beams (1-20 pC) to generate ultrashort x-ray pulses (0.1-10 fs) in FELs. Accurate measurement of the length (preferably the temporal profile) of the ultrashort electron bunch is essential for understanding the physics associated with bunch compression and transportation. However, ever shorter electron bunches greatly challenge present beam diagnostic methods. In this paper we review recent advances in the measurement of ultrashort relativistic electron bunches, focusing on several techniques and their variants that provide state-of-the-art temporal resolution and that are capable of breaking the femtosecond time barrier. Methods to further improve the resolution of these techniques are discussed, as are techniques for measuring the beam longitudinal phase space and the x-ray pulse shape in an x-ray FEL.

  15. Comparison of Erosion Rates Estimated by Sediment Budget Techniques and Suspended Sediment Monitoring and Regulatory Implications

    NASA Astrophysics Data System (ADS)

    O'Connor, M.; Eads, R.

    2007-12-01

    Watersheds in the northern California Coast Range have been designated as "impaired" with respect to water quality because of excessive sediment loads and/or high water temperature. Sediment budget techniques have typically been used by regulatory authorities to estimate current erosion rates and to develop targets for future desired erosion rates. This study examines erosion rates estimated by various methods for portions of the Gualala River watershed, designated as having water quality impaired by sediment under provisions of the Clean Water Act Section 303(d), located in northwest Sonoma County (~90 miles north of San Francisco). The watershed is underlain by Jurassic age sedimentary and meta-sedimentary rocks of the Franciscan formation. The San Andreas Fault passes through the western edge of watershed, and other active faults are present. A substantial portion of the watershed is mantled by rock slides and earth flows, many of which are considered dormant. The Coast Range is geologically young, and rapid rates of uplift are believed to have contributed to high erosion rates. This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by a proposed vineyard development project in the watershed, and the need to document conditions in the project area, assess project environmental impacts and meet regulatory requirements pertaining to water quality. Erosion rate estimates were previously developed using sediment budget techniques for relatively large drainage areas (~100 to 1,000 km2) by the North Coast Regional Water Quality Control Board and US EPA and by the California Geological Survey. In this study, similar sediment budget techniques were used for smaller watersheds (~3 to 8 km2), and were supplemented by a suspended sediment monitoring program utilizing Turbidity Threshold Sampling techniques (as described in a companion study in this session). The duration of the monitoring program to date

  16. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    NASA Astrophysics Data System (ADS)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

    The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette, with the maximum amount of distilled water held by the stomach measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089 W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes fed to satiation were radiographed at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 proved suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of the scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that a fish of 200 g fed a maximum satiation meal (circa 11 g) completely emptied its stomach within 30-36 h. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.

  17. Estimation of radon concentrations in coal mines using a hybrid technique calibration curve.

    PubMed

    Jamil, K; Ali, S

    2001-01-01

    The results of epidemiological studies in various countries show that radon and its progeny cause carcinogenic effects in mine workers. It is therefore of paramount importance to monitor radon concentrations, and consequently determine radon dose rates, in coal mines for the protection of coal miners. A new calibration curve was obtained for radon concentration estimation using hybrid techniques. The calibration curve was generated from the 226Ra activity concentration measured by an HPGe detector-based gamma-ray spectrometer versus the alpha-track-density rate due to radon and its progeny on a CR-39 track detector. Using the slope of the experimentally determined curve, in units of becquerel per kilogram (Bq kg-1) per unit alpha-track-density rate (cm-2 h-1), radon concentrations (Bq m-3) were estimated for coal samples from various coal mines in two provinces of Pakistan, Punjab and Balochistan. Radon dose rates were then computed in the simulated environment of the coal mines. Results of these computations should be considered with the caveat that the method developed in this paper provides only a screening method to indicate the radon dose in coal mines. It has been shown that actual measurements of radon concentrations in the coal mines agree with the concentrations estimated using the hybrid-technique calibration curve.
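
    The calibration idea reduces to a straight-line fit, sketched below with invented numbers: regress the HPGe-measured 226Ra activity on the CR-39 alpha-track-density rate, then use the fitted relation to convert track-density rates from new coal samples into estimated activities. The data points and the inclusion of an intercept term are illustrative assumptions, not the paper's measurements.

    ```python
    # Minimal sketch of a hybrid calibration curve: linear fit of 226Ra activity
    # against alpha-track-density rate, then prediction for a new sample.
    import numpy as np

    # Calibration pairs: (track-density rate in cm^-2 h^-1, 226Ra activity in Bq/kg).
    # These values are invented placeholders.
    track_rate = np.array([0.5, 1.1, 2.0, 3.2, 4.1])
    ra_activity = np.array([12.0, 26.0, 47.0, 75.0, 96.0])

    slope, intercept = np.polyfit(track_rate, ra_activity, 1)   # Bq/kg per (cm^-2 h^-1)

    def estimate_activity(new_track_rate):
        """Estimate activity (Bq/kg) of a sample from its measured track-density rate."""
        return slope * new_track_rate + intercept

    print(estimate_activity(2.5))
    ```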

  18. Technique for estimating the 2- to 500-year flood discharges on unregulated streams in rural Missouri

    USGS Publications Warehouse

    Alexander, Terry W.; Wilson, Gary L.

    1995-01-01

    A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from streamflow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile, respectively. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
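
    Regional regression equations of this kind typically take a power-law form such as Q_T = a·A^b·S^c. The sketch below fits such a relation by ordinary least squares in log space with invented data; the report itself uses generalized least squares, whose weighting is omitted here.

    ```python
    # Sketch of fitting a regional power-law flood relation Q_T = a * A^b * S^c
    # by ordinary least squares in log space (GLS weighting omitted; data invented).
    import numpy as np

    area = np.array([1.2, 15.0, 88.0, 340.0, 1200.0])            # drainage area, mi^2
    slope = np.array([95.0, 40.0, 18.0, 7.5, 2.9])               # main-channel slope, ft/mi
    q100 = np.array([310.0, 2100.0, 7600.0, 19000.0, 44000.0])   # 100-yr discharge, ft^3/s

    X = np.column_stack([np.ones(len(area)), np.log10(area), np.log10(slope)])
    coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
    log_a, b, c = coef

    def q100_estimate(area_mi2, slope_ft_per_mi):
        """Estimated 100-year discharge at an ungaged site from the fitted relation."""
        return 10 ** (log_a + b * np.log10(area_mi2) + c * np.log10(slope_ft_per_mi))
    ```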

  19. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    SciTech Connect

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-03

    The present study investigated the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were determined using freshly prepared stomachs ligatured at the tip of a burette, with the maximum amount of distilled water held by the stomach measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089 W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes fed to satiation were radiographed at different times since feeding. All experimental fish were fed to satiation with wet shrimp injected with radio-opaque barium sulphate (BaSO4) paste in proportion to body weight. The BaSO4 proved suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of the scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that a fish of 200 g fed a maximum satiation meal (circa 11 g) completely emptied its stomach within 30-36 h. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.

  20. Multi-sensor fusion techniques for state estimation of micro air vehicles

    NASA Astrophysics Data System (ADS)

    Donavanik, Daniel; Hardt-Stremayr, Alexander; Gremillion, Gregory; Weiss, Stephan; Nothwang, William

    2016-05-01

    Aggressive flight of micro air vehicles (MAVs) in unstructured, GPS-denied environments poses unique challenges for estimation of vehicle pose and velocity due to the noise, delay, and drift in individual sensor measurements. Maneuvering flight at speeds in excess of 5 m/s poses additional challenges even for active range sensors; in the case of LIDAR, an assembled scan of the vehicle's environment will in most cases be obsolete by the time it is processed. Multi-sensor fusion techniques, which combine inertial measurements with passive vision techniques and/or LIDAR, have achieved breakthroughs in the ability to maintain accurate state estimates without the use of external positioning sensors. In this paper, we survey algorithmic approaches to exploiting sensors with a wide range of nonlinear dynamics using filter- and bundle-adjustment-based approaches for state estimation and optimal control. From this foundation, we propose a biologically inspired framework for incorporating the human operator in the loop as a privileged sensor in a combined human/autonomy paradigm.
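
    As a minimal illustration of filter-based fusion (not any specific algorithm surveyed in the paper), the sketch below uses a complementary-filter-style update: integrate a fast but drifting rate measurement and nudge the estimate toward sparse, noisier position fixes. The rates, gain, and data are invented.

    ```python
    # Minimal complementary-filter sketch of sensor fusion: propagate with a
    # high-rate, biased velocity sensor and correct with sparse position fixes.
    import numpy as np

    def fuse(rate_meas, pos_meas, dt, gain=0.05):
        """rate_meas: high-rate velocity; pos_meas: position fixes (NaN when absent)."""
        pos_est = np.zeros(len(rate_meas))
        x = 0.0
        for k in range(len(rate_meas)):
            x += rate_meas[k] * dt                 # propagate with the rate sensor
            if not np.isnan(pos_meas[k]):          # correct toward the position fix
                x += gain * (pos_meas[k] - x)
            pos_est[k] = x
        return pos_est

    # Toy usage: noisy, biased velocity at 100 Hz with a position fix every 0.5 s.
    rng = np.random.default_rng(4)
    true_v = np.ones(500)
    vel = true_v + rng.normal(0, 0.2, 500) + 0.05
    pos = np.full(500, np.nan)
    pos[::50] = (np.cumsum(true_v) * 0.01)[::50]
    est = fuse(vel, pos, dt=0.01)
    ```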

  1. Recent advances in molecular techniques to study microbial communities in food-associated matrices and processes.

    PubMed

    Justé, A; Thomma, B P H J; Lievens, B

    2008-09-01

    In the last two decades major changes have occurred in how microbial ecologists study microbial communities. Limitations associated with traditional culture-based methods have pushed for the development of culture-independent techniques, which are primarily based on the analysis of nucleic acids. These methods are now increasingly applied in food microbiology as well. This review presents an overview of current community profiling techniques with their (potential) applications in food and food-related ecosystems. We critically assessed both the power and limitations of these techniques and present recent advances in the field of food microbiology attained by their application. It is unlikely that a single approach will be universally applicable for analyzing microbial communities in unknown matrices. However, when screening samples for well-defined species or functions, techniques such as DNA arrays and real-time PCR have the potential to overtake current culture-based methods. Most importantly, molecular methods will allow us to surpass our current culturing limitations, thus revealing the extent and importance of the 'non-culturable' microbial flora that occurs in food matrices and production.

  2. Advancement of an Infra-Red Technique for Whole-Field Concentration Measurements in Fluidized Beds

    PubMed Central

    Medrano, Jose A.; de Nooijer, Niek C. A.; Gallucci, Fausto; van Sint Annaland, Martin

    2016-01-01

    For a better understanding and description of the mass transport phenomena in dense multiphase gas-solids systems such as fluidized bed reactors, detailed and quantitative experimental data on the concentration profiles are required, which demands advanced non-invasive concentration monitoring techniques with high spatial and temporal resolution. A novel technique based on the selective detection of a gas component in a gas mixture using its infrared properties has been further developed. The first-stage development was carried out using a very small sapphire reactor and CO2 as tracer gas. Although the measuring principle was demonstrated, real application was hindered by the small reactor dimensions, which were imposed by the high cost and difficult handling of large sapphire plates. In this study, a new system has been developed that allows working at much larger scales and yet with higher resolution. In the new system, propane is used as tracer gas and quartz as reactor material. A thorough optimization and calibration of the technique is presented and subsequently applied for whole-field measurements with high temporal resolution. The developed technique allows the use of a relatively inexpensive configuration for the measurement of detailed concentration fields and can be applied to a large variety of important chemical engineering topics. PMID:26927127

  3. Biotechnology Apprenticeship for Secondary-Level Students: Teaching Advanced Cell Culture Techniques for Research

    PubMed Central

    Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro

    2002-01-01

    The purpose of this article is to discuss small-group apprenticeships (SGAs) as a method to instruct cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments using both flow cytometry and laser scanning cytometry during the 1-month summer apprenticeship. In addition to effectively and efficiently teaching cell biology laboratory techniques, this course design provided an opportunity for research training, career exploration, and mentoring. Students participated in active research projects, working with a skilled interdisciplinary team of researchers in a large research institution with access to state-of-the-art instrumentation. The instructors, composed of graduate students, laboratory managers, and principal investigators, worked well together to present a real and worthwhile research experience. The students enjoyed learning cell culture techniques while contributing to active research projects. The institution's researchers were equally enthusiastic to instruct and serve as mentors. In this article, we clarify and illuminate the value of small-group laboratory apprenticeships to the institution and the students by presenting the results and experiences of seven middle and high school participants and their instructors. PMID:12587031

  4. Where in the Cell Are You? Probing HIV-1 Host Interactions through Advanced Imaging Techniques

    PubMed Central

    Dirk, Brennan S.; Van Nynatten, Logan R.; Dikeakos, Jimmy D.

    2016-01-01

    Viruses must continuously evolve to hijack the host cell machinery in order to successfully replicate and orchestrate key interactions that support their persistence. The type-1 human immunodeficiency virus (HIV-1) is a prime example of viral persistence within the host, having plagued the human population for decades. In recent years, advances in cellular imaging and molecular biology have aided the elucidation of key steps mediating the HIV-1 lifecycle and viral pathogenesis. Super-resolution imaging techniques such as stimulated emission depletion (STED) and photoactivated localization microscopy (PALM) have been instrumental in studying viral assembly and release through both cell–cell transmission and cell–free viral transmission. Moreover, powerful methods such as Förster resonance energy transfer (FRET) and bimolecular fluorescence complementation (BiFC) have shed light on the protein-protein interactions HIV-1 engages within the host to hijack the cellular machinery. Specific advancements in live cell imaging in combination with the use of multicolor viral particles have become indispensable to unravelling the dynamic nature of these virus-host interactions. In the current review, we outline novel imaging methods that have been used to study the HIV-1 lifecycle and highlight advancements in the cell culture models developed to enhance our understanding of the HIV-1 lifecycle. PMID:27775563

  5. Management of metastatic malignant thymoma with advanced radiation and chemotherapy techniques: report of a rare case.

    PubMed

    D'Andrea, Mark A; Reddy, G Kesava

    2015-02-25

    Malignant thymomas are rare epithelial neoplasms of the anterior superior mediastinum that are typically invasive in nature and have a higher risk of relapse that may ultimately lead to death. Here we report a case of an advanced malignant thymoma that was successfully treated with neoadjuvant chemotherapy followed by surgical resection and subsequently with advanced and novel radiation therapy techniques. A 65-year-old male was diagnosed with a stage IV malignant thymoma with multiple metastatic lesions involving the left peripheral lung and pericardium. Initial neoadjuvant chemotherapy with a cisplatin-based regimen resulted in a partial response allowing the inoperable tumor to become operable. Following surgical resection of the residual disease, the tumor recurred within a year. The patient then underwent a course of targeted three-dimensional intensity modulated radiation therapy (IMRT) and image-guided radiation therapy (IGRT). Five years after radiation therapy, the localized soft tissue thickening at the left upper lung anterior pleural space had resolved. Seven years after radiation therapy the tumor mass had completely resolved. No recurrences were seen and the patient is well even 8 years after IMRT/IGRT with a favorable outcome. Chemotherapy with targeted three-dimensional IMRT/IGRT should be considered the primary modality for the management of advanced malignant thymoma patients.

  6. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

    The linear algebraic concept of subspace plays a significant role in recent spectrum estimation techniques. In this article, the authors utilize the noise subspace concept to find hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising. Several DNA feature extraction techniques spanning various fields have emerged in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not share this feature. One of the most important subspace-based spectrum analysis techniques is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions while completely eliminating background noise. The proposed method was compared with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, on several genes from various organisms, and the results show that the proposed method offers a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
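
    For context, the baseline that the least-norm estimator is compared against works roughly as sketched below: map the sequence onto four binary indicator signals and evaluate the spectral power at frequency 1/3 over a sliding window, so that period-3 peaks flag candidate coding regions. The window length and step are assumptions; this is the periodogram-style comparison method, not the least-norm estimator itself.

    ```python
    # Baseline period-3 detector (periodogram/SDFT style): power at f = 1/3 of the
    # four binary base-indicator signals over a sliding window.
    import numpy as np

    def period3_power(seq, window=351, step=3):
        """Return window-center positions and period-3 spectral power for a DNA string."""
        seq = seq.upper()
        indicators = {b: np.array([1.0 if ch == b else 0.0 for ch in seq]) for b in "ACGT"}
        w = np.exp(-2j * np.pi * np.arange(window) / 3.0)   # DFT kernel at f = 1/3
        positions, power = [], []
        for start in range(0, len(seq) - window + 1, step):
            p = sum(abs(np.dot(indicators[b][start:start + window], w)) ** 2 for b in "ACGT")
            positions.append(start + window // 2)
            power.append(p)
        return np.array(positions), np.array(power)

    # pos, p3 = period3_power(my_dna_string)   # peaks in p3 suggest coding regions
    ```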

  7. Advanced MRI Techniques in the Evaluation of Complex Cystic Breast Lesions

    PubMed Central

    Popli, Manju Bala; Gupta, Pranav; Arse, Devraj; Kumar, Pawan; Kaur, Prabhjot

    2016-01-01

    OBJECTIVE The purpose of this research work was to evaluate complex cystic breast lesions by advanced MRI techniques and correlating imaging with histologic findings. METHODS AND MATERIALS In a cross-sectional design from September 2013 to August 2015, 50 patients having sonographically detected complex cystic lesions of the breast were included in the study. Morphological characteristics were assessed. Dynamic contrast-enhanced MRI along with diffusion-weighted imaging and MR spectroscopy were used to further classify lesions into benign and malignant categories. All the findings were correlated with histopathology. RESULTS Of the 50 complex cystic lesions, 32 proved to be benign and 18 were malignant on histopathology. MRI features of heterogeneous enhancement on CE-MRI (13/18), Type III kinetic curve (13/18), reduced apparent diffusion coefficient (18/18), and tall choline peak (17/18) were strong predictors of malignancy. Thirteen of the 18 lesions showed a combination of Type III curve, reduced apparent diffusion coefficient value, and tall choline peak. CONCLUSIONS Advanced MRI techniques like dynamic imaging, diffusion-weighted sequences, and MR spectroscopy provide a high level of diagnostic confidence in the characterization of complex cystic breast lesion, thus allowing early diagnosis and significantly reducing patient morbidity and mortality. From our study, lesions showing heterogeneous contrast enhancement, Type III kinetic curve, diffusion restriction, and tall choline peak were significantly associated with malignant complex cystic lesions of the breast. PMID:27330299

  8. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    PubMed Central

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed. PMID:25610632

  9. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder.

    PubMed

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C; Tenembaum, Silvia N; Banwell, Brenda; Greenberg, Benjamin M; Bennett, Jeffrey L; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T; Cabre, Philippe; Marignier, Romain; Tedder, Thomas; van Pelt, Danielle; Broadley, Simon; Chitnis, Tanuja; Wingerchuk, Dean; Pandit, Lekha; Leite, Maria Isabel; Apiwattanakul, Metha; Kleiter, Ingo; Prayoonwiwat, Naraporn; Han, May; Hellwig, Kerstin; van Herle, Katja; John, Gareth; Hooper, D Craig; Nakashima, Ichiro; Sato, Douglas; Yeaman, Michael R; Waubant, Emmanuelle; Zamvil, Scott; Stüve, Olaf; Aktas, Orhan; Smith, Terry J; Jacob, Anu; O'Connor, Kevin

    2015-07-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This literature review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease.

  10. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder

    PubMed Central

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A.; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C.; Tenembaum, Silvia N.; Banwell, Brenda; Greenberg, Benjamin M.; Bennett, Jeffrey L.; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T.

    2016-01-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This literature review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  11. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    DOE PAGES

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  12. Advanced grazing-incidence techniques for modern soft-matter materials analysis

    SciTech Connect

    Hexemer, Alexander; Müller-Buschbaum, Peter

    2015-01-01

    The complex nano-morphology of modern soft-matter materials is successfully probed with advanced grazing-incidence techniques. Based on grazing-incidence small- and wide-angle X-ray and neutron scattering (GISAXS, GIWAXS, GISANS and GIWANS), new possibilities arise which are discussed with selected examples. Due to instrumental progress, highly interesting possibilities for local structure analysis in this material class arise from the use of micro- and nanometer-sized X-ray beams in micro- or nanofocused GISAXS and GIWAXS experiments. The feasibility of very short data acquisition times down to milliseconds creates exciting possibilities for in situ and in operando GISAXS and GIWAXS studies. Tuning the energy of GISAXS and GIWAXS in the soft X-ray regime and in time-of-flight GISANS allows the tailoring of contrast conditions and thereby the probing of more complex morphologies. In addition, recent progress in software packages, useful for data analysis for advanced grazing-incidence techniques, is discussed.

  13. Use of Advanced Magnetic Resonance Imaging Techniques in Neuromyelitis Optica Spectrum Disorder.

    PubMed

    Kremer, Stephane; Renard, Felix; Achard, Sophie; Lana-Peixoto, Marco A; Palace, Jacqueline; Asgari, Nasrin; Klawiter, Eric C; Tenembaum, Silvia N; Banwell, Brenda; Greenberg, Benjamin M; Bennett, Jeffrey L; Levy, Michael; Villoslada, Pablo; Saiz, Albert; Fujihara, Kazuo; Chan, Koon Ho; Schippling, Sven; Paul, Friedemann; Kim, Ho Jin; de Seze, Jerome; Wuerfel, Jens T; Cabre, Philippe; Marignier, Romain; Tedder, Thomas; van Pelt, Danielle; Broadley, Simon; Chitnis, Tanuja; Wingerchuk, Dean; Pandit, Lekha; Leite, Maria Isabel; Apiwattanakul, Metha; Kleiter, Ingo; Prayoonwiwat, Naraporn; Han, May; Hellwig, Kerstin; van Herle, Katja; John, Gareth; Hooper, D Craig; Nakashima, Ichiro; Sato, Douglas; Yeaman, Michael R; Waubant, Emmanuelle; Zamvil, Scott; Stüve, Olaf; Aktas, Orhan; Smith, Terry J; Jacob, Anu; O'Connor, Kevin

    2015-07-01

    Brain parenchymal lesions are frequently observed on conventional magnetic resonance imaging (MRI) scans of patients with neuromyelitis optica (NMO) spectrum disorder, but the specific morphological and temporal patterns distinguishing them unequivocally from lesions caused by other disorders have not been identified. This literature review summarizes the literature on advanced quantitative imaging measures reported for patients with NMO spectrum disorder, including proton MR spectroscopy, diffusion tensor imaging, magnetization transfer imaging, quantitative MR volumetry, and ultrahigh-field strength MRI. It was undertaken to consider the advanced MRI techniques used for patients with NMO by different specialists in the field. Although quantitative measures such as proton MR spectroscopy or magnetization transfer imaging have not reproducibly revealed diffuse brain injury, preliminary data from diffusion-weighted imaging and brain tissue volumetry indicate greater white matter than gray matter degradation. These findings could be confirmed by ultrahigh-field MRI. The use of nonconventional MRI techniques may further our understanding of the pathogenic processes in NMO spectrum disorders and may help us identify the distinct radiographic features corresponding to specific phenotypic manifestations of this disease. PMID:26010909

  14. Development of Advanced Nuclide Separation and Recovery Methods using Ion-Exchange Techniques in Nuclear Backend

    NASA Astrophysics Data System (ADS)

    Miura, Hitoshi

    The development of compact separation and recovery methods using selective ion-exchange techniques is very important for the reprocessing and high-level liquid wastes (HLLWs) treatment in the nuclear backend field. The selective nuclide separation techniques are effective for the volume reduction of wastes and the utilization of valuable nuclides, and expected for the construction of advanced nuclear fuel cycle system and the rationalization of waste treatment. In order to accomplish the selective nuclide separation, the design and synthesis of novel adsorbents are essential for the development of compact and precise separation processes. The present paper deals with the preparation of highly functional and selective hybrid microcapsules enclosing nano-adsorbents in the alginate gel polymer matrices by sol-gel methods, their characterization and the clarification of selective adsorption properties by batch and column methods. The selective separation of Cs, Pd and Re in real HLLW was further accomplished by using novel microcapsules, and an advanced nuclide separation system was proposed by the combination of selective processes using microcapsules.

  15. Advanced intensity-modulation continuous-wave lidar techniques for ASCENDS CO2 column measurements

    NASA Astrophysics Data System (ADS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. W.; Obland, Michael D.; Meadows, Byron

    2015-10-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation; this extension is shown to be useful for making tree canopy measurements.
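
    The core ranging idea can be sketched in a few lines: cross-correlate the received intensity signal against the transmitted pseudorandom BPSK code; the lag of the correlation peak gives the round-trip delay. Code length, amplitude, and noise level below are invented, and the hyperfine interpolation and Richardson-Lucy steps described above are not reproduced.

    ```python
    # Sketch of BPSK correlation ranging: the lag of the cross-correlation peak
    # between the received signal and the transmitted code gives the delay.
    import numpy as np

    rng = np.random.default_rng(2)
    code = rng.choice([-1.0, 1.0], size=4096)            # pseudorandom BPSK chip sequence

    def simulate_return(code, delay, snr=0.5):
        """Delayed, attenuated echo of the code plus noise (toy channel)."""
        echo = np.roll(code, delay)
        return 0.3 * echo + rng.normal(0, 1.0 / snr, size=code.size)

    def estimate_delay(received, code):
        """Circular cross-correlation via FFT; the peak lag is the delay estimate."""
        corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
        return int(np.argmax(corr))

    received = simulate_return(code, delay=137)
    print(estimate_delay(received, code))                # ~137 chips of round-trip delay
    ```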

  16. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for Column CO2 Measurements

    NASA Astrophysics Data System (ADS)

    Campbell, J. F.; Lin, B.; Nehrir, A. R.; Obland, M. D.; Liu, Z.; Browell, E. V.; Chen, S.; Kooi, S. A.; Fan, T. F.

    2015-12-01

    Global and regional atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission and Atmospheric Carbon and Transport (ACT) - America airborne investigation are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are being investigated as a means of facilitating CO2 measurements from space and airborne platforms to meet the mission science measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud returns. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of intervening optically thin clouds, thereby minimizing bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the Earth's surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques and provides very high (sub-meter) range resolution. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These techniques are used in a new data processing architecture to support the ASCENDS CarbonHawk Experiment Simulator (ACES) and ACT-America programs.

  17. Advanced Intensity-Modulation Continuous-Wave Lidar Techniques for ASCENDS O2 Column Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Meadows, Byron

    2015-01-01

    Global atmospheric carbon dioxide (CO2) measurements for the NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) space mission are critical for improving our understanding of global CO2 sources and sinks. Advanced Intensity-Modulated Continuous-Wave (IM-CW) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS measurement requirements. In recent numerical, laboratory and flight experiments we have successfully used the Binary Phase Shift Keying (BPSK) modulation technique to uniquely discriminate surface lidar returns from intermediate aerosol and cloud contamination. We demonstrate the utility of BPSK to eliminate sidelobes in the range profile as a means of making Integrated Path Differential Absorption (IPDA) column CO2 measurements in the presence of optically thin clouds, thereby eliminating the need to correct for sidelobe bias errors caused by the clouds. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate cloud layers, which is a requirement for the inversion of column CO2 number density measurements to column CO2 mixing ratios, has been demonstrated using new hyperfine interpolation techniques that take advantage of the periodicity of the modulation waveforms. This approach works well for both BPSK and linear swept-frequency modulation techniques. The BPSK technique under investigation has excellent auto-correlation properties while possessing a finite bandwidth. A comparison of BPSK and linear swept-frequency is also discussed in this paper. These results are extended to include Richardson-Lucy deconvolution techniques to extend the resolution of the lidar beyond the limit implied by the bandwidth of the modulation; this extension is shown to be useful for making tree canopy measurements.

  18. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    NASA Astrophysics Data System (ADS)

    Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru

    2015-07-01

    High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.
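
    A minimal one-dimensional sketch of speckle tracking by normalized cross-correlation is given below; the signals and window sizes are synthetic, and the paper's compensation step (which uses the correlation decrease within the reference frame) is only indicated in a comment.

```python
import numpy as np

def track_displacement(ref, cmp_, start, window, search):
    """Estimate integer displacement of a window by normalized cross-correlation."""
    seg = ref[start:start + window]
    best_lag, best_rho = 0, -np.inf
    for lag in range(-search, search + 1):
        cand = cmp_[start + lag:start + lag + window]
        rho = np.corrcoef(seg, cand)[0, 1]
        if rho > best_rho:
            best_lag, best_rho = lag, rho
    # The proposed method would further use the decorrelation (1 - best_rho)
    # observed within the reference frame to compensate for out-of-plane motion.
    return best_lag, best_rho

rng = np.random.default_rng(1)
ref = rng.standard_normal(400)                    # synthetic speckle signal
true_shift = 7
cmp_ = np.roll(ref, true_shift) + 0.1 * rng.standard_normal(400)
print(track_displacement(ref, cmp_, start=100, window=64, search=20))
```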

  19. Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Xu, Zongxue; Singh, Vijay P.

    2016-09-01

    The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there seems to be no effective approach to monitor or estimate Sr at the catchment scale presently. To fill the gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors after the evaluation of plausibility of Sr derived from the improved MCT. Results can be showed as: (i) Sr estimates of different catchments varied greatly from ∼10 mm to ∼200 mm with the changes of climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for the Sr estimation in different climatic regions of China, and incorporation of more catchments into Sr comparisons can further improve our knowledge on the variability of Sr. (iii) Variation of Sr values is an integrated consequence of variations in rainfall, snowmelt water and evapotranspiration. Sr values are most sensitive to variations in evapotranspiration of ecosystems. Besides, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in its influencing factors.

  20. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.
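
    The near-linear speedup reflects the fact that individual contingencies are independent solves that can be distributed across cores. A toy multiprocessing sketch is shown below; the per-contingency solve is a placeholder linear system, not the paper's solver.

```python
import multiprocessing as mp
import numpy as np

def solve_contingency(outage_id):
    """Placeholder for one post-contingency network solve."""
    rng = np.random.default_rng(outage_id)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
    b = rng.standard_normal(200)
    x = np.linalg.solve(A, b)                 # stand-in for the real power-flow solve
    return outage_id, float(np.abs(x).max())  # e.g., worst post-contingency quantity

if __name__ == "__main__":
    contingencies = range(100)                # each N-1 outage is an independent task
    with mp.Pool() as pool:
        results = pool.map(solve_contingency, contingencies)
    worst = max(results, key=lambda r: r[1])
    print("worst contingency:", worst)
```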

  1. A technique for estimating maximum harvesting effort in a stochastic fishery model.

    PubMed

    Sarkar, Ram Rup; Chattopadhyay, J

    2003-06-01

    Exploitation of biological resources and the harvest of population species are commonly practiced in fisheries, forestry and wild life management. Estimation of maximum harvesting effort has a great impact on the economics of fisheries and other bio-resources. The present paper deals with the problem of a bioeconomic fishery model under environmental variability. A technique for finding the maximum harvesting effort in fluctuating environment has been developed in a two-species competitive system, which shows that under realistic environmental variability the maximum harvesting effort is less than what is estimated in the deterministic model. This method also enables us to find out the safe regions in the parametric space for which the chance of extinction of the species is minimized. A real life fishery problem has been considered to obtain the inaccessible parameters of the system in a systematic way. Such studies may help resource managers to get an idea for controlling the system.

  2. Use of LANDSAT 2 data technique to estimate silverleaf sunflower infestation

    NASA Technical Reports Server (NTRS)

    Richardson, A. J.; Escobar, D. E.; Gausman, H. W.; Everitt, J. H. (Principal Investigator)

    1982-01-01

    The feasibility of using Earth Resources Technology Satellite (LANDSAT-2) multispectral scanner (MSS) data to distinguish silverleaf sunflowers (Helianthus argophyllus Torr. and Gray) from other plant species and to estimate the percentage of hectarage infested was tested. Sunflowers gave high mean digital counts in all four LANDSAT MSS bands that were manifested as a pinkish image response on the LANDSAT color composite imagery. Photo- and LANDSAT-estimated hectare percentages for silverleaf sunflower within a 23,467 ha study area were 9.1 and 9.5%, respectively. The geographic occurrence of sunflower areas on the line-printer recognition map was in good agreement with their known aerial photographic locations.

  3. A technique for estimating time of concentration and storage coefficient values for Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.; Garklavs, George; Oberg, Kevin A.

    1982-01-01

    Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)e to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
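
    A back-of-the-envelope application of the two-step technique for a hypothetical basin might look like the following; the stream length, slope, regional ratio, and units (miles, feet per mile, hours) are illustration assumptions, not values from the report.

```python
# Hypothetical basin; units assumed to be miles, feet per mile, and hours.
L = 12.0          # main channel length
S = 8.0           # main channel slope
ratio = 0.55      # assumed regional value of R / (TC + R)

tc_plus_r = 35.2 * L**0.39 * S**(-0.78)   # (TC + R)e from the regression relation
R_e = ratio * tc_plus_r                   # storage coefficient estimate
TC_e = tc_plus_r - R_e                    # time-of-concentration estimate
print(f"(TC+R)e = {tc_plus_r:.1f} h, TCe = {TC_e:.1f} h, Re = {R_e:.1f} h")
```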

  4. AN EVALUATION OF TWO GROUND-BASED CROWN CLOSURE ESTIMATION TECHNIQUES COMPARED TO CROWN CLOSURE ESTIMATES DERIVED FROM HIGH RESOLUTION IMAGERY

    EPA Science Inventory

    Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...

  5. AN EVALUATION OF TWO GROUND-BASED CROWN CLOSURE ESTIMATION TECHNIQUES COMPARED TO CROWN CLOSURE ESTIMATES DERIVED FROM HIGH RESOLUTION IMAGERY

    EPA Science Inventory

    Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...

  6. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    NASA Astrophysics Data System (ADS)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), held this year under the motto "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  7. A technique for estimating seed production of common moist soil plants

    USGS Publications Warehouse

    Laubhan, Murray K.

    1992-01-01

    Seeds of native herbaceous vegetation adapted to germination in hydric soils (i.e., moist-soil plants) provide waterfowl with nutritional resources including essential amino acids, vitamins, and minerals that occur only in small amounts or are absent in other foods. These elements are essential for waterfowl to successfully complete aspects of the annual cycle such as molt and reproduction. Moist-soil vegetation also has the advantages of consistent production of foods across years with varying water availability, low management costs, high tolerance to diverse environmental conditions, and low deterioration rates of seeds after flooding. The amount of seed produced differs among plant species and varies annually depending on environmental conditions and management practices. Further, many moist-soil impoundments contain diverse vegetation, and seed production by a particular plant species usually is not uniform across an entire unit. Consequently, estimating total seed production within an impoundment is extremely difficult. The chemical composition of seeds also varies among plant species. For example, beggartick seeds contain high amounts of protein but only an intermediate amount of minerals. In contrast, barnyardgrass is a good source of minerals but is low in protein. Because of these differences, it is necessary to know the amount of seed produced by each plant species if the nutritional resources provided in an impoundment are to be estimated. The following technique for estimating seed production takes into account the variation resulting from different environmental conditions and management practices as well as differences in the amount of seed produced by various plant species. The technique was developed to provide resource managers with the ability to make quick and reliable estimates of seed production. Although on-site information must be collected, the amount of field time required is small (i.e., about 1 min per sample); sampling normally is

  8. A technique for estimating complicated power spectra from time series with gaps. [Solar oscillations study

    SciTech Connect

    Brown, T.M.; Christensen-Dalsgaard, J. Aarhus Universitet )

    1990-02-01

    Fahlman and Ulrych (1982) describe a method for estimating the power and phase spectra of gapped time series, using a maximum-entropy reconstruction of the data in the gaps. It has proved difficult to apply this technique to solar oscillations data, because of the great complexity of the solar oscillations spectrum. A means for avoiding this difficulty is described, and the results of a series of blind tests of the modified technique are reported. The main results of these tests are: (1) gap filling gives good results, provided that the signal-to-noise ratio in the original data is large enough, and provided the gaps are short enough. For low-noise data, the duty cycle of the observations should not be less than about 50 percent. (2) the frequencies and widths of narrow spectrum features are well reproduced by the technique. (3) The technique systematically reduces the apparent amplitudes of small features in the spectrum relative to large ones. 14 refs.

  9. Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques

    NASA Astrophysics Data System (ADS)

    Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.

    2012-04-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by NFC over the Nile Basin. The algorithm uses a set of rainfall predictors that come from multi-spectral infrared cloud top observations and self-calibrates them to a set of predictands that come from the more accurate, but less frequent, Microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have become recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients with newly coming MW rain rates, and calibration using static
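
    Conceptually, the two calibration steps can be sketched with generic scikit-learn components; the predictor and predictand arrays below are random placeholders, and the exact channel set, discriminant, and stepwise selection used in SCaMPR are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder predictors (e.g., IR brightness temperatures and differences)
# and MW rain-rate predictands; the real calibration uses SEVIRI and MW data.
X = rng.standard_normal((5000, 4))
rain_rate = np.clip(2.0 * X[:, 0] - X[:, 1] + rng.standard_normal(5000), 0, None)
is_raining = (rain_rate > 0.5).astype(int)

# Step 1: rain / no-rain separation by discriminant analysis.
clf = LinearDiscriminantAnalysis().fit(X, is_raining)

# Step 2: rain-rate estimation by linear regression on the raining pixels
# (a stand-in for the stepwise regression used operationally).
reg = LinearRegression().fit(X[is_raining == 1], rain_rate[is_raining == 1])

X_new = rng.standard_normal((10, 4))
estimate = np.where(clf.predict(X_new) == 1, reg.predict(X_new), 0.0)
print(np.round(estimate, 2))
```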

  10. Spectral Estimation Techniques for time series with Long Gaps: Applications to Paleomagnetism and Geomagnetic Depth Sounding

    NASA Astrophysics Data System (ADS)

    Smith-Boughner, Lindsay

    Many Earth systems cannot be studied directly. One cannot measure the velocities of convecting fluid in the Earth's core but can measure the magnetic field generated by these motions on the surface. Examining how the magnetic field changes over long periods of time, using power spectral density estimation, provides insight into the dynamics driving the system. The changes in the magnetic field can also be used to study Earth properties - variations in magnetic fields outside of Earth like the ring-current induce currents to flow in the Earth, generating magnetic fields. Estimating the transfer function between the external changes and the induced response characterizes the electromagnetic response of the Earth. From this response inferences can be made about the electrical conductivity of the Earth. However, these types of time series, and many others, have long breaks in the record with no samples available, which limits the analysis. Standard methods require interpolation or section averaging, with associated problems of introducing bias or reducing the frequency resolution. Extending the methods of Fodor and Stark (2000), who adapt a set of orthogonal multi-tapers to compensate for breaks in sampling, an algorithm and software package for applying these techniques is developed. Methods of empirically estimating the average transfer function of a set of tapers and confidence intervals are also tested. These methods are extended for cross-spectral, coherence and transfer function estimation in the presence of noise. With these methods, new analysis of a highly interrupted ocean sediment core from the Oligocene (Hartl et al., 1993) reveals a quasi-periodic signal in the calibrated paleointensity time series at 2.5 cpMy. The power in the magnetic field during this period appears to be dominated by reversal rate processes with less overall power than the early Oligocene. Previous analysis of the early Oligocene by Constable et al. (1998) detected a signal near 8 cp
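
    A standard gap-free multitaper estimate, which the gap-adapted tapers of Fodor and Stark generalize, can be sketched as follows; the test series and taper parameters are arbitrary and the spectrum is left unscaled.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(0)
n, dt = 2048, 1.0
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(n)   # toy signal + noise

# K orthogonal Slepian (DPSS) tapers with time-bandwidth product NW.
NW, K = 4, 7
tapers = dpss(n, NW, Kmax=K)

# Average the K tapered periodograms (eigenspectra); absolute scaling omitted.
freqs = np.fft.rfftfreq(n, dt)
eigenspectra = [np.abs(np.fft.rfft(taper * x)) ** 2 for taper in tapers]
psd = np.mean(eigenspectra, axis=0)
print("peak frequency:", freqs[1:][np.argmax(psd[1:])])
```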

  11. Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Roberts, J.W.

    1990-01-01

    Multiple-regression equations are presented for estimating flood-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at ungaged sites on rural, unregulated streams in Ohio. The average standard errors of prediction for the equations range from 33.4% to 41.4%. Peak discharge estimates determined by log-Pearson Type III analysis using data collected through the 1987 water year are reported for 275 streamflow-gaging stations. Ordinary least-squares multiple-regression techniques were used to divide the State into three regions and to identify a set of basin characteristics that help explain station-to-station variation in the log-Pearson estimates. Contributing drainage area, main-channel slope, and storage area were identified as suitable explanatory variables. Generalized least-squares procedures, which include historical flow data and account for differences in the variance of flows at different gaging stations, spatial correlation among gaging station records, and variable lengths of station record, were used to estimate the regression parameters. Weighted peak-discharge estimates computed as a function of the log-Pearson Type III and regression estimates are reported for each station. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site located on the same stream. Limitations and shortcomings cited in an earlier report on the magnitude and frequency of floods in Ohio are addressed in this study. Geographic bias is no longer evident for the Maumee River basin of northwestern Ohio. No bias is found to be associated with the forested-area characteristic for the range used in the regression analysis (0.0 to 99.0%), nor is this characteristic significant in explaining peak discharges. Surface-mined area likewise is not significant in explaining peak discharges, and the regression equations are not biased when applied to basins having approximately 30% or less
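
    Weighting a station's log-Pearson Type III estimate with its regression estimate is often done in inverse proportion to the variances of the two estimates; the schematic below uses invented discharges and variances and does not reproduce the report's exact weighting formula.

```python
import numpy as np

# Hypothetical 100-year peak-discharge estimates (cubic feet per second)
# for one gaging station, with assumed variances of the log estimates.
q_lp3, var_lp3 = 12000.0, 0.04     # log-Pearson Type III estimate
q_reg, var_reg = 9500.0, 0.09      # regional regression estimate

# Inverse-variance weighting performed in log space.
w = var_reg / (var_lp3 + var_reg)
q_weighted = 10 ** (w * np.log10(q_lp3) + (1 - w) * np.log10(q_reg))
print(f"weighted estimate: {q_weighted:.0f} cfs")
```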

  12. TRAC-PF1: an advanced best-estimate computer program for pressurized water reactor analysis

    SciTech Connect

    Liles, D.R.; Mahaffy, J.H.

    1984-02-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos National Laboratory to provide advanced best-estimate predictions of postulated accidents in light water reactors. The TRAC-PF1 program provides this capability for pressurized water reactors and for many thermal-hydraulic experimental facilities. The code features either a one-dimensional or a three-dimensional treatment of the pressure vessel and its associated internals; a two-phase, two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field; flow-regime-dependent constitutive equation treatment; optional reflood tracking capability for both bottom flood and falling-film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. This report describes the thermal-hydraulic models and the numerical solution methods used in the code. Detailed programming and user information also are provided.

  13. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project included 277 full-scale drop tests at three different quarries in Austria and recording key parameters of the rock fall trajectories. The tests involved a total of 277 boulders ranging from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. With two parameters selected, advanced calibration techniques, including the Markov chain Monte Carlo technique, maximum likelihood and root mean square error (RMSE) minimization, are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.

  14. Visualizing epigenetics: current advances and advantages in HDAC PET imaging techniques.

    PubMed

    Wang, C; Schroeder, F A; Hooker, J M

    2014-04-01

    Abnormal gene regulation as a consequence of flawed epigenetic mechanisms may be central to the initiation and persistence of many human diseases. However, the association of epigenetic dysfunction with disease and the development of therapeutic agents for treatment are slow. Developing new methodologies used to visualize chromatin-modifying enzymes and their function in the human brain would be valuable for the diagnosis of brain disorders and drug discovery. We provide an overview of current invasive and noninvasive techniques for measuring expression and functions of chromatin-modifying enzymes in the brain, emphasizing tools applicable to histone deacetylase (HDAC) enzymes as a leading example. The majority of current techniques are invasive and difficult to translate to what is happening within a human brain in vivo. However, recent progress in molecular imaging provides new, noninvasive ways to visualize epigenetics in the human brain. Neuroimaging tool development presents a unique set of challenges in order to identify and validate CNS radiotracers for HDACs and other histone-modifying enzymes. We summarize advances in the effort to image HDACs and HDAC inhibitory effects in the brain using positron emission tomography (PET) and highlight generalizable techniques that can be adapted to investigate other specific components of epigenetic machinery. Translational tools like neuroimaging by PET and magnetic resonance imaging provide the best way to link our current understanding of epigenetic changes with in vivo function in normal and diseased brains. These tools will be a critical addition to ex vivo methods to evaluate - and intervene - in CNS dysfunction.

  15. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    SciTech Connect

    Lebedev, G. V. Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-15

    According to the rules of nuclear safety, the measurements of the subcriticality of reactors should be carried out in the process of performing nuclear hazardous operations. An advanced technique of shooting source of neutrons is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. In order to realize this technique, it is recommended to enable a PNS with a frequency of 1–20 Hz. The PNS is stopped after achieving a steady-state (on average) number of neutrons in the reactor volume. The change in the number of neutrons in the reactor volume is measured in time with an interval of discreteness of ∼0.1 s. The results of these measurements with the application of a system of point-kinetics equations are used in order to calculate the sought subcriticality. The basic idea of the proposed technique used to measure the subcriticality is elaborated in a series of experiments on the Kvant assembly. The conditions which should be implemented in order to obtain a positive result of measurements are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.

  16. New advanced surface modification technique: titanium oxide ceramic surface implants: long-term clinical results

    NASA Astrophysics Data System (ADS)

    Szabo, Gyorgy; Kovacs, Lajos; Barabas, Jozsef; Nemeth, Zsolt; Maironna, Carlo

    2001-11-01

    The purpose of this paper is to discuss the background to advanced surface modification technologies and to present a new technique, involving the formation of a titanium oxide ceramic coating, with relatively long-term results of its clinical utilization. Three general techniques are used to modify surfaces: the addition or removal of material and the change of material already present. Surface properties can also be changed without the addition or removal of material, through laser or electron beam thermal treatment. The new technique outlined in this paper relates to the production of a corrosion-resistant, 2000-2500 Å thick ceramic oxide layer with a coherent crystalline structure on the surface of titanium implants. The layer is grown electrochemically from the bulk of the metal and is modified by heat treatment. Such oxide ceramic-coated implants have a number of advantageous properties relative to implants covered with various other coatings: a higher external hardness, a greater force of adherence between the titanium and the oxide ceramic coating, a virtually perfect insulation between the organism and the metal (no possibility of metal allergy), etc. The coated implants were subjected to various physical, chemical, electron-microscopic, etc. tests for a qualitative characterization. Finally, these implants (plates, screws for maxillofacial osteosynthesis and dental root implants) were applied in surgical practice for a period of 10 years. Tests and the experience acquired demonstrated the good properties of the titanium oxide ceramic-coated implants.

  17. Measurements of the subcriticality using advanced technique of shooting source during operation of NPP reactors

    NASA Astrophysics Data System (ADS)

    Lebedev, G. V.; Petrov, V. V.; Bobylyov, V. T.; Butov, R. I.; Zhukov, A. M.; Sladkov, A. A.

    2014-12-01

    According to the rules of nuclear safety, the measurements of the subcriticality of reactors should be carried out in the process of performing nuclear hazardous operations. An advanced technique of shooting source of neutrons is proposed to meet this requirement. As such a source, a pulsed neutron source (PNS) is used. In order to realize this technique, it is recommended to enable a PNS with a frequency of 1-20 Hz. The PNS is stopped after achieving a steady-state (on average) number of neutrons in the reactor volume. The change in the number of neutrons in the reactor volume is measured in time with an interval of discreteness of ˜0.1 s. The results of these measurements with the application of a system of point-kinetics equations are used in order to calculate the sought subcriticality. The basic idea of the proposed technique used to measure the subcriticality is elaborated in a series of experiments on the Kvant assembly. The conditions which should be implemented in order to obtain a positive result of measurements are formulated. A block diagram of the basic version of the experimental setup is presented, whose main element is a pulsed neutron generator.

  18. Recent Advances of Portable Multi-Sensor Technique of Volcanic Plume Measurement

    NASA Astrophysics Data System (ADS)

    Shinohara, H.

    2005-12-01

    A technique has been developed to estimate the chemical composition of volcanic gases based on the measurement of volcanic plumes at a distance from the source vent by the use of a portable multi-sensor system consisting of a humidity sensor, an SO2 electrochemical sensor and a CO2 IR analyzer (Shinohara, 2005). Since a volcanic plume is a mixture of the atmosphere and volcanic gases, the volcanic gas composition can be estimated by subtracting the atmospheric background from the plume data. This technique enabled us to estimate concentration ratios of major volcanic gas species (i.e., H2O, CO2 and SO2) without any complicated chemical analyses even for gases emitted from an inaccessible open vent. Since the portable multi-sensor system was light (~5 kg) and small enough to carry in a medium-size backpack, we could apply this technique to measure volcanic plumes at the summits of various volcanoes, including those that require a tough climb, such as Villarrica volcano, Chile. We further improved the sensor system and the measurement techniques, including application of the LI-840 IR H2O and CO2 analyzer, an H2S electrochemical sensor and an H2 semi-conductor sensor. Application of the new LI-840 analyzer enabled us to measure H2O concentration in the plume with a response time similar to that for CO2 concentration. The H2S electrochemical sensor of Komyo Co. has a chemical filter to remove SO2, achieving a low sensitivity (0.1%) to SO2, so we can measure high SO2/H2S ratios up to 1000. The semi-conductor sensor can measure H2 concentration in the range from the background level in the atmosphere (~0.5 ppm) to ~50 ppm. Response of the H2 sensor is slower (90% response time = ~90 sec) than that of the other sensors, in particular in the low concentration range, and the measurement is still semi-quantitative with errors up to ±50%. The H2/H2O ratios are quite variable in volcanic gases ranging from less than 10^-5 up to 10^-1, and the ratio is largely controlled by temperature and pressure condition of the
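
    The background-subtraction idea can be sketched numerically: the excess CO2 in the plume (measured CO2 minus the atmospheric background) is regressed against SO2, and the slope estimates the CO2/SO2 ratio of the volcanic gas. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plume traverse: SO2 (ppm) and CO2 (ppm) measured by the sensors.
so2 = np.abs(rng.normal(0.0, 0.5, 300))        # volcanic SO2 above a ~0 background
co2_background = 400.0                         # assumed atmospheric CO2 (ppm)
true_ratio = 12.0                              # assumed volcanic CO2/SO2 ratio
co2 = co2_background + true_ratio * so2 + rng.normal(0, 0.5, 300)

# Subtract the atmospheric background (estimated from plume-free sections in
# practice) and fit the CO2 excess against SO2; the slope is the gas ratio.
co2_excess = co2 - co2_background
slope, intercept = np.polyfit(so2, co2_excess, 1)
print(f"estimated CO2/SO2 ratio: {slope:.1f}")
```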

  19. Estimating increases in outpatient dialysis costs resulting from scientific and technological advancement.

    PubMed

    Ozminkowski, R J; Hassol, A; Firkusny, I; Noether, M; Miles, M A; Newmann, J; Sharda, C; Guterman, S; Schmitz, R

    1995-04-01

    The Medicare program's base payment rate for outpatient dialysis services has never been adjusted for the effects of inflation, productivity changes, or scientific and technological advancement on the costs of treating patients with end-stage renal disease. In recognition of this, Congress asked the Prospective Payment Assessment Commission to annually recommend an adjustment to Medicare's base payment rate to dialysis facilities. One component of this adjustment addresses the cost-increasing effects of technological change--the scientific and technological advances (S&TA) component. The S&TA component is intended to encourage dialysis facilities to adopt technologies that, when applied appropriately, enhance the quality of patient care, even though they may also increase costs. We found the appropriate increase to the composite payment rate for Medicare outpatient dialysis services in fiscal year 1995 to vary from 0.18% to 2.18%. These estimates depend on whether one accounts for the lack of previous adjustments to the composite rate. Mathematically, the S&TA adjustment also depends on whether one considers the likelihood of missing some dialysis sessions because of illness or hospitalization. The S&TA estimates also allow for differences in the incremental costs of technological change that are based on the varying advice of experts in the dialysis industry. The major contributors to the cost of technological change in dialysis services are the use of twin-bag disconnect peritoneal dialysis systems, automated peritoneal dialysis cyclers, and the new generation of hemodialysis machines currently on the market. Factors beyond the control of dialysis facility personnel that influence the cost of patient care should be considered when payment rates are set, and those rates should be updated as market conditions change. The S&TA adjustment is one example of how the composite rate payment system for outpatient dialysis services can be modified to provide appropriate

  20. Estimating snow leopard population abundance using photography and capture-recapture techniques

    USGS Publications Warehouse

    Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.

    2006-01-01

    Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March 2003-2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
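
    The simplest capture-recapture abundance estimator, a two-occasion Chapman (bias-corrected Lincoln-Petersen) estimate, illustrates the idea behind the photographic sampling; the counts below are invented and this is not the closed-population CAPTURE model actually used in the study.

```python
# Two-occasion Chapman estimator; the counts are invented for illustration.
n1 = 6        # individuals photo-identified in the first session
n2 = 5        # individuals photo-identified in the second session
m2 = 3        # individuals seen in both sessions (recaptures)

N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
var_N = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
print(f"abundance estimate: {N_hat:.1f} (SE ~ {var_N ** 0.5:.1f})")
```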

  1. A spectral reflectance estimation technique using multispectral data from the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Huck, F. O.

    1976-01-01

    A technique is formulated for constructing spectral reflectance curve estimates from multispectral data obtained with the Viking lander camera. The multispectral data are limited to six spectral channels in the wavelength range from 0.4 to 1.1 micrometers and most of these channels exhibit appreciable out-of-band response. The output of each channel is expressed as a linear (integral) function of the (known) solar irradiance, atmospheric transmittance, and camera spectral responsivity and the (unknown) spectral reflectance. This produces six equations which are used to determine the coefficients in a representation of the spectral reflectance as a linear combination of known basis functions. Natural cubic spline reflectance estimates are produced for a variety of materials that can be reasonably expected to occur on Mars. In each case the dominant reflectance features are accurately reproduced, but small period features are lost due to the limited number of channels. This technique may be a valuable aid in selecting the number of spectral channels and their responsivity shapes when designing a multispectral imaging system.
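
    The reconstruction amounts to a small linear solve: each channel output is an integral of the reflectance weighted by the known channel function, so with six basis functions the coefficients follow from a 6 x 6 system. The channel weightings and basis functions below are synthetic placeholders, not the Viking responsivities or the paper's cubic splines.

```python
import numpy as np

wl = np.linspace(0.4, 1.1, 200)                    # wavelength grid (micrometers)
dwl = wl[1] - wl[0]

# Placeholder channel weightings w_i(lambda) = irradiance x transmittance x
# responsivity for six channels (Gaussian stand-ins).
centers = np.linspace(0.45, 1.05, 6)
W = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.06) ** 2)

# Basis functions B_j(lambda) for the reflectance representation
# (simple hat functions here; the paper used natural cubic splines).
knots = centers
B = np.maximum(0.0, 1.0 - np.abs(wl[None, :] - knots[:, None]) / 0.12)

true_rho = 0.3 + 0.2 * np.sin(4 * wl)              # synthetic "true" reflectance
outputs = W @ true_rho * dwl                       # six simulated channel outputs

# Channel model: outputs_i = sum_j c_j * integral(W_i * B_j) dlambda
A = W @ B.T * dwl
coeffs = np.linalg.solve(A, outputs)
rho_est = coeffs @ B                               # estimated reflectance curve

interior = (wl >= knots[0]) & (wl <= knots[-1])
print("max abs deviation inside the fitted band:",
      np.abs(rho_est - true_rho)[interior].max())
```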

  2. Using CloudSat and MODIS for exploring a hurricane intensity estimation technique

    NASA Astrophysics Data System (ADS)

    Alexander, R. J.

    2012-12-01

    Observing Tropical Cyclones (TC) using satellites is a common and successful endeavor. However, using satellites to accurately measure storm intensity is a more difficult and involved task. Our research aims to accurately measure hurricane intensity using only satellite-obtained data. Modeling a hurricane as a balanced convectively neutral vortex, along with assumptions on the contributing factors to moist static energy, we explore techniques for estimating hurricane intensity. We used maximum sustained wind to characterize hurricane intensity. We calculated maximum sustained wind using the Wong and Emanuel expression for peak wind speed in a storm. CloudSat cloud profiling radar was used for obtaining cloud-top height and cloud composition information, and the MODIS instrument on-board Aqua was used to obtain cloud-top temperature. This technique requires an eye or near-eye overpass with simultaneous data collection and, as a result, has a limited sample size. We compare our results to the best track database and analyze the validity of our estimates.

  3. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2 = 0.975, m.s.e. = 0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559

  4. Estimation techniques and simulation platforms for 77 GHz FMCW ACC radars

    NASA Astrophysics Data System (ADS)

    Bazzi, A.; Kärnfelt, C.; Péden, A.; Chonavel, T.; Galaup, P.; Bodereau, F.

    2012-01-01

    This paper presents two radar simulation platforms that have been developed and evaluated. One is based on the Advanced Design System (ADS) and the other on Matlab. Both platforms model a homodyne front-end 77 GHz radar based on commercially available monolithic microwave integrated circuits (MMICs). Known linear modulation formats such as the frequency modulation continuous wave (FMCW) and three-segment FMCW have been studied, and a new variant, the dual FMCW, is proposed for easier association between beat frequencies, while maintaining an excellent distance estimation of the targets. In the signal processing domain, new algorithms are proposed for the three-segment FMCW and for the dual FMCW. While both of these algorithms present the choice of either using complex or real data, the former allows faster signal processing, whereas the latter enables a simplified front-end architecture. The estimation performance of the modulation formats has been evaluated using the Cramer-Rao and Barankin bounds. It is found that the dual FMCW modulation format is slightly better than the other two formats tested in this work. A threshold effect is found at a signal-to-noise ratio (SNR) of 12 dB, which means that, to be able to detect a target, the SNR should be above this value. In real hardware, the SNR detection limit should be set to at least about 15 dB.
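
    The basic FMCW relation behind these range estimators is simple: a chirp of bandwidth B swept over duration T sees a stationary target at range R as a beat frequency f_b = 2BR/(cT). A tiny numerical check with invented parameters:

```python
# FMCW range from beat frequency: f_b = 2 * B * R / (c * T) => R = c * T * f_b / (2 * B)
c = 3.0e8          # speed of light (m/s)
B = 200e6          # sweep bandwidth (Hz), an illustration value
T = 1e-3           # sweep duration (s), an illustration value

def beat_to_range(f_beat):
    return c * T * f_beat / (2.0 * B)

# With these parameters a target at 100 m produces roughly a 133 kHz beat tone.
f_b = 2 * B * 100.0 / (c * T)
print(f"beat = {f_b / 1e3:.1f} kHz -> range = {beat_to_range(f_b):.1f} m")
```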

  5. Techniques for estimating the percutaneous absorption of chemicals due to occupational and environmental exposure

    SciTech Connect

    Leung, Hon-Wing; Paustenbach, D.J.

    1994-03-01

    This article reviews the scientific principles involved in determining the percutaneous absorption of chemicals. To assist industrial hygienists in assessing the risks of dermal uptake of chemicals in workplaces, lists of absorption rates and example calculations including the use of wipe sampling to estimate skin exposure are presented. Recent advances in the use of mathematical models to examine the various factors influencing the percutaneous absorption of chemicals from matrices are discussed. Results from various models suggest that the skin uptake of nonvolatile, highly lipophilic chemicals in soil will range from about 30 percent to 50 percent, while the uptake of volatile chemicals will usually be less than 5 percent. The available published information suggests the following rules of thumb: (1) the bioavailability of chemicals in media varies widely; consequently, it is important to account for matrix effects; (2) proper wipe sampling should be conducted to estimate the degree of skin contact with contaminated surfaces; (3) the hazards posed by dermal contact with certain chemicals in the workplace, particularly those with a high n-octanol:water partition coefficient, can account for an appreciable fraction of the daily absorbed dose, and the dose from percutaneous absorption can often be as much as one-half that due to inhalation; and (4) the contribution to overall uptake from percutaneous absorption of chemical vapors can be significant if the atmospheric concentration of the chemicals is tenfold to one thousandfold higher than the threshold limit value, even when the worker wears protective clothing and adequate respiratory protection. 92 refs., 5 tabs.

  6. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of
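
    The event-triggering idea, independent of the particular estimators in the paper, is to skip the estimation solve while incoming measurements stay close to those predicted from the last committed estimate. A schematic sketch with an invented linear measurement model and threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

H = rng.standard_normal((8, 3))        # hypothetical linear measurement model z = H x
x_hat = np.zeros(3)                    # last committed state estimate
threshold = 1.0                        # trigger when the innovation norm exceeds this

def estimate_state(z):
    """Plain least-squares solve standing in for the full state estimator."""
    return np.linalg.lstsq(H, z, rcond=None)[0]

for k in range(20):
    x_true = np.array([1.0, -0.5, 0.25]) + (0.5 if k == 10 else 0.0)  # spike at k = 10
    z = H @ x_true + 0.05 * rng.standard_normal(8)
    innovation = np.linalg.norm(z - H @ x_hat)
    if innovation > threshold:         # "considerable novelty": run the estimator
        x_hat = estimate_state(z)
        print(f"k={k}: estimator triggered, innovation = {innovation:.2f}")
```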

  8. Correlation techniques as applied to pose estimation in space station docking

    NASA Astrophysics Data System (ADS)

    Rollins, John M.; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-08-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow and lighting irregularity compensation are discussed.

  9. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    NASA Technical Reports Server (NTRS)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.
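
    Sub-pixel spot localization by correlating against a synthetic spot model can be illustrated in one dimension: correlate, take the integer peak, then refine it with a three-point parabolic fit. The spot model, noise level, and true centroid below are invented.

```python
import numpy as np

def subpixel_peak(corr):
    """Integer argmax refined by a 3-point parabolic fit (sub-pixel offset)."""
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

x = np.arange(64, dtype=float)
model = np.exp(-0.5 * ((x - 32.0) / 3.0) ** 2)          # synthetic spot model
true_center = 35.3
rng = np.random.default_rng(0)
image_row = np.exp(-0.5 * ((x - true_center) / 3.0) ** 2) + 0.01 * rng.standard_normal(64)

# Cross-correlate the image row with the centered model and locate the peak.
corr = np.correlate(image_row, model, mode="same")
shift = subpixel_peak(corr) - 32.0                       # offset from the model center
print(f"estimated centroid: {32.0 + shift:.2f} (true {true_center})")
```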

  10. Fuel Distribution Estimate via Spin Period to Precession Period Ratio for the Advanced Composition Explorer

    NASA Technical Reports Server (NTRS)

    DeHart, Russell; Smith, Eric; Lakin, John

    2015-01-01

    The spin period to precession period ratio of a non-axisymmetric spin-stabilized spacecraft, the Advanced Composition Explorer (ACE), was used to estimate the remaining mass and distribution of fuel within its propulsion system. This analysis was undertaken once telemetry suggested that two of the four fuel tanks had no propellant remaining, contrary to pre-launch expectations of the propulsion system performance. Numerical integration of possible fuel distributions was used to calculate moments of inertia for the spinning spacecraft. A Fast Fourier Transform (FFT) of output from a dynamics simulation was employed to relate calculated moments of inertia to spin and precession periods. The resulting modeled ratios were compared to the actual spin period to precession period ratio derived from the effect of post-maneuver nutation angle on sun sensor measurements. A Monte Carlo search was performed to tune free parameters using the observed spin period to precession period ratio over the life of the mission. This novel analysis of spin and precession periods indicates that at the time of launch, propellant was distributed unevenly between the two pairs of fuel tanks, with one pair having approximately 20% more propellant than the other pair. Furthermore, it indicates the pair of the tanks with less fuel expelled all of its propellant by 2014 and that approximately 46 kg of propellant remains in the other two tanks, an amount that closely matches the operational fuel accounting estimate. Keywords: Fuel Distribution, Moments of Inertia, Precession, Spin, Nutation

  11. Planning and scheduling the Hubble Space Telescope: Practical application of advanced techniques

    NASA Technical Reports Server (NTRS)

    Miller, Glenn E.

    1994-01-01

    NASA's Hubble Space Telescope (HST) is a major astronomical facility that was launched in April, 1990. In late 1993, the first of several planned servicing missions refurbished the telescope, including corrections for a manufacturing flaw in the primary mirror. Orbiting above the distorting effects of the Earth's atmosphere, the HST provides an unrivaled combination of sensitivity, spectral coverage and angular resolution. The HST is arguably the most complex scientific observatory ever constructed and effective use of this valuable resource required novel approaches to astronomical observation and the development of advanced software systems including techniques to represent scheduling preferences and constraints, a constraint satisfaction problem (CSP) based scheduler and a rule based planning system. This paper presents a discussion of these systems and the lessons learned from operational experience.

  12. Vibrio parahaemolyticus: a review on the pathogenesis, prevalence, and advance molecular identification techniques

    PubMed Central

    Letchumanan, Vengadesh; Chan, Kok-Gan; Lee, Learn-Han

    2014-01-01

    Vibrio parahaemolyticus is a Gram-negative halophilic bacterium that is found in estuarine, marine and coastal environments. V. parahaemolyticus is the leading causal agent of human acute gastroenteritis following the consumption of raw, undercooked, or mishandled marine products. In rare cases, V. parahaemolyticus causes wound infection, ear infection or septicaemia in individuals with pre-existing medical conditions. V. parahaemolyticus has two hemolysin virulence factors: thermostable direct hemolysin (tdh), a pore-forming protein that contributes to the invasiveness of the bacterium in humans, and TDH-related hemolysin (trh), which plays a role similar to tdh in the disease pathogenesis. In addition, the bacterium also encodes adhesins and type III secretion systems (T3SS1 and T3SS2) to ensure its survival in the environment. This review aims at discussing the V. parahaemolyticus growth and characteristics, pathogenesis, prevalence and advances in molecular identification techniques. PMID:25566219

  13. Integrating advanced materials simulation techniques into an automated data analysis workflow at the Spallation Neutron Source

    SciTech Connect

    Borreguero Calvo, Jose M; Campbell, Stuart I; Delaire, Olivier A; Doucet, Mathieu; Goswami, Monojoy; Hagen, Mark E; Lynch, Vickie E; Proffen, Thomas E; Ren, Shelly; Savici, Andrei T; Sumpter, Bobby G

    2014-01-01

    This presentation will review developments on the integration of advanced modeling and simulation techniques into the analysis step of experimental data obtained at the Spallation Neutron Source. A workflow framework for the purpose of refining molecular mechanics force-fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high performance computers, a message broker interface for communications between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model in order to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.

  14. Recent advances in molecular medicine techniques for the diagnosis, prevention, and control of infectious diseases.

    PubMed

    França, R F O; da Silva, C C; De Paula, S O

    2013-06-01

    In recent years we have observed great advances in our ability to combat infectious diseases. Through the development of novel genetic methodologies, including a better understanding of pathogen biology, pathogenic mechanisms, advances in vaccine development, designing new therapeutic drugs, and optimization of diagnostic tools, significant infectious diseases are now better controlled. Here, we briefly describe recent reports in the literature concentrating on infectious disease control. The focus of this review is to describe the molecular methods widely used in the diagnosis, prevention, and control of infectious diseases with regard to the innovation of molecular techniques. Since the list of pathogenic microorganisms is extensive, we emphasize some of the major human infectious diseases (AIDS, tuberculosis, malaria, rotavirus, herpes virus, viral hepatitis, and dengue fever). As a consequence of these developments, infectious diseases will be more accurately and effectively treated; safe and effective vaccines are being developed and rapid detection of infectious agents now permits countermeasures to avoid potential outbreaks and epidemics. But, despite considerable progress, infectious diseases remain a strong challenge to human survival. PMID:23339016

  15. Recent advances in molecular medicine techniques for the diagnosis, prevention, and control of infectious diseases.

    PubMed

    França, R F O; da Silva, C C; De Paula, S O

    2013-06-01

    In recent years we have observed great advances in our ability to combat infectious diseases. Through the development of novel genetic methodologies, including a better understanding of pathogen biology, pathogenic mechanisms, advances in vaccine development, designing new therapeutic drugs, and optimization of diagnostic tools, significant infectious diseases are now better controlled. Here, we briefly describe recent reports in the literature concentrating on infectious disease control. The focus of this review is to describe the molecular methods widely used in the diagnosis, prevention, and control of infectious diseases with regard to the innovation of molecular techniques. Since the list of pathogenic microorganisms is extensive, we emphasize some of the major human infectious diseases (AIDS, tuberculosis, malaria, rotavirus, herpes virus, viral hepatitis, and dengue fever). As a consequence of these developments, infectious diseases will be more accurately and effectively treated; safe and effective vaccines are being developed and rapid detection of infectious agents now permits countermeasures to avoid potential outbreaks and epidemics. But, despite considerable progress, infectious diseases remain a strong challenge to human survival.

  16. Quantitative coronary angiography using image recovery techniques for background estimation in unsubtracted images

    SciTech Connect

    Wong, Jerry T.; Kamyar, Farzad; Molloi, Sabee

    2007-10-15

    Densitometry measurements have been performed previously using subtracted images. However, digital subtraction angiography (DSA) in coronary angiography is highly susceptible to misregistration artifacts due to the temporal separation of background and target images. Misregistration artifacts due to respiration and patient motion occur frequently, and organ motion is unavoidable. Quantitative densitometric techniques would be more clinically feasible if they could be implemented using unsubtracted images. The goal of this study is to evaluate image recovery techniques for densitometry measurements using unsubtracted images. A humanoid phantom and eight swine (25-35 kg) were used to evaluate the accuracy and precision of the following image recovery techniques: Local averaging (LA), morphological filtering (MF), linear interpolation (LI), and curvature-driven diffusion image inpainting (CDD). Images of iodinated vessel phantoms placed over the heart of the humanoid phantom or swine were acquired. In addition, coronary angiograms were obtained after power injections of a nonionic iodinated contrast solution in an in vivo swine study. Background signals were estimated and removed with LA, MF, LI, and CDD. Iodine masses in the vessel phantoms were quantified and compared to known amounts. Moreover, the total iodine in left anterior descending arteries was measured and compared with DSA measurements. In the humanoid phantom study, the average root mean square errors associated with quantifying iodine mass using LA and MF were approximately 6% and 9%, respectively. The corresponding average root mean square errors associated with quantifying iodine mass using LI and CDD were both approximately 3%. In the in vivo swine study, the root mean square errors associated with quantifying iodine in the vessel phantoms with LA and MF were approximately 5% and 12%, respectively. The corresponding average root mean square errors using LI and CDD were both 3%. The standard deviations
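
    As an illustration of the simplest of these image recovery schemes (not the authors' implementation), the sketch below estimates the background under a vessel by row-wise linear interpolation (the LI approach) across a user-supplied vessel mask; removing the interpolated background from the unsubtracted image leaves the iodine-related signal.

    import numpy as np

    def background_by_row_interpolation(image, vessel_mask):
        """Estimate the background under a vessel by linearly interpolating,
        row by row, across the pixels flagged in `vessel_mask` (True = vessel)."""
        background = image.astype(float).copy()
        cols = np.arange(image.shape[1])
        for r in range(image.shape[0]):
            inside = vessel_mask[r]
            if inside.any() and (~inside).any():
                background[r, inside] = np.interp(cols[inside], cols[~inside],
                                                  image[r, ~inside])
        return background

    # iodine signal = unsubtracted image minus the interpolated background
    # (sign depends on whether the image is expressed in log-attenuation units)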

  17. Techniques for estimating peak-flow frequency relations for North Dakota streams

    USGS Publications Warehouse

    Williams-Sether, Tara

    1992-01-01

    This report presents techniques for estimating peak-flow frequency relations for North Dakota streams. In addition, a generalized skew coefficient analysis was completed for North Dakota to test the validity of using the generalized skew coefficient map in Bulletin 17B of the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data, 1982, 'Guidelines for Determining Flood Flow Frequency.' The analysis indicates that the generalized skew coefficient map in Bulletin 17B provides accurate estimates of generalized skew coefficient values for natural-flow streams in North Dakota. Peak-flow records through 1988 for 192 continuous- and partial-record streamflow gaging stations that had 10 or more years of record were used in a generalized least-squares regression analysis that relates peak flows for selected recurrence intervals to selected basin characteristics. Peak-flow equations were developed for recurrence intervals of 2, 10, 15, 25, 50, 100, and 500 years for three hydrologic regions in North Dakota. The peak-flow equations are applicable to natural-flow streams that have drainage areas of less than or equal to 1,000 square miles. The standard error of estimate for the three hydrologic regions ranges from 60 to 70 percent for the 100-year peak-flow equations. Methods are presented for transferring peak-flow data from gaging stations to ungaged sites on the same stream and for determining peak flows for ungaged sites on ungaged streams. Peak-flow relations, weighted estimates of peak flow, and selected basin characteristics are tabulated for the 192 gaging stations used in the generalized skew coefficient and regression analyses. Peak-flow relations also are provided for 63 additional gaging stations that were not used in the generalized skew coefficient and regression analyses. These 63 gaging stations generally represent streams that are significantly controlled by regulation and those that have drainage areas greater than 1,000 square miles.
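
    Regional regression equations of this kind typically take a power-law form in the basin characteristics. The sketch below evaluates such an equation for a single explanatory variable; the coefficients are illustrative placeholders, not the published North Dakota values.

    def peak_flow_estimate(drainage_area_sq_mi, a=120.0, b=0.62):
        """Evaluate a regional regression equation of the common form
        Q_T = a * A**b (cubic feet per second), where A is drainage area
        in square miles.  The coefficients a and b are hypothetical."""
        return a * drainage_area_sq_mi ** b

    # e.g. a hypothetical 100-year peak for a 250 square-mile basin:
    # q100 = peak_flow_estimate(250.0)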

  18. Advancements in sensing and perception using structured lighting techniques: an LDRD final report.

    SciTech Connect

    Novick, David Keith; Padilla, Denise D.; Davidson, Patrick A. Jr.; Carlson, Jeffrey J.

    2005-09-01

    This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled ''Advancements in Sensing and Perception using Structured Lighting Techniques''. There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Although there has been nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky and heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight conditions. Eye-safety issues are a primary concern for currently available laser-based sensors. Passive, stereo-imaging sensors are available for 3D sensing but suffer from several limitations : computationally intensive, require a lighted environment (natural or man-made light source), and don't work for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged from the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis, using structured lighting. We have a diverse customer base for indoor mapping applications and this research extends our current technology's lifecycle and opens a new market base for outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and

  19. Advanced Modeling Techniques to Study Anthropogenic Influences on Atmospheric Chemical Budgets

    NASA Technical Reports Server (NTRS)

    Mathur, Rohit

    1997-01-01

    This research work is a collaborative effort between research groups at MCNC and the University of North Carolina at Chapel Hill. The overall objective of this research is to improve the level of understanding of the processes that determine the budgets of chemically and radiatively active compounds in the atmosphere through development and application of advanced methods for calculating the chemical change in atmospheric models. The research performed during the second year of this project focused on four major aspects: (1) The continued development and refinement of multiscale modeling techniques to address the issue of the disparate scales of the physico-chemical processes that govern the fate of atmospheric pollutants; (2) Development and application of analysis methods utilizing process and mass balance techniques to increase the interpretive powers of atmospheric models and to aid in complementary analysis of model predictions and observations; (3) Development of meteorological and emission inputs for initial application of the chemistry/transport model over the north Atlantic region; and, (4) The continued development and implementation of a totally new adaptive chemistry representation that changes the details of what is represented as the underlying conditions change.

  20. Advancing the frontiers in nanocatalysis, biointerfaces, and renewable energy conversion by innovations of surface techniques.

    PubMed

    Somorjai, Gabor A; Frei, Heinz; Park, Jeong Y

    2009-11-25

    The challenge of chemistry in the 21st century is to achieve 100% selectivity of the desired product molecule in multipath reactions ("green chemistry") and develop renewable energy based processes. Surface chemistry and catalysis play key roles in this enterprise. Development of in situ surface techniques such as high-pressure scanning tunneling microscopy, sum frequency generation (SFG) vibrational spectroscopy, time-resolved Fourier transform infrared methods, and ambient pressure X-ray photoelectron spectroscopy enabled the rapid advancement of three fields: nanocatalysts, biointerfaces, and renewable energy conversion chemistry. In materials nanoscience, synthetic methods have been developed to produce monodisperse metal and oxide nanoparticles (NPs) in the 0.8-10 nm range with controlled shape, oxidation states, and composition; these NPs can be used as selective catalysts since chemical selectivity appears to be dependent on all of these experimental parameters. New spectroscopic and microscopic techniques have been developed that operate under reaction conditions and reveal the dynamic change of molecular structure of catalysts and adsorbed molecules as the reactions proceed with changes in reaction intermediates, catalyst composition, and oxidation states. SFG vibrational spectroscopy detects amino acids, peptides, and proteins adsorbed at hydrophobic and hydrophilic interfaces and monitors the change of surface structure and interactions with coadsorbed water. Exothermic reactions and photons generate hot electrons in metal NPs that may be utilized in chemical energy conversion. The photosplitting of water and carbon dioxide, an important research direction in renewable energy conversion, is discussed.

  1. Pilot-scale investigation of drinking water ultrafiltration membrane fouling rates using advanced data analysis techniques.

    PubMed

    Chen, Fei; Peldszus, Sigrid; Peiris, Ramila H; Ruhl, Aki S; Mehrez, Renata; Jekel, Martin; Legge, Raymond L; Huck, Peter M

    2014-01-01

    A pilot-scale investigation of the performance of biofiltration as a pre-treatment to ultrafiltration for drinking water treatment was conducted between 2008 and 2010. The objective of this study was to further understand the fouling behaviour of ultrafiltration at pilot scale and assess the utility of different foulant monitoring tools. Various fractions of natural organic matter (NOM) and colloidal/particulate matter of raw water, biofilter effluents, and membrane permeate were characterized by employing two advanced NOM characterization techniques: liquid chromatography - organic carbon detection (LC-OCD) and fluorescence excitation-emission matrices (FEEM) combined with principal component analysis (PCA). A framework of fouling rate quantification and classification was also developed and utilized in this study. In cases such as the present one where raw water quality and therefore fouling potential vary substantially, such classification can be considered essential for proper data interpretation. The individual and combined contributions of various NOM fractions and colloidal/particulate matter to hydraulically reversible and irreversible fouling were investigated using various multivariate statistical analysis techniques. Protein-like substances and biopolymers were identified as major contributors to both reversible and irreversible fouling, whereas colloidal/particulate matter can alleviate the extent of irreversible fouling. Humic-like substances contributed little to either reversible or irreversible fouling at low level fouling rates. The complementary nature of FEEM-PCA and LC-OCD for assessing the fouling potential of complex water matrices was also illustrated by this pilot-scale study.
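
    A minimal sketch of the FEEM-PCA step, assuming the excitation-emission matrices have already been corrected and assembled into a 3-D array: each EEM is unfolded into a vector before principal component analysis, and the loadings are folded back for inspection.

    import numpy as np
    from sklearn.decomposition import PCA

    def feem_pca(eem_stack, n_components=3):
        """PCA of fluorescence excitation-emission matrices.  `eem_stack`
        has shape (n_samples, n_excitation, n_emission)."""
        n_samples = eem_stack.shape[0]
        unfolded = eem_stack.reshape(n_samples, -1)
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(unfolded)                  # per-sample PC scores
        loadings = pca.components_.reshape((n_components,) + eem_stack.shape[1:])
        return scores, loadings, pca.explained_variance_ratio_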

  2. Investigation to advance prediction techniques of the low-speed aerodynamics of V/STOL aircraft

    NASA Technical Reports Server (NTRS)

    Maskew, B.; Strash, D.; Nathman, J.; Dvorak, F. A.

    1985-01-01

    A computer program, VSAERO, has been applied to a number of V/STOL configurations with a view to advancing prediction techniques for the low-speed aerodynamic characteristics. The program couples a low-order panel method with surface streamline calculation and integral boundary layer procedures. The panel method, which uses piecewise constant source and doublet panels, includes an iterative procedure for wake shape and models the boundary layer displacement effect using the source transpiration technique. Certain improvements to a basic vortex tube jet model were installed in the code prior to evaluation. Very promising results were obtained for surface pressures near a jet issuing at 90 deg from a flat plate. A solid core model was used in the initial part of the jet with a simple entrainment model. Preliminary representation of the downstream separation zone significantly improved the correlation. The program accurately predicted the pressure distribution inside the inlet on the Grumman 698-411 design at a range of flight conditions. Furthermore, coupled viscous/potential flow calculations gave very close correlation with experimentally determined operational boundaries dictated by the onset of separation inside the inlet. Experimentally observed degradation of these operational boundaries between nacelle-alone tests and tests on the full configuration was also indicated by the calculation. Application of the program to the General Dynamics STOL fighter design was equally encouraging. Very close agreement was observed between experiment and calculation for the effects of power on pressure distribution, lift and lift curve slope.

  3. Advancing the Frontiers in Nanocatalysis, Biointerfaces, and Renewable Energy Conversion by Innovations of Surface Techniques

    SciTech Connect

    Somorjai, G.A.; Frei, H.; Park, J.Y.

    2009-07-23

    The challenge of chemistry in the 21st century is to achieve 100% selectivity of the desired product molecule in multipath reactions ('green chemistry') and develop renewable energy based processes. Surface chemistry and catalysis play key roles in this enterprise. Development of in situ surface techniques such as high-pressure scanning tunneling microscopy, sum frequency generation (SFG) vibrational spectroscopy, time-resolved Fourier transform infrared methods, and ambient pressure X-ray photoelectron spectroscopy enabled the rapid advancement of three fields: nanocatalysts, biointerfaces, and renewable energy conversion chemistry. In materials nanoscience, synthetic methods have been developed to produce monodisperse metal and oxide nanoparticles (NPs) in the 0.8-10 nm range with controlled shape, oxidation states, and composition; these NPs can be used as selective catalysts since chemical selectivity appears to be dependent on all of these experimental parameters. New spectroscopic and microscopic techniques have been developed that operate under reaction conditions and reveal the dynamic change of molecular structure of catalysts and adsorbed molecules as the reactions proceed with changes in reaction intermediates, catalyst composition, and oxidation states. SFG vibrational spectroscopy detects amino acids, peptides, and proteins adsorbed at hydrophobic and hydrophilic interfaces and monitors the change of surface structure and interactions with coadsorbed water. Exothermic reactions and photons generate hot electrons in metal NPs that may be utilized in chemical energy conversion. The photosplitting of water and carbon dioxide, an important research direction in renewable energy conversion, is discussed.

  4. Procedural guidance using advance imaging techniques for percutaneous edge-to-edge mitral valve repair.

    PubMed

    Quaife, Robert A; Salcedo, Ernesto E; Carroll, John D

    2014-02-01

    The complexity of structural heart disease interventions such as edge-to-edge mitral valve repair requires integration of multiple highly technical imaging modalities. Real-time imaging with 3-dimensional (3D) echocardiography is a relatively new technique that, first, allows clear volumetric imaging of target structures such as the mitral valve for both pre-procedural diagnosis and planning in patients with degenerative or functional mitral valve regurgitation. Second, it provides an intra-procedural, real-time panoramic volumetric 3D view of structural heart disease targets that facilitates eye-hand coordination while manipulating devices within the heart. X-ray fluoroscopy and RT 3D TEE images are used in combination to display specific targets and movement of catheter-based technologies in 3D space. This integration requires at least two different image display monitors and mental fusion of the individual datasets by the operator. Combined display technology such as this allows rotation and orientation of both dataset perspectives necessary to define targets and guide structural disease device procedures. The inherently simple concept of direct visual feedback and eye-hand coordination allows safe and efficient completion of MitraClip procedures. This technology is now merged into a single structural heart disease guidance mode called EchoNavigator(TM) (Philips Medical Imaging, Andover, MA). These advanced imaging techniques have revolutionized the field of structural heart disease interventions, and this experience is exemplified by a cooperative imaging approach used for guidance of edge-to-edge mitral valve repair procedures.

  5. EPS in Environmental Microbial Biofilms as Examined by Advanced Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Neu, T. R.; Lawrence, J. R.

    2006-12-01

    Biofilm communities are highly structured associations of cellular and polymeric components which are involved in biogenic and geogenic environmental processes. Furthermore, biofilms are also important in medical (infection), industrial (biofouling) and technological (biofilm engineering) processes. The interfacial microbial communities in a specific habitat are highly dynamic and change according to the environmental parameters affecting not only the cellular but also the polymeric constituents of the system. Through their EPS, biofilms interact with dissolved, colloidal and particulate compounds from the bulk water phase. For a long time the focus in biofilm research was on the cellular constituents, and the polymer matrix of biofilms has been rather neglected. The polymer matrix is produced not only by different bacteria and archaea but also by eukaryotic micro-organisms such as algae and fungi. The mostly unidentified mixture of EPS compounds is responsible for many biofilm properties and is involved in biofilm functionality. The chemistry of the EPS matrix represents a mixture of polymers including polysaccharides, proteins, nucleic acids, neutral polymers, charged polymers, amphiphilic polymers and refractory microbial polymers. The analysis of the EPS may be done destructively by means of extraction and subsequent chemical analysis or in situ by means of specific probes in combination with advanced imaging. In the last 15 years laser scanning microscopy (LSM) has been established as an indispensable technique for studying microbial communities. LSM with 1-photon and 2-photon excitation in combination with fluorescence techniques allows 3-dimensional investigation of fully hydrated, living biofilm systems. This approach is able to reveal data on biofilm structural features as well as biofilm processes and interactions. The fluorescent probes available allow the quantitative assessment of cellular as well as polymer distribution. For this purpose

  6. Inversion Technique for Estimating Emissions of Volcanic Ash from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Pelley, Rachel; Cooke, Michael; Manning, Alistair; Thomson, David; Witham, Claire; Hort, Matthew

    2014-05-01

    When using dispersion models such as NAME (Numerical Atmospheric-dispersion Modelling Environment) to predict the dispersion of volcanic ash, a source term defining the mass release rate of ash is required. Inversion modelling using observations of the ash plume provides a method of estimating the source term for use in NAME. Our inversion technique makes use of satellite retrievals, calculated using data from the SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument on-board the MSG (Meteosat Second Generation) satellite, as the ash observations. InTEM (Inversion Technique for Emission Modelling) is the UK Met Office's inversion modelling system. Recently the capability to estimate time- and height-varying source terms has been implemented and applied to volcanic ash. InTEM uses a probabilistic approach to fit NAME model concentrations to satellite retrievals. This is achieved by applying Bayes' theorem to give a cost function for the source term. Source term profiles with lower costs generate model concentrations that better fit the satellite retrievals. InTEM uses the global optimisation technique, simulated annealing, to find the minimum of the cost function. The use of a probabilistic approach allows the uncertainty in the satellite retrievals to be incorporated into the inversion technique. InTEM makes use of satellite retrievals of both ash column loadings and of cloud-free regions. We present a system that allows InTEM to be used during an eruption. The system is automated and can produce source term updates up to four times a day. To allow automation, hourly satellite retrievals of ash are routinely produced using conservative detection limits. The conservative detection limits provide good detection of the ash plume while limiting the number of false alarms. Regions which are flagged as ash contaminated or free from cloud (both meteorological and ash) are used in the InTEM system. This approach is shown to improve the concentrations in the
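
    The sketch below illustrates the optimisation idea only (it is not the InTEM implementation): a simulated-annealing loop searches over a non-negative emission vector so that modelled column loadings, obtained here from an assumed source-receptor matrix, match the satellite retrievals within their uncertainties.

    import numpy as np

    def cost(emissions, srm, retrievals, sigma):
        """Misfit between modelled column loadings (source-receptor matrix
        times the emission vector) and satellite retrievals, weighted by
        the retrieval uncertainties."""
        return np.sum(((srm @ emissions - retrievals) / sigma) ** 2)

    def anneal_source_term(srm, retrievals, sigma, n_iter=20000, t0=1.0, seed=0):
        """Minimal simulated-annealing search over a non-negative, time- and
        height-resolved emission vector (one element per source-term cell)."""
        rng = np.random.default_rng(seed)
        current = best = np.ones(srm.shape[1])
        current_cost = best_cost = cost(current, srm, retrievals, sigma)
        for k in range(n_iter):
            temp = t0 * (1.0 - k / n_iter) + 1e-6            # linear cooling schedule
            trial = np.clip(current + rng.normal(scale=0.1, size=current.size), 0.0, None)
            c = cost(trial, srm, retrievals, sigma)
            if c < current_cost or rng.random() < np.exp((current_cost - c) / temp):
                current, current_cost = trial, c
                if c < best_cost:
                    best, best_cost = trial.copy(), c
        return best, best_cost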

  7. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    NASA Astrophysics Data System (ADS)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing and scientific collaboration stimulated us to think over these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the workshop's activities. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  8. Techniques and Tools for Estimating Ionospheric Effects in Interferometric and Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Rosen, P.; Lavalle, M.; Pi, X.; Buckley, S.; Szeliga, W.; Zebker, H.; Gurrola, E.

    2011-01-01

    The InSAR Scientific Computing Environment (ISCE) is a flexible, extensible software tool designed for the end-to-end processing and analysis of synthetic aperture radar data. ISCE inherits the core of the ROI_PAC interferometric tool, but contains improvements at all levels of the radar processing chain, including a modular and extensible architecture, new focusing approach, better geocoding of the data, handling of multi-polarization data, radiometric calibration, and estimation and correction of ionospheric effects. In this paper we describe the characteristics of ISCE with emphasis on the ionospheric modules. To detect ionospheric anomalies, ISCE implements the Faraday rotation method using quadpolarimetric images, and the split-spectrum technique using interferometric single-, dual- and quad-polarimetric images. The ability to generate co-registered time series of quad-polarimetric images makes ISCE also an ideal tool to be used for polarimetric-interferometric radar applications.
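
    As an illustration of the split-spectrum idea (one commonly used form of the combination, not necessarily the exact ISCE implementation), the dispersive (ionospheric) phase at the full-band centre frequency can be separated from the non-dispersive part using unwrapped interferometric phases formed on low- and high-band sub-looks:

    import numpy as np

    def split_spectrum_ionosphere(phi_low, phi_high, f_low, f_high, f0):
        """Separate dispersive and non-dispersive phase components from
        sub-band interferometric phases (radians); frequencies in Hz."""
        denom = f_high ** 2 - f_low ** 2
        phi_iono = (f_low * f_high) / (f0 * denom) * (phi_low * f_high - phi_high * f_low)
        phi_nondisp = f0 / denom * (f_high * phi_high - f_low * phi_low)
        return phi_iono, phi_nondisp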

  9. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    SciTech Connect

    Ling, Khoo Mei; Ghaffar, Mazlan Abd.

    2014-09-03

    This study examines the movement of food items and the estimation of gastric emptying time using X-radiography techniques in the clownfish (Amphiprion ocellaris) fed in captivity. Fishes were voluntarily fed to satiation after being deprived of food for 72 hours, using pellets treated with barium sulphate (BaSO4). The movement of the food items was monitored at different times after feeding. A total of 36 hours was needed for the food items to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship of satiation meal size to body weight was allometric, with a power value of 1.28.

  10. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

    A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
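
    A highly simplified illustration of the two-condition inversion for a single producing zone, assuming quasi-steady inflow proportional to transmissivity times the head difference between the zone and the open borehole (the lumped factor `c` is a hypothetical placeholder; in practice it comes from the borehole-flow model):

    def invert_zone(q_ambient, q_pumping, hw_ambient, hw_pumping, c=1.0):
        """Solve q = c * T * (h - h_borehole) for the zone head h and
        transmissivity T, given the flow contributed by the zone under
        ambient and pumping conditions and the two borehole water levels.
        Requires q_ambient != q_pumping."""
        h = (q_ambient * hw_pumping - q_pumping * hw_ambient) / (q_ambient - q_pumping)
        T = q_ambient / (c * (h - hw_ambient))
        return h, T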

  11. Estimation of gastric emptying time (GET) in clownfish (Amphiprion ocellaris) using X-radiography technique

    NASA Astrophysics Data System (ADS)

    Ling, Khoo Mei; Ghaffar, Mazlan Abd.

    2014-09-01

    This study examines the movement of food items and the estimation of gastric emptying time using X-radiography techniques in the clownfish (Amphiprion ocellaris) fed in captivity. Fishes were voluntarily fed to satiation after being deprived of food for 72 hours, using pellets treated with barium sulphate (BaSO4). The movement of the food items was monitored at different times after feeding. A total of 36 hours was needed for the food items to be evacuated completely from the stomach. Results on the modeling of meal satiation are also discussed. The relationship of satiation meal size to body weight was allometric, with a power value of 1.28.

  12. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.
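
    A static, single-Gaussian simplification of the estimation step (the actual technique works over HSMM states and durations) reduces to a weighted least-squares problem: with frame-level features o_t modelled as N(H s + b, Sigma), the maximum-likelihood style vector s is obtained as sketched below. All symbols here are illustrative, not the paper's notation.

    import numpy as np

    def estimate_style_vector(obs, H, b, cov):
        """ML estimate of the explanatory (style) vector s under the
        simplified model o_t ~ N(H s + b, cov), given frames `obs` of
        shape (n_frames, dim)."""
        prec = np.linalg.inv(cov)
        mean_residual = obs.mean(axis=0) - b          # average over frames
        return np.linalg.solve(H.T @ prec @ H, H.T @ prec @ mean_residual)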

  13. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    NASA Technical Reports Server (NTRS)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
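
    The core registration step can be sketched as a cross-correlation peak search (a simplified stand-in for the operational procedure): the shift that maximizes the FFT-based cross-correlation between an upsampled ABI sub-image and the matching Landsat chip is taken as the local navigation error.

    import numpy as np

    def translation_by_cross_correlation(reference, target):
        """Integer-pixel shift to apply to `target` to align it with
        `reference`, found at the peak of their circular cross-correlation.
        Both inputs are same-sized 2-D arrays."""
        ref = reference - reference.mean()
        tgt = target - target.mean()
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))).real
        peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        shape = np.array(corr.shape)
        peak[peak > shape / 2] -= shape[peak > shape / 2]     # wrap to signed offsets
        return peak                                           # (row, column) shift in pixels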

  14. Advanced sensing and control techniques to facilitate semi-autonomous decommissioning. 1998 annual progress report

    SciTech Connect

    Schalkoff, R.J.; Geist, R.M.; Dawson, D.M.

    1998-06-01

    This research is intended to advance the technology of semi-autonomous teleoperated robotics as applied to Decontamination and Decommissioning (D and D) tasks. Specifically, research leading to a prototype dual-manipulator mobile work cell is underway. This cell is supported and enhanced by computer vision, virtual reality and advanced robotics technology. This report summarizes work after approximately 1.5 years of a 3-year project. The autonomous, non-contact creation of a virtual environment from an existing, real environment (virtualization) is an integral part of the workcell functionality. This requires that the virtual world be geometrically correct. To this end, the authors have encountered severe sensitivity in quadric estimation. As a result, alternative procedures for geometric rendering, iterative correction approaches, new calibration methods and associated hardware, and calibration quality examination software have been developed. Following geometric rendering, the authors have focused on improving the color and texture recognition components of the system. In particular, the authors have moved beyond first-order illumination modeling to include higher order diffuse effects. This allows them to combine the surface geometric information, obtained from the laser projection and surface recognition components of the system, with a stereo camera image. Low-level controllers for Puma 560 robotic arms were designed and implemented using QNX. The resulting QNX/PC based low-level robot control system is called QRobot. A high-level trajectory generator and application programming interface (API), as well as a new, flexible robot control API, were required. Force/torque sensors and interface hardware have been identified and ordered. A simple 3-D OpenGL-based graphical Puma 560 robot simulator was developed and interfaced with ARCL and RCCL to assist in the development of robot motion programs.

  15. Estimation of seismic building structural types using multi-sensor remote sensing and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Geiß, Christian; Aravena Pelizari, Patrick; Marconcini, Mattia; Sengara, Wayan; Edwards, Mark; Lakes, Tobia; Taubenböck, Hannes

    2015-06-01

    Detailed information about seismic building structural types (SBSTs) is crucial for accurate earthquake vulnerability and risk modeling, as it reflects the main load-bearing structures of buildings and, thus, their behavior under seismic load. However, for numerous urban areas in earthquake-prone regions this information is mostly outdated, unavailable, or simply nonexistent. To this end, we present an effective approach to estimate SBSTs by combining scarce in situ observations, multi-sensor remote sensing data and machine learning techniques. In particular, an approach is introduced which deploys a sequential procedure comprising five main steps, namely calculation of features from remote sensing data, feature selection, outlier detection, generation of synthetic samples, and supervised classification under consideration of both Support Vector Machines and Random Forests. Experimental results obtained for a representative study area, including large parts of the city of Padang (Indonesia), assess the capabilities of the presented approach and confirm its great potential for a reliable area-wide estimation of SBSTs and an effective earthquake loss modeling based on remote sensing, which should be further explored in future research activities.
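
    A condensed sketch of the sequential procedure (outlier removal, feature selection, and classification with both classifiers; the synthetic-sample step, e.g. SMOTE, is omitted here), using generic scikit-learn components rather than the authors' exact configuration:

    from sklearn.ensemble import IsolationForest, RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def classify_sbst(X, y, k_features=20):
        """Feature selection, outlier removal, and supervised classification
        of seismic building structural types with both SVM and Random Forest."""
        keep = IsolationForest(random_state=0).fit_predict(X) == 1   # drop outliers
        X_in, y_in = X[keep], y[keep]
        for name, clf in [("SVM", SVC(kernel="rbf", C=10, gamma="scale")),
                          ("RandomForest", RandomForestClassifier(n_estimators=300, random_state=0))]:
            model = make_pipeline(StandardScaler(),
                                  SelectKBest(f_classif, k=min(k_features, X.shape[1])),
                                  clf)
            scores = cross_val_score(model, X_in, y_in, cv=5)
            print(f"{name}: mean CV accuracy = {scores.mean():.3f}")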

  16. New Algorithms for Estimating Spacecraft Position Using Scanning Techniques for Deep Space Network Antennas

    NASA Technical Reports Server (NTRS)

    Chen, Lingli; Fathpour, Nanaz; Mehra, Raman K.

    2005-01-01

    As more and more nonlinear estimation techniques become available, our interest is in finding out what performance improvement, if any, they can provide for practical nonlinear problems that have been traditionally solved using linear methods. In this paper we examine the problem of estimating spacecraft position using conical scan (conscan) for NASA's Deep Space Network antennas. We show that for additive disturbances on antenna power measurement, the problem can be transformed into a linear one, and we present a general solution to this problem, with the least square solution reported in literature as a special case. We also show that for additive disturbances on antenna position, the problem is a truly nonlinear one, and we present two approximate solutions based on linearization and Unscented Transformation respectively, and one 'exact' solution based on Markov Chain Monte Carlo (MCMC) method. Simulations show that, with the amount of data collected in practice, linear methods perform almost the same as MCMC methods. It is only when we artificially reduce the amount of collected data and increase the level of noise that nonlinear methods show significantly better accuracy than that achieved by linear methods, at the expense of more computation.
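
    For the linearized case, the estimation reduces to fitting a sinusoid at the scan frequency to the power measurements by ordinary least squares, with the pointing offsets proportional to the fitted quadrature amplitudes. The sketch below is only illustrative of that idea, and `gain` is a placeholder for the antenna-pattern-dependent scale factor.

    import numpy as np

    def conscan_offsets(times, power, scan_rate_rad_s, gain=1.0):
        """Least-squares fit of P(t) = a + b*cos(w t) + c*sin(w t) to power
        samples taken during a conical scan; returns gain*(b, c) as the
        two pointing-offset estimates."""
        w = scan_rate_rad_s
        A = np.column_stack([np.ones_like(times), np.cos(w * times), np.sin(w * times)])
        (a, b, c), *_ = np.linalg.lstsq(A, power, rcond=None)
        return gain * b, gain * c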

  17. HPC Usage Behavior Analysis and Performance Estimation with Machine Learning Techniques

    SciTech Connect

    Zhang, Hao; You, Haihang; Hadri, Bilel; Fahey, Mark R

    2012-01-01

    Most researchers with little high performance computing (HPC) experience have difficulties productively using supercomputing resources. To address this issue, we investigated usage behaviors on the world's fastest academic supercomputer, Kraken, and built a knowledge-based recommendation system to improve user productivity. Six clustering techniques, along with three cluster validation measures, were implemented to investigate the underlying patterns of usage behaviors. Besides manually defining a category for very large job submissions, six behavior categories were identified, which cleanly separated the data-intensive jobs and computationally intensive jobs. Then, job statistics of each behavior category were used to develop a knowledge-based recommendation system that can provide users with instructions about choosing appropriate software packages, setting job parameter values, and estimating job queuing time and runtime. Experiments were conducted to evaluate the performance of the proposed recommendation system, which included 127 job submissions by users from different research fields. Positive feedback indicated the usefulness of the provided information. An average runtime estimation accuracy of 64.2%, with a 28.9% job termination rate, was achieved in the experiments, which almost doubled the average accuracy in the Kraken dataset.
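
    A simplified stand-in for the clustering step (one technique and one validation measure, rather than the six-plus-three compared in the paper): per-job usage features are standardized, clustered with k-means, and the number of behavior categories is chosen by silhouette score.

    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    def cluster_job_logs(features, k_range=range(2, 10)):
        """Cluster per-job usage features (e.g. core count, walltime, memory,
        I/O volume) and pick the number of clusters by silhouette score."""
        X = StandardScaler().fit_transform(features)
        best_k, best_score, best_labels = None, -1.0, None
        for k in k_range:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            score = silhouette_score(X, labels)
            if score > best_score:
                best_k, best_score, best_labels = k, score, labels
        return best_k, best_labels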

  18. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
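
    Under the standard mobility-coupling picture assumed here (a textbook form, not necessarily the exact expression used in the paper), with Y_s and Y_r the source and receiver input mobilities at the joint, Y_ts the source transfer mobility from the excitation point to the joint, and F the excitation force, the time-averaged power transmitted into the receiver beam is

        P_{\mathrm{trans}} = \frac{1}{2}\,|F|^{2}\,\frac{|Y_{ts}|^{2}\,\operatorname{Re}\{Y_{r}\}}{|Y_{s}+Y_{r}|^{2}},

    which is why only the input and transfer mobilities at the excitation point and the joint are needed for the estimate.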

  19. Advanced fabrication techniques for hydrogen-cooled engine structures. Final report, October 1975-June 1982

    SciTech Connect

    Buchmann, O.A.; Arefian, V.V.; Warren, H.A.; Vuigner, A.A.; Pohlman, M.J.

    1985-11-01

    Described is a program for development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19000 cycles for the channels and 16000 cycles for pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.

  20. Recent advances and on-going challenges of estimating past elevation from climate proxy data

    NASA Astrophysics Data System (ADS)

    Snell, K. E.; Peppe, D. J.; Eiler, J. M.; Wernicke, B. P.; Koch, P. L.

    2012-12-01

    The methods currently available to reconstruct paleoelevation dominantly rely on diverse sedimentary archives of past climate. The spatial and temporal distributions of these records are used to extract information about differences in elevation from site to site, and through geologic time. As such, our understanding of past elevations is only as good as our ability to understand past climate and to put these records into a reasonable chronologic framework. Currently, most techniques either exploit the difference in temperature or the difference in the hydrogen and/or oxygen isotopic composition of precipitation between high and low elevation sites. Temperature data dominantly come from leaf margin analysis of fossil plants; biomarkers preserved in sediments; and clumped isotope thermometry of paleosol and lacustrine carbonates and carbonate cements. Constraints on the isotopic composition of precipitation come from many of the same sedimentary archives: paleosol and lacustrine carbonates, carbonate cements and authigenic clays. Reconstructed gradients in temperature and isotopic composition are then compared with modern "lapse rates" to translate climate proxy data into elevation estimates. There are still many challenges in reconstructing past elevations from paleoclimate proxy data in this way. For example, modern lapse rates are generally empirical rather than based on thermodynamic principles alone, and so may vary for reasons that are not always understood. In addition, unrecognized differences in seasonal bias for the different sedimentary archives can lead to inaccurately averaged records and/or over-estimates of errors in each method. Finally, to appropriately estimate elevation, the effects of climate change must be accounted for by matching inferred high-elevation sites with known low-elevation sites of similar age and geographic location. This requires excellent chronologic control and correlation across terrestrial basins (or independent knowledge of
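
    In its simplest form, the translation from a reconstructed temperature difference to an elevation difference assumes a near-surface temperature lapse rate Gamma, e.g.

        \Delta z \;\approx\; \frac{T_{\mathrm{low}} - T_{\mathrm{high}}}{\Gamma},

    with Gamma commonly taken near 5-6 degrees C per km, although, as the abstract stresses, empirical lapse rates vary from region to region and through time.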

  1. An alternative method to estimate zero flow temperature differences for Granier's thermal dissipation technique.

    PubMed

    Regalado, Carlos M; Ritter, Axel

    2007-08-01

    Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (DeltaT) between a heated and an unheated probe when sap flow is zero (DeltaT(max)). Classically, DeltaT(max) has been estimated from the maximum predawn DeltaT, assuming that sap flow is negligible at nighttime. However, because sap flow may continue during the night, the maximum predawn DeltaT value may underestimate the true DeltaT(max). No alternative method has yet been proposed to estimate DeltaT(max) when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in DeltaT(max) may amplify through sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors, hence the need for a correct estimate of DeltaT(max). By rearranging Granier's original formula, an optimization method to compute DeltaT(max) from simultaneous measurements of diurnal DeltaT and micrometeorological variables, without assuming that sap flow is negligible at night, is presented. Some illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although DeltaT(max) values obtained by the proposed method may be similar in some instances to the DeltaT(max) predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but to other heat-based sap flow systems that require a zero flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003).
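
    For reference, the commonly cited form of Granier's empirical calibration, which the proposed optimization rearranges, relates the sap flux density F_d to a dimensionless index K built from DeltaT and DeltaT(max):

        K = \frac{\Delta T_{\max} - \Delta T}{\Delta T}, \qquad
        F_d = 118.99 \times 10^{-6}\, K^{1.231}\ \mathrm{m^{3}\,m^{-2}\,s^{-1}},

    so an error in DeltaT(max) propagates nonlinearly into F_d, which is the sensitivity the authors analyze.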

  2. An alternative method to estimate zero flow temperature differences for Granier's thermal dissipation technique.

    PubMed

    Regalado, Carlos M; Ritter, Axel

    2007-08-01

    Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (DeltaT) between a heated and an unheated probe when sap flow is zero (DeltaT(max)). Classically, DeltaT(max) has been estimated from the maximum predawn DeltaT, assuming that sap flow is negligible at nighttime. However, because sap flow may continue during the night, the maximum predawn DeltaT value may underestimate the true DeltaT(max). No alternative method has yet been proposed to estimate DeltaT(max) when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in DeltaT(max) may amplify through sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors, hence the need for a correct estimate of DeltaT(max). By rearranging Granier's original formula, an optimization method to compute DeltaT(max) from simultaneous measurements of diurnal DeltaT and micrometeorological variables, without assuming that sap flow is negligible at night, is presented. Some illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although DeltaT(max) values obtained by the proposed method may be similar in some instances to the DeltaT(max) predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but to other heat-based sap flow systems that require a zero flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003). PMID:17472936

  3. Rain estimation from satellites: An examination of the Griffith-Woodley technique

    NASA Technical Reports Server (NTRS)

    Negri, A. J.; Adler, R. F.; Wetzel, P. J.

    1983-01-01

    The Griffith-Woodley Technique (GWT) is an approach to estimating precipitation using infrared observations of clouds from geosynchronous satellites. It is examined in three ways: an analysis of the terms in the GWT equations; a case study of infrared imagery portraying convective development over Florida; and the comparison of a simplified equation set and resultant rain map to results using the GWT. The objective is to determine the dominant factors in the calculation of GWT rain estimates. Analysis of a single day's convection over Florida produced a number of significant insights into various terms in the GWT rainfall equations. Due to the definition of clouds by a threshold isotherm the majority of clouds on this day did not go through an idealized life cycle before losing their identity through merger, splitting, etc. As a result, 85% of the clouds had a defined life of 0.5 or 1 h. For these clouds the terms in the GWT which are dependent on cloud life history become essentially constant. The empirically derived ratio of radar echo area to cloud area is given a singular value (0.02) for 43% of the sample, while the rain-rate term is 20.7 mm h-1 for 61% of the sample. For 55% of the sampled clouds the temperature weighting term is identically 1.0. Cloud area itself is highly correlated (r = 0.88) with GWT computed rain volume. An important, discriminating parameter in the GWT is the temperature defining the coldest 10% cloud area. The analysis further shows that the two dominant parameters in rainfall estimation are the existence of cold cloud and the duration of cloud over a point.

  4. Site-effect modelling and estimation using microtremor measurements and robust H/V technique.

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Zaslavsky, Y.; Dan, H.

    2003-04-01

    Site-effect estimation from the horizontal-to-vertical spectral ratio (H/V) of microtremor (Nakamura's technique) is widespread, in spite of its embedded shortcomings and related restrictions. H/V is usually determined in a series of time windows and then averaged. In the majority of practical studies this gives a satisfactory result. However, many researchers complain that the estimates obtained this way for the first-mode frequency F0, and especially its amplitude A0, are unstable or in poor agreement with earthquake-inferred observations. We deal with the problem in terms of signal-to-noise ratio (SNR). Following a general approach, we can consider microtremor recordings obtained by vertical and horizontal seismometers as input and output signals of a linear dynamic system (LDS) observed with additive input and output noise. Theoretically, the averaged H/V is a biased estimate, but it tends to the true transfer function when the input and output SNR is large enough. The SNR depends on a number of factors responsible for deviation of the 1D model from the true 3D process, determined by the geological and wave velocity structure as well as by useful and interfering microtremor sources, their configuration, spectral intensity, remoteness, etc. In practice the SNR may vary significantly over short periods of time, thus spoiling the H/V results. Evidently, time periods where the SNR is large are preferable, but we have no means to compute the SNR from observations. However, it is possible to judge it indirectly. For this purpose we compute H/V in the time-frequency domain, which helps to determine and collect time windows where H/V is stationary or shows evidence of data clustering. The remaining H/V values are deleted or taken with smaller weights for averaging, thus providing robust F0 and A0 estimation. Interactive and automatic procedures using this principle have been designed and verified on a number of simulated and real datasets, showing good performance.
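
    A minimal sketch of the window-by-window H/V computation described above (spectral smoothing, e.g. Konno-Ohmachi, and the robust weighting itself are omitted): it returns one H/V curve per window so that unstable windows can be deleted or down-weighted before averaging.

    import numpy as np

    def hv_ratio(ns, ew, vert, fs, win_s=60.0, taper=np.hanning):
        """Window-by-window horizontal-to-vertical spectral ratio from
        three-component microtremor records (arrays sampled at `fs` Hz)."""
        n = int(win_s * fs)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        curves = []
        for start in range(0, len(vert) - n + 1, n):
            w = taper(n)
            spec = lambda x: np.abs(np.fft.rfft(x[start:start + n] * w))
            horiz = np.sqrt(0.5 * (spec(ns) ** 2 + spec(ew) ** 2))   # quadratic mean of horizontals
            curves.append(horiz / np.maximum(spec(vert), 1e-12))
        return freqs, np.array(curves)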

  5. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident
    sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector--to-
    r...

  6. Estimation of coronary reserve in left anterior descending and circumflex coronary arteries by regional thermodilution technique.

    PubMed

    Kurita, A; Azorin, J; Granier, A; Bourassa, M G

    1982-09-01

    The present study attempted to determine whether a reduction in regional venous maximal coronary flow can indicate the presence of significant coronary stenosis. The great cardiac vein flow and the coronary sinus outflow were measured simultaneously in 8 open-chest dogs by a continuous thermodilution technique using a triple-thermistor catheter or two separate thermistor catheters. The left anterior descending and circumflex coronary inflows were recorded using electromagnetic flow probes. With successive 70% coronary arterial stenoses, maximal coronary flow and coronary reserve decreased significantly in the great cardiac vein and the coronary sinus. Significant correlations were found between the flows in the left anterior descending artery and in the great cardiac vein (r = 0.81) and between those in the circumflex artery and in the coronary sinus minus the great cardiac vein (r = 0.79) throughout the periods of preocclusion, occlusion and reactive hyperemic response. There were no significant changes in heart rate and hemodynamics. Using continuous thermodilution techniques, the inflows of the left anterior descending and the circumflex coronary arteries at a stenosis greater than 70% could be estimated from the changes in regional venous outflows.

  7. Uncertainty in Estimation of Bioenergy Induced Lulc Change: Development of a New Change Detection Technique.

    NASA Astrophysics Data System (ADS)

    Singh, N.; Vatsavai, R. R.; Patlolla, D.; Bhaduri, B. L.; Lim, S. J.

    2015-12-01

    Recent estimates of bioenergy-induced land use land cover change (LULCC) have large uncertainty due to misclassification errors in the LULC datasets used for analysis. These uncertainties are further compounded when data are modified by merging classes, aggregating pixels, and changing classification methods over time. Hence the LULCC computed using these derived datasets is more a reflection of changes in classification methods, changes in input data, and data manipulation than of actual changes on the ground. Furthermore, results are constrained by the geographic extent, update frequency, and resolution of the dataset. To overcome these limitations we have developed a change detection system to identify yearly as well as seasonal changes in LULC patterns. Our method uses hierarchical clustering, which works by grouping objects into a hierarchy based on the phenological similarity of different vegetation types. The algorithm explicitly models vegetation phenology to reduce spurious changes. We apply our technique to globally available Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data at 250-meter resolution. We analyze 10 years of bi-weekly data to predict changes in the mid-western US as a case study. The results of our analysis are presented and its advantages over existing techniques are discussed.
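
    As a rough illustration of the clustering step (not the authors' implementation), the sketch below groups per-pixel NDVI profiles by phenological similarity with agglomerative clustering; pooling the two years into a single clustering keeps the labels directly comparable. Array names and the cluster count are assumptions.

```python
# Illustrative sketch: hierarchical clustering of annual NDVI profiles and a
# simple year-to-year change flag. Input arrays are assumed, not the paper's data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def phenology_clusters(profiles, n_clusters=6):
    """profiles: (n_samples, n_composites) NDVI time series; returns cluster labels."""
    Z = linkage(profiles, method="ward")       # agglomerative (Ward) clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Pool both years into one clustering so labels are comparable, then flag pixels
# whose phenology class changes between years (hypothetical ndvi_year1/2 arrays):
# pooled = np.vstack([ndvi_year1, ndvi_year2])
# labels = phenology_clusters(pooled)
# n = ndvi_year1.shape[0]
# changed = labels[:n] != labels[n:]
```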

  8. Use of binary logistic regression technique with MODIS data to estimate wild fire risk

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Di, Liping; Yang, Wenli; Bonnlander, Brian; Li, Xiaoyan

    2007-11-01

    Many forest fires occur across the globe each year, destroying life and property and strongly impacting ecosystems. In recent years, wildland fires and altered fire disturbance regimes have become a significant management and science problem affecting ecosystems and the wildland/urban interface across the United States and globally. In this paper, we discuss the estimation of 504 probability models for forecasting fire risk for 14 fuel types, 12 months, and one day/week/month in advance, which use 19 years of historical fire data in addition to meteorological and vegetation variables. MODIS land products are utilized as a major data source, and binary logistic regression was adopted to estimate fire probability. To better model the change of fire risk with the transition of the seasons, several spatial and temporal stratification strategies were applied. To explore the possibilities of real-time prediction, the Matlab distributed computing toolbox was used to accelerate the prediction. Finally, this study gives an evaluation and validation of the predictions based on the ground truth collected. Validation results indicate that these fire risk models have achieved nearly 70% prediction accuracy, and that MODIS data are a potential data source for implementing near real-time fire risk prediction.
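
    A minimal sketch of the binary logistic regression step is given below. The feature set (e.g., NDVI, land surface temperature, humidity, wind) and the synthetic data are illustrative assumptions, not the paper's 504 fitted models.

```python
# Binary logistic regression for fire/no-fire probability on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))        # e.g. NDVI, LST, relative humidity, wind speed
# synthetic label generated from an assumed underlying logistic relation
y = (rng.random(5000) < 1 / (1 + np.exp(-(0.8 * X[:, 1] - 0.6 * X[:, 0])))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
p_fire = model.predict_proba(X_te)[:, 1]       # predicted fire occurrence probability
print("accuracy:", model.score(X_te, y_te))
```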

  9. ROV advanced magnetic survey for revealing archaeological targets and estimating medium magnetization

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev

    2013-04-01

    Magnetic surveying is one of the most widely applied geophysical methods for searching for and localizing objects with contrasting magnetic properties (for instance, in Israel detailed magnetic surveys have been successfully applied at more than 60 archaeological sites (Eppelbaum, 2010, 2011; Eppelbaum et al., 2011, 2010)). However, a land magnetic survey at comparatively large archaeological sites (with observation grids of 0.5 x 0.5 or 1 x 1 m) may take 5-10 days. At the same time, the new generation of Remote Operation Vehicles (ROVs) - small, maneuverable vehicles - can fly at levels of a few (even one) meters above the earth's surface (following the relief or flying straight). With such an ROV, precise magnetic field measurements (at a rate of 20-25 observations per second) may be completed within 10-30 minutes, moreover at different levels above the earth's surface. Such geophysical investigations should have an extremely low operating cost. Finally, measurements of geophysical fields at different observation levels could provide new and unique geophysical-archaeological information (Eppelbaum, 2005; Eppelbaum and Mishne, 2011). The developed interpretation methodology for advanced analysis of magnetic anomalies (Khesin et al., 1996; Eppelbaum et al., 2001; Eppelbaum et al., 2011) may be successfully applied to ROV magnetic surveys for delineation of archaeological objects and estimation of the averaged magnetization of the geological medium. This methodology includes: (1) a non-conventional procedure for eliminating the secondary effect of temporary magnetic variations, (2) calculation of the influence of rugged relief by use of a correlation method, (3) estimation of the medium magnetization, (4) application of various informational and wavelet algorithms for revealing weak anomalous effects against the strong noise background, and (5) advanced procedures for quantitative analysis of magnetic anomalies (they are applicable in conditions of rugged relief, inclined magnetization, and an unknown level of the total

  10. Techniques for estimating the magnitude and frequency of floods in rural basins of South Carolina, 1999

    USGS Publications Warehouse

    Feaster, Toby D.; Tasker, Gary D.

    2002-01-01

    Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, and levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina. The region-of-influence method uses regression techniques to develop a unique
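
    For the at-site step, a hedged sketch of fitting a log-Pearson Type III distribution to water-year peaks and reading off recurrence-interval flows is shown below; the peak-flow values are invented, and operational USGS practice additionally weights the station skew with a generalized regional skew.

```python
# Log-Pearson Type III fit to annual peak flows and recurrence-interval quantiles.
# Peak-flow values are illustrative only.
import numpy as np
from scipy.stats import pearson3

peaks = np.array([312., 845., 560., 1230., 770., 430., 980., 660., 1500., 890.])
logq = np.log10(peaks)

skew, loc, scale = pearson3.fit(logq)          # sample fit; regional skew weighting omitted
for T in (2, 10, 100):
    q_T = 10 ** pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
    print(f"{T}-year flow ~ {q_T:.0f} cfs")
```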

  11. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    SciTech Connect

    Unal, Cetin; Williams, Brian; Mc Clure, Patrick; Nelson, Ralph A

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, experiments were the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations. The cost-saving goals of these programs will require us to minimize the required number of validation experiments. Utilization of more multi-scale multi-physics models introduces complexities in the validation of predictive tools. Traditional methodologies will have to be modified to address these arising issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow for extension of results to areas of the validation domain that are not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  12. Statistical Technique for Intermediate and Long-Range Estimation of 13-Month Smoothed Solar Flux and Geomagnetic Index

    NASA Technical Reports Server (NTRS)

    Niehuss, K. O.; Euler, H. C., Jr.; Vaughan, W. W.

    1996-01-01

    This report documents the Marshall Space Flight Center (MSFC) 13-month smoothed solar flux (F(sub 10.7)) and geomagnetic index (A(sub p)) intermediate (months) and long-range (years) statistical estimation technique, referred to as the MSFC Lagrangian Linear Regression Technique (MLLRT). Estimates of future solar activity are needed as updated input to upper atmosphere density models used for satellite and spacecraft orbital lifetime predictions. An assessment of the MLLRT computer program's products is provided for 5-year periods from the date estimates were made. This was accomplished for a number of past solar cycles.
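
    The 13-month smoothing itself is conventionally a centered running mean with half weight on the two end months; the sketch below implements that common definition and is not necessarily the exact MLLRT preprocessing.

```python
# Conventional 13-month smoothing of a monthly series (e.g., F10.7 or Ap).
import numpy as np

def smooth_13_month(monthly):
    monthly = np.asarray(monthly, dtype=float)
    w = np.ones(13)
    w[0] = w[-1] = 0.5                         # half weight on the two end months
    w /= w.sum()                               # normalize (12 effective months)
    sm = np.full_like(monthly, np.nan)         # undefined near the series ends
    for i in range(6, len(monthly) - 6):
        sm[i] = np.dot(monthly[i - 6:i + 7], w)
    return sm
```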

  13. Comparison of the egg flotation and egg candling techniques for estimating incubation day of Canada Goose nests

    USGS Publications Warehouse

    Reiter, M.E.; Andersen, D.E.

    2008-01-01

    Both egg flotation and egg candling have been used to estimate incubation day (often termed nest age) in nesting birds, but little is known about the relative accuracy of these two techniques. We used both egg flotation and egg candling to estimate incubation day for Canada Geese (Branta canadensis interior) nesting near Cape Churchill, Manitoba, from 2000 to 2007. We modeled variation in the difference between estimates of incubation day using each technique as a function of true incubation day, as well as variation in error rates with each technique as a function of true incubation day. We also evaluated the effect of error in the estimated incubation day on estimates of daily survival rate (DSR) and nest success using simulations. The mean difference between concurrent estimates of incubation day based on egg flotation minus egg candling at the same nest was 0.85 ± 0.06 (SE) days. The positive difference in favor of egg flotation and the magnitude of the difference in estimates of incubation day did not vary as a function of true incubation day. Overall, both egg flotation and egg candling overestimated incubation day early in incubation and underestimated incubation day later in incubation. The average difference between true hatch date and estimated hatch date did not differ from zero days for egg flotation, but egg candling overestimated true hatch date by about 1 d (true - estimated). Our simulations suggested that error associated with estimating the incubation day of nests, and subsequently exposure days, using either egg candling or egg flotation would have minimal effects on estimates of DSR and nest success. Although egg flotation was slightly less biased, both methods provided comparable and accurate estimates of incubation day and subsequent estimates of hatch date and nest success throughout the entire incubation period. © 2008 Association of Field Ornithologists.

  14. Irrigated rice area estimation using remote sensing techniques: Project's proposal and preliminary results. [Rio Grande do Sul, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.

    1984-01-01

    The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.

  15. Dynamic rain fade compensation techniques for the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1992-01-01

    The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.

  16. Craniospinal Irradiation Techniques: A Dosimetric Comparison of Proton Beams With Standard and Advanced Photon Radiotherapy

    SciTech Connect

    Yoon, Myonggeun; Shin, Dong Ho; Kim, Jinsung; Kim, Jong Won; Kim, Dae Woong; Park, Sung Yong; Lee, Se Byeong; Kim, Joo Young; Park, Hyeon-Jin; Park, Byung Kiu; Shin, Sang Hoon

    2011-11-01

    Purpose: To evaluate the dosimetric benefits of advanced radiotherapy techniques for craniospinal irradiation in cancer in children. Methods and Materials: Craniospinal irradiation (CSI) using three-dimensional conformal radiotherapy (3D-CRT), tomotherapy (TOMO), and proton beam treatment (PBT) in the scattering mode was planned for each of 10 patients at our institution. Dosimetric benefits and organ-specific radiation-induced cancer risks were based on comparisons of dose-volume histograms (DVHs) and on the application of organ equivalent doses (OEDs), respectively. Results: When we analyzed the organ-at-risk volumes that received 30%, 60%, and 90% of the prescribed dose (PD), we found that PBT was superior to TOMO and 3D-CRT. On average, the doses delivered by PBT to the esophagus, stomach, liver, lung, pancreas, and kidney were 19.4 Gy, 0.6 Gy, 0.3 Gy, 2.5 Gy, 0.2 Gy, and 2.2 Gy for the PD of 36 Gy, respectively, which were significantly lower than the doses delivered by TOMO (22.9 Gy, 4.5 Gy, 6.1 Gy, 4.0 Gy, 13.3 Gy, and 4.9 Gy, respectively) and 3D-CRT (34.6 Gy, 3.6 Gy, 8.0 Gy, 4.6 Gy, 22.9 Gy, and 4.3 Gy, respectively). Although the average doses delivered by PBT to the chest and abdomen were significantly lower than those of 3D-CRT or TOMO, these differences were reduced in the head-and-neck region. OED calculations showed that the risk of secondary cancers in organs such as the stomach, lungs, thyroid, and pancreas was much higher when 3D-CRT or TOMO was used than when PBT was used. Conclusions: Compared with photon techniques, PBT showed improvements in most dosimetric parameters for CSI patients, with lower OEDs to organs at risk.

  17. Advances in turbulent mixing techniques to study microsecond protein folding reactions

    PubMed Central

    Kathuria, Sagar V.; Chan, Alexander; Graceffa, Rita; Nobrega, R. Paul; Matthews, C. Robert; Irving, Thomas C.; Perot, Blair; Bilsel, Osman

    2013-01-01

    Recent experimental and computational advances in the protein folding arena have shown that the readout of the one-dimensional sequence information into three-dimensional structure begins within the first few microseconds of folding. The initiation of refolding reactions has been achieved by several means, including temperature jumps, flash photolysis, pressure jumps and rapid mixing methods. One of the most commonly used means of initiating refolding of chemically-denatured proteins is by turbulent flow mixing with refolding dilution buffer, where greater than 99% mixing efficiency has been achieved within 10’s of microseconds. Successful interfacing of turbulent flow mixers with complementary detection methods, including time-resolved Fluorescence Spectroscopy (trFL), Förster Resonance Energy Transfer (FRET), Circular Dichroism (CD), Small-Angle X-ray Scattering (SAXS), Hydrogen Exchange (HX) followed by Mass Spectrometry (MS) and Nuclear Magnetic Resonance Spectroscopy (NMR), Infrared Spectroscopy (IR), and Fourier Transform IR Spectroscopy (FTIR), has made this technique very attractive for monitoring various aspects of structure formation during folding. Although continuous-flow (CF) mixing devices interfaced with trFL detection have a dead time of only 30 µs, burst-phases have been detected in this time scale during folding of peptides and of large proteins (e.g., CheY and TIM barrels). Furthermore, a major limitation of CF mixing technique has been the requirement of large quantities of sample. In this brief communication, we will discuss the recent flurry of activity in micromachining and microfluidics, guided by computational simulations, that are likely to lead to dramatic improvements in time resolution and sample consumption for CF mixers over the next few years. PMID:23868289

  18. Application of Energy Integration Techniques to the Design of Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Levri, Julie; Finn, Cory

    2000-01-01

    Exchanging heat between hot and cold streams within an advanced life support system can save energy. This savings will reduce the equivalent system mass (ESM) of the system. Different system configurations are examined under steady-state conditions for various percentages of food growth and waste treatment. The scenarios investigated represent possible design options for a Mars reference mission. Reference mission definitions are drawn from the ALSS Modeling and Analysis Reference Missions Document, which includes definitions for space station evolution, Mars landers, and a Mars base. For each scenario, streams requiring heating or cooling are identified and characterized by mass flow, supply and target temperatures and heat capacities. The Pinch Technique is applied to identify good matches for energy exchange between the hot and cold streams and to calculate the minimum external heating and cooling requirements for the system. For each pair of hot and cold streams that are matched, there will be a reduction in the amount of external heating and cooling required, and the original heating and cooling equipment will be replaced with a heat exchanger. The net cost savings can be either positive or negative for each stream pairing, and the priority for implementing each pairing can be ranked according to its potential cost savings. Using the Pinch technique, a complete system heat exchange network is developed and heat exchangers are sized to allow for calculation of ESM. The energy-integrated design typically has a lower total ESM than the original design with no energy integration. A comparison of ESM savings in each of the scenarios is made to direct future Pinch Analysis efforts.
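
    The minimum external heating and cooling targets that the Pinch Technique identifies can be computed with the standard problem-table cascade, sketched below for an invented pair of hot and cold streams and an assumed minimum approach temperature of 10 K; this illustrates the method and is not the reference-mission stream data.

```python
# Problem-table (Pinch) calculation of minimum external heating and cooling.
# Streams: (supply T [K], target T [K], heat capacity flowrate CP [kW/K]); all invented.
hot_streams  = [(400.0, 320.0, 2.0), (370.0, 310.0, 3.0)]
cold_streams = [(300.0, 390.0, 2.5), (315.0, 360.0, 1.5)]
dTmin = 10.0

# Shift hot streams down and cold streams up by dTmin/2, then build temperature intervals.
shifted = [(ts - dTmin / 2, tt - dTmin / 2, cp, "hot") for ts, tt, cp in hot_streams] + \
          [(ts + dTmin / 2, tt + dTmin / 2, cp, "cold") for ts, tt, cp in cold_streams]
bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)

cascade, heat = [0.0], 0.0
for hi, lo in zip(bounds[:-1], bounds[1:]):
    net_cp = 0.0
    for ts, tt, cp, kind in shifted:
        top, bot = max(ts, tt), min(ts, tt)
        if bot <= lo and top >= hi:            # stream spans this interval
            net_cp += cp if kind == "hot" else -cp
    heat += net_cp * (hi - lo)                 # cumulative surplus (+) or deficit (-)
    cascade.append(heat)

q_hot_min = max(0.0, -min(cascade))            # minimum external heating
q_cold_min = cascade[-1] + q_hot_min           # minimum external cooling
print(f"Q_hot,min = {q_hot_min:.1f} kW, Q_cold,min = {q_cold_min:.1f} kW")
```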

  19. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  20. Analysis of deformation patterns through advanced DINSAR techniques in Istanbul megacity

    NASA Astrophysics Data System (ADS)

    Balik Sanli, F.; Calò, F.; Abdikan, S.; Pepe, A.; Gorum, T.

    2014-09-01

    As a result of Turkey's economic growth and heavy migration from rural areas, Istanbul has experienced a high urbanization rate, with severe impacts on the environment in terms of pressure on natural resources, land-cover changes and uncontrolled sprawl. As a consequence, the city has become extremely vulnerable to natural and man-made hazards, inducing ground deformation phenomena that threaten buildings and infrastructure and often cause significant socio-economic losses. Therefore, the detection and monitoring of such deformation patterns is of primary importance for hazard and risk assessment as well as for the design and implementation of effective mitigation strategies. The aim of this work is to analyze the spatial distribution and temporal evolution of deformations affecting the Istanbul metropolitan area by exploiting advanced Differential SAR Interferometry (DInSAR) techniques. In particular, we apply the Small BAseline Subset (SBAS) approach to a dataset of 43 TerraSAR-X images acquired, between November 2010 and June 2012, along descending orbits with an 11-day revisit time and a 3 m × 3 m spatial resolution. The SBAS processing allowed us to remotely detect and monitor subsidence patterns over the whole urban area as well as to provide detailed information at the scale of the single building. Such SBAS measurements, effectively integrated with ground-based monitoring data and thematic maps, allow us to explore the relationship between the detected deformation phenomena and urbanization, contributing to improved urban planning and management.

  1. Advanced real-time dynamic scene generation techniques for improved performance and fidelity

    NASA Astrophysics Data System (ADS)

    Bowden, Mark H.; Buford, James A.; Mayhall, Anthony J.

    2000-07-01

    Recent advances in real-time synthetic scene generation for Hardware-in-the-loop (HWIL) testing at the U.S. Army Aviation and Missile Command (AMCOM) Aviation and Missile Research, Development, and Engineering Center (AMRDEC) improve both performance and fidelity. Modeling ground target scenarios requires tradeoffs because of limited texture memory for imagery and limited main memory for elevation data. High- resolution insets have been used in the past to provide better fidelity in specific areas, such as in the neighborhood of a target. Improvements for ground scenarios include smooth transitions for high-resolution insets to reduce high spatial frequency artifacts at the borders of the inset regions and dynamic terrain paging to support large area databases. Transport lag through the scene generation system, including sensor emulation and interface components, has been dealt with in the past through the use of sub-window extraction from oversize scenes. This compensates for spatial effects of transport lag but not temporal effects. A new system has been developed and used successfully to compensate for a flashing coded beacon in the scene. Other techniques have been developed to synchronize the scene generator with the seeker under test (SUT) and to model atmospheric effects, sensor optic and electronics, and angular emissivity attenuation.

  2. Classification of human colonic tissues using FTIR spectra and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Zwielly, A.; Argov, S.; Salman, A.; Bogomolny, E.; Mordechai, S.

    2010-04-01

    One of the major public health hazards is colon cancer. There is a great need to develop new methods for early detection of cancer: if colon cancer is detected and treated early, a cure rate of more than 90% can be achieved. In this study we used FTIR microscopy (MSP), which has shown good potential over the last 20 years in the fields of medical diagnostics and early detection of abnormal tissues. A large database of FTIR microscopic spectra was acquired from 230 human colonic biopsies. Five different subgroups were included in our database: normal and cancer tissues as well as three stages of benign colonic polyps, namely mild, moderate and severe polyps, which are precursors of carcinoma. In this study we applied advanced mathematical and statistical techniques, including principal component analysis (PCA) and linear discriminant analysis (LDA), to the human colonic FTIR spectra in order to differentiate among the subgroups mentioned. Good classification accuracy between the normal, polyp and cancer groups was achieved, with approximately an 85% success rate. Our results showed that there is great potential in developing FTIR microspectroscopy as a simple, reagent-free, viable tool for early detection of colon cancer, in particular the early stages of premalignancy among benign colonic polyps.
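
    A schematic of the PCA-then-LDA classification chain, run on synthetic stand-in "spectra", is shown below; the dimensions, class count and cross-validation setup are illustrative assumptions.

```python
# PCA for dimensionality reduction followed by LDA classification, on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, n_wavenumbers = 46, 450           # ~230 "spectra", 5 groups (assumed sizes)
X = np.vstack([rng.normal(loc=k * 0.05, size=(n_per_class, n_wavenumbers))
               for k in range(5)])             # normal, 3 polyp grades, cancer (synthetic)
y = np.repeat(np.arange(5), n_per_class)

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```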

  3. Characterization techniques for the high-brightness particle beams of the Advanced Photon Source (APS)

    SciTech Connect

    Lumpkin, A.H.

    1993-08-01

    The Advanced Photon Source (APS) will be a third-generation synchrotron radiation (SR) user facility in the hard x-ray regime (10-100 keV). The design objectives for the 7-GeV storage ring include a positron beam natural emittance of 8 × 10^-9 m-rad at an average current of 100 mA. Proposed methods for measuring the transverse and longitudinal profiles will be described. Additionally, a research and development effort using an rf gun as a low-emittance source of electrons for injection into the 200- to 650-MeV linac subsystem is underway. This latter system is projected to produce electron beams with a normalized, rms emittance of approximately 2π mm-mrad at peak currents of near one hundred amps. This interesting characterization problem will also be briefly discussed. The combination of both source types within one laboratory facility will stimulate the development of diagnostic techniques in these parameter spaces.

  4. Application of different techniques to obtain spatial estimates of debris flows erosion and deposition depths

    NASA Astrophysics Data System (ADS)

    Boreggio, Mauro; Gregoretti, Carlo; Degetto, Massimo; Bernard, Martino

    2016-04-01

    In Alpine regions, debris flows endanger settlements and human life. Danger mitigation strategies based on the preparation of hazard maps are necessary tools for current land planning. To date, hazard maps are obtained by using one- or two-dimensional numerical models that are able to forecast the potentially inundated areas, after careful calibration of those input parameters that directly affect the flow motion and its interaction with the ground surface (sediment entrainment or deposition). In principle, the reliability of these numerical models can be tested by flume experiments in the laboratory using, for example, particle and water mixtures. However, for more realistic materials including coarse particles, the scaling effects are still difficult to account for. In some cases, where there are enough data (for example, point measurements of flow depths and velocities or spatial estimates of erosion and deposition depths), these models can be tested against field observations. As regards the spatial estimates of debris flow erosion and deposition depths, different approaches can be followed to obtain them, depending mainly on both the type and accuracy of the available initial data. In this work, we explain the methods that have been employed to obtain the maps of erosion and deposition depths for three debris flows that occurred in the Dolomites area (North-Eastern Italian Alps). The three events are those that occurred at Rio Lazer (Trento) on the 4th of November 1966, at Fiames (Belluno) on the 5th of July 2006, and at Rio Val Molinara (Trento) on the 15th of August 2010. For each case study, we present the available initial data and the related problems, the techniques that have been used to overcome them, and finally the results obtained.

  5. Technique development for estimation of CO2 and CH4 concentration using radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Prasad, Prabhunath; Rastogi, Shantanu; Singh, Rp; Panigrahy, S.

    2012-07-01

    Atmospheric carbon dioxide (CO2) and methane (CH4) are well-known greenhouse gases. Accurate, spatially and temporally continuous estimation of these gases is essential for many environmental studies. Satellite observations provide important input to the global monitoring of greenhouse gases, and this area of research, which can help reduce the uncertainty in measurements of trace gas concentrations, has evolved rapidly. Spectral techniques are used to derive GHG concentrations by relating satellite-derived differential absorption indices to gas concentrations. In the present work, spectra were simulated to study the effect of different CO2 and CH4 concentrations on the estimation of atmospheric transmittance in the near-infrared spectral region. Forward simulation was carried out using the GUI-based PcLnWin3.1 model, which uses calculations made by FASCODE with the HITRAN molecular spectroscopic database. It was found that for CO2 the suitable spectral window is the 1.6 μm (6250 cm^-1) band, where there are two prominent absorptions, at 1.576 μm (6348 cm^-1) due to 2ν1+2ν2+ν3 and at 1.606 μm (6228 cm^-1) due to ν1+4ν2+ν3; of these, the 1.606 μm band is dominant and more suitable. The suitable CH4 spectral window is 1.666 μm where, although most of the P branch is lost in water absorption, the Q and R branches are sensitive and not affected by any other absorption. The optimum spectral resolution for correct determination of gas concentrations was found: a resolving power of about 8000 (Δλ ~ 0.2 nm) is needed for detection of atmospheric CO2 and CH4 concentrations in the 1.60 μm and 1.66 μm spectral regions, respectively.

  6. Integrating auxiliary data and geophysical techniques for the estimation of soil clay content using CHAID algorithm

    NASA Astrophysics Data System (ADS)

    Abbaszadeh Afshar, Farideh; Ayoubi, Shamsollah; Besalatpour, Ali Asghar; Khademi, Hossein; Castrignano, Annamaria

    2016-03-01

    This study was conducted to estimate soil clay content at two depths using geophysical techniques (Ground Penetrating Radar, GPR, and Electromagnetic Induction, EMI) and ancillary variables (remote sensing and topographic data) in an arid region of southeastern Iran. GPR measurements were performed along ten transects of 100 m length with a line spacing of 10 m, and EMI measurements were made every 10 m on the same transects at six sites. Ten soil cores were sampled randomly at each site, soil samples were taken from the depths of 0-20 and 20-40 cm, and the clay fraction of each of the sixty soil samples was measured in the laboratory. Clay content was predicted using three different sets of properties, including geophysical data, ancillary data, and a combination of both, as inputs to multiple linear regression (MLR) and the decision tree-based Chi-Squared Automatic Interaction Detection (CHAID) algorithm. The results of the CHAID and MLR models with all combined data showed that the geophysical data were the most important variables for the prediction of clay content at both depths in the study area. The proposed MLR model, using the combined data, could explain only 0.44 and 0.31 of the total variability of clay content at the 0-20 and 20-40 cm depths, respectively. The coefficient of determination (R2) values for the clay content prediction using the constructed CHAID model with the combined data were 0.82 and 0.76 at the 0-20 and 20-40 cm depths, respectively. CHAID models therefore showed a greater potential for predicting soil clay content from geophysical and ancillary data, while traditional regression methods (i.e., the MLR models) did not perform as well. Overall, the results may encourage researchers to use georeferenced GPR and EMI data as ancillary variables and the CHAID algorithm to improve the estimation of soil clay content.
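
    Scikit-learn has no CHAID implementation, so the sketch below uses a CART regression tree as a stand-in to show the tree-based prediction of clay content from combined geophysical and ancillary covariates; the feature columns and data are invented.

```python
# CART regression tree used as a stand-in for CHAID; synthetic covariates and target.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# assumed columns: GPR attribute, EMI apparent conductivity, NDVI, elevation
X = rng.normal(size=(60, 4))
clay = 25 + 6 * X[:, 1] + 2 * X[:, 0] + rng.normal(scale=2, size=60)   # synthetic clay %

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=5, random_state=0)
r2 = cross_val_score(tree, X, clay, cv=5, scoring="r2")
print("cross-validated R2:", r2.mean())
```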

  7. Development of the Estimation Service of the Earth's Surface Fluid Load Effects for Space Geodetic Techniques

    NASA Astrophysics Data System (ADS)

    Takiguchi, H.; Gotoh, T.; Otsubo, T.

    2010-12-01

    Temporal changes of surface loading due to the mass redistribution of the fluid envelope of the Earth, i.e., the atmosphere, hydrosphere, and cryosphere, cause the Earth to deform and consequently change the coordinates of observation sites. The coordinate changes can be measured by space geodetic techniques such as VLBI and GPS. From the viewpoint of crustal movements, such displacements act as noise and should be eliminated. In 2006, for the reduction of these influences, we estimated the crustal displacements due to atmospheric loading (AL), non-tidal ocean loading (NTOL), continental water loading (CWL) and snow loading (SL), and we showed that a combination of AL, NTOL, and CWL can eliminate about 20% of the annual signal in GPS coordinate time series (Takiguchi et al., 2006). We also applied the correction to the data of the 1997 Bungo Channel slow slip event and confirmed that the loading correction can be applied well to the analysis of slow slip events. In this study, we are developing a service that calculates the displacements due to the Earth's surface fluid loads for space geodetic techniques. In the previous study, we showed the influences of several loads and the necessity of load corrections for precise geodetic analysis. However, it is not easy to calculate the influences of the loads, so we are planning to develop a web-based displacement database. This database runs as a service that calculates the load displacements at arbitrary times and locations for any user, and can handle several loads such as AL, NTOL and CWL. We are also planning to provide load-corrected site coordinates for worldwide GPS sites analyzed with the ‘concerto’ version 4 GPS analysis program developed by NICT. In the presentation, we will introduce the calculation service and the results of the load correction analysis. This work was supported by JSPS KAKENHI (Grant-in-Aid for Young Scientists (B) 21740333).

  8. Comparison of two non-linear prediction techniques for estimation of some intact rock parameters

    NASA Astrophysics Data System (ADS)

    Yagiz, Saffet; Sezer, Ebru; Gokceoglu, Candan

    2010-05-01

    Traditionally, regression techniques have been used to predict some rock properties from their physical and index parameters. For this purpose, numerous models and empirical equations have been proposed in the literature to predict the uniaxial compressive strength (UCS) and the elasticity modulus (E) of intact rocks. Two powerful modeling techniques for this purpose are non-linear multivariable regression (NLMR) and artificial neural networks (ANN). The aims of the study are to develop models to predict the UCS and E of rocks using these predictive tools; to investigate whether the two-cycle or the four-cycle slake durability index, used as an input parameter in the models, demonstrates better characterization capacity for carbonate rocks; and to introduce two new performance ranking approaches, based on a performance index and a degree of consistency, to select the best predictor among the developed models, whose complexity and rank cannot be resolved by the simple ranking approach introduced previously in the literature. For these purposes, seven types of carbonate rocks were collected from quarries in southwestern Turkey and their properties, including the uniaxial compressive strength, the Schmidt hammer rebound, effective porosity, dry unit weight, P-wave velocity, the modulus of elasticity, and both the two- and four-cycle slake durability indices, were determined to establish a dataset used for construction of the models. As a result of this study, it is found that the four-cycle slake durability index exhibits more characterization capacity for carbonate rocks in the models than the two-cycle index. Also, the ANN models having two outputs (UCS and E) exhibit more accurate estimation capacity than the NLMR models. In addition, the newly introduced performance ranking index and degree of consistency may be accepted as useful indicators for obtaining the performance ranking of complex models. Consequently
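
    As a hedged illustration of the two-output ANN idea (one network predicting UCS and E simultaneously), the sketch below uses a small multilayer perceptron on synthetic index-property data; the input set follows the abstract loosely and none of the numbers are the study's.

```python
# Multi-output neural network regression (UCS and E) on synthetic index properties.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# assumed columns: Schmidt hammer, effective porosity, dry unit weight,
# P-wave velocity, four-cycle slake durability index
X = rng.normal(size=(120, 5))
Y = np.column_stack([60 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=120),   # UCS (MPa)
                     30 + 4 * X[:, 3] - 2 * X[:, 1] + rng.normal(scale=2, size=120)])  # E (GPa)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, Y)                                # single network, two simultaneous outputs
print(model.predict(X[:3]))
```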

  9. The development of optical microscopy techniques for the advancement of single-particle studies

    SciTech Connect

    Marchuk, Kyle

    2013-05-15

    Single particle orientation and rotational tracking (SPORT) has recently become a powerful optical microscopy tool that can expose many molecular motions. Unfortunately, there is not yet a single microscopy technique that can decipher all particle motions in all environmental conditions, thus there are limitations to current technologies. Within, the two powerful microscopy tools of total internal reflection and interferometry are advanced to determine the position, orientation, and optical properties of metallic nanoparticles in a variety of environments. Total internal reflection is an optical phenomenon that has been applied to microscopy to produce either fluorescent or scattered light. The non-invasive far-field imaging technique is coupled with a near-field illumination scheme that allows for better axial resolution than confocal microscopy and epi-fluorescence microscopy. By controlling the incident illumination angle using total internal reflection fluorescence (TIRF) microscopy, a new type of imaging probe called “non-blinking” quantum dots (NBQDs) were super-localized in the axial direction to sub-10-nm precision. These particles were also used to study the rotational motion of microtubules being propelled by the motor protein kinesin across the substrate surface. The same instrument was modified to function under total internal reflection scattering (TIRS) microscopy to study metallic anisotropic nanoparticles and their dynamic interactions with synthetic lipid bilayers. Utilizing two illumination lasers with opposite polarization directions at wavelengths corresponding to the short and long axis surface plasmon resonance (SPR) of the nanoparticles, both the in-plane and out-of-plane movements of many particles could be tracked simultaneously. When combined with Gaussian point spread function (PSF) fitting for particle super-localization, the binding status and rotational movement could be resolved without degeneracy. TIRS microscopy was also used to

  11. Estimation of Sub Hourly Glacier Albedo Values Using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

    2013-04-01

    Glaciers are the most important fresh water reservoirs, storing about 67% of the total fresh water. Unfortunately, they are retreating and some small glaciers have already disappeared. Thus, snow glacier melt (SGM) estimation plays an important role in water resources management. Whether SGM is estimated by a complete energy balance or a simplified method, albedo is an important input in most of the methods. However, it is a variable quantity depending on the ground surface and local conditions. The present research presents a new approach for estimating sub-hourly albedo values using different artificial intelligence techniques, such as artificial neural networks and decision trees, along with measured and easy-to-obtain data. The models were developed using measured data from the Zongo-Ore station located on the Bolivian tropical glacier Zongo (68°10' W, 16°15' S). This station automatically records every 30 minutes several meteorological parameters such as incoming short-wave radiation, outgoing short-wave radiation, temperature and relative humidity. The ANN model used was the Multi Layer Perceptron, while the decision tree used was the M5 model. Both models were trained using the WEKA software and validated using the cross-validation method. After analysing the model performances, it was concluded that the decision tree models have a better performance. The model with the best performance was then validated with measured data from the Ecuadorian tropical glacier Antizana (78°09'W, 0°28'S). The model predicts the sub-hourly albedo with an overall mean absolute error of 0.103. The highest errors occur for measured albedo values higher than 0.9. Considering that this is an extreme value coincident with low measured values of incoming short-wave radiation, it is reasonable to assume that such values include errors due to censored data. Assuming a maximum albedo of 0.9 improved the accuracy of the model, reducing the MAE to less than 0.1. Considering that the

  12. Advances in regional crop yield estimation over the United States using satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Johnson, D. M.; Dorn, M. F.; Crawford, C.

    2015-12-01

    Since the dawn of earth observation imagery, particularly from systems like Landsat and the Advanced Very High Resolution Radiometer, there has been an overarching desire to estimate crop production regionally by remote sensing. Research efforts integrating space-based imagery into yield models to achieve this have indeed paralleled these systems through the years, yet the development of a truly useful crop production monitoring system has arguably been slow in coming. As a result, relatively few organizations have operationalized the concept, and this is most acute in regions of the globe where there are no alternative sources of crop production data being collected. However, the National Agricultural Statistics Service (NASS) has continued to push for this type of data source as a means to complement its long-standing, traditional crop production survey efforts, which are financially costly to the government and create undue respondent burden on farmers. Corn and soybeans, the two largest field crops in the United States, have been the focus of satellite-based production monitoring by NASS for the past decade. Data from the Moderate Resolution Imaging Spectroradiometer (MODIS) have been seen as the most pragmatic input source for modeling yields, primarily because of the daily revisit capability and reasonable ground sample resolution. The research methods presented here are broad but provide a summary of what is useful and adoptable with satellite imagery in terms of crop yield estimation. Corn and soybeans are of particular focus, but other major staple crops like wheat and rice are also presented. NASS will demonstrate that while MODIS provides a slew of vegetation-related products, the traditional normalized difference vegetation index (NDVI) is still ideal. Results using land surface temperature products, also generated from MODIS, are also shown. Beyond the MODIS data itself, NASS research has also focused efforts on understanding a

  13. Analysis of advanced european nuclear fuel cycle scenarios including transmutation and economical estimates

    SciTech Connect

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-07-01

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors, in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., the reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one, introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over a period of 200 years, looking for equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible in terms of nuclear resource demand (U and Pu). Concerning the no-transmutation cases, the second scenario considerably reduces the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that elimination of the LWR MA legacy requires, on the one hand, a maximum fraction of 33% of the FR fleet (i.e., a peak value of 26 FR units) dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, a maximum number of ADS plants accounting for 5% of electricity generation is predicted in the fourth scenario (i.e., 35 ADS units). Regarding the economic analysis, the estimates show an increase in LCOE (levelized cost of electricity), averaged over the whole period, with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios, respectively, and of 34% for the fourth scenario. (authors)

  14. APPLICATION OF ADVANCED IN VITRO TECHNIQUES TO MEASURE, UNDERSTAND AND PREDICT THE KINETICS AND MECHANISMS OF XENOBIOTIC METABOLISM

    EPA Science Inventory

    We have developed a research program in metabolism that involves numerous collaborators across EPA as well as other federal and academic labs. A primary goal is to develop and apply advanced in vitro techniques to measure, understand and predict the kinetics and mechanisms of xen...

  15. Assessing the development and application of the accelerometry technique for estimating energy expenditure.

    PubMed

    Halsey, Lewis G; Shepard, Emily L C; Wilson, Rory P

    2011-03-01

    A theoretically valid proxy of energy expenditure is the acceleration of an animal's mass due to the movement of its body parts. Acceleration can be measured by an accelerometer and recorded onto a data logging device. Relevant studies have usually derived a measure of acceleration from the raw data that represents acceleration purely due to movement of the animal. This is termed 'overall dynamic body acceleration' (ODBA) and to date has proved a robust derivation of acceleration for use as an energy expenditure proxy. Acceleration data loggers are generally easy to deploy and the measures recorded appear robust to slight variation in location and orientation. This review discusses important issues concerning the accelerometry technique for estimating energy expenditure and ODBA; deriving ODBA, calibrating ODBA, acceleration logger recording frequencies, scenarios where ODBA is less likely to be valid, and the power in recording acceleration and heart rate together. While present evidence suggests that ODBA may not quantify energy expenditure during diving by birds and mammals, several recent studies have assessed changes in mechanical work in such species qualitatively through variation in ODBA during periods of submergence. The use of ODBA in field metabolic studies is likely to continue growing, supported by its relative ease of use and range of applications. PMID:20837157
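
    ODBA itself is straightforward to compute once a running mean is used to separate the static (postural) component on each axis; the sketch below assumes tri-axial data and a roughly 2 s smoothing window, both of which are common choices rather than prescriptions of the review.

```python
# Overall dynamic body acceleration (ODBA) from tri-axial accelerometer data.
import numpy as np

def odba(ax, ay, az, fs=32, smooth_s=2.0):
    """Per-sample ODBA in the same units as the input (e.g. g); fs in Hz."""
    n = max(1, int(smooth_s * fs))
    kernel = np.ones(n) / n
    total = np.zeros(len(ax))
    for a in (ax, ay, az):
        static = np.convolve(a, kernel, mode="same")   # running mean = static component
        total += np.abs(a - static)                    # absolute dynamic component
    return total
```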

  16. Artificial Intelligence Techniques for the Estimation of Direct Methanol Fuel Cell Performance

    NASA Astrophysics Data System (ADS)

    Hasiloglu, Abdulsamet; Aras, Ömür; Bayramoglu, Mahmut

    2016-04-01

    Artificial neural networks and neuro-fuzzy inference systems are well-known artificial intelligence techniques used for black-box modelling of complex systems. In this study, feed-forward artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are used for modelling the performance of a direct methanol fuel cell (DMFC). Current density (I), fuel cell temperature (T), methanol concentration (C), liquid flow-rate (q) and air flow-rate (Q) are selected as input variables to predict the cell voltage. Polarization curves were obtained for 35 different operating conditions according to a statistically designed experimental plan. In the modelling study, various subsets of input variables and various types of membership function are considered. A feed-forward architecture with one hidden layer is used in the ANN modelling. The optimum performance is obtained with the input set (I, T, C, q) using twelve hidden neurons and a sigmoidal activation function. On the other hand, a first-order Sugeno inference system is applied in the ANFIS modelling, and the optimum performance is obtained with the input set (I, T, C, q) using sixteen fuzzy rules and triangular membership functions. The test results show that the ANN model estimates the polarization curve of the DMFC more accurately than the ANFIS model.

  17. Static torque-angle relation of human elbow joint estimated with artificial neural network technique.

    PubMed

    Uchiyama, T; Bessho, T; Akazawa, K

    1998-06-01

    Static relations between elbow joint angle and torque at constant muscle activity in normal volunteers were investigated with the aid of an artificial neural network technique. A subject sat on a chair and moved his upper arm and forearm in a horizontal plane at the height of his shoulder. The subject was instructed to maintain the elbow joint at a pre-determined angle. The wrist was then pulled to extend the elbow joint by the gravitational force of a weight hanging from a pulley. Integrated electromyograms (IEMGs), elbow and shoulder joint angles, and elbow joint torque were measured. The relation among IEMGs, joint angles and torque was then modeled with the aid of an artificial neural network, where IEMGs and joint angles were the inputs and torque was the output. After back propagation learning, we presented various combinations of IEMGs and shoulder and elbow joint angles to the model and estimated the elbow joint torque to obtain the torque-angle relation for constant muscle activation. The elbow joint torque increased and then decreased with extension of the elbow joint. This suggests that if the forearm is displaced from an equilibrium point, the torque-angle relation would not act like a simple spring. In view of the musculoskeletal structure of the elbow joint, the relation between the elbow joint angle and the moment arm of the elbow flexor muscles seems to have a dominant effect on the torque-angle relation. PMID:9755039

  18. Arsenic risk mapping in Bangladesh: a simulation technique of cokriging estimation from regional count data.

    PubMed

    Hassan, M Manzurul; Atkins, Peter J

    2007-10-01

    Risk analysis with spatial interpolation methods from a regional database on to a continuous surface is of contemporary interest. Groundwater arsenic poisoning in Bangladesh and its impact on human health has been one of the "biggest environmental health disasters" in current years. It is ironic that so many tubewells have been installed in recent times for pathogen-free drinking water but the water pumped is often contaminated with toxic levels of arsenic. This paper seeks to analyse the spatial pattern of arsenic risk by mapping composite "problem regions" in southwest Bangladesh. It also examines the cokriging interpolation method in analysing the suitability of isopleth maps for different risk areas. GIS-based data processing and spatial analysis were used for this research, along with state-of-the-art decision-making techniques. Apart from the GIS-based buffering and overlay mapping operations, a cokriging interpolation method was adopted because of its exact interpolation capacity. The paper presents an interpolation of regional estimates of arsenic data for spatial risk mapping that overcomes the areal bias problem for administrative boundaries. Moreover, the functionality of the cokriging method demonstrates the suitability of isopleth maps that are easy to read.
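
    As a simplified stand-in for the cokriging step (which additionally exploits a correlated secondary variable), the sketch below interpolates synthetic point measurements with ordinary kriging via the third-party pykrige package; locations, values and the variogram model are assumptions.

```python
# Ordinary kriging of synthetic arsenic point measurements onto a regular grid.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(4)
x, y = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)              # well locations (km)
arsenic = 50 + 30 * np.sin(x / 20) + rng.normal(scale=5, size=80)    # µg/L, synthetic

ok = OrdinaryKriging(x, y, arsenic, variogram_model="spherical")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
z_hat, ss = ok.execute("grid", gridx, gridy)   # interpolated surface and kriging variance
```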

  19. Spectrally efficient direct-detected OFDM transmission employing an iterative estimation and cancellation technique.

    PubMed

    Peng, Wei-Ren; Wu, Xiaoxia; Feng, Kai-Ming; Arbab, Vahid R; Shamee, Bishara; Yang, Jeng-Yuan; Christen, Louis C; Willner, Alan E; Chi, Sien

    2009-05-25

    We demonstrate a linearly field-modulated, direct-detected virtual SSB-OFDM (VSSB-OFDM) transmission with an RF tone placed at the edge of the signal band. By employing the iterative estimation and cancellation technique for the signal-signal beat interference (SSBI) at the receiver, our approach alleviates the need of the frequency gap, which is typically reserved for isolating the SSBI, and saves half the electrical bandwidth, thus being very spectrally efficient. We derive the theoretical model for the VSSB-OFDM system and detail the signal processing for the iterative approach conducted at the receiver. Possible limitations for this iterative approach are also given and discussed. We successfully transmit a 10 Gbps, 4-quadrature-amplitude-modulation (QAM) VSSB-OFDM signal through 340 km of uncompensated standard single mode fiber (SSMF) with almost no penalty. In addition, the simulated results show that the proposed scheme has an approximately 2 dB optical-signal-to-noise-ratio (OSNR) gain and has a better chromatic dispersion (CD) tolerance compared with the previous intensity-modulated SSB-OFDM system.

  20. Estimating the gas and dye quantities for modified tracer technique measurements of stream reaeration coefficients

    USGS Publications Warehouse

    Rathbun, R.E.

    1979-01-01

    Measuring the reaeration coefficient of a stream with a modified tracer technique has been accomplished by injecting either ethylene or ethylene and propane together and a rhodamine-WT dye solution into the stream. The movement of the tracers through the stream reach after injection is described by a one-dimensional diffusion equation. The peak concentrations of the tracers at the downstream end of the reach depend on the concentrations of the tracers in the stream at the injection site, the longitudinal dispersion coefficient, the mean water velocity, the length of the reach, and the duration of the injection period. The downstream gas concentrations also depend on the gas desorption coefficients of the reach. The concentrations of the tracer gases in the stream at the injection site depend on the flow rates of the gases through the injection diffusers, the efficiency of the gas absorption process, and the stream discharge. The concentration of dye in the stream at the injection site depends on the flow rate of the dye solution, the concentration of the dye solution, and the stream discharge. Equations for estimating the gas flow rates, the quantities of the gases, the dye concentration, and the quantity of dye together with procedures for determining the variables in these equations are presented. (Woodard-USGS)
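
    A simplified worked example of these relationships, for a slug (instantaneous) injection and first-order gas desorption, is given below; the constant-rate-injection and absorption-efficiency terms of the full method are omitted and all numbers are illustrative.

```python
# Peak concentrations at the downstream end of a reach for a slug tracer injection
# under the one-dimensional advection-dispersion model, with first-order gas desorption.
import math

M = 500.0        # injected tracer mass (g), assumed
A = 12.0         # stream cross-sectional area (m^2), assumed
u = 0.30         # mean water velocity (m/s), assumed
D = 15.0         # longitudinal dispersion coefficient (m^2/s), assumed
K_gas = 2.0e-4   # gas desorption coefficient (1/s), assumed
x = 3000.0       # reach length to the sampling site (m), assumed

t = x / u                                             # travel time of the concentration peak
c_dye_peak = M / (A * math.sqrt(4 * math.pi * D * t)) # conservative-dye peak (g/m^3)
c_gas_peak = c_dye_peak * math.exp(-K_gas * t)        # gas peak reduced by desorption
print(f"peak dye ~ {c_dye_peak*1000:.1f} mg/m^3, peak gas ~ {c_gas_peak*1000:.1f} mg/m^3")
```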