WAATS: A computer program for Weights Analysis of Advanced Transportation Systems
NASA Technical Reports Server (NTRS)
Glatt, C. R.
1974-01-01
A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed, and sufficient data are presented to estimate weights for a large spectrum of flight vehicles, including horizontal- and vertical-takeoff aircraft, boosters, and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems), embracing the techniques discussed has been written, and user instructions are presented. The program was developed for use in ODIN, the Optimal Design Integration System.
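Classical weight estimating relationships of the kind WAATS embodies are typically power laws regressed from historical vehicle data. A minimal sketch in Python; the coefficients below are invented for illustration and are not WAATS values:

```python
def weight_estimate(a, x, b):
    """Generic power-law weight estimating relationship: W = a * x**b.

    Historical techniques of this kind express a subsystem weight as a
    power function of a driving parameter x (e.g. gross weight, wetted
    area); a and b are regressed from historical vehicle data."""
    return a * x ** b

# Hypothetical coefficients for illustration only (not WAATS values):
# wing weight as a function of gross takeoff weight.
wing_weight = weight_estimate(0.02, 100000.0, 1.1)
print(round(wing_weight, 1))
```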
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
48 CFR 15.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2010 CFR
2010-10-01
... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...
48 CFR 15.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2011 CFR
2011-10-01
... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...
48 CFR 15.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2012 CFR
2012-10-01
... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... estimates. (vi) Comparison of proposed prices with prices obtained through market research for the same or...
NASA Technical Reports Server (NTRS)
Klich, P. J.; Macconochie, I. O.
1979-01-01
A study of an array of advanced earth-to-orbit space transportation systems, with a focus on mass properties and technology requirements, is presented. Methods of estimating weights of these vehicles differ from those used for commercial and military aircraft; the new techniques, which emphasize winged horizontal- and vertical-takeoff advanced systems, are described, utilizing the space shuttle subsystem data base for the weight estimating equations. The weight equations require information on the mission profile, the structural materials, the thermal protection system, and the ascent propulsion system, and allow for the type of construction and various propellant tank shapes. The overall system weights are calculated from this information and incorporated into the Systems Engineering Mass Properties Computer Program.
Quantum correlation measurements in interferometric gravitational-wave detectors
NASA Astrophysics Data System (ADS)
Martynov, D. V.; Frolov, V. V.; Kandhasamy, S.; Izumi, K.; Miao, H.; Mavalvala, N.; Hall, E. D.; Lanza, R.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, S. B.; Ananyeva, A.; Appert, S.; Arai, K.; Aston, S. M.; Ballmer, S. W.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Bartos, I.; Batch, J. C.; Bell, A. S.; Betzwieser, J.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Blair, C. D.; Bork, R.; Brooks, A. F.; Ciani, G.; Clara, F.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Danzmann, K.; Da Silva Costa, C. F.; Daw, E. J.; DeBra, D.; DeRosa, R. T.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fair, H.; Fernández Galiana, A.; Fisher, R. P.; Fritschel, P.; Fulda, P.; Fyffe, M.; Giaime, J. A.; Giardina, K. D.; Goetz, E.; Goetz, R.; Gras, S.; Gray, C.; Grote, H.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hammond, G.; Hanks, J.; Hanson, J.; Hardwick, T.; Harry, G. M.; Heintze, M. C.; Heptonstall, A. W.; Hough, J.; Jones, R.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kissel, J. S.; Korth, W. Z.; Kuehn, G.; Landry, M.; Lantz, B.; Lockerbie, N. A.; Lormand, M.; Lundgren, A. P.; MacInnis, M.; Macleod, D. M.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Mason, K.; Massinger, T. J.; Matichard, F.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Mendell, G.; Merilh, E. L.; Meyers, P. M.; Miller, J.; Mittleman, R.; Moreno, G.; Mueller, G.; Mullavey, A.; Munch, J.; Nuttall, L. K.; Oberling, J.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Pele, A.; Penn, S.; Phelps, M.; Pierro, V.; Pinto, I.; Principe, M.; Prokhorov, L. G.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. J.; Radkins, H.; Raffai, P.; Reid, S.; Reitze, D. 
H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Savage, R. L.; Schofield, R. M. S.; Sellers, D.; Shaddock, D. A.; Shaffer, T. J.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Sorazu, B.; Staley, A.; Strain, K. A.; Tanner, D. B.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Weßels, P.; Willke, B.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, Hang; Yu, Haocun; Zhang, L.; Zucker, M. E.; Zweizig, J.; LSC Instrument Authors
2017-04-01
Quantum fluctuations in the phase and amplitude quadratures of light set limitations on the sensitivity of modern optical instruments. The sensitivity of interferometric gravitational-wave detectors, such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), is limited by quantum shot noise, quantum radiation pressure noise, and a set of classical noises. We show how the quantum properties of light can be used to distinguish these noises using correlation techniques. In particular, in the first part of the paper we present estimates of the coating thermal noise and gas phase noise, hidden below the quantum shot noise in the Advanced LIGO sensitivity curve. We also make projections of the observatory sensitivity during the next science runs. In the second part of the paper we discuss the correlation technique that reveals the quantum radiation pressure noise from the background of classical noises and shot noise. We apply this technique to the Advanced LIGO data collected during the first science run and experimentally estimate the quantum correlations and quantum radiation pressure noise in the interferometer.
Image enhancement and advanced information extraction techniques for ERTS-1 data
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.
1975-01-01
The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm for overcoming problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high-contrast targets such as lakes and ponds.
1982-06-23
Systems Research and Development Service, Washington, D.C. 20591. The work reported in this document was...consider sophisticated signal processing techniques as an alternative method of improving system performance. Some work in this area has already taken place...demands on the frequency spectrum. As noted in Table 1-1, there has been considerable work on advanced signal processing in the MLS context
Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon
2014-04-15
For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for PV system installation. This study therefore aimed to develop a framework for mapping the MADSR using advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data within that scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to a geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. The MADSR map developed through the proposed framework was found to be improved in terms of accuracy. The developed map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for PV system installation.
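The geostatistical step, estimating MADSR at unmeasured locations from surrounding measured sites, can be illustrated with simple inverse-distance weighting. This is only a stand-in for the paper's CBR/geostatistical hybrid, and the station coordinates and values below are invented:

```python
def idw_estimate(stations, target, power=2.0):
    """Inverse-distance-weighted estimate at an unmeasured location.

    stations: list of ((x, y), value) measured points
    target:   (x, y) of the unmeasured location
    A simple geostatistical stand-in for the method described above."""
    num = den = 0.0
    for (x, y), v in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v  # target coincides with a measured station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical MADSR measurements (kWh/m^2/day) at three sites
obs = [((0.0, 0.0), 3.2), ((1.0, 0.0), 3.6), ((0.0, 1.0), 3.4)]
print(round(idw_estimate(obs, (0.5, 0.5)), 3))
```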
Peak-picking fundamental period estimation for hearing prostheses.
Howard, D M
1989-09-01
A real-time peak-picking fundamental period estimation device is described which is used in advanced hearing prostheses for the totally and profoundly deafened. The operation of the peak picker is compared with three well-established fundamental frequency estimation techniques: the electrolaryngograph, which is used as a "standard"; hardware implementations of the cepstral technique; and the Gold/Rabiner parallel processing algorithm. These comparisons illustrate and highlight some of the important advantages and disadvantages that characterize the operation of these techniques. The special requirements of hearing prostheses are discussed with respect to the operation of each device, and the choice of the peak picker is found to be felicitous in this application.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification is presented, consisting of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter/smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.
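Step (1), the state estimation step, can be sketched in its simplest scalar form: a forward Kalman filter pass followed by a backward Rauch-Tung-Striebel smoothing pass. This toy random-walk model is not the rotorcraft dynamics, only the filter/smoother pattern:

```python
def kalman_smoother(zs, q, r, x0=0.0, p0=1.0):
    """Fixed-interval Kalman filter/RTS smoother for a scalar
    random-walk state observed in noise.

    zs: measurements; q: process noise variance; r: measurement noise
    variance. Returns the smoothed state estimates."""
    # Forward Kalman filter pass (state transition and observation = 1)
    xs, ps, xps, pps = [], [], [], []
    x, p = x0, p0
    for z in zs:
        xp, pp = x, p + q          # predict
        k = pp / (pp + r)          # Kalman gain
        x = xp + k * (z - xp)      # measurement update
        p = (1.0 - k) * pp
        xps.append(xp); pps.append(pp); xs.append(x); ps.append(p)
    # Backward Rauch-Tung-Striebel smoothing pass
    xs_s = xs[:]
    for t in range(len(zs) - 2, -1, -1):
        c = ps[t] / pps[t + 1]
        xs_s[t] = xs[t] + c * (xs_s[t + 1] - xps[t + 1])
    return xs_s

smoothed = kalman_smoother([1.0, 1.2, 0.9, 1.1], q=0.01, r=0.1)
print([round(v, 3) for v in smoothed])
```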
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary-mode-polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays at three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
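The nonlinear relationship between discrete-time estimates and the underlying continuous-time parameters can be seen already for a first-order linear system, where the discrete pole is an exponential function of the continuous one. A sketch with hypothetical parameter values (not the ankle model itself):

```python
import math

# A first-order linear system dx/dt = a_c * x, sampled at interval dt,
# has the exact discrete-time model x[k+1] = a_d * x[k] with
# a_d = exp(a_c * dt). Discrete-time identification yields a_d, a
# nonlinear function of the continuous-time (physiological) parameter
# a_c; the continuous parameter is recovered by inverting the map.

def discrete_from_continuous(a_c, dt):
    return math.exp(a_c * dt)

def continuous_from_discrete(a_d, dt):
    return math.log(a_d) / dt

a_c_true = -4.0      # hypothetical continuous-time decay parameter
dt = 0.01            # sampling interval, seconds
a_d = discrete_from_continuous(a_c_true, dt)
print(round(continuous_from_discrete(a_d, dt), 9))
```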
Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten
2018-01-01
Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis only allow assessing differences between parameters that are estimated for different subpopulations, the study at hand introduces a technique that also allows assessing whether two parameter estimates derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we refer to a reduced version of the well-established technology acceptance model.
Results of the 1971 Corn Blight Watch experiment
NASA Technical Reports Server (NTRS)
Macdonald, R. B.; Allen, R. D.; Bauer, M. E.; Clifton, J. W.; Frickson, J. D.; Landgrebe, D. A.
1972-01-01
Advanced remote sensing techniques are used to: (1) detect development and spread of corn leaf blight during the growing season; (2) assess the extent and severity of blight infection; (3) assess the impact of blight on corn production; and (4) estimate the applicability of these techniques to similar situations occurring in the future.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor-of-two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
Environmental chemistry is applied to estimating the exposure of ecosystems and humans to various chemical environmental stressors. Among the stressors of concern are mercury, pesticides, and arsenic. Advanced analytical chemistry techniques are used to measure these stressors ...
Piezocone Penetration Testing Device
DOT National Transportation Integrated Search
2017-01-03
Hydraulic characteristics of soils can be estimated from piezocone penetration test (called PCPT hereinafter) by performing dissipation test or on-the-fly using advanced analytical techniques. This research report presents a method for fast estimatio...
A Technique for Measuring Rotorcraft Dynamic Stability in the 40- by 80-Foot Wind Tunnel
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Bohn, J. G.
1977-01-01
An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.
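For contrast with identification-based approaches, the classical logarithmic-decrement method recovers a damping ratio directly from successive free-decay peaks. This is not the instrumental-variables algorithm of the paper, only a simple baseline showing the quantities being estimated:

```python
import math

def log_decrement(peaks):
    """Estimate the damping ratio from successive response peaks of a
    free-decay record (classical logarithmic-decrement method)."""
    deltas = [math.log(peaks[i] / peaks[i + 1])
              for i in range(len(peaks) - 1)]
    d = sum(deltas) / len(deltas)          # mean log decrement
    return d / math.sqrt(4.0 * math.pi ** 2 + d ** 2)

# Synthetic peaks of a mode with damping ratio 0.05
zeta = 0.05
ratio = math.exp(2.0 * math.pi * zeta / math.sqrt(1.0 - zeta ** 2))
peaks = [1.0 / ratio ** n for n in range(5)]
print(round(log_decrement(peaks), 4))
```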
Evaluation of a technique for satellite-derived area estimation of forest fires
NASA Technical Reports Server (NTRS)
Cahoon, Donald R., Jr.; Stocks, Brian J.; Levine, Joel S.; Cofer, Wesley R., III; Chung, Charles C.
1992-01-01
The advanced very high resolution radiometer (AVHRR) has been found useful for the location and monitoring of both smoke and fires because of its daily observations, the large geographical coverage of the imagery, and the spectral characteristics and spatial resolution of the instrument. This paper discusses the application of AVHRR data to assess the geographical extent of burning. Methods have been developed to estimate the area burned by analyzing the surface area affected by fire with AVHRR imagery. Characteristics of the AVHRR instrument, its orbit, field of view, and archived data sets are discussed relative to the unique surface area of each pixel. The errors associated with this surface area estimation technique are determined using AVHRR-derived area estimates of target regions with known sizes. The technique is used to evaluate the area burned during the Yellowstone fires of 1988.
Bogani, Giorgio; Ditto, Antonino; Martinelli, Fabio; Lorusso, Domenica; Chiappa, Valentina; Donfrancesco, Cristina; Di Donato, Violante; Indini, Alice; Aletti, Giovanni; Raspagliesi, Francesco
2016-02-01
Optimal cytoreduction is one of the main factors improving survival outcomes in patients affected by ovarian cancer (OC). It is estimated that approximately 40% of OC patients have gross disease located on the diaphragm. However, no mature data evaluating outcomes of surgical techniques for the management of diaphragmatic carcinosis exist. In the present study, we aimed to estimate the surgery-related morbidity of different surgical techniques for diaphragmatic cytoreduction in advanced or recurrent OC. PubMed (MEDLINE), Web of Science, and ClinicalTrials.gov databases were searched for records estimating outcomes of diaphragmatic peritoneal stripping (DPS) or diaphragmatic full-thickness resection (DFTR) for OC. The meta-analysis was performed using the Cochrane Review software. For the final analysis, 5 articles were available, including 272 patients. DPS and DFTR were performed in 197 patients (72%) and 75 patients (28%), respectively. Pooled analysis suggested that the estimated pleural effusion rate was 43% and 51% after DPS and DFTR, respectively. The need for pleural punctures or chest tube placement was 4% and 9% after DPS and DFTR, respectively. The rates of postoperative pneumothorax (4% vs 9%; odds ratio, 0.31; 95% confidence interval, 0.05-2.08) and subdiaphragmatic abscess (3% vs 3%; odds ratio, 0.45; 95% confidence interval, 0.09-2.31) were similar after DPS and DFTR. Diaphragmatic surgery is a crucial step during cytoreduction for advanced or recurrent OC. The choice to perform DPS or DFTR depends on whether the diaphragmatic muscle is infiltrated. Both procedures are associated with low pulmonary complication and chest tube placement rates.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled the estimation of parameter values by the sheer power of numerical computation, the so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman filter techniques: the performance, and the robustness towards model-structure uncertainties, of the Takagi-Sugeno observer and of Linear, Extended, and Unscented Kalman Filters are assessed. The Takagi-Sugeno observer and the Kalman filter techniques are compared on reduced-order models of a reference wind turbine with different modelling details. The objective is a systematic comparison under different design assumptions and requirements, together with a numerical evaluation of the reconstruction quality of the wind speed. The benefit of wind speed estimation within wind turbine control is illustrated by a feedforward loop employing the reconstructed wind speed.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
Improving travel information products via robust estimation techniques : final report, March 2009.
DOT National Transportation Integrated Search
2009-03-01
Traffic-monitoring systems, such as those using loop detectors, are prone to coverage gaps, arising from sensor noise, processing errors and : transmission problems. Such gaps adversely affect the accuracy of Advanced Traveler Information Systems. Th...
World Ocean Circulation Experiment (WOCE) Young Investigator Workshops
NASA Technical Reports Server (NTRS)
Austin, Meg
2004-01-01
The World Ocean Circulation Experiment (WOCE) Young Investigator Workshops' goals and objectives are: a) to familiarize Young Investigators with WOCE models, datasets, and estimation procedures; b) to offer intensive hands-on exposure to these models and methods; c) to build collaborations among junior scientists and more senior WOCE investigators; and finally, d) to generate ideas and projects leading to fundable WOCE synthesis projects. To achieve these goals and objectives, the Workshop offers a mixture of tutorial lectures on numerical models and estimation procedures, advanced seminars on current WOCE synthesis activities and related projects, and the opportunity to conduct small projects which put into practice the techniques advanced in the lectures.
ERIC Educational Resources Information Center
Courtney, Matthew Gordon Ray
2013-01-01
Exploratory factor analysis (EFA) is a common technique utilized in the development of assessment instruments. The key question when performing this procedure is how to best estimate the number of factors to retain. This is especially important as under- or over-extraction may lead to erroneous conclusions. Although recent advancements have been…
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth's requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission, we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress toward including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent with the starlight, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise ratio between the planets and speckles improves. Having established a purely focal-plane-based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
Kopsinis, Yannis; Aboutanios, Elias; Waters, Dean A; McLaughlin, Steve
2010-02-01
In this paper, techniques for time-frequency analysis and investigation of bat echolocation calls are studied. Particularly, enhanced resolution techniques are developed and/or used in this specific context for the first time. When compared to traditional time-frequency representation methods, the proposed techniques are more capable of showing previously unseen features in the structure of bat echolocation calls. It should be emphasized that although the study is focused on bat echolocation recordings, the results are more general and applicable to many other types of signal.
Cost and Economics for Advanced Launch Vehicles
NASA Technical Reports Server (NTRS)
Whitfield, Jeff
1998-01-01
Market sensitivity and weight-based cost estimating relationships are key drivers in determining the financial viability of advanced space launch vehicle designs. Due to decreasing space transportation budgets and increasing foreign competition, it has become essential for financial assessments of prospective launch vehicles to be performed during the conceptual design phase. As part of this financial assessment, it is imperative to understand the relationship between market volatility, the uncertainty of weight estimates, and the economic viability of an advanced space launch vehicle program. This paper reports the results of a study that evaluated the economic risk inherent in market variability and the uncertainty of developing weight estimates for an advanced space launch vehicle program. The purpose of this study was to determine the sensitivity of a business case for advanced space flight design with respect to the changing nature of market conditions and the complexity of determining accurate weight estimations during the conceptual design phase. The expected uncertainty associated with these two factors drives the economic risk of the overall program. The study incorporates Monte Carlo simulation techniques to determine the probability of attaining specific levels of economic performance when the market and weight parameters are allowed to vary. This structured approach toward uncertainties allows for the assessment of risks associated with a launch vehicle program's economic performance. This results in the determination of the value of the additional risk placed on the project by these two factors.
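The structured approach described, sampling the uncertain weight and market parameters and counting economic outcomes, can be sketched with a small Monte Carlo loop. The cost model and distributions below are purely illustrative, not the study's actual business case:

```python
import random

def prob_cost_under(target, n=100_000, seed=1):
    """Monte Carlo sketch: sample uncertain vehicle weight and market
    demand, propagate through a hypothetical weight-based cost model,
    and estimate the probability that unit cost stays under a target.
    All numbers are illustrative."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        weight = rng.gauss(100.0, 15.0)            # dry weight (uncertain)
        flights = max(1.0, rng.gauss(20.0, 6.0))   # annual demand (uncertain)
        program_cost = 50.0 + 2.0 * weight         # hypothetical weight-based CER
        unit_cost = program_cost / flights
        if unit_cost < target:
            hits += 1
    return hits / n

p = prob_cost_under(15.0)
print(round(p, 2))
```

With these illustrative distributions, roughly seven in ten sampled scenarios meet the target; in a real study the distributions would come from the market and weight-uncertainty analysis.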
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less-tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating the power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results in terms of testing error, which varies from 0% to 0.86%, when compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and least-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
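A weight-based cost estimating relationship of the kind examined here is commonly fit as a power law, cost = a·W^b, by ordinary least squares in log space. A minimal sketch against an entirely hypothetical historical data base:

```python
import math

# hypothetical historical programs: (dry weight in kg, development cost in $M)
history = [(1000, 50), (5000, 180), (20000, 600), (80000, 2000)]

# fit cost = a * weight**b by least squares on the log-transformed data
xs = [math.log(w) for w, _ in history]
ys = [math.log(c) for _, c in history]
n = len(history)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

def estimate_cost(weight_kg):
    """Point estimate from the fitted weight-cost CER."""
    return a * weight_kg ** b
```

The exponent `b` below 1 reproduces the familiar economy of scale in weight-cost relationships; testing such a fit statistically against real program data is the substance of the study.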
On estimating the phase of a periodic waveform in additive Gaussian noise, part 3
NASA Technical Reports Server (NTRS)
Rauch, L. L.
1991-01-01
Motivated by advances in signal processing technology that support more complex algorithms, researchers have taken a new look at the problem of estimating the phase and other parameters of a nearly periodic waveform in additive Gaussian noise, based on observation during a given time interval. Parts 1 and 2 are very briefly reviewed. In part 3, the actual performances of some of the highly nonlinear estimation algorithms of parts 1 and 2 are evaluated by numerical simulation using Monte Carlo techniques.
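For a single sinusoid in additive white Gaussian noise, the classical (linear-regime) maximum-likelihood phase estimate is the arctangent of the quadrature correlations. This is a minimal baseline sketch with hypothetical signal parameters, not the highly nonlinear algorithms of parts 1-3:

```python
import math, random

def estimate_phase(samples, freq, dt):
    """Arctangent-of-correlations phase estimate for A*sin(2*pi*f*t + phi)
    in additive white Gaussian noise (the classical linear-regime ML form)."""
    i_sum = sum(s * math.cos(2 * math.pi * freq * k * dt)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * freq * k * dt)
                for k, s in enumerate(samples))
    return math.atan2(i_sum, q_sum)

# hypothetical waveform: 5 Hz tone, true phase 0.7 rad, noise sigma 0.3
rng = random.Random(0)
f, dt, phi_true = 5.0, 0.001, 0.7
data = [math.sin(2 * math.pi * f * k * dt + phi_true) + rng.gauss(0, 0.3)
        for k in range(2000)]
phi_hat = estimate_phase(data, f, dt)
```

Evaluating such estimators by repeated noisy trials is exactly the Monte Carlo performance evaluation the abstract refers to.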
Planetary Quarantine Annual Review, Space Technology and Research, July 1971 - July 1972
NASA Technical Reports Server (NTRS)
1973-01-01
The effects of planetary quarantine constraints are assessed for advanced missions and unmanned planetary sample return missions. Considered are natural space environment factors, post launch recontamination effects, spacecraft microbial burden estimation and prediction, and spacecraft cleaning and decontamination techniques.
Introduction to Geostatistics
NASA Astrophysics Data System (ADS)
Kitanidis, P. K.
1997-05-01
Introduction to Geostatistics presents practical techniques for engineers and earth scientists who routinely encounter interpolation and estimation problems when analyzing data from field observations. Requiring no background in statistics, and with a unique approach that synthesizes classic and geostatistical methods, this book offers linear estimation methods for practitioners and advanced students. Well illustrated with exercises and worked examples, Introduction to Geostatistics is designed for graduate-level courses in earth sciences and environmental engineering.
Development and Experimental Application of International Affairs Indicators. Volume A
1974-06-01
DEVELOPMENT AND EXPERIMENTAL APPLICATION OF INTERNATIONAL AFFAIRS INDICATORS, Volume A, Final Report, June 1974. Sponsored by: Defense Advanced...intelligence communities were designed, techniques for estimating the future were developed and tested, and the techniques and indicators were applied to the...past year's effort is that the intelligence community has become increasingly aware of the potential usefulness of quantitative indicators. The
Use of Advanced Machine-Learning Techniques for Non-Invasive Monitoring of Hemorrhage
2010-04-01
that state-of-the-art machine learning techniques, when integrated with novel non-invasive monitoring technologies, could detect subtle physiological...decompensation. Continuous, non-invasively measured hemodynamic signals (e.g., ECG, blood pressures, stroke volume) were used for the development of machine learning algorithms. Accuracy estimates were obtained by building models using 27 subjects and testing on the 28th. This process was repeated 28 times
Recent Advances in Techniques for Starch Esters and the Applications: A Review
Hong, Jing; Zeng, Xin-An; Brennan, Charles S.; Brennan, Margaret; Han, Zhong
2016-01-01
Esterification is one of the most important methods to alter the structure of starch granules and improve its applications. Traditionally, starch esters are prepared by conventional or dual modification techniques, which have the disadvantages of being expensive, requiring reagent overdoses, and being time-consuming. In addition, the degree of substitution (DS) is often considered the primary factor in view of its contribution to estimating the substituted groups of starch esters. In order to improve detection accuracy and production efficiency, different detection techniques, including titration, nuclear magnetic resonance (NMR), Fourier transform infrared spectroscopy (FT-IR), thermal gravimetric analysis/infrared spectroscopy (TGA/IR) and headspace gas chromatography (HS-GC), have been developed for DS. This paper gives a comprehensive overview of the recent advances in DS analysis and starch esterification techniques. Additionally, the advantages, limitations, and some perspectives on future trends of these techniques, as well as the applications of their derivatives in the food industry, are also presented. PMID:28231145
Recommended advanced techniques for waterborne pathogen detection in developing countries.
Alhamlan, Fatimah S; Al-Qahtani, Ahmed A; Al-Ahdal, Mohammed N
2015-02-19
The effect of human activities on water resources has expanded dramatically during the past few decades, leading to the spread of waterborne microbial pathogens. The total global health impact of human infectious diseases associated with pathogenic microorganisms from land-based wastewater pollution was estimated to be approximately three million disability-adjusted life years (DALY), with an estimated economic loss of nearly 12 billion US dollars per year. Although clean water is essential for healthy living, it is not equally granted to all humans. Indeed, people who live in developing countries are challenged every day by an inadequate supply of clean water. Polluted water can lead to health crises that in turn spread waterborne pathogens. Taking measures to assess the water quality can prevent these potential risks. Thus, a pressing need has emerged in developing countries for comprehensive and accurate assessments of water quality. This review presents current and emerging advanced techniques for assessing water quality that can be adopted by authorities in developing countries.
Application of split window technique to TIMS data
NASA Technical Reports Server (NTRS)
Matsunaga, Tsuneo; Rokugawa, Shuichi; Ishii, Yoshinori
1992-01-01
Absorption by the atmosphere in the thermal infrared region is mainly due to water vapor, carbon dioxide, and ozone. As the water vapor content of the atmosphere changes greatly according to weather conditions, it is important to know its amount between the sensor and the ground (e.g., from radiosonde profiles) for atmospheric correction of Thermal Infrared Multispectral Scanner (TIMS) data. On the other hand, various atmospheric correction techniques have already been developed for sea surface temperature estimation from satellites. Among such techniques, the Split Window technique, now widely used for AVHRR (Advanced Very High Resolution Radiometer), uses no radiosonde or other supplementary data, only the difference between observed brightness temperatures in two channels, to estimate atmospheric effects. Application of the Split Window technique to TIMS data is discussed because the availability of atmospheric profile data is not assured when ASTER operates. After these theoretical discussions, the technique is experimentally applied to TIMS data at three ground targets, and the results are compared with atmospherically corrected data using LOWTRAN 7 with radiosonde data.
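The split-window idea can be illustrated in a few lines: surface temperature is estimated from a linear combination of the two channel brightness temperatures, with the channel difference acting as a proxy for water-vapour attenuation. The coefficient γ and the temperatures below are hypothetical placeholders, not values from the paper:

```python
def split_window_ts(t4, t5, gamma=2.5):
    """Two-channel split-window correction: the brightness-temperature
    difference between the ~11 um (T4) and ~12 um (T5) channels scales
    with water-vapour attenuation, so Ts ~= T4 + gamma * (T4 - T5)."""
    return t4 + gamma * (t4 - t5)

# hypothetical brightness temperatures in kelvin
ts = split_window_ts(290.0, 288.5)
```

The appeal noted in the abstract is visible here: no radiosonde profile enters the formula, only the two measured brightness temperatures.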
Non-destructive evaluation of composite materials using ultrasound
NASA Technical Reports Server (NTRS)
Miller, J. G.
1984-01-01
Investigation of the nondestructive evaluation of advanced composite laminates is summarized. Indices derived from the measurement of fundamental acoustic parameters are used to quantitatively estimate the local material properties of the laminate. The following sections describe ongoing studies of phase-insensitive attenuation measurements and discuss several phenomena which influence the previously reported technique of polar backscatter. A simple and effective programmable gate circuit designed for use in estimating attenuation from backscatter is described.
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1980-01-01
The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.
NREL Triples Previous Estimates of U.S. Wind Power Potential (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The National Renewable Energy Laboratory (NREL) recently released new estimates of the U.S. potential for wind-generated electricity, using advanced wind mapping and validation techniques to triple previous estimates of the size of the nation's wind resources. The new study, conducted by NREL and AWS TruePower, finds that the contiguous 48 states have the potential to generate up to 37 million gigawatt-hours annually. In comparison, the total U.S. electricity generation from all sources was roughly 4 million gigawatt-hours in 2009.
Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Morway, E. D.; Healy, R. W.
2016-12-01
Temperature measurements made at multiple depths beneath the sediment-water interface have proven useful for estimating seepage rates from surface-water channels and corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests, or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content within the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimating the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow representing four typical sedimentary environments. Recent improvements in heat-as-a-tracer measurement techniques (i.e., the multi-depth temperature probe), along with the use of modern calibration techniques (i.e., pilot points), provide opportunities for improved calibration of flow models and, subsequently, improved model predictions.
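The pilot-point idea is that hydraulic properties are specified at scattered points whose values the calibrator (e.g., PEST) adjusts, while the field between them is interpolated rather than fixed into a priori zones. A minimal inverse-distance interpolation sketch with hypothetical pilot values (PEST itself typically uses kriging):

```python
import math

def interp_log_k(x, z, pilots, power=2.0):
    """Inverse-distance interpolation of log-conductivity from pilot
    points; the calibrator adjusts the pilot values, and the continuous
    field between them follows from the interpolation."""
    num = den = 0.0
    for px, pz, log_k in pilots:
        d = math.hypot(x - px, z - pz)
        if d < 1e-12:
            return log_k            # query coincides with a pilot point
        w = d ** -power
        num += w * log_k
        den += w
    return num / den

pilots = [(0.0, 0.0, -4.0), (10.0, 0.0, -6.0)]  # hypothetical (x, z, log10 K)
mid = interp_log_k(5.0, 0.0, pilots)
```

This is what lets the inverse model express heterogeneity between measurement locations instead of lumping them into a single zone.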
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
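The timing-triangulation category rests on a simple geometric fact: the arrival-time difference between two detectors confines the source to a ring of constant angle about the baseline. A minimal sketch with a hypothetical delay and baseline (order-of-magnitude only, not the actual network geometry):

```python
import math

C = 299792458.0  # speed of light, m/s

def ring_angle(dt, baseline_m):
    """Angle between the baseline and the source direction implied by a
    measured arrival-time delay: cos(theta) = c * dt / d."""
    cos_theta = C * dt / baseline_m
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp against noise
    return math.acos(cos_theta)

# hypothetical 5 ms delay over a ~3000 km baseline
theta = ring_angle(0.005, 3.0e6)
```

Each detector pair contributes one such ring; intersecting rings (and weighting by timing uncertainty) yields the fast, approximate sky map, whereas the fully coherent Bayesian analysis uses the complete waveform data at much higher latency.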
Test Design Project: Studies in Test Adequacy. Annual Report.
ERIC Educational Resources Information Center
Wilcox, Rand R.
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
Mass transfer between aquifer material and groundwater is often modeled as first-order rate-limited sorption or diffusive exchange between mobile zones and immobile zones with idealized geometries. Recent improvements in experimental techniques and advances in our understanding o...
NASA Astrophysics Data System (ADS)
Alloin, D. M.; Mariotti, J.-M.
Recent advances in optics and observation techniques for very large astronomical telescopes are discussed in reviews and reports. Topics addressed include Fourier optics and coherence, optical propagation and image formation through a turbulent atmosphere, radio telescopes, continuously deformable telescopes for optical interferometry (I), amplitude estimation from speckle I, noise calibration of speckle imagery, and amplitude estimation from diluted-array I. Consideration is given to first-order imaging methods, speckle imaging with the PAPA detector and the Knox-Thompson algorithm, phase-closure imaging, real-time wavefront sensing and adaptive optics, differential I, astrophysical programs for high-angular-resolution optical I, cophasing telescope arrays, aperture synthesis for space observatories, and lunar occultations for milliarcsecond resolution.
NASA Astrophysics Data System (ADS)
Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.
2016-12-01
Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical as well as physical conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas, and subsequently gas hydrate, in marine sediments. The prediction (in effect, guided interpolation) of the key parameters in this study uses a K-nearest-neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors, and minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction with seafloor heat flux to estimate temperature with depth and geologic age, which, with estimates of organic carbon and models of methanogenesis, yield limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
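A K-nearest-neighbor guided interpolation of the sort described can be sketched in a few lines; the predictor vectors and porosity values below are hypothetical illustrations:

```python
import math

def knn_predict(query, samples, k=3):
    """Guided interpolation: average the target value of the k nearest
    observations in predictor space (e.g., sedimentation rate, heat flux)."""
    ranked = sorted(samples, key=lambda s: math.dist(query, s[0]))
    return sum(value for _, value in ranked[:k]) / k

# hypothetical (predictor vector, seafloor porosity) observations
obs = [((0.0, 0.0), 0.60), ((1.0, 0.0), 0.50),
       ((0.0, 1.0), 0.55), ((5.0, 5.0), 0.30)]
phi = knn_predict((0.2, 0.2), obs, k=3)
```

The minimal-preprocessing property the abstract notes is visible here: no model is fit, so the prediction is driven entirely by the data that fall nearest the query point.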
Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime
2010-01-01
This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including background analysis using sensors, limitations, and advances. The performance and reliability of BLDC motor drivers have improved because conventional control and sensing techniques have been enhanced through sensorless technology. Sensorless advances are then reviewed and recent developments in this area introduced, with their inherent advantages and drawbacks, including analysis of practical implementation issues and applications. The study includes an in-depth overview of state-of-the-art back-EMF sensing methods, including Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration, and PWM strategies. The most relevant techniques based on estimation and models are also briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, adaptive observers (full-order and pseudo-reduced-order), and Artificial Neural Networks.
An advanced algorithm for deformation estimation in non-urban areas
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-09-01
This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused for instance by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped and this can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates an object adaptive spatial phase filtering and residual topography removal for an accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein, the phase inversion is performed using an L1 norm minimization which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.
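The contrast between the SVD/L2 inversion of standard SBAS and the more robust L1 minimization can be illustrated with a toy one-parameter phase-rate inversion. Iteratively reweighted least squares is used here as one common way to approximate an L1 solution (an assumption for illustration, not the paper's solver); note how it shrugs off a phase-unwrapping outlier that drags the L2 answer away:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Approximate L1-norm solution of A x ~= b via iteratively
    reweighted least squares; robust to outliers such as the
    phase-unwrapping errors common in non-urban interferograms."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = np.abs(A @ x - b) + eps           # residual magnitudes
        W = np.diag(1.0 / r)                  # down-weight large residuals
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# toy inversion: deformation rate v from interferogram phases vs. time span
spans = np.array([[1.0], [2.0], [3.0], [4.0]])
phases = np.array([1.0, 2.0, 3.0, 40.0])      # last value: unwrapping error
v_l2 = np.linalg.lstsq(spans, phases, rcond=None)[0][0]
v_l1 = irls_l1(spans, phases)[0]
```

The L2 estimate is pulled far from the true rate of 1 by the single bad observation, while the L1 estimate stays close to it, which is the robustness argument made in the abstract.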
Acoustic waveform logging--Advances in theory and application
Paillet, F.L.; Cheng, C.H.; Pennington, W.D.
1992-01-01
Full-waveform acoustic logging has made significant advances in both theory and application in recent years, and these advances have greatly increased the capability of log analysts to measure the physical properties of formations. Advances in theory provide the analytical tools required to understand the properties of measured seismic waves, and to relate those properties to such quantities as shear and compressional velocity and attenuation, and primary and fracture porosity and permeability of potential reservoir rocks. The theory demonstrates that all parts of recorded waveforms are related to various modes of propagation, even in the case of dipole and quadrupole source logging. However, the theory also indicates that these mode properties can be used to design velocity and attenuation picking schemes, and shows how source frequency spectra can be selected to optimize results in specific applications. Synthetic microseismogram computations are an effective tool in waveform interpretation theory; they demonstrate how shear arrival picks and mode attenuation can be used to compute shear velocity and intrinsic attenuation, and formation permeability for monopole, dipole and quadrupole sources. Array processing of multi-receiver data offers the opportunity to apply even more sophisticated analysis techniques. Synthetic microseismogram data is used to illustrate the application of the maximum-likelihood method, semblance cross-correlation, and Prony's method analysis techniques to determine seismic velocities and attenuations. The interpretation of acoustic waveform logs is illustrated by reviews of various practical applications, including synthetic seismogram generation, lithology determination, estimation of geomechanical properties in situ, permeability estimation, and design of hydraulic fracture operations.
Advanced Communication Processing Techniques Held in Ruidoso, New Mexico on 14-17 May 1989
1990-01-01
Criteria: probability of detection and false alarm; variances of parameter estimators; probability of correct classification and rejection. The standard Neyman-Pearson approach for detection, variances for...
Economic implications of current systems
NASA Technical Reports Server (NTRS)
Daniel, R. E.; Aster, R. W.
1983-01-01
The primary goals of this study are to estimate the value of R&D to photovoltaic (PV) metallization systems cost, and to provide a method for selecting an optimal metallization method for any given PV system. The value-added cost and relative electrical performance of 25 state-of-the-art (SOA) and advanced metallization system techniques are compared.
ERIC Educational Resources Information Center
Huang, Francis L.; Cornell, Dewey G.
2016-01-01
Advances in multilevel modeling techniques now make it possible to investigate the psychometric properties of instruments using clustered data. Factor models that overlook the clustering effect can lead to underestimated standard errors, incorrect parameter estimates, and misleading model fit indices. In addition, factor structures may differ depending on…
Bayesian Asymmetric Regression as a Means to Estimate and Evaluate Oral Reading Fluency Slopes
ERIC Educational Resources Information Center
Solomon, Benjamin G.; Forsberg, Ole J.
2017-01-01
Bayesian techniques have become increasingly present in the social sciences, fueled by advances in computer speed and the development of user-friendly software. In this paper, we forward the use of Bayesian Asymmetric Regression (BAR) to monitor intervention responsiveness when using Curriculum-Based Measurement (CBM) to assess oral reading…
An estimator-predictor approach to PLL loop filter design
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1986-01-01
An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
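The estimator-predictor decomposition can be sketched with a simple α-β (phase/frequency) estimator followed by extrapolation over the transport lag; the gains, lag, and input below are hypothetical, not the DSN design values:

```python
def pll_step(state, measured_phase, alpha=0.3, beta=0.05, dt=1.0, lag_steps=2):
    """One update of an estimator-predictor loop filter: an alpha-beta
    estimator tracks phase and frequency recursively, and the predictor
    extrapolates the estimate forward over the known transport lag."""
    phase, freq = state
    pred = phase + freq * dt                      # one-step prediction
    resid = measured_phase - pred                 # phase error
    phase = pred + alpha * resid                  # estimator: phase update
    freq = freq + beta * resid / dt               # estimator: frequency update
    nco_command = phase + freq * dt * lag_steps   # predictor: lag compensation
    return (phase, freq), nco_command

# track a phase ramp (constant frequency offset), noiseless for clarity
state = (0.0, 0.0)
for k in range(200):
    state, cmd = pll_step(state, 0.01 * k)
```

Because the frequency state integrates the residual, the loop tracks a constant-rate phase ramp with zero steady-state error, and the lag term keeps the commanded phase aligned despite the loop delay.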
Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar
NASA Astrophysics Data System (ADS)
Lottman, Brian Todd
1998-09-01
This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectrum. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of the traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced from actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates of the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates of simple wind field statistics between cloud layers.
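A traditional mean-frequency (first-moment) spectral estimator of the kind the novel estimators are compared against can be sketched as follows; the sampling rate, wavelength, and Doppler shift are hypothetical:

```python
import numpy as np

def doppler_velocity(samples, fs, wavelength):
    """First-moment estimate of the Doppler shift from the periodogram of
    complex heterodyne lidar data; line-of-sight v = wavelength * f_d / 2."""
    spec = np.abs(np.fft.fft(samples)) ** 2
    freqs = np.fft.fftfreq(len(samples), 1.0 / fs)
    f_mean = np.sum(freqs * spec) / np.sum(spec)
    return wavelength * f_mean / 2.0

fs, wavelength = 1.0e8, 2.0e-6       # hypothetical 100 MHz sampling, 2 um lidar
f_d = 6.25e6                         # hypothetical Doppler shift, 6.25 MHz
t = np.arange(128) / fs
sig = np.exp(2j * np.pi * f_d * t)   # noiseless complex return
v = doppler_velocity(sig, fs, wavelength)
```

Estimators of this form work well when backscatter and velocity vary slowly over the range gate; the strong range variations inside clouds are what motivate the novel class of estimators in the abstract.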
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
NASA Technical Reports Server (NTRS)
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
2015-01-01
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, in which the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and found to be much more reliable and to offer higher contrast than previously possible with impulse excitation.
Advanced lighting guidelines: 1993. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eley, C.; Tolen, T.M.; Benya, J.R.
1993-12-31
The 1993 Advanced Lighting Guidelines document consists of twelve guidelines that provide an overview of specific lighting technologies and design application techniques utilizing energy-efficient lighting practice. Lighting Design Practice assesses energy-efficient lighting strategies, discusses lighting issues, and explains how to obtain quality lighting design and consulting services. Luminaires and Lighting Systems surveys luminaire equipment designed to take advantage of advanced technology lamp products and includes performance tables that allow for accurate estimation of luminaire light output and power input. The additional ten guidelines -- Computer-Aided Lighting Design, Energy-Efficient Fluorescent Ballasts, Full-Size Fluorescent Lamps, Compact Fluorescent Lamps, Tungsten-Halogen Lamps, Metal Halide and HPS Lamps, Daylighting and Lumen Maintenance, Occupant Sensors, Time Scheduling Systems, and Retrofit Control Technologies -- each provide a product technology overview, discuss current products on the lighting equipment market, and provide application techniques. This document is intended for use by electric utility personnel involved in lighting programs, lighting designers, electrical engineers, architects, lighting manufacturers' representatives, and other lighting professionals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, Richard
2013-08-22
The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been a collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen).
Advanced Natural Language Processing and Temporal Mining for Clinical Discovery
ERIC Educational Resources Information Center
Mehrabi, Saeed
2016-01-01
There has been vast and growing amount of healthcare data especially with the rapid adoption of electronic health records (EHRs) as a result of the HITECH act of 2009. It is estimated that around 80% of the clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered…
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
Computer program for calculating the power-density spectrum (PDS) from a data base generated by the Advanced Continuous Simulation Language (ACSL) uses an algorithm that employs the fast Fourier transform (FFT) to calculate the PDS of a variable. This is accomplished by first estimating the autocovariance function of the variable and then taking the FFT of the smoothed autocovariance function to obtain the PDS. The fast-Fourier-transform technique conserves computer resources.
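The two-step recipe (estimate the autocovariance, then transform its smoothed version) can be sketched as follows. A Bartlett lag window and a plain cosine transform stand in for whatever smoothing and FFT routine the program actually uses; both choices are assumptions made for illustration.

```python
import math

def autocovariance(x, max_lag):
    """Biased sample autocovariance r(k) for lags 0..max_lag."""
    n = len(x)
    mean = sum(x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def power_density_spectrum(x, max_lag):
    """PDS via the cosine transform of the windowed autocovariance."""
    r = autocovariance(x, max_lag)
    # Bartlett lag window smooths the raw autocovariance estimate.
    w = [1.0 - k / (max_lag + 1) for k in range(max_lag + 1)]
    rw = [ri * wi for ri, wi in zip(r, w)]
    # One-sided cosine transform of the even autocovariance sequence;
    # index f corresponds to angular frequency pi*f/max_lag rad/sample.
    return [rw[0] + 2.0 * sum(rw[k] * math.cos(math.pi * f * k / max_lag)
                              for k in range(1, max_lag + 1))
            for f in range(max_lag + 1)]
```

For a sinusoid at 0.25 cycles/sample, the spectrum peaks near index max_lag/2, as expected for this frequency mapping.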
An exploratory investigation of weight estimation techniques for hypersonic flight vehicles
NASA Technical Reports Server (NTRS)
Cook, E. L.
1981-01-01
The three basic methods of weight prediction (fixed-fraction, statistical correlation, and point stress analysis) and some of the computer programs that have been developed to implement them are discussed. A modified version of the WAATS (Weights Analysis of Advanced Transportation Systems) program is presented, along with input data forms and an example problem.
Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S
2013-01-01
Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
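At its simplest, the interpolation-of-averaged-current-drain approach the authors critique reduces to dividing usable charge by a mean drain. The sketch below shows only that arithmetic; the capacity, drain, and usable-fraction numbers are illustrative assumptions, not data for any real IPG, and none of the compounding error sources listed above are modeled.

```python
def battery_years(capacity_mAh, avg_drain_uA, usable_fraction=0.8):
    """Naive longevity estimate: usable charge divided by mean drain."""
    usable_uAh = capacity_mAh * 1000.0 * usable_fraction
    hours = usable_uAh / avg_drain_uA
    return hours / (24.0 * 365.0)

# Hypothetical cell: 1000 mAh, 80% usable, 50 uA averaged drain.
years = battery_years(capacity_mAh=1000.0, avg_drain_uA=50.0)
# years is about 1.8 here; real estimates drift as impedance, usage
# patterns, and self-discharge change the effective drain over time.
```

The review's point is precisely that such a single averaged drain cannot capture voltage dependence or usage variation, which is why the clinical algorithm pairs the estimate with symptom monitoring.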
Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.
Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison
2016-11-01
MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.
Development and use of the incremental twitch subtraction MUNE method in mice.
Hegedus, Janka; Jones, Kelvin E; Gordon, Tessa
2009-01-01
We have used a technique to estimate the number of functioning motor units (MUNE) innervating a muscle in mice based on twitch tension. The MUNE technique was verified by modeling twitch tensions from isolated ventral root stimulation. Analysis of twitch tensions allowed us to identify motor unit fiber types. The MUNE technique was used to compare normal mice with transgenic superoxide dismutase-1 mutant (G93A) mice to assess the time course of motor unit loss with respect to fiber type. Motor unit loss was found to occur well in advance of behavioral changes, and the degree of reinnervation was dependent upon motor unit fiber type.
Position and Speed Control of Brushless DC Motors Using Sensorless Techniques and Application Trends
Gamazo-Real, José Carlos; Vázquez-Sánchez, Ernesto; Gómez-Gil, Jaime
2010-01-01
This paper provides a technical review of position and speed sensorless methods for controlling Brushless Direct Current (BLDC) motor drives, including background on sensor-based control, its limitations, and subsequent advances. The performance and reliability of BLDC motor drivers have improved as conventional control and sensing techniques have been enhanced through sensorless technology. In this paper, sensorless advances are reviewed and recent developments in this area are introduced with their inherent advantages and drawbacks, including the analysis of practical implementation issues and applications. The study includes a deep overview of state-of-the-art back-EMF sensing methods, including Terminal Voltage Sensing, Third Harmonic Voltage Integration, Terminal Current Sensing, Back-EMF Integration, and PWM strategies. Also, the most relevant techniques based on estimation and models are briefly analysed, such as the Sliding-mode Observer, Extended Kalman Filter, Model Reference Adaptive System, Adaptive Observers (full-order and pseudo-reduced-order), and Artificial Neural Networks. PMID:22163582
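As a minimal illustration of the terminal-voltage (back-EMF) sensing idea surveyed above, the sketch below finds zero crossings in a synthetic back-EMF sample sequence. Commutation logic, filtering, the virtual-neutral reference, and the usual 30-degree delay are all omitted, so this is an assumption-laden toy, not a motor driver.

```python
import math

def zero_crossings(samples):
    """Indices where the back-EMF sample sequence changes sign."""
    idx = []
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i] or samples[i - 1] >= 0.0 > samples[i]:
            idx.append(i)
    return idx

# Synthetic back-EMF: two electrical periods, 20 samples per period,
# offset by half a sample so no sample falls exactly on zero.
emf = [math.sin(2.0 * math.pi * (t + 0.5) / 20.0) for t in range(40)]
# zero_crossings(emf) -> [10, 20, 30], the three sign changes.
```

In a real drive, each detected crossing (suitably delayed) would trigger the next commutation step for the inverter.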
Methodologies for Adaptive Flight Envelope Estimation and Protection
NASA Technical Reports Server (NTRS)
Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine
2009-01-01
This paper reports the latest development of several techniques for an adaptive flight envelope estimation and protection system for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft, and flight envelope estimation, real-time decision support can be executed autonomously to improve damage tolerance and flight recoverability. In particular, a bank of adaptive nonlinear fault detection and isolation estimators was developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitations of the impaired aircraft; and online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated in NASA's Generic Transport Model (GTM) simulation environment with simulated actuator faults. Simulation results and remarks on future work are presented.
Development, implementation and evaluation of satellite-aided agricultural monitoring systems
NASA Technical Reports Server (NTRS)
Cicone, R. C.; Crist, E. P.; Metzler, M.; Nuesch, D.
1982-01-01
Research activities in support of the AgRISTARS Inventory Technology Development Project, on the use of aerospace remote sensing for agricultural inventory, are described and include: (1) corn and soybean crop spectral-temporal signature characterization; (2) development of efficient area estimation techniques; and (3) advanced satellite and sensor system definition. Studies include a statistical evaluation of the impact of cultural and environmental factors on crop spectral profiles, the development and evaluation of an automatic crop area estimation procedure, and the joint use of SEASAT-SAR and LANDSAT MSS for crop inventory.
NASA Technical Reports Server (NTRS)
Reed, D. L.; Wallace, R. G.
1981-01-01
The results of system analyses and implementation studies of an advanced location and data collection system (ALDCS), proposed for inclusion on the National Oceanic Satellite System (NOSS) spacecraft, are reported. The system applies Doppler processing and radiofrequency interferometer position location techniques, both alone and in combination. Aspects analyzed include: the constraints imposed by random access to the system by platforms, the RF link parameters, geometric concepts of position and velocity estimation by the two techniques considered, and the effects of electrical measurement errors, spacecraft attitude errors, and geometric parameters on estimation accuracy. Hardware techniques and trade-offs for interferometric phase measurement, ambiguity resolution, and calibration are considered. A combined Doppler-interferometer ALDCS intended to fulfill the NOSS data validation and oceanic research support mission is also described.
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1978-01-01
Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.
Machine intelligence and robotics: Report of the NASA study group
NASA Technical Reports Server (NTRS)
1980-01-01
Opportunities for the application of machine intelligence and robotics in NASA missions and systems were identified. The benefits of successful adoption of machine intelligence and robotics techniques were estimated and forecasts were prepared to show their growth potential. Program options for research, advanced development, and implementation of machine intelligence and robot technology for use in program planning are presented.
Autonomous Object Characterization with Large Datasets
2015-10-18
...desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has... were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification...
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Swissler, Thomas J.
1991-01-01
The focus of the NASA program in remote sensing is primarily the Earth system science and the monitoring of the Earth global changes. One of NASA's roles is the identification and development of advanced sensing techniques, operational spacecraft, and the many supporting technologies necessary to meet the stringent science requirements. Langley Research Center has identified the elements of its current and proposed advanced technology development program that are relevant to global change science according to three categories: sensors, spacecraft, and information system technologies. These technology proposals are presented as one-page synopses covering scope, objective, approach, readiness timeline, deliverables, and estimated funding. In addition, the global change science requirements and their measurement histories are briefly discussed.
NASA Astrophysics Data System (ADS)
McAlpin, D. B.; Meyer, F. J.; Dehn, J.; Webley, P. W.
2016-12-01
In 1976, "The Great Tolbachik Fissure Eruption" became the largest basaltic eruption in the recorded history of the Kamchatka Peninsula. In November 2012, after thirty-six years of quiescence, Tolbachik again erupted, and continued for nine months until its end in August 2013. Observers of the 2012-13 eruption reported a mostly effusive eruption from two main fissures, long, rapidly moving lava flows, and ash clouds of up to 6 km. Initial estimates of effusive activity reported an approximate volume of 0.52 km³ over an area of more than 35 km². In this analysis, we provide updated effusion estimates for the Tolbachik eruption, determined from thermal data acquired by the Advanced Very High Resolution Radiometer (AVHRR) satellites. Each of the four AVHRR satellites carries a broad-band, five-channel sensor that acquires data in the visible and infrared portions of the electromagnetic spectrum, with each satellite completing 14 daily Earth orbits. A critical component of the volume estimates is a determination of fissure size and the area of lava flow at different times during the eruption. For this purpose, we acquired optical satellite images obtained from three orbiting platforms: the Advanced Land Imager (ALI) aboard the Earth Observing-1 (EO-1) satellite, the Operational Land Imager (OLI) aboard Landsat 8, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard NASA's Terra satellite. From these multiple platforms, lava flow maps were prepared from repeat acquisitions over the course of the eruption. Periodic lava flow measurements clarify effusion rates as instantaneous discharge rates, mean effusion rates over time, and an overall effusion rate over the entire eruption. Given the natural limitations of effusion estimates derived from thermal data, our results are compared to effusion estimates derived by DEM differencing to evaluate accuracy.
This analysis is a true multi-sensor technique that affords a method to rapidly quantify effusive volcanic activity in terms of flow temperature, lava volume, and area on a basis coeval to the eruption, and has important implications for scientific and hazard analyses of future volcanic episodes.
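As a rough plausibility check, the mean output rate implied by a reported erupted volume follows from simple arithmetic. The sketch below assumes a ~270-day effusive duration for the stated nine-month eruption; that duration, and treating the reported 0.52 km³ figure as the total flow volume, are assumptions.

```python
def mean_effusion_rate_m3_per_s(volume_km3, duration_days):
    """Mean effusion rate in m^3/s from a total volume and duration."""
    return volume_km3 * 1e9 / (duration_days * 86400.0)

q = mean_effusion_rate_m3_per_s(0.52, 270.0)
# q is roughly 22 m^3/s averaged over the whole eruption; instantaneous
# discharge rates inferred from thermal data vary far more widely.
```

Comparing this whole-eruption mean against time-resolved AVHRR-derived rates is exactly the kind of cross-check the multi-sensor approach enables.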
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced aimed at further accelerating the pose acquisition time and reducing the computational cost. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
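The count-rate-comparison idea can be sketched with a toy Poisson log-likelihood over a grid of candidate source positions. The 1/r² detector response, the noise-free measurements, and all geometry below are simplifying assumptions; the paper's limited-FOV treatment and likelihood-ratio reevaluation are not modeled.

```python
import math

def predicted_counts(detector_xy, source_xy, strength, background):
    """Toy detector response: background plus an inverse-square term."""
    dx = detector_xy[0] - source_xy[0]
    dy = detector_xy[1] - source_xy[1]
    return background + strength / (dx * dx + dy * dy + 1.0)

def log_likelihood(measured, path, source_xy, strength, background):
    """Poisson log-likelihood of the measured series for one candidate."""
    total = 0.0
    for m, p in zip(measured, path):
        lam = predicted_counts(p, source_xy, strength, background)
        total += m * math.log(lam) - lam - math.lgamma(m + 1.0)
    return total

# Straight flight line past a hypothetical source at (3, 4); the
# "measured" series is noise-free, so the grid search recovers it.
path = [(float(x), 0.0) for x in range(-10, 11)]
measured = [predicted_counts(p, (3.0, 4.0), 500.0, 5.0) for p in path]
grid = [(x, y) for x in range(-10, 11) for y in range(1, 10)]
best = max(grid, key=lambda s: log_likelihood(measured, path, s, 500.0, 5.0))
# best == (3, 4) for this toy configuration.
```

With Poisson-noisy counts and a pre-calculated library of test sources, the same scoring yields the posterior surface that the real-time limited-FOV treatment then refines.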
Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D
2012-01-01
Objective: The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods: Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results: Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion: The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
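A minimal sketch of the inverse-probability-weighting idea: each uncensored subject's cost is up-weighted by the inverse probability of remaining uncensored at their follow-up time, compensating for complete cases being systematically cheaper. The censoring-survival function is passed in externally here (in practice it would be a Kaplan-Meier estimate from the data), and the toy records are invented for illustration.

```python
def ipw_mean_cost(records, censor_survival):
    """records: (cost, followup_days, was_censored) triples.
    censor_survival: t -> P(still uncensored at time t)."""
    total = 0.0
    for cost, t, was_censored in records:
        if not was_censored:                     # complete cases only,
            total += cost / censor_survival(t)   # re-weighted upward
    return total / len(records)

# Toy cohort: half censored; with P(uncensored) = 0.5 everywhere, the
# two complete cases each stand in for one censored patient.
recs = [(10000.0, 400, False), (30000.0, 900, False),
        (12000.0, 300, True), (25000.0, 700, True)]
est = ipw_mean_cost(recs, lambda t: 0.5)
# est == 20000.0, versus 19250.0 for the naive full-sample mean.
```

This is the simple reweighted estimator of the review; partitioned and phase-based variants refine it by weighting cost accrued within intervals rather than whole-trajectory totals.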
Fee, David; Izbekov, Pavel; Kim, Keehoon; ...
2017-10-09
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.
Advances in quantifying air-sea gas exchange and environmental forcing.
Wanninkhof, Rik; Asher, William E; Ho, David T; Sweeney, Colm; McGillis, Wade R
2009-01-01
The past decade has seen a substantial amount of research on air-sea gas exchange and its environmental controls. These studies have significantly advanced the understanding of processes that control gas transfer, led to higher quality field measurements, and improved estimates of the flux of climate-relevant gases between the ocean and atmosphere. This review discusses the fundamental principles of air-sea gas transfer and recent developments in gas transfer theory, parameterizations, and measurement techniques in the context of the exchange of carbon dioxide. However, much of this discussion is applicable to any sparingly soluble, non-reactive gas. We show how the use of global variables of environmental forcing that have recently become available and gas exchange relationships that incorporate the main forcing factors will lead to improved estimates of global and regional air-sea gas fluxes based on better fundamental physical, chemical, and biological foundations.
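The kind of wind-speed-based gas-exchange relationship discussed here can be sketched as a quadratic parameterization with a Schmidt-number correction. The coefficient 0.251 (cm/h per m²/s²) and the Sc = 660 normalization follow a commonly used CO2 formulation; the wind-speed input below is an illustrative assumption.

```python
def transfer_velocity_cm_per_h(mean_u10_squared, schmidt, a=0.251):
    """Gas transfer velocity k = a * <U10^2> * (Sc/660)^-0.5 (cm/h)."""
    return a * mean_u10_squared * (schmidt / 660.0) ** -0.5

# Moderately windy ocean: <U10^2> = 60 m^2/s^2 at the CO2 reference
# Schmidt number of 660 gives k of about 15 cm/h.
k = transfer_velocity_cm_per_h(mean_u10_squared=60.0, schmidt=660.0)
```

Using the mean of U10² rather than the square of mean U10 matters because the relationship is nonlinear, which is one reason the globally gridded wind products discussed in the review improve flux estimates.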
2010-01-01
In clinical neurology, a comprehensive understanding of consciousness has been regarded as an abstract concept - best left to philosophers. However, times are changing and the need to clinically assess consciousness is increasingly becoming a real-world, practical challenge. Current methods for evaluating altered levels of consciousness are highly reliant on either behavioural measures or anatomical imaging. While these methods have some utility, estimates of misdiagnosis are worrisome (as high as 43%) - clearly this is a major clinical problem. The solution must involve objective, physiologically based measures that do not rely on behaviour. This paper reviews recent advances in physiologically based measures that enable better evaluation of consciousness states (coma, vegetative state, minimally conscious state, and locked-in syndrome). Based on the evidence to date, electroencephalographic and neuroimaging based assessments of consciousness provide valuable information for evaluation of residual function, formation of differential diagnoses, and estimation of prognosis. PMID:20113490
Trace elemental analysis of human breast cancerous blood by advanced PC-WDXRF technique
NASA Astrophysics Data System (ADS)
Singh, Ranjit; Kainth, Harpreet Singh; Prasher, Puneet; Singh, Tejbir
2018-03-01
The objective of this work is to quantify the trace elements of healthy and non-healthy blood samples by using an advanced polychromatic-source-based wavelength dispersive X-ray fluorescence (PC-WDXRF) technique. Imbalances in the trace elements present in human blood can directly or indirectly contribute to the carcinogenic process. The trace elements 11Na, 12Mg, 15P, 16S, 17Cl, 19K, 20Ca, 26Fe, 29Cu and 30Zn are identified and their concentrations are estimated. The experimental results clearly show the variation and role of the various trace elements present in the non-healthy blood samples relative to the healthy blood samples. These results establish future guidelines for probing the possible roles of essential trace elements in breast carcinogenic processes. The instrumental sensitivity and detection limits for measuring elements in the atomic-number range 11 ≤ Z ≤ 30 are also discussed in the present work.
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They examined existing techniques, developed new ones, and produced and distributed specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The abstracted articles discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a second paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, the dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Because these methods are non-parametric, resolution-limited, and dependent on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques for SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
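As a concrete, minimal illustration of the filter-bank idea, here is a single-level Haar analysis/synthesis bank in plain NumPy with soft thresholding of the detail band; the paper's banks are considerably more sophisticated, and the signal below is synthetic.

```python
import numpy as np

def haar_analysis(x):
    """One level of a Haar filter bank: low-pass and high-pass, each downsampled."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse of haar_analysis: upsample and recombine the two branches."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = haar_analysis(signal)
# Soft-threshold the detail coefficients to suppress speckle-like noise
d_denoised = np.sign(d) * np.maximum(np.abs(d) - 0.05, 0.0)
reconstructed = haar_synthesis(a, d_denoised)
```

Without thresholding the bank is perfectly reconstructing, which is the property multi-level wavelet decompositions build on.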
NASA Technical Reports Server (NTRS)
Srokowski, A. J.
1978-01-01
The problem of obtaining accurate estimates of suction requirements on swept laminar flow control wings was discussed. A fast accurate computer code developed to predict suction requirements by integrating disturbance amplification rates was described. Assumptions and approximations used in the present computer code are examined in light of flow conditions on the swept wing which may limit their validity.
1993-04-01
separation capability. o Demonstrate advanced KKVs in the 6-20 kg weight class. o Test planning for SRAM/LEAP and PATRIOT/LEAP integrated technology... packaging techniques to reduce satellite size, weight, power, and total system costs. Further development of these technologies is absolutely... 1993 o Developed a master plan with a delivery schedule for each light-weight subassembly in the sensor integration payload. o Finalized a contract for
Statistical and Economic Techniques for Site-specific Nematode Management.
Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L
2014-03-01
Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develops a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology, applied in the context of site-specific crop yield response, contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.
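A basic building block of such spatial statistics is a test for correlation among neighboring observations; Moran's I is the classic example. The sketch below uses made-up nematode counts on four adjacent plots and is illustrative only — the paper's spatial econometric models go well beyond this statistic.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation statistic for a given weight matrix."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = values - values.mean()          # deviations from the field mean
    n = len(values)
    num = n * np.sum(w * np.outer(z, z))
    den = w.sum() * np.sum(z ** 2)
    return num / den

# Four plots in a row; neighbors share an edge (rook contiguity)
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
counts = np.array([10.0, 12.0, 30.0, 33.0])  # hypothetical nematode counts
I = morans_i(counts, w)  # positive: similar counts cluster in space
```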
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2005-01-01
Recent advances in the molecular Rayleigh-scattering-based technique have allowed simultaneous measurement of velocity and density fluctuations at high sampling rates. The technique was used to investigate unheated, high-subsonic and supersonic, fully expanded free jets in the Mach number range 0.8 to 1.8. The difference between the Favre-averaged and Reynolds-averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to give lower values of turbulent density fluctuations than the measured data.
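The Favre (density-weighted) average is ũ = ⟨ρu⟩/⟨ρ⟩ versus the Reynolds average ⟨u⟩, so their difference is ⟨ρ′u′⟩/⟨ρ⟩. A toy calculation on synthetic, weakly correlated density/velocity samples (values made up, not the jet data) shows why the difference stays small when density fluctuations are small:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = 300.0 + 30.0 * rng.standard_normal(n)   # axial velocity samples, m/s
# Density weakly correlated with velocity, with small fluctuations (kg/m^3)
rho = 1.0 + 0.02 * (u - 300.0) / 30.0 + 0.01 * rng.standard_normal(n)

u_reynolds = u.mean()                        # Reynolds average <u>
u_favre = (rho * u).mean() / rho.mean()      # Favre average <rho u>/<rho>
difference = u_favre - u_reynolds            # equals cov(rho, u)/<rho>
```

Here cov(ρ,u)/⟨ρ⟩ ≈ 0.6 m/s against a 300 m/s mean, i.e. a 0.2% difference, consistent with the "small difference" finding for these flows.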
Qumseya, Bashar J; Wang, Haibo; Badie, Nicole; Uzomba, Rosemary N; Parasa, Sravanthi; White, Donna L; Wolfsen, Herbert; Sharma, Prateek; Wallace, Michael B
2013-12-01
US guidelines recommend surveillance of patients with Barrett's esophagus (BE) to detect dysplasia. BE conventionally is monitored via white-light endoscopy (WLE) and a collection of random biopsy specimens. However, this approach does not definitively or consistently detect areas of dysplasia. Advanced imaging technologies can increase the detection of dysplasia and cancer. We investigated whether these imaging technologies can increase the diagnostic yield for the detection of neoplasia in patients with BE, compared with WLE and analysis of random biopsy specimens. We performed a systematic review, using Medline and Embase, to identify relevant peer-reviewed studies. Fourteen studies were included in the final analysis, with a total of 843 patients. Our metameter (estimate) of interest was the paired-risk difference (RD), defined as the difference in yield of the detection of dysplasia or cancer using advanced imaging vs WLE. The estimated paired-RD and 95% confidence interval (CI) were obtained using random-effects models. Heterogeneity was assessed by means of the Q statistic and the I² statistic. An exploratory meta-regression was performed to look for associations between the metameter and potential confounders or modifiers. Overall, advanced imaging techniques increased the diagnostic yield for detection of dysplasia or cancer by 34% (95% CI, 20%-56%; P < .0001). A subgroup analysis showed that virtual chromoendoscopy significantly increased the diagnostic yield (RD, 0.34; 95% CI, 0.14-0.56; P < .0001). The RD for chromoendoscopy was 0.35 (95% CI, 0.13-0.56; P = .0001). There was no significant difference between virtual chromoendoscopy and chromoendoscopy, based on a Student t test (P = .45). Based on a meta-analysis, advanced imaging techniques such as chromoendoscopy or virtual chromoendoscopy significantly increase the diagnostic yield for identification of dysplasia or cancer in patients with BE. Copyright © 2013 AGA Institute. 
Published by Elsevier Inc. All rights reserved.
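Random-effects pooling of per-study risk differences, of the kind reported here, is commonly done with the DerSimonian-Laird estimator. The sketch below uses four hypothetical study values, not the 14 studies in the review:

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study risk differences and their variances
rd, ci = random_effects_pool([0.30, 0.40, 0.25, 0.45],
                             [0.01, 0.02, 0.015, 0.01])
```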
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, Alexander F. H.
1992-01-01
Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with measured lignin contents using chemical methods. The regression technique does not take advantage of the spectral shape of the lignin reflectance feature as a diagnostic tool nor the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 microns region consists of spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
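Linear least-squares spectral matching amounts to modeling the measured spectrum as a linear combination of component spectra and solving for the mixing coefficients. The sketch below uses synthetic placeholder basis spectra, not real dry-leaf or liquid-water libraries:

```python
import numpy as np

wavelengths = np.linspace(1.0, 2.5, 50)            # microns
# Hypothetical component spectra standing in for laboratory libraries
dry_leaf = 0.4 + 0.1 * np.sin(3 * wavelengths)
water = np.exp(-2.0 * (wavelengths - 1.9) ** 2)

# Synthesize a "measured" spectrum as a 70/30 mixture plus small noise
A = np.column_stack([dry_leaf, water])
true_coeffs = np.array([0.7, 0.3])
measured = A @ true_coeffs \
    + 0.001 * np.random.default_rng(0).standard_normal(len(wavelengths))

# Least-squares spectral matching: recover the component abundances
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

The recovered water coefficient plays the role of the equivalent-water-thickness term; real retrievals constrain the fit further (e.g. non-negativity).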
Determination of the coronal magnetic field from vector magnetograph data
NASA Technical Reports Server (NTRS)
Mikic, Zoran
1991-01-01
A new algorithm was developed, tested, and applied to determine coronal magnetic fields above solar active regions. The coronal field above NOAA active region AR5747 was successfully estimated on 20 Oct. 1989 from data taken at the Mees Solar Observatory of the Univ. of Hawaii. It was shown that observational data can be used to obtain realistic estimates of coronal magnetic fields. The model has significantly extended the realism with which the coronal magnetic field can be inferred from observations. The understanding of coronal phenomena will be greatly advanced by a reliable technique, such as the one presented, for deducing the detailed spatial structure of the coronal field. The payoff from major current and proposed NASA observational efforts is heavily dependent on the success with which the coronal field can be inferred from vector magnetograms. In particular, the present inability to reliably obtain the coronal field has been a major obstacle to the theoretical advancement of solar flare theory and prediction. The results have shown that the evolutional algorithm can be used to estimate coronal magnetic fields.
Estimation of the prevalence of adverse drug reactions from social media.
Nguyen, Thin; Larsen, Mark E; O'Dea, Bridianne; Phung, Dinh; Venkatesh, Svetha; Christensen, Helen
2017-06-01
This work aims to estimate the degree of adverse drug reactions (ADR) for psychiatric medications from social media, including Twitter, Reddit, and LiveJournal. Advances in lightning-fast cluster computing were employed to process large-scale data, comprising 6.4 terabytes and 3.8 billion records across the three platforms. Rates of ADR were quantified using the SIDER database of drugs and side-effects, and an estimated ADR rate was based on the prevalence of discussion in the social media corpora. Agreement between these measures for a sample of ten popular psychiatric drugs was evaluated using the Pearson correlation coefficient, r, with values between 0.08 and 0.50. Word2vec, a neural word-embedding framework, was utilized to improve the coverage of variants of ADR terms in the unstructured text by identifying syntactically or semantically similar terms. The improved correlation coefficients, between 0.29 and 0.59, demonstrate the capability of advanced machine learning techniques to aid in the discovery of meaningful patterns from medical and social media data at scale. Copyright © 2017 Elsevier B.V. All rights reserved.
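The agreement measure used here is the ordinary Pearson correlation between reference ADR rates and corpus-derived rates. For concreteness (all rates below are hypothetical, not SIDER values):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ADR rates: reference database vs. social-media mention rates
sider_rates = [0.12, 0.30, 0.05, 0.22, 0.18]
corpus_rates = [0.10, 0.25, 0.09, 0.20, 0.12]
r = pearson_r(sider_rates, corpus_rates)
```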
Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welch, Gregory Francis; Zhang, Jinghe
2014-06-10
Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today's tools for real-time complex system operations are mostly based on steady-state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering techniques to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
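The linear Kalman filter underlying these extensions alternates a model-based prediction with a measurement update. A minimal sketch tracking a constant-velocity state from noisy position measurements (the project's contribution, handling nonlinearity and discontinuities, is exactly what this linear sketch omits):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate state and covariance through the dynamics model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction with measurement z via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model: state = [position, velocity], measure position only
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                 # small process noise
R = np.array([[0.25]])               # measurement noise variance
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
for t in range(1, 50):
    z = np.array([2.0 * t + 0.5 * rng.standard_normal()])  # true velocity = 2
    x, P = kalman_step(x, P, z, F, H, Q, R)
```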
Dual Nozzle Aerodynamic and Cooling Analysis Study.
1981-02-27
program and to the aerodynamic model computer program. This procedure was used to define two secondary nozzle contours for the baseline configuration... both the dual-throat and dual-expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow... preliminary heat transfer analysis of both concepts, and (5) engineering analysis of data from the NASA/MSFC hot-fire testing of a dual-throat
Molecular Dynamics Study of Surfactant Self-Assembly on Single-Walled Carbon Nanotubes (SWCNTs)
NASA Astrophysics Data System (ADS)
Phelan, Frederick, Jr.
2015-03-01
Single-walled carbon nanotubes (SWCNTs) are materials with structural, electronic and optical properties that make them attractive for a myriad of advanced technology applications. Increased adoption of these materials requires advances in separation techniques that enable them to be sorted with increased reliability into monodisperse fractions with respect to length and chirality. Most separation techniques currently in use rely on dispersion of tubes in aqueous solution using surfactants. This results in a colloidal mixture in which tubes are packed and individually dispersed in a surfactant shell. Understanding the structure and properties of the SWCNT-surfactant complex at the molecular level, and how these are affected by chirality, will help to improve separation processes. In this work, we study the structure and properties of SWCNT-surfactant colloidal complexes using all-atom molecular dynamics. Self-assembled structures are computed for a number of SWCNT/surfactant combinations, as well as co-surfactant mixtures of the bile salt surfactant sodium deoxycholate (DOC) and the anionic surfactant sodium dodecyl sulfate (SDS). From the radial distribution function we estimate the size of the SWCNT hydration layer, and use that information to compute the buoyant densities of unfilled tubes for a number of concentrations. Estimates of the change in hydrodynamic radius with increased surfactant packing and the binding energies of the individual surfactants are also obtained.
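Estimating a hydration-layer extent from a radial distribution function can be sketched as locating where g(r) relaxes back to its bulk value of 1 after the first peak. The g(r) below is a synthetic single-peak stand-in for one computed from an MD trajectory:

```python
import numpy as np

r = np.linspace(0.1, 2.0, 400)                      # radial distance, nm
# Synthetic g(r): one hydration peak near 0.45 nm on top of the bulk value 1
g = 1.0 + 1.5 * np.exp(-((r - 0.45) / 0.08) ** 2)

peak = np.argmax(g)                                 # first-shell maximum
# First radius after the peak where g(r) has decayed to (near) bulk
after_peak = np.where(g[peak:] < 1.05)[0][0] + peak
hydration_radius = r[after_peak]
```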
Enhancing e-waste estimates: improving data quality by multivariate Input-Output Analysis.
Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter
2013-11-01
Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input-Output Analysis (IOA) method is proposed. It links all three pillars of IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to approaches without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time, complete datasets of all three variables for estimating all types of e-waste have been obtained. The results of this study also demonstrate significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
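The sales-stock-lifespan linkage at the heart of IOA can be sketched as a discrete convolution of historical sales with a lifespan (discard-probability) distribution: waste generated in year t is the sum over past cohorts of units reaching end of life that year. The Weibull parameters and sales figures below are hypothetical:

```python
import math

def weibull_cdf(t, shape=2.0, scale=8.0):
    """Probability a unit has been discarded by age t (years)."""
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

# Hypothetical units put on the market per year, 2000-2013
sales = {2000 + i: 100.0 + 10.0 * i for i in range(14)}

def ewaste_generated(year, sales, shape=2.0, scale=8.0):
    """E-waste arising in `year`: past sales weighted by discard probability."""
    total = 0.0
    for sale_year, units in sales.items():
        age = year - sale_year
        if age <= 0:
            continue
        # Probability of discard during this particular year of age
        p = weibull_cdf(age, shape, scale) - weibull_cdf(age - 1, shape, scale)
        total += units * p
    return total

waste_2013 = ewaste_generated(2013, sales)
```

Time-varying lifespan parameters, which the paper emphasizes, would make `shape`/`scale` functions of the sale year.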
Overview of Recent Flight Flutter Testing Research at NASA Dryden
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Lind, Richard C.; Voracek, David F.
1997-01-01
In response to the concerns of the aeroelastic community, NASA Dryden Flight Research Center, Edwards, California, is conducting research into improving the flight flutter (including aeroservoelasticity) test process with more accurate and automated techniques for stability boundary prediction. The important elements of this effort so far include the following: (1) excitation mechanisms for enhanced vibration data to reduce uncertainty levels in stability estimates; (2) investigation of a variety of frequency, time, and wavelet analysis techniques for signal processing, stability estimation, and nonlinear identification; and (3) robust flutter boundary prediction to substantially reduce the test matrix for flutter clearance. These are critical research topics addressing the concerns of a recent AGARD Specialists' Meeting on Advanced Aeroservoelastic Testing and Data Analysis. This paper addresses these items using flight test data from the F/A-18 Systems Research Aircraft and the F/A-18 High Alpha Research Vehicle.
qF-SSOP: real-time optical property corrected fluorescence imaging
Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain
2017-01-01
Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038
NASA Astrophysics Data System (ADS)
Douša, Jan; Dick, Galina; Kačmařík, Michal; Václavovic, Pavel; Pottiaux, Eric; Zus, Florian; Brenot, Hugues; Moeller, Gregor; Hinterberger, Fabian; Pacione, Rosa; Stuerze, Andrea; Eben, Kryštof; Teferle, Norman; Ding, Wenwu; Morel, Laurent; Kaplon, Jan; Hordyniec, Pavel; Rohm, Witold
2017-04-01
The COST Action ES1206 GNSS4SWEC addresses new exploitations of the synergy between developments in GNSS and meteorological communities. The Working Group 1 (Advanced GNSS processing techniques) deals with implementing and assessing new methods for GNSS tropospheric monitoring and precise positioning exploiting all modern GNSS constellations, signals, products etc. Besides other goals, WG1 coordinates development of advanced tropospheric products in support of weather numerical and non-numerical nowcasting. These are ultra-fast and high-resolution tropospheric products available in real time or in a sub-hourly fashion and parameters in support of monitoring an anisotropy of the troposphere, e.g. horizontal gradients and tropospheric slant path delays. This talk gives an overview of WG1 activities and, particularly, achievements in two activities, Benchmark and Real-time demonstration campaigns. For the Benchmark campaign a complex data set of GNSS observations and various meteorological data were collected for a two-month period in 2013 (May-June) which included severe weather events in central Europe. An initial processing of data sets from GNSS and numerical weather models (NWM) provided independently estimated reference parameters - ZTDs and tropospheric horizontal gradients. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitations of advanced GNSS tropospheric products in meteorological applications such as severe weather event monitoring and weather nowcasting. The Benchmark data set is also used for an extensive validation of line-of-sight tropospheric Slant Total Delays (STD) from GNSS, NWM-raytracing and Water Vapour Radiometer (WVR) solutions. 
Seven institutions delivered their STDs estimated from GNSS observations processed using different software and strategies. STDs from NWM ray-tracing came from three institutions using four different NWM models. The results show generally very good mutual agreement among all solutions from all techniques. The influence of adding uncleaned GNSS post-fit residuals, i.e. residuals that still contain non-tropospheric systematic effects such as multipath, to the estimated STDs will be presented. The Real-time demonstration campaign aims at enhancing and assessing ultra-fast GNSS tropospheric products for severe weather and NWM nowcasting. Results are shown from real-time demonstrations as well as from offline production simulating real-time conditions using the Benchmark campaign data.
Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency
NASA Astrophysics Data System (ADS)
Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao
2008-05-01
Accurate estimation of molecular binding affinity is critically important for drug discovery. Energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate the slight differences in binding affinity needed to distinguish a prospective substance from dozens of candidates for medicine. Hence more accurate in silico estimation of drug efficacy is currently in demand. Previously we proposed a concept for estimating molecular binding affinity that focuses on the fluctuation at the interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples of computer simulations: an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example of drug-enzyme binding), a complexation of an antigen and its antibody (an example of protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example of ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.
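The fluctuation-based measure starts from quantities like the root-mean-square fluctuation (RMSF) of interface atoms across trajectory frames; the intuition is that tighter binding damps interface motion. A toy comparison of a "tight" versus a "loose" synthetic interface (random displacements, not real trajectories):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 500, 20
# Synthetic interface-atom displacements (nm) for two hypothetical complexes
tight = 0.05 * rng.standard_normal((n_frames, n_atoms, 3))
loose = 0.15 * rng.standard_normal((n_frames, n_atoms, 3))

def rmsf(traj):
    """Per-atom root-mean-square fluctuation about the mean position."""
    mean_pos = traj.mean(axis=0)
    return np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))

tight_rmsf = rmsf(tight).mean()
loose_rmsf = rmsf(loose).mean()
```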
NASA Astrophysics Data System (ADS)
Lyon, S. W.; Koutsouris, A. J.
2016-12-01
Leveraging natural variability together with careful experimental design may help to overcome the data limitations and difficult conditions that typify much of the global south. This, in turn, can facilitate the application of advanced techniques to help inform management with science, which is sorely needed for guiding development. As an example of this concept, we used a limited amount of weekly water chemistry and stable water isotope data to perform end-member mixing analysis within a GLUE framework (G-EMMA) in one main catchment and two sub-catchments of the Kilombero Valley, Tanzania. How water interacts across the various storages in this region, which has been targeted for rapid agricultural intensification and expansion, is still largely unknown, making estimation of the potential impacts (not to mention sustainability) associated with various development scenarios difficult. Our results showed that there were, as would be expected, considerable uncertainties related to the characterization of end-members in this remote system. Regardless, some robust estimates could be made of the contributions to seasonal streamflow variability. For example, it appears that there is low connectivity between the deep groundwater and the stream system throughout the year. Also, a considerable wetting-up period is required before overland flow occurs. We demonstrate that the apparent mismatch between state-of-the-science techniques and data limitations (not to mention the issues associated with difficult working environments) can be bridged by leveraging experimental design and natural system variability. This is promising as we seek to advance our science in ever more remote (and in particular developing) regions to allow for important improvements in the management of less and less available resources. 
Thus, in spite of large uncertainties, this work highlights how research may still provide an improved understanding of hydrological flows even when working under less than perfect conditions.
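At its simplest, end-member mixing analysis is a tracer mass balance: with two end-members and one conservative tracer, the streamflow source fractions follow directly. The concentrations below are hypothetical; G-EMMA generalizes this to several tracers with uncertainty propagation in a GLUE framework.

```python
# Hypothetical conservative-tracer concentrations (e.g. chloride, ueq/L)
c_groundwater = 120.0   # deep groundwater end-member
c_rain = 10.0           # event-water end-member
c_stream = 32.0         # observed stream concentration

# Two-component mixing: c_stream = f_gw * c_gw + (1 - f_gw) * c_rain
f_groundwater = (c_stream - c_rain) / (c_groundwater - c_rain)
f_rain = 1.0 - f_groundwater
```

A stream concentration close to the rain end-member, as in this example (f_groundwater = 0.2), is the signature of the low groundwater connectivity the abstract describes.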
An Online Observer for Minimization of Pulsating Torque in SMPM Motors
Roșca, Lucian
2016-01-01
A persistent problem of surface mounted permanent magnet (SMPM) motors is the non-uniformity of the developed torque. Either the motor design or the motor control needs to be improved in order to minimize the periodic disturbances. This paper proposes a new control technique for reducing periodic disturbances in permanent magnet (PM) electro-mechanical actuators, by advancing a new observer/estimator paradigm. A recursive estimation algorithm is implemented for online control. The compensating signal is identified and added as feedback to the control signal of the servo motor. Compensation is evaluated for different values of the input signal, to show robustness of the proposed method. PMID:27089182
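A recursive estimation scheme of the general kind described can be sketched as recursive least squares fitting the first harmonic of a periodic torque disturbance, so that its negative can be fed back as compensation. All parameters below are made up; the paper's observer is more elaborate.

```python
import numpy as np

# Synthetic periodic torque disturbance d(theta) = a*cos(theta) + b*sin(theta)
a_true, b_true = 0.8, -0.3
theta = np.linspace(0, 20 * np.pi, 2000)
rng = np.random.default_rng(0)
d = (a_true * np.cos(theta) + b_true * np.sin(theta)
     + 0.05 * rng.standard_normal(len(theta)))          # measured with noise

# Recursive least squares, run sample-by-sample (i.e. online)
w = np.zeros(2)                 # estimates of [a, b]
P = 1000.0 * np.eye(2)          # parameter covariance
for th, meas in zip(theta, d):
    phi = np.array([np.cos(th), np.sin(th)])            # regressor
    k = P @ phi / (1.0 + phi @ P @ phi)                 # gain
    w = w + k * (meas - phi @ w)                        # update estimates
    P = P - np.outer(k, phi @ P)                        # update covariance

def compensation(th):
    """Feedback term cancelling the identified periodic disturbance."""
    return -(w[0] * np.cos(th) + w[1] * np.sin(th))
```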
Microstructural Modeling of Brittle Materials for Enhanced Performance and Reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teague, Melissa Christine; Teague, Melissa Christine; Rodgers, Theron
Brittle failure is often influenced by variable, difficult-to-measure microstructure-scale stresses. Recent advances in photoluminescence spectroscopy (PLS), including improved confocal laser measurement and rapid spectroscopic data collection, have established the potential to map stresses with microscale spatial resolution (<2 microns). Advanced PLS was successfully used to investigate both residual and externally applied stresses in polycrystalline alumina at the microstructure scale. The measured average stresses matched those estimated from beam theory to within one standard deviation, validating the technique. Modeling the residual stresses within the microstructure produced general agreement with the experimentally measured results. Microstructure-scale modeling is primed to take advantage of advanced PLS to enable its refinement and validation, eventually enabling microstructure modeling to become a predictive tool for brittle materials.
CHClF2 (F-22) in the Earth's atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmussen, R.A.; Khalil, M.A.K.; Penkett, S.A.
1980-10-01
Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approximately 45 pptv) is considerably greater than that expected from estimates of direct industrial emissions (average mixing ratio, mid-1979, approximately 30 pptv). This difference is probably due to underestimates of F-22 emissions.
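The burden comparison can be reproduced approximately from the mixing ratios: a global-mean mole fraction converts to a total mass via the molar-mass ratio and the mass of the atmosphere. The constants below are standard values and the result is a rough cross-check, not the paper's calculation:

```python
M_AIR = 28.97          # g/mol, mean molar mass of dry air
M_F22 = 86.47          # g/mol, molar mass of CHClF2
ATMOSPHERE_KG = 5.15e18  # total mass of the atmosphere, kg

def burden_kt(mixing_ratio_pptv, molar_mass):
    """Atmospheric burden (kilotonnes) implied by a global-mean mixing ratio."""
    # mole fraction -> mass fraction -> total mass, then kg -> kt
    mass_kg = mixing_ratio_pptv * 1e-12 * (molar_mass / M_AIR) * ATMOSPHERE_KG
    return mass_kg / 1e6

burden_measured = burden_kt(45, M_F22)   # burden implied by the measurements
burden_emissions = burden_kt(30, M_F22)  # burden implied by emission estimates
```

The roughly 50% gap between the two burdens is exactly the discrepancy the abstract attributes to underestimated emissions.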
NASA Technical Reports Server (NTRS)
Wolfer, B. M.
1977-01-01
Features basic to the integrated utility system, such as solid waste incineration, heat recovery and usage, and water recycling/treatment, are compared in terms of cost, fuel conservation, and efficiency to conventional utility systems in the same mean-climatic area of Washington, D. C. The larger of the two apartment complexes selected for the test showed the more favorable results in the three areas of comparison. Restrictions concerning the sole use of currently available technology are hypothetically removed to consider the introduction and possible advantages of certain advanced techniques in an integrated utility system; recommendations are made and costs are estimated for each type of system.
2004-02-01
UNCLASSIFIED - Conducted experiments to determine the usability of general-purpose anomaly detection algorithms to monitor a large, complex military... reaction and detection modules to perform tailored analysis sequences to monitor environmental conditions, health hazards and physiological states... scalability of lab-proven anomaly detection techniques for intrusion detection in real-world, high-volume environments. Narrative Title FY 2003
A Cost Estimation Analysis of U.S. Navy Ship Fuel-Savings Techniques and Technologies
2009-09-01
readings to the boiler operator. The PLC will provide constant automatic trimming of the excess oxygen based upon real-time SGA readings. An SCD... the author): The Aegis Combat System is controlled by an advanced, automatic detect-and-track, multi-function three-dimensional passive... subsequently offloaded. An Online Wash System would reduce these maintenance costs and improve fuel efficiency of these engines by keeping the engines
Application of Advanced Nuclear Emulsion Technique to Fusion Neutron Diagnostics
NASA Astrophysics Data System (ADS)
Nakayama, Y.; Tomita, H.; Morishima, K.; Yamashita, F.; Hayashi, S.; Cheon, MunSeong; Isobe, M.; Ogawa, K.; Naka, T.; Nakano, T.; Nakamura, M.; Kawarabayashi, J.; Iguchi, T.; Ochiai, K.
In order to measure the 2.5 MeV neutrons produced by DD nuclear fusion reactions, we have developed a compact neutron detector based on nuclear emulsion. After optimization of development conditions, we evaluated the response of the detector to an accelerator-based DD neutron source. The absolute efficiency at an energy of 2.5 MeV was estimated to be (4.1±0.2)×10-6 tracks/neutron.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
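Because the Gaussian plume forward model is linear in the emission rates, the inversion step described above reduces to a least-squares fit of source rates to a measured transect. The sketch below illustrates this under purely illustrative dispersion parameters, wind speed and source geometry (none of it reproduces the authors' configuration):

```python
import numpy as np

def gaussian_plume(q, x, y, u=3.0, h=0.0, z=0.0):
    """Ground-level concentration from a point source of rate q (kg/s) at
    downwind distance x and crosswind offset y, wind speed u (m/s).
    The power-law dispersion parameters below are illustrative only."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
               + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

# Crosswind transect sampled downwind of two candidate sources: stacking
# each source's unit-rate plume into a Jacobian G makes the rates a
# linear least-squares problem.
y_t = np.linspace(-60.0, 60.0, 41)
G = np.column_stack([gaussian_plume(1.0, 200.0, y_t),
                     gaussian_plume(1.0, 220.0, y_t - 15.0)])
q_true = np.array([0.5, 0.2])          # "true" emission rates (kg/s)
obs = G @ q_true                        # synthetic noiseless transect
q_hat, *_ = np.linalg.lstsq(G, obs, rcond=None)
```

In the statistical inversion of the paper, the least-squares weights carry the transport-model uncertainty that the tracer release is used to calibrate; here the noiseless fit simply recovers the rates exactly.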
New Techniques for Radar Altimetry of Sea Ice and the Polar Oceans
NASA Astrophysics Data System (ADS)
Armitage, T. W. K.; Kwok, R.; Egido, A.; Smith, W. H. F.; Cullen, R.
2017-12-01
Satellite radar altimetry has proven to be a valuable tool for remote sensing of the polar oceans, with techniques for estimating sea ice thickness and sea surface height in the ice-covered ocean advancing to the point of becoming routine, if not operational, products. Here, we explore new techniques in radar altimetry of the polar oceans and the sea ice cover. First, we present results from fully-focused SAR (FFSAR) altimetry; by accounting for the phase evolution of scatterers in the scene, the FFSAR technique applies an inter-burst coherent integration, potentially over the entire duration that a scatterer remains in the altimeter footprint, which can narrow the effective along-track resolution to just 0.5 m. We discuss the improvement of using interleaved operation over burst-mode operation for applying FFSAR processing to data acquired by future missions, such as a potential CryoSat follow-on. Second, we present simulated sea ice retrievals from the Ka-band Radar Interferometer (KaRIn), the instrument that will be launched on the Surface Water and Ocean Topography (SWOT) mission in 2021, which is capable of producing swath images of surface elevation. These techniques offer the opportunity to advance our understanding of the physics of the ice-covered oceans, plus new insight into how we interpret more conventional radar altimetry data in these regions.
NASA Astrophysics Data System (ADS)
Mercado, Karla Patricia E.
Tissue engineering holds great promise for the repair or replacement of native tissues and organs. Further advancements in the fabrication of functional engineered tissues are partly dependent on developing new and improved technologies to monitor the properties of engineered tissues volumetrically, quantitatively, noninvasively, and nondestructively over time. Currently, engineered tissues are evaluated during fabrication using histology, biochemical assays, and direct mechanical tests. However, these techniques destroy tissue samples and, therefore, lack the capability for real-time, longitudinal monitoring. The research reported in this thesis developed nondestructive, noninvasive approaches to characterize the structural, biological, and mechanical properties of 3-D engineered tissues using high-frequency quantitative ultrasound and elastography technologies. A quantitative ultrasound technique, using a system-independent parameter known as the integrated backscatter coefficient (IBC), was employed to visualize and quantify structural properties of engineered tissues. Specifically, the IBC was demonstrated to estimate cell concentration and quantitatively detect differences in the microstructure of 3-D collagen hydrogels. Additionally, the feasibility of an ultrasound elastography technique called Single Tracking Location Acoustic Radiation Force Impulse (STL-ARFI) imaging was demonstrated for estimating the shear moduli of 3-D engineered tissues. High-frequency ultrasound techniques can be easily integrated into sterile environments necessary for tissue engineering. Furthermore, these high-frequency quantitative ultrasound techniques can enable noninvasive, volumetric characterization of the structural, biological, and mechanical properties of engineered tissues during fabrication and post-implantation.
NASA Astrophysics Data System (ADS)
Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.
2017-02-01
Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in the extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that particularly degrade the amplitude of the detectable higher spatial frequencies. Because SHG is a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI, which has the potential to impact diagnosis and treatment of human diseases.
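The AdvMLE algorithm itself is not specified in this abstract; the classical maximum-likelihood deconvolution it builds on is the Richardson-Lucy iteration, sketched below for a symmetric, illustrative Gaussian PSF (a minimal sketch, not the authors' adaptive PSF-refinement scheme):

```python
import numpy as np

def centre_pad(psf, shape):
    """Embed a small odd-sized PSF at the origin of a full-size array so
    that FFT multiplication performs centred circular convolution."""
    out = np.zeros(shape)
    r, c = psf.shape
    out[:r, :c] = psf
    return np.roll(out, (-(r // 2), -(c // 2)), axis=(0, 1))

def convolve(img, psf_hat):
    # Circular convolution with a pre-transformed, origin-centred PSF.
    return np.fft.irfft2(np.fft.rfft2(img) * psf_hat, s=img.shape)

def richardson_lucy(blurred, psf, n_iter=50):
    """Maximum-likelihood (Richardson-Lucy) deconvolution for Poisson
    statistics; assumes a symmetric PSF, whose mirror image equals itself."""
    psf_hat = np.fft.rfft2(centre_pad(psf, blurred.shape))
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / np.maximum(convolve(est, psf_hat), 1e-12)
        est = est * convolve(ratio, psf_hat)
    return est

# Demo: blur a point source with a Gaussian PSF, then sharpen it back.
x = np.zeros((32, 32))
x[16, 16] = 1.0
k = np.arange(5) - 2
g = np.exp(-(k[:, None]**2 + k[None, :]**2) / 2.0)
g /= g.sum()
blurred = np.maximum(convolve(x, np.fft.rfft2(centre_pad(g, x.shape))), 0.0)
sharp = richardson_lucy(blurred, g)
```

The multiplicative update keeps the estimate non-negative and sharpens the peak; the paper's contribution is estimating an improved PSF during the iterations rather than assuming one.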
Foster, Katherine T; Beltz, Adriene M
2018-08-01
Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. However, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
Ex vivo diffusion MRI of the human brain: Technical challenges and recent advances.
Roebroeck, Alard; Miller, Karla L; Aggarwal, Manisha
2018-06-04
This review discusses ex vivo diffusion magnetic resonance imaging (dMRI) as an important research tool for neuroanatomical investigations and the validation of in vivo dMRI techniques, with a focus on the human brain. We review the challenges posed by the properties of post-mortem tissue, and discuss state-of-the-art tissue preparation methods and recent advances in pulse sequences and acquisition techniques to tackle these. We then review recent ex vivo dMRI studies of the human brain, highlighting the validation of white matter orientation estimates and the atlasing and mapping of large subcortical structures. We also give particular emphasis to the delineation of layered gray matter structure with ex vivo dMRI, as this application illustrates the strength of its mesoscale resolution over large fields of view. We end with a discussion and outlook on future and potential directions of the field. © 2018 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
High-speed Civil Transport Aircraft Emissions
NASA Technical Reports Server (NTRS)
Miake-Lye, Richard C.; Matulaitis, J. A.; Krause, F. H.; Dodds, Willard J.; Albers, Martin; Hourmouziadis, J.; Hasel, K. L.; Lohmann, R. P.; Stander, C.; Gerstle, John H.
1992-01-01
Estimates are given for the emissions from a proposed high speed civil transport (HSCT). This advanced technology supersonic aircraft would fly in the lower stratosphere at a speed of roughly Mach 1.6 to 3.2 (470 to 950 m/sec or 920 to 1850 knots). Because it would fly in the stratosphere at an altitude in the range of 15 to 23 km commensurate with its design speed, its exhaust effluents could perturb the chemical balance in the upper atmosphere. The first step in determining the nature and magnitude of any chemical changes in the atmosphere resulting from these proposed aircraft is to identify and quantify the chemically important species they emit. Relevant earlier work, from the Climatic Impact Assessment Program of the early 1970s through current propulsion research efforts, is summarized. Estimates are provided of the chemical composition of an HSCT's exhaust, and these emission indices are presented. Other aircraft emissions that are not due to combustion processes are also summarized; these emissions are found to be much smaller than the exhaust emissions. Future advances in propulsion technology, in experimental measurement techniques, and in understanding upper atmospheric chemistry may affect these estimates of the amounts of trace exhaust species or their relative importance.
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. Here, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
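For the simplest two-sample IST design, the point estimator and a Neyman-type allocation rule reduce to a few lines. The distributions below are simulated stand-ins, not data from the cited surveys:

```python
import numpy as np

def ist_mean(long_totals, short_totals):
    """IST point estimate: the sensitive-item mean is the difference
    between the mean total of the long-list sample (innocuous items plus
    the sensitive item) and that of the short-list sample."""
    return np.mean(long_totals) - np.mean(short_totals)

def optimal_split(s_long, s_short):
    """Neyman-type allocation: sampling fractions proportional to the
    standard deviations of the list totals minimise the variance of the
    difference-of-means estimator."""
    return s_long / (s_long + s_short)

# Simulated survey (illustrative distributions only).
rng = np.random.default_rng(0)
innocuous = lambda n: rng.normal(10.0, 2.0, n)  # total of innocuous items
sensitive = lambda n: rng.gamma(2.0, 1.5, n)    # sensitive item, mean 3.0
long_t = innocuous(4000) + sensitive(4000)      # long-list sample totals
short_t = innocuous(4000)                       # short-list sample totals
est = ist_mean(long_t, short_t)                 # ≈ 3.0
```

Respondents only ever report a list total, so anonymity is preserved; the article's contribution is extending this machinery to several sensitive variables at once and deriving the allocation that minimises the joint variance.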
Demonstration of passively cooled high-power Yb fiber amplifier
NASA Astrophysics Data System (ADS)
Bradford, Joshua; Cook, Justin; Antonio-Lopez, Jose Enrique; Shah, Larry; Amezcua Correa, Rodrigo; Richardson, Martin
2018-02-01
This work investigates the feasibility of passive cooling in high-power Yb amplifiers. Experimentally, an all-glass airclad step-index (ACSI) amplifier is diode-pumped with 400 W and delivers 200 W of output power. With only natural convection to extract heat, core temperatures are estimated near 130°C with no degradation of performance relative to cooled architectures. Further, advanced analysis techniques allow for core temperature determination using thermal interferometry without the need for complicated stabilization or calibration.
Software engineering project management - A state-of-the-art report
NASA Technical Reports Server (NTRS)
Thayer, R. H.; Lehman, J. H.
1977-01-01
The management of software engineering projects in the aerospace industry was investigated. The survey assessed such features as contract type, specification preparation techniques, software documentation required by customers, planning and cost-estimating, quality control, the use of advanced program practices, software tools and test procedures, the education levels of project managers, programmers and analysts, work assignment, automatic software monitoring capabilities, design and coding reviews, production times, success rates, and organizational structure of the projects.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging systems employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeter.
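As a hedged illustration of the underlying idea (not the authors' fast grid-structured solver, which is what cuts the cubic cost down), a plain GP posterior-mean interpolation in one dimension looks like:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell)**2)

def gp_interpolate(x_obs, y_obs, x_new, noise_var=1e-4):
    """GP posterior mean: the sensor-noise estimate enters on the kernel
    diagonal, so noisier samples are trusted less. This naive dense solve
    costs O(N^3); exploiting grid structure is what makes it tractable
    for full images."""
    K = rbf(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    return rbf(x_new, x_obs) @ np.linalg.solve(K, y_obs)

# Recover "missing pixels" halfway between samples of a smooth signal.
x = np.linspace(0.0, 2 * np.pi, 20)
y = np.sin(x)
x_mid = 0.5 * (x[:-1] + x[1:])
y_mid = gp_interpolate(x, y, x_mid)
```

On a polarimeter, the same posterior mean is evaluated per polarization channel at the pixel locations masked by the other three filters; the kernel length-scale and noise variance would be fit to the sensor rather than fixed as here.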
NASA Astrophysics Data System (ADS)
Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.
2017-12-01
Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.
Outcomes in the management of esophageal cancer.
Paul, Subroto; Altorki, Nasser
2014-10-01
Esophageal cancer rates have continued to rise in the Western World. Esophageal cancer will be responsible for an estimated 15,450 deaths in the United States in 2014 alone. Esophageal resection with or without preoperative therapy remains the mainstay of treatment. Advances in surgical technique and perioperative care have improved short-term outcomes considerably by decreasing operative mortality. Despite these advances though, esophagectomy remains a procedure associated with considerable morbidity from a wide range of complications. Prompt recognition and treatment of complications can lower overall morbidity and mortality. Unfortunately, long-term outcomes remain poor as the vast majority of patients present with loco-regionally advanced or metastatic disease. Surgery by itself provides poor loco-regional control and fails to address micrometastatic disease. Neoadjuvant chemotherapy or chemoradiation provides a modest survival advantage compared to surgical resection alone. Future gains in understanding the molecular biology of esophageal cancer will hopefully lead to improved therapeutics and resultant outcomes. © 2014 Wiley Periodicals, Inc.
Nonlinear models for estimating GSFC travel requirements
NASA Technical Reports Server (NTRS)
Buffalano, C.; Hagan, F. J.
1974-01-01
A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of six types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed. It represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to previous techniques.
Exploiting Satellite Archives to Estimate Global Glacier Volume Changes
NASA Astrophysics Data System (ADS)
McNabb, R. W.; Nuth, C.; Kääb, A.; Girod, L.
2017-12-01
In the past decade, the availability of, and ability to process, remote sensing data over glaciers has expanded tremendously. Newly opened satellite image archives, combined with new processing techniques as well as increased computing power and storage capacity, have given the glaciological community the ability to observe and investigate glaciological processes and changes on a truly global scale. In particular, the opening of the ASTER archives provides further opportunities to both estimate and monitor glacier elevation and volume changes globally, including potentially on sub-annual timescales. With this explosion of data availability, however, comes the challenge of seeing the forest for the trees. The high volume of data available means that automated detection and proper handling of errors and biases in the data becomes critical, in order to properly study the processes that we wish to see. These include holes and blunders in digital elevation models (DEMs) derived from optical data, and penetration of radar signals leading to biases in DEMs derived from radar data, among other sources. Here, we highlight new advances in the ability to sift through high-volume datasets, and apply these techniques to estimate recent glacier volume changes in the Caucasus Mountains, Scandinavia, Africa, and South America. By properly estimating and correcting for these biases, we additionally provide a detailed accounting of the uncertainties in these estimates of volume changes, leading to more reliable results that have applicability beyond the glaciological community.
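The geodetic bookkeeping behind such volume-change estimates can be sketched in a few lines: difference two co-registered DEMs, screen blunders, and integrate over the glacier outline. The blunder threshold and gap-filling rule below are illustrative only, and real processing also requires co-registration and bias corrections:

```python
import numpy as np

def volume_change(dem_t0, dem_t1, glacier_mask, pixel_area, max_dh=150.0):
    """Geodetic volume change from two co-registered DEMs: difference the
    surfaces, screen voids and gross blunders (common in ASTER DEMs),
    and integrate elevation change over the glacier area."""
    dh = dem_t1 - dem_t0
    valid = glacier_mask & np.isfinite(dh) & (np.abs(dh) < max_dh)
    # Fill screened glacier pixels with the mean change of valid ones.
    mean_dh = dh[valid].mean()
    n_glacier = glacier_mask.sum()
    n_valid = valid.sum()
    return (dh[valid].sum() + (n_glacier - n_valid) * mean_dh) * pixel_area

# Toy example: uniform 2 m thinning over a 10x10-pixel glacier, plus one
# spike of the kind a stereo-matching blunder produces.
t0 = np.full((10, 10), 1000.0)
t1 = t0 - 2.0
t1[0, 0] = 3000.0
mask = np.ones_like(t0, dtype=bool)
dv = volume_change(t0, t1, mask, pixel_area=900.0)  # 30 m pixels
```

Here the blunder is screened out and the filled estimate recovers the true change of -2 m x 100 pixels x 900 m², i.e. -180,000 m³; the uncertainty accounting described in the abstract would propagate the screening and fill choices into error bars.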
Current methods and advances in bone densitometry
NASA Technical Reports Server (NTRS)
Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.
1995-01-01
Bone mass is the primary, although not the only, determinant of fracture risk. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis.
System identification of jet engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugiyama, N.
2000-01-01
System identification plays an important role in advanced control systems for jet engines, in which controls are performed adaptively using data from the actual engine and the identified engine. An identification technique for jet engines using the Constant Gain Extended Kalman Filter (CGEKF) is described. The filter is constructed for a two-spool turbofan engine. The CGEKF filter developed here can recognize parameter change in engine components and estimate unmeasurable variables over the whole range of flight conditions. These capabilities are useful for an advanced Full Authority Digital Electronic Control (FADEC). Effects of measurement noise and bias, effects of operating point and unpredicted performance change are discussed. Some experimental results using the actual engine are shown to evaluate the effectiveness of the CGEKF filter.
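The defining feature of a constant-gain EKF is that the correction gain is fixed off-line instead of being recomputed from the Riccati recursion at every step. A minimal sketch of one update, with a toy two-state linear "engine" and illustrative gain values rather than the paper's turbofan model:

```python
import numpy as np

def cgekf_step(x, u, y, f, h, K):
    """One constant-gain EKF update: propagate the state with the engine
    model, then correct with a fixed gain K computed off-line instead of
    from the on-line Riccati recursion."""
    x_pred = f(x, u)
    return x_pred + K @ (y - h(x_pred))

# Toy two-state "engine": spool speeds driven by a fuel-flow command.
A = np.array([[0.9, 0.05], [0.0, 0.95]])
B = np.array([0.1, 0.05])
C = np.array([[1.0, 0.0]])           # only the first state is measured
f = lambda x, u: A @ x + B * u
h = lambda x: C @ x
K = np.array([[0.5], [0.3]])         # fixed gain (illustrative values)

x_true = np.array([0.0, 0.0])
x_est = np.array([1.0, -1.0])        # deliberately wrong initial estimate
for _ in range(60):
    x_true = f(x_true, 1.0)
    x_est = cgekf_step(x_est, 1.0, h(x_true), f, h, K)
```

With a stabilizing gain, the estimation error contracts each step, so the filter tracks the unmeasured second state from the measured first one; in the paper the same structure runs against the nonlinear engine model across flight conditions.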
Study of repeater technology for advanced multifunctional communications satellites
NASA Technical Reports Server (NTRS)
1972-01-01
Investigations are presented concerning design concepts and implementation approaches for the satellite communication repeater subsystems of advanced multifunctional satellites. In such systems the important concepts are the use of multiple antenna beams, repeater switching (routing), and efficient spectrum utilization through frequency reuse. An information base on these techniques was developed and tradeoff analyses were made of repeater design concepts, with the word design taken in a broad sense to include modulation and beam coverage patterns. There were five major areas of study: requirements analysis and processing; study of interbeam interference in multibeam systems; characterization of multiple-beam switching repeaters; estimation of repeater weight and power for a number of alternatives; and tradeoff analyses based on these weight and power data.
Conceptual Design of a Communication-Based Deep Space Navigation Network
NASA Technical Reports Server (NTRS)
Anzalone, Evan J.; Chuang, C. H.
2012-01-01
As the need grows for increased autonomy and position knowledge accuracy to support missions beyond Earth orbit, engineers must push and develop more advanced navigation sensors and systems that operate independent of Earth-based analysis and processing. Several spacecraft are approaching this problem using inter-spacecraft radiometric tracking and onboard autonomous optical navigation methods. This paper proposes an alternative implementation to aid in spacecraft position fixing. The proposed Network-Based Navigation technique takes advantage of the communication data being sent between spacecraft and between spacecraft and ground control to embed navigation information. The navigation system uses these packets to provide navigation estimates to an onboard navigation filter to augment traditional ground-based radiometric tracking techniques. As opposed to using digital signal measurements to capture inherent information of the transmitted signal itself, this method relies on the embedded navigation packet headers to calculate a navigation estimate. This method is heavily dependent on clock accuracy, and the initial results show the promising performance of a notional system.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
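To make the idea concrete, here is a toy version of the kind of statistic such monitoring consumes: for two data qubits watched by one ZZ stabilizer with ideal measurements, the syndrome flip fraction s relates to the physical rate by s = 2p(1-p), which inverts in closed form. Everything below is an illustrative sketch; the Gaussian-process layer of the paper would replace this static estimate with a smoothed, time-resolved prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.05          # per-round physical error probability (illustrative)
n_rounds = 100_000

# With ideal measurements, the ZZ syndrome flips whenever an odd number
# of the two adjacent data qubits flipped during that round.
flips = rng.random((n_rounds, 2)) < p_true
syndrome_flip = flips[:, 0] ^ flips[:, 1]
s = syndrome_flip.mean()                 # observed fraction, ≈ 2p(1 - p)

# Invert s = 2p(1 - p), taking the physically relevant p < 1/2 root.
p_hat = (1.0 - np.sqrt(1.0 - 2.0 * s)) / 2.0
```

Because the syndrome record is produced by the error correction itself, this estimate costs no extra circuit time, which is the property the proposed protocol exploits.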
Adserias-Garriga, Joe; Hernández, Marta; Quijada, Narciso M; Rodríguez Lázaro, David; Steadman, Dawnie; Garcia-Gil, Jesús
2017-09-01
Understanding human decomposition is critical for its use in postmortem interval (PMI) estimation, having a significant impact on forensic investigations. In recognition of the need to establish the scientific basis for PMI estimation, several studies on decomposition have been carried out in recent years. The aims of the present study were: (i) to identify soil microbiota communities involved in human decomposition through high-throughput sequencing (HTS) of DNA sequences from the different bacteria, (ii) to monitor quantitatively and qualitatively the decay of such signature species, and (iii) to describe successional changes in bacterial populations from the early putrefaction state until skeletonization. Three individuals donated to the University of Tennessee FAC were studied. Soil samples around the body were taken from placement of the donor until the advanced decay/dry remains stage. Bacterial DNA extracts were obtained from the samples, HTS techniques were applied and bioinformatic data analysis was performed. The three cadavers showed similar overall successional changes. At the beginning of the decomposition process the soil microbiome consisted of diverse indigenous soil bacterial communities. As decomposition advanced, Firmicutes community abundance increased in the soil during the bloat stage. The growth curve of Firmicutes from human remains can be used to estimate time since death during Tennessee summer conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can co-operatively share the information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a stunning 16X pixel resolution enhancement, which is moreover parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, by using the homography decomposition and the ground plane surface estimation.
Surface soil moisture retrievals from remote sensing: Current status, products & future trends
NASA Astrophysics Data System (ADS)
Petropoulos, George P.; Ireland, Gareth; Barrett, Brian
Advances in Earth Observation (EO) technology, particularly over the last two decades, have shown that soil moisture content (SMC) can be measured to some degree or other by all regions of the electromagnetic spectrum, and a variety of techniques have been proposed to facilitate this purpose. In this review we provide a synthesis of the efforts made during the last 20 years or so towards the estimation of surface SMC exploiting EO imagery, with a particular emphasis on retrievals from microwave sensors. Rather than replicating previous overview works, we provide a comprehensive and critical exploration of all the major approaches employed for retrieving SMC in a range of different global ecosystems. In this framework, we consider the newest techniques developed within optical and thermal infrared remote sensing, active and passive microwave domains, as well as assimilation or synergistic approaches. Future trends and prospects of EO for the accurate determination of SMC from space are subject to key challenges, some of which are identified and discussed within. It is evident from this review that there is potential for more accurate estimation of SMC exploiting EO technology, particularly so, by exploring the use of synergistic approaches between a variety of EO instruments. Given the importance of SMC in Earth's land surface interactions and to a large range of applications, one can appreciate that its accurate estimation is critical in addressing key scientific and practical challenges in today's world such as food security, sustainable planning and management of water resources. The launch of new, more sophisticated satellites strengthens the development of innovative research approaches and scientific inventions that will result in a range of pioneering and ground-breaking advancements in the retrievals of soil moisture from space.
NASA Astrophysics Data System (ADS)
Gunawan, Fergyanto E.; Abbas, Bahtiar S.; Atmadja, Wiedjaja; Yoseph Chandra, Fajar; Agung, Alexander AS; Kusnandar, Erwin
2014-03-01
Traffic congestion in Asian megacities has worsened considerably, and any means of lessening the congestion level is urgently needed. Building an efficient mass transportation system is clearly necessary; however, implementing Intelligent Transportation Systems (ITS) has also been demonstrated to be effective in various advanced countries. Recently, the floating vehicle technique (FVT), an ITS implementation, has become a cost-effective way to provide real-time traffic information, owing to the proliferation of smartphones. Although many publications have discussed various issues related to the technique, none elaborates on the discrepancy between a single floating car data (FCD) trace and the associated fleet data. This work addresses the issue based on an analysis of Sugiyama et al.'s experimental data. The results indicate that there is an optimum averaging time interval such that the velocity estimated by the FVT reasonably represents the traffic velocity.
An evaluation of the accuracy of some radar wind profiling techniques
NASA Technical Reports Server (NTRS)
Koscielny, A. J.; Doviak, R. J.
1983-01-01
Major advances in Doppler radar measurement in optically clear air have made it feasible to monitor radial velocities in the troposphere and lower stratosphere. For most applications the three-dimensional wind vector is monitored rather than the radial velocity. Measurement of the wind vector with a single radar can be made by assuming a spatially linear, time-invariant wind field. The components and derivatives of the wind are estimated by the parameters of a linear regression of the radial velocities on functions of their spatial locations. The accuracy of the wind measurement thus depends on the locations of the radial velocities. The suitability of some common retrieval techniques for simultaneous measurement of both the vertical and horizontal wind components is evaluated. The techniques considered are fixed beam, azimuthal scanning (VAD), and elevation scanning (VED).
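The linear-regression retrieval described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code, and assumes the simplest case of a spatially uniform wind, so that each radial velocity is the projection of a constant (u, v, w) onto the beam direction:

```python
import numpy as np

def retrieve_wind(azimuths_deg, elevations_deg, radial_velocities):
    """Least-squares estimate of a uniform wind (u, v, w) from radial velocities.

    Each radial velocity is modeled as the projection of the wind vector onto
    the beam unit vector: vr = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el).
    """
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    # Design matrix of the regression: one row per beam direction.
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    wind, *_ = np.linalg.lstsq(A, radial_velocities, rcond=None)
    return wind  # (u, v, w)

# Synthetic check: project a known wind onto a VAD-like azimuthal scan
# at fixed elevation, then recover it from the radial velocities alone.
az = np.linspace(0.0, 350.0, 36)
el = np.full_like(az, 30.0)
u, v, w = 5.0, -3.0, 0.5
vr = (u * np.sin(np.radians(az)) * np.cos(np.radians(el))
      + v * np.cos(np.radians(az)) * np.cos(np.radians(el))
      + w * np.sin(np.radians(el)))
est = retrieve_wind(az, el, vr)
```

In the full technique the regression also carries the spatial-derivative terms of the linear wind field; the accuracy discussion in the abstract concerns how the scan geometry (fixed beam, VAD, VED) conditions this design matrix.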
Dual nozzle aerodynamic and cooling analysis study
NASA Technical Reports Server (NTRS)
Meagher, G. M.
1981-01-01
Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuruma, K.; Takamiya, D.; Ota, Y.
We demonstrate precise and quick detection of the positions of quantum dots (QDs) embedded in two-dimensional photonic crystal nanocavities. We apply this technique to investigate the QD position dependence of the optical coupling between the QD and the nanocavity. We use a scanning electron microscope (SEM) operating at a low acceleration voltage to detect surface bumps induced by the QDs buried underneath. This enables QD detection with a sub-10 nm precision. We then experimentally measure the vacuum Rabi spectra to extract the optical coupling strengths (gs) between single QDs and cavities, and compare them to the values estimated by a combination of the SEM-measured QD positions and electromagnetic cavity field simulations. We found a highly linear relationship between the local cavity field intensities and the QD-cavity gs, suggesting the validity of the point dipole approximation used in the estimation of the gs. The estimation using SEM has a small standard deviation of ±6.2%, which potentially enables high-accuracy prediction of g prior to optical measurements. Our technique will play a key role in deepening understanding of the interaction between QDs and photonic nanostructures and in advancing QD-based cavity quantum electrodynamics.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Feng, E-mail: fwang@unu.edu; Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft; Huisman, Jaco
2013-11-15
Highlights:
• A multivariate Input–Output Analysis method for e-waste estimates is proposed.
• Applying multivariate analysis to consolidate data can enhance e-waste estimates.
• We examine the influence of model selection and data quality on e-waste estimates.
• Datasets of all e-waste related variables in a Dutch case study have been provided.
• Accurate modeling of time-variant lifespan distributions is critical for estimation.
Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This consequently increases the reliability of e-waste estimates compared with the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time, complete datasets of all three variables for estimating all types of e-waste have been obtained. The results also demonstrate significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters.
Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies.
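The sales-stock-lifespan linkage at the heart of the Input-Output Analysis can be illustrated with a toy model. This is not the authors' implementation; the discard-age probabilities below are an invented stand-in for the time-varying lifespan profiles the paper discusses:

```python
import numpy as np

def ewaste_generated(sales, discard_pmf):
    """Estimate the e-waste outflow per year from historical sales.

    sales[t]       : units put on the market in year t
    discard_pmf[a] : probability a unit is discarded at age a (sums to 1)
    Waste in year t is the sum over purchase years tau of
    sales[tau] * P(discard at age t - tau), i.e. a discrete convolution.
    """
    return np.convolve(sales, discard_pmf)[:len(sales)]

sales = np.array([100.0, 120.0, 150.0, 0.0, 0.0])
pmf = np.array([0.0, 0.2, 0.5, 0.3])  # most units discarded at age 2
waste = ewaste_generated(sales, pmf)
```

In the multivariate method the same relationship is inverted and cross-checked: any two of sales, stock, and lifespan constrain the third, which is what allows inconsistent data points to be consolidated.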
Summary of Full-Scale Blade Displacement Measurements of the UH-60A Airloads Rotor
NASA Technical Reports Server (NTRS)
Abrego, Anita I.; Meyn, Larry; Burner, Alpheus W.; Barrows, Danny A.
2016-01-01
Blade displacement measurements using multi-camera photogrammetry techniques were acquired for a full-scale UH-60A rotor, tested in the National Full-Scale Aerodynamic Complex 40-Foot by 80-Foot Wind Tunnel. The measurements, acquired over the full rotor azimuth, encompass a range of test conditions that include advance ratios from 0.15 to 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. The objective was to measure the blade displacements and deformations of the four rotor blades and provide a benchmark blade displacement database to be utilized in the development and validation of rotorcraft prediction techniques. An overview of the blade displacement measurement methodology, system development, and data analysis techniques is presented. Sample results based on the final set of camera calibrations, data reduction procedures and estimated corrections that account for registration errors due to blade elasticity are shown. Differences in blade root pitch, flap and lag between the previously reported results and the current results are small. However, even small changes in estimated root flap and pitch can lead to significant differences in the blade elasticity values.
Turbine blade tip durability analysis
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Laflen, J. H.; Spamer, G. T.
1981-01-01
An air-cooled turbine blade from an aircraft gas turbine engine, chosen for its history of cracking, was subjected to advanced analytical and life-prediction techniques. The utility of advanced structural analysis techniques and advanced life-prediction techniques in the life assessment of hot section components is verified. Three-dimensional heat transfer and stress analyses were applied to the turbine blade mission cycle and the results were input into advanced life-prediction theories. Shortcut analytical techniques were developed. The proposed life-prediction theories are evaluated.
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run through the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain.
The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. Sensitivity analysis was performed to understand and eliminate noise sources of the instrument. The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. Static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype are described in the last part of this thesis. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed. Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades installed on a time scale of a few months. The second science run will start in spring 2016 and last for about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of achieving the first direct detection of gravitational waves.
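The adaptive feedforward cancellation mentioned above can be sketched with a textbook least-mean-squares (LMS) filter. This is a generic illustration of the principle, not the aLIGO implementation: a witness sensor measures a disturbance that couples linearly into the target channel, and the filter adapts to subtract it.

```python
import numpy as np

def lms_cancel(witness, target, n_taps=8, mu=0.01):
    """Adaptive feedforward cancellation with the LMS algorithm.

    witness : reference sensor measuring the disturbance
    target  : channel of interest, contaminated by a filtered copy of witness
    Returns the residual after subtracting the adaptive FIR prediction.
    """
    w = np.zeros(n_taps)
    residual = np.zeros(len(target))
    for n in range(n_taps - 1, len(target)):
        x = witness[n - n_taps + 1:n + 1][::-1]  # most recent samples first
        prediction = w @ x
        e = target[n] - prediction               # cancellation error
        w += 2 * mu * e * x                      # LMS weight update
        residual[n] = e
    return residual

rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)               # witness channel
signal = 0.1 * np.sin(0.05 * np.arange(20000))   # signal to preserve
coupled = np.convolve(noise, [0.5, -0.3, 0.2])[:20000]
target = signal + coupled                        # contaminated channel
residual = lms_cancel(noise, target)
```

After convergence the residual retains the signal while the witness-correlated noise is suppressed; the static variant simply freezes the learned filter.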
NASA Technical Reports Server (NTRS)
Dickey, Jean O.
1999-01-01
Uncertainty over the response of the atmospheric hydrological cycle (particularly the distribution of water vapor and cloudiness) to anthropogenic forcing is a primary source of doubt in current estimates of global climate sensitivity, which raises severe difficulties in evaluating its likely societal impact. Fortunately, a variety of advanced techniques and sensors are beginning to shed new light on the atmospheric hydrological cycle. One of the most promising makes use of the sensitivity of the Global Positioning System (GPS) to the thermodynamic state, and in particular the water vapor content, of the atmosphere through which the radio signals propagate. Our strategy to derive the maximum benefit for hydrological studies from the rapidly increasing GPS data stream will proceed in three stages: (1) systematically analyze and archive quality-controlled retrievals using state-of-the-art techniques; (2) employ both currently available and innovative assimilation procedures to incorporate these determinations into advanced regional and global atmospheric models and assess their effects; and (3) apply the results to investigate selected scientific issues of relevance to regional and global hydrological studies. An archive of GPS-based estimation of total zenith delay (TZD) data and water vapor where applicable has been established with expanded automated quality control. The accuracy of the GPS estimates is being monitored; the investigation of systematic errors is ongoing using comparisons with water vapor radiometers. Meteorological packages have been implemented. The accuracy and utilization of the TZD estimates has been improved by implementing a troposphere gradient model. GPS-based gradients have been validated as real atmospheric moisture gradients, establishing a link between the estimated gradients and the passage of weather fronts. 
We have developed a generalized ray tracing inversion scheme that can be used to analyze occultation data acquired from space- or land-based receivers. The National Center for Atmospheric Research mesoscale model (version MM5) has been adapted for Southern California, and assimilation studies are underway. Additional information is contained in the original.
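The conversion from a GPS total zenith delay to precipitable water vapor can be sketched as follows. This is a simplified illustration using commonly quoted approximations, not the project's processing chain: the hydrostatic delay predicted from surface pressure (Saastamoinen model) is subtracted from the GPS-estimated total delay, and the remaining wet delay is scaled by a factor of roughly 0.15.

```python
import math

def pwv_from_ztd(ztd_m, pressure_hpa, lat_deg, height_m, pi_factor=0.15):
    """Rough precipitable-water-vapor estimate from a GPS total zenith delay.

    ztd_m        : total zenith delay from GPS processing, in meters
    pressure_hpa : surface pressure, used for the hydrostatic (dry) delay
    pi_factor    : dimensionless wet-delay-to-PWV factor, about 0.15; it
                   varies weakly with the weighted mean atmospheric temperature
    """
    # Saastamoinen zenith hydrostatic delay (meters).
    phi = math.radians(lat_deg)
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * phi) - 0.00000028 * height_m)
    zwd = ztd_m - zhd                  # zenith wet delay
    return pi_factor * zwd * 1000.0    # PWV in millimeters

pwv = pwv_from_ztd(ztd_m=2.40, pressure_hpa=1013.25, lat_deg=34.0, height_m=100.0)
```

The troposphere gradient model mentioned in the abstract extends this zenith-only picture by estimating azimuthal asymmetries in the delay.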
Materials and structural aspects of advanced gas-turbine helicopter engines
NASA Technical Reports Server (NTRS)
Freche, J. C.; Acurio, J.
1979-01-01
Advances in materials, coatings, turbine cooling technology, structural and design concepts, and component-life prediction of helicopter gas-turbine-engine components are presented. Stationary parts - including the inlet particle separator, the front frame, rotor tip seals, vanes, and combustors - and rotating components - compressor blades, disks, and turbine blades - are discussed. Advanced composite materials are considered for the front frame and compressor blades; prealloyed powder superalloys will increase strength and reduce costs of disks; oxide dispersion strengthened alloys will have a 100 °C higher use temperature in combustors and vanes than conventional superalloys; ceramics will provide the highest use temperatures, 1400 °C for stator vanes and 1370 °C for turbine blades; and directionally solidified eutectics will afford up to a 50 °C temperature advantage at turbine-blade operating conditions. Coatings for surface protection at higher surface temperatures and design trends in turbine cooling technology are discussed. New analytical methods of life prediction are described, such as strainrange partitioning for high-temperature fatigue-life prediction, computerized prediction of oxidation resistance, and advanced techniques for estimating coating life.
Factors influencing the consumption of alcohol and tobacco: the use and abuse of economic models.
Godfrey, C
1989-10-01
This paper is concerned with the use of economic models in the debate about the role that tax increases and restrictions on advertising should play in reducing the health problems that arise from the consumption of alcohol and tobacco. It is argued that properly specified demand models that take account of all the important factors that influence consumption are required; otherwise, inadequate modelling may lead to misleading estimates of the effects of policy changes. The ability of economics to deal with goods such as alcohol and tobacco that have addictive characteristics receives special attention. Recent advances in economic theory, estimation techniques and statistical testing are discussed, as is the problem of identifying policy recommendations from empirical results.
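A minimal example of the kind of demand model at issue, illustrative only and not the paper's specification: a log-log regression of consumption on price, whose slope is the price elasticity, with income included so that the price effect is not confounded. The data and coefficients here are synthetic.

```python
import numpy as np

# Synthetic data with a known price elasticity of -0.5 and income elasticity 0.8.
rng = np.random.default_rng(1)
n = 500
log_price = rng.normal(0.0, 0.3, n)
log_income = rng.normal(0.0, 0.3, n)
log_consumption = (-0.5 * log_price + 0.8 * log_income
                   + rng.normal(0.0, 0.05, n))

# OLS on the log-log demand model: ln Q = a + e_p * ln P + e_y * ln Y.
X = np.column_stack([np.ones(n), log_price, log_income])
coef, *_ = np.linalg.lstsq(X, log_consumption, rcond=None)
price_elasticity = coef[1]   # close to -0.5 by construction
```

The paper's point is precisely that omitting relevant regressors (income, advertising, addiction dynamics) from such a model biases the estimated elasticities on which tax policy conclusions rest.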
NASA Astrophysics Data System (ADS)
Wasklewicz, Thad; Zhu, Zhen; Gares, Paul
2017-12-01
Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with legacy datasets. Several technical challenges and data uncertainty issues persist when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported error that makes accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has changed or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and a physical modeling experiment. The numerical simulation incorporates numerous topographic data sources of the kinds typically found across the range from legacy data to present-day high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed at anchor points in the digital terrain models are modeled using "states" in a stochastic estimator.
Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor measurements observed at various times in history. The geometric relationship between the anchor point and the sensor measurement can be approximated via spatial correlation even when a sensor does not directly observe an anchor point. Findings from a numerical simulation indicate the estimated error coincides with the actual error for certain sensors (kinematic GNSS, ALS, TLS, and SfM-MVS). Data from 2D imagery and static GNSS did not perform as well at the time the sensor is integrated into the estimator, largely as a result of the low density of data added from these sources. The estimator provides a history of DEM estimation as well as the uncertainties and cross-correlations observed on anchor points. Our work provides preliminary evidence that our approach is valid for integrating legacy data with HRT and warrants further exploration and field validation.
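The "states in a stochastic estimator" idea can be illustrated with a scalar Kalman update per anchor point. This is a simplified stand-in for the authors' estimator, with invented numbers: each new sensor observation of an anchor elevation shrinks its variance, and the variance history tracks uncertainty through time.

```python
def kalman_update(elev, var, z, var_z):
    """Fuse one elevation observation z (variance var_z) into the state."""
    k = var / (var + var_z)          # Kalman gain
    elev_new = elev + k * (z - elev)
    var_new = (1.0 - k) * var
    return elev_new, var_new

# An anchor point starts from a legacy DEM with large uncertainty (1 m^2),
# then absorbs a TLS-quality observation (0.01 m^2) and an SfM one (0.04 m^2).
elev, var = 102.3, 1.0
history = [(elev, var)]
for z, var_z in [(102.05, 0.01), (102.12, 0.04)]:
    elev, var = kalman_update(elev, var, z, var_z)
    history.append((elev, var))
```

The multi-sensor estimator in the paper generalizes this: anchor states are vector-valued, spatially cross-correlated, and updated whenever any historical or modern sensor observes nearby terrain.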
Diabat Interpolation for Polymorph Free-Energy Differences.
Kamat, Kartik; Peters, Baron
2017-02-02
Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method (J. Comput. Phys. 1976, 22, 245) can be combined with energy gaps from lattice-switch Monte Carlo techniques (Phys. Rev. E 2000, 61, 906) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
Nonlinearity response correction in phase-shifting deflectometry
NASA Astrophysics Data System (ADS)
Nguyen, Manh The; Kang, Pilseong; Ghim, Young-Sik; Rhee, Hyug-Gyo
2018-04-01
Owing to the nonlinear response of digital devices such as screens and cameras used in phase-shifting deflectometry, non-sinusoidal phase-shifted fringe patterns are generated and additional measurement errors are introduced. In this paper, a new deflectometry technique is described that overcomes these problems using a pre-distorted pattern combined with an advanced iterative algorithm. The experimental results show that this method can reconstruct the 3D surface map of a sample without the fringe print-through caused by the nonlinear response of digital devices. The proposed technique is verified by measuring the surface height variations of a deformable mirror and comparing them with the measurement result obtained using a coordinate measuring machine. The difference between the two measurement results is estimated to be less than 13 µm.
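For context, the ideal four-step phase-shifting calculation that device nonlinearity corrupts looks like this. It is a generic textbook sketch, not the paper's corrected algorithm: with π/2 phase shifts, the wrapped phase follows from an arctangent of intensity differences.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 deg.

    Assumes ideal sinusoidal fringes I_k = a + b*cos(phi + k*pi/2); device
    nonlinearity breaks this assumption, which is what the paper corrects
    with a pre-distorted pattern and an iterative algorithm.
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with a known phase map.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
a, b = 0.5, 0.4
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

When the recorded intensities are distorted by a nonlinear gamma curve, the sinusoid acquires harmonics and this arctangent returns a phase with periodic ripple, which appears as fringe print-through in the reconstructed surface.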
Oceanographic applications of laser technology
NASA Technical Reports Server (NTRS)
Hoge, F. E.
1988-01-01
Oceanographic activities with the Airborne Oceanographic Lidar (AOL) for the past several years have primarily been focused on using active (laser-induced pigment fluorescence) and concurrent passive ocean color spectra to improve existing ocean color algorithms for estimating primary production in the world's oceans. The most significant results were the development of a technique for selecting optimal passive wavelengths for recovering phytoplankton photopigment concentration and the application of this technique, termed active-passive correlation spectroscopy (APCS), to various forms of passive ocean color algorithms. This activity includes the use of airborne laser and passive ocean color measurements for the development of advanced satellite ocean color sensors. Promising on-wavelength subsurface scattering layer measurements were recently obtained. A partial summary of these results is shown.
NASA Technical Reports Server (NTRS)
Oliver, W. R.
1980-01-01
The development of an advanced technology high lift system for an energy efficient transport incorporating a high aspect ratio supercritical wing is described. This development is based on the results of trade studies to select the high lift system, the analysis techniques utilized to design the high lift system, and the results of a wind tunnel test program. The program included the first experimental low speed, high Reynolds number wind tunnel test for this class of aircraft. The experimental results include the effects on low speed aerodynamic characteristics of various leading and trailing edge devices, nacelles and pylons, aileron, spoilers, and Mach and Reynolds numbers. Results are discussed and compared with the experimental data, and the various aerodynamic characteristics are estimated.
Estimating groundwater recharge
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Understanding groundwater recharge is essential for successful management of water resources and modeling fluid and contaminant transport within the subsurface. This book provides a critical evaluation of the theory and assumptions that underlie methods for estimating rates of groundwater recharge. Detailed explanations of the methods are provided - allowing readers to apply many of the techniques themselves without needing to consult additional references. Numerous practical examples highlight benefits and limitations of each method. Approximately 900 references allow advanced practitioners to pursue additional information on any method. For the first time, theoretical and practical considerations for selecting and applying methods for estimating groundwater recharge are covered in a single volume with uniform presentation. Hydrogeologists, water-resource specialists, civil and agricultural engineers, earth and environmental scientists and agronomists will benefit from this informative and practical book. It can serve as the primary text for a graduate-level course on groundwater recharge or as an adjunct text for courses on groundwater hydrology or hydrogeology.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
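The ridge-type regularization incorporated into PLSc works on the same principle as ordinary ridge regression, sketched below. This illustrates the principle only, not the PLSc algorithm itself: a penalty λ added to the diagonal of the cross-product matrix stabilizes coefficient estimates under multicollinearity.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)     # nearly collinear predictor
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.standard_normal(n)

beta_ols = ridge_coefficients(X, y, 0.0)    # unstable under collinearity
beta_ridge = ridge_coefficients(X, y, 1.0)  # shrunk toward stable values
```

Ridge never increases the coefficient norm relative to OLS, and here the two near-duplicate predictors end up sharing the effect (coefficients summing to about 1) instead of taking large offsetting values.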
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended, and tabulated values of required sample sizes are shown for some models. PMID:23935262
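The Monte Carlo approach can be reproduced in a few lines for the single-mediator model. This is a sketch of the general idea, not the appended Mplus syntax: simulate many samples from assumed population paths a and b, test the indirect effect in each (here via the joint significance of both paths), and take power as the proportion of significant results.

```python
import numpy as np

def mediation_power(n, a, b, n_sims=500, z_crit=1.96, seed=0):
    """Power to detect the indirect effect a*b via the joint significance test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)          # mediator model
        y = b * m + rng.standard_normal(n)          # outcome model (c' = 0)
        # OLS slopes and standard errors for the a- and b-paths.
        a_hat = (x @ m) / (x @ x)
        res_a = m - a_hat * x
        se_a = np.sqrt((res_a @ res_a) / (n - 1) / (x @ x))
        b_hat = (m @ y) / (m @ m)
        res_b = y - b_hat * m
        se_b = np.sqrt((res_b @ res_b) / (n - 1) / (m @ m))
        # Indirect effect significant if both paths are.
        if abs(a_hat / se_a) > z_crit and abs(b_hat / se_b) > z_crit:
            hits += 1
    return hits / n_sims

power = mediation_power(n=100, a=0.4, b=0.4)
```

The same loop generalizes to the more complex designs listed in the abstract by swapping in the corresponding data-generating model and estimator.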
What we know and don't know about Earth's missing biodiversity.
Scheffers, Brett R; Joppa, Lucas N; Pimm, Stuart L; Laurance, William F
2012-09-01
Estimates of non-microbial diversity on Earth range from 2 million to over 50 million species, with great uncertainties in numbers of insects, fungi, nematodes, and deep-sea organisms. We summarize estimates for major taxa, the methods used to obtain them, and prospects for further discoveries. Major challenges include frequent synonymy, the difficulty of discriminating certain species by morphology alone, and the fact that many undiscovered species are small, difficult to find, or have small geographic ranges. Cryptic species could be numerous in some taxa. Novel techniques, such as DNA barcoding, new databases, and crowd-sourcing, could greatly accelerate the rate of species discovery. Such advances are timely. Most missing species probably live in biodiversity hotspots, where habitat destruction is rife, and so current estimates of extinction rates from known species are too low.
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doğa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy-dispersive spectra without introducing reconstruction artifacts that impact the interpretation of results.
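The penalized maximum-likelihood idea can be sketched for a generic linear-forward-model problem. This illustrates the form of the objective only, not the beamline code: minimize the Poisson negative log-likelihood plus a quadratic roughness penalty (a 1D analogue of the spatio-spectral continuity term) by projected gradient descent.

```python
import numpy as np

def penalized_poisson_mle(A, y, lam=0.1, step=0.01, n_iter=500):
    """Minimize sum(Ax - y*log(Ax)) + lam*sum(diff(x)^2) over x >= 0."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        ax = A @ x
        grad_nll = A.T @ (1.0 - y / ax)      # gradient of the Poisson NLL
        d = np.diff(x)
        grad_pen = np.zeros_like(x)          # gradient of lam*sum(diff(x)^2)
        grad_pen[:-1] -= 2.0 * lam * d
        grad_pen[1:] += 2.0 * lam * d
        x = np.maximum(x - step * (grad_nll + grad_pen), 1e-8)
    return x

rng = np.random.default_rng(3)
truth = np.array([1.0, 1.2, 3.0, 3.1, 1.1, 1.0])
A = np.abs(rng.standard_normal((40, 6)))     # toy nonnegative forward model
y = rng.poisson(A @ truth).astype(float)
x_hat = penalized_poisson_mle(A, y)
```

Both terms are convex in x, so a sufficiently small step gives monotone decrease of the penalized objective; the hyperspectral case replaces the toy operator with the tomographic projector and the 1D difference with spatial and spectral neighbor differences.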
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Motor unit number estimation based on high-density surface electromyography decomposition.
Peng, Yun; He, Jinbao; Yao, Bo; Li, Sheng; Zhou, Ping; Zhang, Yingchun
2016-09-01
The aim was to advance the motor unit number estimation (MUNE) technique using high-density surface electromyography (EMG) decomposition. The K-means clustering convolution kernel compensation algorithm was employed to detect the single motor unit potentials (SMUPs) from high-density surface EMG recordings of the biceps brachii muscles in eight healthy subjects. Contraction forces were controlled at 10%, 20% and 30% of the maximal voluntary contraction (MVC). The achieved MUNE results and the representativeness of the SMUP pools were evaluated using a high-density weighted-average method. Mean numbers of motor units were estimated as 288±132, 155±87, 107±99 and 132±61 by the new MUNE method at 10%, 20%, 30% and 10-30% MVCs, respectively. Over 20 SMUPs were obtained at each contraction level, and the mean residual variances were lower than 10%. The new MUNE method allows convenient and non-invasive collection of a large and representative SMUP pool. It provides a useful tool for estimating the motor unit number of proximal muscles. The new MUNE method successfully avoids the intramuscular electrodes or multiple electrical stimuli required by currently available MUNE techniques; as such, it can minimize patient discomfort during MUNE tests.
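The arithmetic behind MUNE itself is simple once a representative SMUP pool is available. The following is a schematic illustration with invented numbers, not the paper's decomposition pipeline: divide the size of the maximal compound muscle action potential by the mean single-motor-unit potential size.

```python
import numpy as np

def estimate_motor_unit_number(cmap_size, smup_sizes):
    """Classic MUNE arithmetic: maximal compound response / mean unit response.

    cmap_size  : size of the maximal compound muscle action potential
    smup_sizes : sizes of the decomposed single motor unit potentials
    """
    return cmap_size / np.mean(smup_sizes)

smups = np.array([0.12, 0.08, 0.10, 0.09, 0.11])  # arbitrary units
mune = estimate_motor_unit_number(cmap_size=28.0, smup_sizes=smups)
```

The paper's contribution lies in how the SMUP pool is obtained (non-invasive high-density decomposition rather than intramuscular recording or incremental stimulation) and in weighting the pool so its mean is representative.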
Realization of daily evapotranspiration in arid ecosystems based on remote sensing techniques
NASA Astrophysics Data System (ADS)
Elhag, Mohamed; Bahrawi, Jarbou A.
2017-03-01
Daily evapotranspiration is a major component of water resources management plans. In arid ecosystems, an efficient water budget is always hard to achieve due to insufficient irrigation water and high evapotranspiration rates. Monitoring of daily evapotranspiration is therefore a key practice for sustainable water resources management, especially in arid environments. Remote sensing techniques offer great help in estimating daily evapotranspiration on a regional scale. Existing open-source algorithms have proved able to estimate daily evapotranspiration comprehensively in arid environments; their only deficiency is the coarse scale of the remote sensing data used. Consequently, an adequate downscaling algorithm is a necessary step toward an effective water resources management plan. Daily evapotranspiration was estimated fairly well using Advanced Along-Track Scanning Radiometer (AATSR) data in conjunction with MEdium Resolution Imaging Spectrometer (MERIS) data acquired in July 2013, with 1 km spatial resolution and 3-day temporal resolution, under a Surface Energy Balance System (SEBS) model. Results were validated against reference evapotranspiration ground truth values obtained with the standardized Penman-Monteith method, with an R2 of 0.879. The findings of the current research successfully monitor turbulent heat flux values estimated from AATSR and MERIS data with a temporal resolution of only 3 days, in conjunction with reliable meteorological data. These findings are necessary inputs for well-informed decision-making processes regarding sustainable water resources management.
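The ground-truth side of the validation, standardized Penman-Monteith reference evapotranspiration, can be sketched in its FAO-56 form; all input values below are hypothetical, not the paper's data:

```python
import math

def penman_monteith_et0(rn, g, t, u2, es, ea, gamma=0.066):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    rn: net radiation (MJ m-2 day-1), g: soil heat flux (MJ m-2 day-1),
    t: mean air temperature (deg C), u2: wind speed at 2 m (m/s),
    es, ea: saturation and actual vapour pressure (kPa),
    gamma: psychrometric constant (kPa/deg C).
    """
    # Slope of the saturation vapour pressure curve (kPa/deg C).
    delta = 4098 * (0.6108 * math.exp(17.27 * t / (t + 237.3))) / (t + 237.3) ** 2
    num = 0.408 * delta * (rn - g) + gamma * (900 / (t + 273)) * u2 * (es - ea)
    den = delta + gamma * (1 + 0.34 * u2)
    return num / den

# Hypothetical arid-site values for a single day.
print(round(penman_monteith_et0(rn=22.0, g=0.5, t=32.0, u2=2.5, es=4.8, ea=1.2), 1))
```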
Improved estimates of partial volume coefficients from noisy brain MRI using spatial context.
Manjón, José V; Tohka, Jussi; Robles, Montserrat
2010-11-01
This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate the partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in the partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of the brain tissue type volumes. Based on the obtained results, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation. Copyright 2010 Elsevier Inc. All rights reserved.
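For two tissue classes, the noise-free partial volume coefficient has a simple closed form; the sketch below shows that baseline estimate with illustrative intensities (the paper's MRF and non-local means refinements are not shown):

```python
import numpy as np

# Hypothetical mean intensities for two pure tissue classes (e.g. GM, WM).
mu_gm, mu_wm = 100.0, 160.0

# Voxel intensities assumed to be a convex mixture of the two classes.
y = np.array([100.0, 115.0, 130.0, 145.0, 160.0, 170.0])

# Closed-form two-class partial volume coefficient (fraction of WM in
# each voxel), clipped to the physically valid range [0, 1].
pvc_wm = np.clip((y - mu_gm) / (mu_wm - mu_gm), 0.0, 1.0)
print(pvc_wm)
```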
Madi, Mahmoud K; Karameh, Fadi N
2017-01-01
Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience when relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potential (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under the CD-CKF.
Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under the CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating models of a variety of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements.
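The cubature rule at the heart of the CKF can be sketched directly: 2n equally weighted points generated from the mean and a square root of the covariance (illustrative numbers):

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n third-degree spherical-radial cubature points
    used by the CKF, each carrying equal weight 1/(2n)."""
    n = mean.size
    s = np.linalg.cholesky(cov)        # matrix square root of the covariance
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions
    return mean[:, None] + s @ xi      # shape (n, 2n)

m = np.array([1.0, -2.0])
p = np.array([[2.0, 0.3], [0.3, 1.0]])
pts = cubature_points(m, p)
# The equally weighted sample mean of the points recovers the prior mean.
print(pts.mean(axis=1))
```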
NASA Astrophysics Data System (ADS)
Alfieri, Luisa
2015-12-01
Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which leads to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.
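A minimal sketch of Prony frequency estimation, one of the parametric methods compared (synthetic, noiseless waveform; real PQ signals would need model-order selection and noise handling):

```python
import numpy as np

def prony_freqs(x, p, fs):
    """Estimate the frequencies (Hz) of p exponential modes in x via
    Prony's method: fit a length-p linear predictor, then take the
    angles of the roots of its characteristic polynomial."""
    n = len(x)
    # Linear prediction: x[k] = -a1*x[k-1] - ... - ap*x[k-p]
    a_mat = np.column_stack([x[p - 1 - i:n - 1 - i] for i in range(p)])
    rhs = -x[p:]
    coeffs, *_ = np.linalg.lstsq(a_mat, rhs, rcond=None)
    roots = np.roots(np.concatenate(([1.0], coeffs)))
    return np.sort(np.abs(np.angle(roots))) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(200) / fs
# Hypothetical distorted waveform: 50 Hz fundamental plus a 250 Hz harmonic.
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
print(prony_freqs(x, p=4, fs=fs))
```

Each real sinusoid contributes two complex-conjugate modes, hence model order 4 for two tones.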
Preliminary noise tradeoff study of a Mach 2.7 cruise aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.; Maglieri, D. J. (Editor); Raney, J. P. (Editor)
1979-01-01
NASA computer codes in the areas of preliminary sizing and enroute performance, takeoff and landing performance, aircraft noise prediction, and economics were used in a preliminary noise tradeoff study for a Mach 2.7 design supersonic cruise concept. Aerodynamic configuration data were based on wind-tunnel model tests and related analyses. Aircraft structural characteristics and weight were based on advanced structural design methodologies, assuming conventional titanium technology. The most advanced noise prediction techniques available were used, and aircraft operating costs were estimated using accepted industry methods. The four engine cycles included in the study were based on assumed 1985 technology levels. Propulsion data were provided by aircraft manufacturers. Additional empirical data are needed to define both noise reduction features and other operating characteristics of all engine cycles under study. Data on VCE design parameters, coannular nozzle inverted-flow noise reduction, and advanced mechanical suppressors are urgently needed to reduce the present uncertainties in studies of this type.
Robb, N
2014-03-01
The basic techniques of conscious sedation have been found to be safe and effective for the management of anxiety in adult dental patients requiring sedation to allow them to undergo dental treatment. There remains great debate within the profession as to the role of the so-called advanced sedation techniques. This paper presents a series of nine patients who were managed with advanced sedation techniques where the basic techniques were either inappropriate or had previously failed to provide adequate relief of anxiety. In these cases, had advanced sedation techniques not been available, the most likely recourse would have been general anaesthesia, a treatment modality that current guidance indicates should not be used where there is an appropriate alternative. The sedation techniques used have provided that appropriate alternative management strategy.
Final report on "Carbon Data Assimilation with a Coupled Ensemble Kalman Filter"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalnay, Eugenia; Kang, Ji-Sun; Fung, Inez
2014-07-23
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) The carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty. b) The use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable "blurring" of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang, 2009; Kang et al., 2011; Kang et al., 2012). These advances include "variable localization", which reduces sampling errors in the estimation of the forecast error covariance, more advanced adaptive multiplicative and additive inflations, and vertical localization based on the time scale of the processes. The main result was obtained using the LETKF-C with all these advances and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total-column CO2 from GOSAT/OCO-2, and upper-troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution. When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
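The core idea of estimating fluxes "as if they were parameters" by state augmentation can be sketched with a toy scalar system and a stochastic EnKF (all dynamics and noise levels are invented for illustration; this is not the LETKF-C):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: scalar CO2-like state driven by an unknown constant flux b.
# True dynamics: x_next = x + b_true; we observe x with noise.
b_true, r_obs = 2.0, 0.5 ** 2
n_ens, n_steps = 50, 40

# Augmented ensemble [x; b]: the flux is carried in the state vector and
# updated by the same Kalman analysis as the state itself.
ens = np.vstack([rng.normal(0.0, 1.0, n_ens), rng.normal(0.0, 1.0, n_ens)])
x_true = 0.0
for _ in range(n_steps):
    x_true += b_true
    ens[0] += ens[1]                      # forecast: each member uses its own b
    y = x_true + rng.normal(0.0, np.sqrt(r_obs))
    # Stochastic EnKF update of the augmented state from the scalar obs.
    a = ens - ens.mean(axis=1, keepdims=True)
    p_xy = a @ a[0] / (n_ens - 1)         # cov between [x, b] and the obs
    k = p_xy / (p_xy[0] + r_obs)          # Kalman gain (2-vector)
    innov = y + rng.normal(0.0, np.sqrt(r_obs), n_ens) - ens[0]
    ens += np.outer(k, innov)
print(ens[1].mean())   # recovered flux estimate, should approach b_true
```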
Final Technical Report [Carbon Data Assimilation with a Coupled Ensemble Kalman Filter]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalnay, Eugenia
2013-08-30
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) The carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty. b) The use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable "blurring" of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang, 2009; Kang et al., 2011; Kang et al., 2012). These advances include "variable localization", which reduces sampling errors in the estimation of the forecast error covariance, more advanced adaptive multiplicative and additive inflations, and vertical localization based on the time scale of the processes. The main result was obtained using the LETKF-C with all these advances and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total-column CO2 from GOSAT/OCO-2, and upper-troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution. When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault-tolerant, digital-computer-based systems are discussed. Computer-Aided Reliability Estimation 3 (CARE 3) and Gate Logic Software Simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
MASS ESTIMATES OF RAPIDLY MOVING PROMINENCE MATERIAL FROM HIGH-CADENCE EUV IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, David R.; Baker, Deborah; Van Driel-Gesztelyi, Lidia, E-mail: d.r.williams@ucl.ac.uk
We present a new method for determining the column density of erupting filament material using state-of-the-art multi-wavelength imaging data. Much of the prior work on filament/prominence structure can be divided between studies that use a polychromatic approach with targeted campaign observations and those that use synoptic observations, frequently in only one or two wavelengths. The superior time resolution, sensitivity, and near-synchronicity of data from the Solar Dynamics Observatory's Atmospheric Imaging Assembly allow us to combine these two techniques using photoionization continuum opacity to determine the spatial distribution of hydrogen in filament material. We apply the combined techniques to SDO/AIA observations of a filament that erupted during the spectacular coronal mass ejection on 2011 June 7. The resulting 'polychromatic opacity imaging' method offers a powerful way to track partially ionized gas as it erupts through the solar atmosphere on a regular basis, without the need for coordinated observations, thereby readily offering regular, realistic mass-distribution estimates for models of these erupting structures.
Using postural synergies to animate a low-dimensional hand avatar in haptic simulation.
Mulatto, Sara; Formaglio, Alessandro; Malvezzi, Monica; Prattichizzo, Domenico
2013-01-01
A technique to animate a realistic hand avatar with 20 DoFs based on the biomechanics of the human hand is presented. The animation does not use any sensor glove or advanced tracker with markers. The proposed approach is based on knowledge of a set of kinematic constraints on the hand model, referred to as postural synergies, which allow the hand posture to be represented with fewer variables than the number of joints in the hand model. This low-dimensional set of parameters is estimated from direct measurement of the motion of the thumb and index finger, tracked using two haptic devices. A kinematic inversion algorithm has been developed that takes synergies into account and estimates the kinematic configuration of the whole hand, i.e., also of the fingers whose end tips are not directly tracked by the two haptic devices. The hand skin is deformable, and its deformation is computed using a linear vertex blending technique. The proposed synergy-based animation of the hand avatar involves only algebraic computations and is suitable for real-time implementation as required in haptics.
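The synergy-based kinematic inversion reduces to a small least-squares problem when sketched with a linear model; the synergy basis below is random for illustration, not one derived from grasp data:

```python
import numpy as np

rng = np.random.default_rng(1)

n_joints, n_syn = 20, 3
# Hypothetical synergy basis: each column maps one synergy variable to
# all 20 joint angles (in a real system this would come from grasp-data PCA).
s = rng.normal(size=(n_joints, n_syn))

# Only a few joint angles are directly observable (thumb/index tracked
# by the two haptic devices); the rest must be inferred.
observed = [0, 1, 2, 3, 4]
z_true = np.array([0.8, -0.3, 0.5])
q_obs = s[observed] @ z_true

# Kinematic inversion in synergy space: least-squares fit of the
# low-dimensional coordinates from the observed joints only.
z_hat, *_ = np.linalg.lstsq(s[observed], q_obs, rcond=None)
q_full = s @ z_hat   # reconstructed posture of the whole 20-DoF hand
print(np.allclose(q_full, s @ z_true))
```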
A novel method for characterizing the impact response of functionally graded plates
NASA Astrophysics Data System (ADS)
Larson, Reid A.
Functionally graded material (FGM) plates are advanced composites with properties that vary continuously through the thickness of the plate. Metal-ceramic FGM plates have been proposed for use in thermal protection systems where a metal-rich interior surface of the plate gradually transitions to a ceramic-rich exterior surface of the plate. The ability of FGMs to resist impact loads must be demonstrated before using them in high-temperature environments in service. This dissertation presents a novel technique by which the impact response of FGM plates is characterized for low-velocity, low- to medium-energy impact loads. An experiment was designed where strain histories in FGM plates were collected during impact events. These strain histories were used to validate a finite element simulation of the test. A parameter estimation technique was developed to estimate local material properties in the anisotropic, non-homogenous FGM plates to optimize the finite element simulations. The optimized simulations captured the physics of the impact events. The method allows research & design engineers to make informed decisions necessary to implement FGM plates in aerospace platforms.
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-05-09
Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, these methods typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given two pressurized geometries of the same patient at two different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries of 3125 virtual patients. The ML model, which was trained and tested on the dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. Thus, this study demonstrates the feasibility and great potential of using ML techniques as a fast surrogate for FEA-based inverse methods to recover zero-pressure geometries of human organs. Copyright © 2018 John Wiley & Sons, Ltd.
Precise estimation of tropospheric path delays with GPS techniques
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1990-01-01
Tropospheric path delays are a major source of error in deep space tracking. However, the tropospheric-induced delay at tracking sites can be calibrated using measurements of Global Positioning System (GPS) satellites. A series of experiments has demonstrated the high sensitivity of GPS to tropospheric delays. A variety of tests and comparisons indicates that current accuracy of the GPS zenith tropospheric delay estimates is better than 1-cm root-mean-square over many hours, sampled continuously at intervals of six minutes. These results are consistent with expectations from covariance analyses. The covariance analyses also indicate that by the mid-1990s, when the GPS constellation is complete and the Deep Space Network is equipped with advanced GPS receivers, zenith tropospheric delay accuracy with GPS will improve further to 0.5 cm or better.
NASA Astrophysics Data System (ADS)
Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi
Constant monitoring and immediate control of fermentation processes have been required for advanced quality preservation in food industry. In the present work, simple estimation of metabolic states for heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using dielectrophoretic impedance measurement (DEPIM) method. Temporal change in the conductance between micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane situation for E. coli on heat treatment temperature was also analyzed with conventional biological methods. Consequently, a correlation between ΔG and those biological properties was obtained quantitatively. This result suggests that DEPIM method will be available for an effective monitoring technique for complex change in various biological states of microorganisms.
NASA Technical Reports Server (NTRS)
Kong, Jeffrey
1994-01-01
This thesis focuses on the accuracy of parameter estimation and system identification techniques. It is motivated by a complicated load measurement problem from NASA Dryden Flight Research Center that requires advanced system identification techniques. The objective is to accurately predict the load experienced by the aircraft wing structure during flight, determined from a set of calibrated load and gage response relationships. The problem can then be modeled as black-box input-output system identification from which the system parameters have to be estimated. Traditional least squares (LS) techniques and the issues of noisy data and model accuracy are addressed. A statistical bound reflecting the change in residual is derived in order to understand the effects of perturbations on the data. Due to the intrinsic nature of the LS problem, the LS solution faces a trade-off between model accuracy and noise sensitivity. A method of conflicting performance indices is presented, allowing noise sensitivity to be improved while controlling the degradation of model accuracy. SVD techniques for data reduction are studied, and the equivalence of the Correspondence Analysis (CA) and total least squares criteria is proved. Nonlinear LS problems are also examined, with a NASA F-111 data set as an example. Conventional methods are neither easily applicable nor suitable for the specific load problem, since the exact model of the system is unknown. A neural network (NN) does not require prior information about the model of the system; this robustness motivated the application of NN techniques to the load problem. Simulation results for the NN methods used in both the single-load and the 'warning signal' problems are useful and encouraging. The performance of the NN (for the single-load estimate) is better than that of the LS approach, whereas no conventional approach was tried for the 'warning signal' problems.
The NN design methodology is also presented. SVD, CA, and collinearity index methods are used to reduce the number of neurons in a layer.
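The model-accuracy versus noise-sensitivity trade-off discussed above can be illustrated with truncated-SVD least squares (synthetic, near-collinear data; not the thesis's actual gage matrices):

```python
import numpy as np

def tsvd_solve(a, b, k):
    """Least-squares solution using only the k largest singular values,
    trading model fidelity for reduced noise sensitivity."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return vt[:k].T @ (u[:, :k].T @ b / s[:k])

rng = np.random.default_rng(2)
# Hypothetical ill-conditioned gage-response matrix.
a = rng.normal(size=(30, 5))
a[:, 4] = a[:, 3] + 1e-6 * rng.normal(size=30)   # near-collinear columns
x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
b = a @ x_true + 0.01 * rng.normal(size=30)

x_full = tsvd_solve(a, b, k=5)   # full LS: amplifies noise on the weak direction
x_trunc = tsvd_solve(a, b, k=4)  # truncated: stable, biased toward the
                                 # well-conditioned subspace
print(np.linalg.norm(x_trunc - x_true), np.linalg.norm(x_full - x_true))
```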
Visible light scatter measurements of the Advanced X-ray Astronomical Facility (AXAF) mirror samples
NASA Technical Reports Server (NTRS)
Griner, D. B.
1981-01-01
NASA is studying the properties of mirror surfaces for X-ray telescopes; the data will be used to develop the telescope system for the Advanced X-ray Astronomical Facility (AXAF). Visible light scatter measurements, using a computer-controlled scanner, are made of various mirror samples to determine surface roughness. Total diffuse scatter is calculated using numerical integration techniques and used to estimate the rms surface roughness. The measurements are then compared with X-ray scatter measurements of the same samples. A summary of the data generated is presented, along with graphs showing changes in scatter on samples before and after cleaning. Results show that very smooth surfaces (from 2 to 10 Angstroms) can be polished on the common substrate materials, and nickel appears to give the lowest visible light scatter.
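The rms-roughness estimate from total diffuse scatter is commonly obtained by inverting the total integrated scatter (TIS) relation; the sketch below uses a hypothetical measurement, not the paper's data:

```python
import math

def rms_roughness_from_tis(tis, wavelength_nm, incidence_deg=0.0):
    """Invert the total integrated scatter relation
    TIS ~ (4*pi*sigma*cos(theta)/lambda)**2 for the rms roughness sigma (nm)."""
    return (wavelength_nm * math.sqrt(tis)) / (
        4 * math.pi * math.cos(math.radians(incidence_deg)))

# Hypothetical measurement: 0.01% total diffuse scatter at 632.8 nm (HeNe).
sigma_nm = rms_roughness_from_tis(1e-4, 632.8)
print(round(sigma_nm * 10, 1))  # rms roughness in Angstroms
```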
The costs of introducing new technologies into space systems
NASA Technical Reports Server (NTRS)
Dodson, E. N.; Partma, H.; Ruhland, W.
1992-01-01
A review is conducted of cost-research studies intended to provide guidelines for cost estimates of integrating new technologies into existing satellite systems. Quantitative methods are described for determining the technological state-of-the-art so that proposed programs can be evaluated accurately in terms of their contribution to technological development. The R&D costs associated with the proposed programs are then assessed with attention given to the technological advances. Also incorporated quantifiably are any reductions in the costs of production, operations, and support afforded by the advanced technologies. The proposed model is employed in relation to a satellite sizing and cost study in which a tradeoff between increased R&D costs and reduced production costs is examined. The technology/cost model provides a consistent yardstick for assessing the true relative economic impact of introducing novel techniques and technologies.
Reliability model of disk arrays RAID-5 with data striping
NASA Astrophysics Data System (ADS)
Rahman, P. A.; D'K Novikova Freyre Shavier, G.
2018-03-01
Within the scope of this paper, a simplified reliability model of RAID-5 disk arrays (redundant arrays of inexpensive disks) and an advanced reliability model proposed by the authors are discussed; the advanced model takes into consideration the nonzero replacement time of a faulty disk and the different failure rates of disks in the normal state of the disk array and in the degraded and rebuild states. The formula obtained by the authors for calculating the mean time to data loss (MTTDL) of RAID-5 disk arrays on the basis of the advanced model is also presented. Finally, a technique for estimating the initial reliability parameters used in the reliability model is described, and calculation examples of the mean time to data loss of RAID-5 disk arrays for different numbers of disks are given.
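The classic simplified MTTDL formula for RAID-5 (the textbook baseline, not necessarily the authors' advanced model) can be sketched as:

```python
def raid5_mttdl(n_disks, mttf_hours, mttr_hours):
    """Classic simplified mean time to data loss for RAID-5: data are
    lost when a second disk fails while the first is being rebuilt,
    giving MTTDL = MTTF**2 / (n * (n - 1) * MTTR)."""
    return mttf_hours ** 2 / (n_disks * (n_disks - 1) * mttr_hours)

# Hypothetical parameters: 8 disks, 100,000 h MTTF, 24 h replacement+rebuild.
years = raid5_mttdl(8, 100_000.0, 24.0) / 8760
print(round(years))
```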
Advanced fabrication techniques for hydrogen-cooled engine structures
NASA Technical Reports Server (NTRS)
Buchmann, O. A.; Arefian, V. V.; Warren, H. A.; Vuigner, A. A.; Pohlman, M. J.
1985-01-01
Described is a program for the development of coolant passage geometries, material systems, and joining processes that will produce long-life hydrogen-cooled structures for scramjet applications. Tests were performed to establish basic material properties, and samples were constructed and evaluated to substantiate fabrication processes and inspection techniques. Results of the study show that the basic goal of increasing the life of hydrogen-cooled structures by two orders of magnitude relative to that of the Hypersonic Research Engine can be reached with available means. Estimated life is 19,000 cycles for the channel and 16,000 cycles for the pin-fin coolant passage configurations using Nickel 201. Additional research is required to establish the fatigue characteristics of dissimilar-metal coolant passages (Nickel 201/Inconel 718) and to investigate the embrittling effects of the hydrogen coolant.
Characterizing reliability in a product/process design-assurance program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerscher, W.J. III; Booker, J.M.; Bement, T.R.
1997-10-01
Over the years many advancing techniques in the area of reliability engineering have surfaced in the military sphere of influence, and one of these techniques is Reliability Growth Testing (RGT). Private industry has reviewed RGT as part of the solution to its reliability concerns, but many practical considerations have slowed its implementation. Its objective is to demonstrate the reliability requirement of a new product with a specified confidence. This paper speaks directly to that objective but discusses a somewhat different approach to achieving it. Rather than conducting testing as a continuum and developing statistical confidence bands around the results, this Bayesian updating approach starts with a reliability estimate characterized by large uncertainty and then proceeds to reduce the uncertainty by folding in fresh information in a Bayesian framework.
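The Bayesian updating scheme described, starting from large uncertainty and folding in fresh information, can be sketched with a conjugate Beta prior on reliability (pass/fail counts are hypothetical):

```python
def beta_update(a, b, successes, failures):
    """Conjugate Bayesian update of a Beta(a, b) reliability prior with
    new pass/fail test data."""
    return a + successes, b + failures

def beta_mean_sd(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Vague prior: reliability around 0.5 with a wide spread (large uncertainty).
a, b = 1.0, 1.0
print(beta_mean_sd(a, b))        # large initial uncertainty
a, b = beta_update(a, b, successes=48, failures=2)
print(beta_mean_sd(a, b))        # uncertainty reduced by the test data
```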
Spectromicroscopy and coherent diffraction imaging: focus on energy materials applications.
Hitchcock, Adam P; Toney, Michael F
2014-09-01
Current and future capabilities of X-ray spectromicroscopy are discussed based on coherence-limited imaging methods which will benefit from the dramatic increase in brightness expected from a diffraction-limited storage ring (DLSR). The methods discussed include advanced coherent diffraction techniques and nanoprobe-based real-space imaging using Fresnel zone plates or other diffractive optics whose performance is affected by the degree of coherence. The capabilities of current systems, improvements which can be expected, and some of the important scientific themes which will be impacted are described, with focus on energy materials applications. Potential performance improvements of these techniques based on anticipated DLSR performance are estimated. Several examples of energy sciences research problems which are out of reach of current instrumentation, but which might be solved with the enhanced DLSR performance, are discussed.
Assessment of Remote Sensing Technologies for Location of Hydrogen and Helium Leaks
NASA Technical Reports Server (NTRS)
Sellar, R. Glenn; Sohn, Yongho; Mathur, Varun; Reardon, Peter
2001-01-01
In Phase 1 of this project, a hierarchy of techniques for H2 and He leak location was developed. A total of twelve specific remote sensing techniques were evaluated; the results are summarized. A basic diffusion model was also developed to predict the concentration and distribution of H2 or He resulting from a leak. The objectives of Phase 2 of the project consisted of the following four tasks: Advance Rayleigh Doppler technique from TRL 1 to TRL 2; Plan to advance Rayleigh Doppler technique from TRL 2 to TRL 3; Advance researchers and resources for further advancement; Extend diffusion model.
Incorporating Equipment Condition Assessment in Risk Monitors for Advanced Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Coles, Garill A.; Meyer, Ryan M.
2013-10-01
Advanced small modular reactors (aSMRs) can complement the current fleet of large light-water reactors in the USA for baseload and peak demand power production and process heat applications (e.g., water desalination, shale oil extraction, hydrogen production). The day-to-day costs of aSMRs are expected to be dominated by operations and maintenance (O&M); however, the effect of diverse operating missions and unit modularity on O&M is not fully understood. These costs could potentially be reduced by optimized scheduling, with risk-informed scheduling of maintenance, repair, and replacement of equipment. Currently, most nuclear power plants have a “living” probabilistic risk assessment (PRA), which reflects the as-operated, as-modified plant and combines event probabilities with population-based probability of failure (POF) estimates for key components. “Risk monitors” extend the PRA by incorporating the actual and dynamic plant configuration (equipment availability, operating regime, environmental conditions, etc.) into risk assessment. In fact, PRAs are more integrated into plant management in today’s nuclear power plants than at any other time in the history of nuclear power. However, population-based POF curves are still used to populate fault trees; this approach neglects the time-varying condition of equipment that is relied on during standard and non-standard configurations. Equipment condition monitoring techniques can be used to estimate the component POF. Incorporating this unit-specific estimate of POF in the risk monitor can provide a more accurate estimate of risk in different operating and maintenance configurations. This enhanced risk assessment will be especially important for aSMRs that have advanced component designs, which do not have an available operating history to draw from, and often use passive design features, which present challenges to PRA.
This paper presents the requirements and technical gaps for developing a framework to integrate unit-specific estimates of POF into risk monitors, resulting in enhanced risk monitors that support optimized operation and maintenance of aSMRs.
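A minimal sketch of how a unit-specific POF estimate would propagate through a fault-tree style risk calculation (the configuration and all probabilities below are hypothetical, not from the aSMR framework):

```python
# Hedged sketch: replacing a population-based probability of failure (POF)
# with a condition-informed, unit-specific estimate in a toy fault tree.

def or_gate(*p):
    """Top event occurs if any basic event occurs (independent events)."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """Top event occurs only if all redundant trains fail (independent)."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

pof_population = 1e-3   # hypothetical fleet-average pump POF
pof_condition = 4e-4    # hypothetical unit-specific estimate from monitoring

# Hypothetical configuration: two redundant pumps in parallel, in series
# with a single valve whose POF is 2e-4.
risk_baseline = or_gate(and_gate(pof_population, pof_population), 2e-4)
risk_updated = or_gate(and_gate(pof_condition, pof_condition), 2e-4)
```

The structure of the tree is unchanged; only the basic-event probabilities are refreshed from condition monitoring, which is the substitution the paper argues for.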
NASA Astrophysics Data System (ADS)
Czarnogorska, M.; Samsonov, S.; White, D.
2014-11-01
The research objectives of the Aquistore CO2 storage project are to design, adapt, and test non-seismic monitoring methods for measurement and verification of CO2 storage, and to integrate data to determine subsurface fluid distributions, pressure changes, and associated surface deformation. The Aquistore site is located near Estevan in southeastern Saskatchewan, Canada, on the south flank of the Souris River, west of the Boundary Dam Power Station and the historical part of the Estevan coal mine. The targeted CO2 injection zones are within the Winnipeg and Deadwood formations located at >3000 m depth. An array of monitoring techniques was employed in the study area, including advanced satellite Differential Interferometric Synthetic Aperture Radar (DInSAR) with established corner reflectors, GPS, tiltmeters, and piezometer stations. We used airborne LIDAR data for topographic phase estimation and DInSAR product geocoding. Ground deformation maps have been calculated using the Multidimensional Small Baseline Subset (MSBAS) methodology from 134 RADARSAT-2 images, from five different beams, acquired between 12 June 2012 and 6 July 2014. We computed and interpreted nine time series for selected locations. MSBAS results indicate slow ground deformation of up to 1 cm/year, not related to CO2 injection but caused by various natural and anthropogenic processes.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments, constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approaches, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, the advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. Also, a technique for multistage design of an optimal momentum management controller for the space station has been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
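The on-line parameter estimation that such a self-tuner embeds can be illustrated with a recursive least-squares (RLS) sketch (illustrative only; this is generic RLS, not the authors' Kalman-embedded algorithm, and the system below is hypothetical):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.

    theta: current parameter estimate; P: covariance-like matrix;
    phi: regressor vector; y: measurement; lam: forgetting factor.
    """
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam      # shrink uncertainty
    return theta, P

# Hypothetical static system to identify: y = 2*u1 - 3*u2 + small noise
rng = np.random.default_rng(2)
theta, P = np.zeros(2), 1000.0 * np.eye(2)    # vague initial estimate
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -3.0]) + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```

The estimate converges to the true parameters as data accumulate; a self-tuner would feed such on-line estimates into the controller design at each step.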
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses, critical input parameters for volcanic eruption modeling and monitoring that are not currently available.
NASA Technical Reports Server (NTRS)
Hejduk, M. D.; Cowardin, H. M.; Stansbery, Eugene G.
2012-01-01
In performing debris surveys of deep-space orbital regions, the considerable volume of the area to be surveyed and the increased orbital altitude suggest optical telescopes as the most efficient survey instruments; but to proceed this way, methodologies for debris object size estimation using only optical tracking and photometric information are needed. Basic photometry theory indicates that size estimation should be possible if satellite albedo and shape are known. One method for estimating albedo is to try to determine the object's material type photometrically, as one can determine the albedos of common satellite materials in the laboratory. Examination of laboratory filter photometry (using Johnson BVRI filters) on a set of satellite material samples indicates that most material types can be separated at the 1-sigma level via B-R versus R-I color differences with a relatively small amount of required resampling, and objects that remain ambiguous can be resolved by B-R versus B-V color differences and solar radiation pressure differences. To estimate shape, a technique advanced by Hall et al. [1], based on phase-brightness density curves and not requiring any a priori knowledge of attitude, has been modified slightly to try to make it more resistant to the specular characteristics of different materials and to reduce the number of samples necessary to make robust shape determinations. Working from a gallery of idealized debris shapes, the modified technique identifies most shapes within this gallery correctly, also with a relatively small amount of resampling. These results are, of course, based on relatively small laboratory investigations and simulated data, and expanded laboratory experimentation and further investigation with in situ survey measurements will be required in order to assess their actual efficacy under survey conditions; but these techniques show sufficient promise to justify this next level of analysis.
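The color-based material separation described above can be sketched as nearest-centroid classification in (B-R, R-I) color-index space (the centroid values below are hypothetical placeholders, not the laboratory measurements from the study):

```python
import math

# Hypothetical class centroids in (B-R, R-I) color-index space; real values
# would come from laboratory filter photometry of material samples.
centroids = {
    "solar cell": (1.2, 0.6),
    "aluminum":   (0.8, 0.3),
    "mli":        (1.6, 0.9),
}

def classify(b_r, r_i):
    """Assign an observed object to the nearest material centroid."""
    return min(centroids, key=lambda m: math.dist((b_r, r_i), centroids[m]))

material = classify(1.15, 0.55)
```

Objects that fall near a class boundary would be resolved, as the abstract notes, with additional discriminants such as B-V color or solar radiation pressure.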
Integrated survival analysis using an event-time approach in a Bayesian framework
Walsh, Daniel P.; Dreitz, VJ; Heisey, Dennis M.
2015-01-01
Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly, within the ecological field where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known, including times that are interval-censored, with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at <5 days old and were lower for chicks with larger birth weights and/or whose nest was within agricultural habitats.
Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.
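The piece-wise constant hazard formulation at the core of the analysis can be sketched as follows (the interval breaks and rates are hypothetical, not the fitted plover-chick estimates):

```python
import math

# Hypothetical age intervals (days) and daily mortality hazard rates:
# [0, 5), [5, 20), and [20, inf). Survival is S(t) = exp(-H(t)),
# where H(t) is the cumulative hazard integrated interval by interval.
breaks = [0.0, 5.0, 20.0]
hazards = [0.08, 0.02, 0.01]

def cumulative_hazard(t):
    H = 0.0
    for i, h in enumerate(hazards):
        lo = breaks[i]
        hi = breaks[i + 1] if i + 1 < len(breaks) else float("inf")
        if t <= lo:
            break
        H += h * (min(t, hi) - lo)
    return H

def survival(t):
    return math.exp(-cumulative_hazard(t))
```

In the Bayesian model the interval-specific rates become parameters with priors, and covariates (age, chick size, habitat) enter through the hazard in each interval.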
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation, one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtaining this information are far from optimal, as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model-based decomposition of the coherency matrix, only the odd-bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami-affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with an RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEMs, topographic maps or GCPs.
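Once pile heights relative to a ground reference are known, the voxel-based volume estimate reduces to summing heights above the reference over the cell footprints. A minimal sketch (the grid values are hypothetical, not survey data):

```python
# Hedged sketch: voxel-based debris volume from a grid of estimated pile
# heights. Each cell contributes cell_area * (height above ground reference).

def debris_volume(heights, ground_ref, cell_area):
    """Sum positive height differences times the cell footprint area."""
    return sum(
        max(h - ground_ref, 0.0) * cell_area
        for row in heights for h in row
    )

# Hypothetical 1 m x 1 m cells; heights in metres, ground reference at 2.0 m
grid = [[2.0, 3.5, 4.0],
        [2.5, 5.0, 3.0]]
vol = debris_volume(grid, ground_ref=2.0, cell_area=1.0)  # cubic metres
```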
3D shape reconstruction of specular surfaces by using phase measuring deflectometry
NASA Astrophysics Data System (ADS)
Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan
2016-10-01
The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry in this paper. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and dramatically rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% compared with the Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 points of an experimentally measured spherical mirror. In general, this method converges quickly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
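The Zonal step, relaxing grid heights until their finite differences match the measured gradients, can be sketched with a plain successive over-relaxation (SOR) solver (illustrative of the idea only; this is the textbook scheme, not the authors' modified algorithm, and the test surface is synthetic):

```python
import numpy as np

def zonal_sor(gx, gy, omega=1.8, iters=2000):
    """Relax heights z so forward differences match the gradients gx, gy."""
    n, m = gx.shape
    z = np.zeros((n, m))
    for _ in range(iters):
        for i in range(n):
            for j in range(m):
                s, k = 0.0, 0
                if j > 0:
                    s += z[i, j - 1] + gx[i, j - 1]; k += 1
                if j < m - 1:
                    s += z[i, j + 1] - gx[i, j]; k += 1
                if i > 0:
                    s += z[i - 1, j] + gy[i - 1, j]; k += 1
                if i < n - 1:
                    s += z[i + 1, j] - gy[i, j]; k += 1
                z[i, j] += omega * (s / k - z[i, j])   # SOR update
        z -= z.mean()   # remove the unobservable constant-offset term
    return z

# Synthetic test surface and its exact forward-difference gradients
true = 0.1 * np.add.outer(np.arange(8.0) ** 2, np.arange(8.0))
gx = np.zeros_like(true); gx[:, :-1] = np.diff(true, axis=1)
gy = np.zeros_like(true); gy[:-1, :] = np.diff(true, axis=0)
z = zonal_sor(gx, gy)
```

A Modal (polynomial) estimate would replace the zero initial guess here, which is exactly how the paper accelerates the Zonal convergence.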
Advanced Neuroimaging in Traumatic Brain Injury
Edlow, Brian L.; Wu, Ona
2013-01-01
Advances in structural and functional neuroimaging have occurred at a rapid pace over the past two decades. Novel techniques for measuring cerebral blood flow, metabolism, white matter connectivity, and neural network activation have great potential to improve the accuracy of diagnosis and prognosis for patients with traumatic brain injury (TBI), while also providing biomarkers to guide the development of new therapies. Several of these advanced imaging modalities are currently being implemented into clinical practice, whereas others require further development and validation. Ultimately, for advanced neuroimaging techniques to reach their full potential and improve clinical care for the many civilians and military personnel affected by TBI, it is critical for clinicians to understand the applications and methodological limitations of each technique. In this review, we examine recent advances in structural and functional neuroimaging and the potential applications of these techniques to the clinical care of patients with TBI. We also discuss pitfalls and confounders that should be considered when interpreting data from each technique. Finally, given the vast amounts of advanced imaging data that will soon be available to clinicians, we discuss strategies for optimizing data integration, visualization and interpretation. PMID:23361483
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize the performance of these processes. Manufacturing costs were itemized using an adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beamforming and deconvolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
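The eigenvalue-based subtraction idea can be sketched as follows (an illustrative formulation, not necessarily the exact NASA method): subtract the background cross-spectral matrix (CSM), then clip negative eigenvalues so the estimated source CSM remains positive semidefinite, which is what keeps the coherence relationships physically meaningful.

```python
import numpy as np

def subtract_background(csm_total, csm_background):
    """Background subtraction with an eigenvalue clip on the difference CSM."""
    diff = csm_total - csm_background
    w, v = np.linalg.eigh(diff)          # Hermitian eigendecomposition
    w_clipped = np.clip(w, 0.0, None)    # discard non-physical negative power
    return (v * w_clipped) @ v.conj().T  # reassemble a valid PSD matrix

# Synthetic example: a rank-1 coherent source plus diagonal background noise
rng = np.random.default_rng(1)
a = rng.standard_normal(4) + 1j * rng.standard_normal(4)
source = np.outer(a, a.conj())
background = np.diag([2.0, 1.0, 1.5, 0.5]).astype(complex)
est = subtract_background(source + background, background)
```

Plain elementwise subtraction can leave the difference matrix indefinite when background levels exceed the combined measurement; the clip is one simple way to restore a usable CSM estimate.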
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic model optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events of channel runoff generation, propagation, and overland flow within the floodplain domain. This physically-based model eliminates the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure with respect to standard event-based approaches.
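The continuous rainfall-runoff element at the heart of such a modelling chain can be illustrated with a single linear reservoir (a deliberately minimal sketch, not the authors' model; the rainfall series and residence time are hypothetical):

```python
# Hedged sketch: a linear reservoir routes rainfall into discharge.
# Storage S obeys dS/dt = P - Q with outflow Q = S / k.

def linear_reservoir(rainfall, k, dt=1.0, s0=0.0):
    """Route a rainfall series (depth per step) through a reservoir with
    residence time k (in steps); returns the discharge series."""
    s, q_out = s0, []
    for p in rainfall:
        s += p * dt          # rainfall input for this step
        q = s / k            # linear storage-discharge relation
        s -= q * dt          # drain the reservoir
        q_out.append(q)
    return q_out

# Hypothetical impulse of 10 mm followed by dry steps, k = 2 steps
hydrograph = linear_reservoir([10, 0, 0, 0], k=2.0)
```

A continuous simulation strings many such storms together, so the flood frequency emerges from the routed flow series rather than from a synthetic design hyetograph.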
Surgical treatment of solitary brain metastases.
Gates, Marilyn; Alsaidi, Mohammed; Kalkanis, Steven
2012-01-01
Brain metastases are the most common form of brain tumors and are diagnosed in about 40% of all patients with systemic malignancies. Although the percentage of solitary brain metastases has dropped in recent estimates from about 50% to 30% of all patients with brain metastases, this percentage still represents a significant number of patients, and the overall incidence of brain metastases is still on the rise. Historically, brain metastases carried a grim prognosis with a median survival of only a few weeks. The utilization of whole-brain radiation therapy (WBRT) and steroids improved the prognosis to a few months. However, it was not until the advent of advanced surgical techniques in conjunction with other treatment modalities such as WBRT and stereotactic radiosurgery that patients became less likely to succumb to neurological complications. In the last few decades, surgical resection has evolved from a mere emergent palliative treatment to a standard treatment modality that has led to improved clinical outcomes in carefully selected patients with brain metastases. This positive contribution has been made possible by randomized clinical trials, advancement of surgical techniques and tools, imaging modalities, and better understanding of the pathophysiology and perioperative care. Copyright © 2012 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Padhee, Varsha
Common Mode Voltage (CMV) in any power converter has been a major contributor to premature motor failures, bearing deterioration, shaft voltage build-up and electromagnetic interference. Intelligent control methods like Space Vector Pulse Width Modulation (SVPWM) techniques provide immense potential and flexibility to reduce CMV, thereby targeting all of the aforementioned problems. Other solutions like passive filters, shielded cables and EMI filters add to the volume and cost metrics of the entire system. Smart SVPWM techniques therefore come with a very important advantage of being an economical solution. This thesis discusses a modified space vector technique applied to an Indirect Matrix Converter (IMC) which results in the reduction of common mode voltages and other advanced features. The conventional indirect space vector pulse-width modulation (SVPWM) method of controlling matrix converters involves the usage of two adjacent active vectors and one zero vector for both the rectifying and inverting stages of the converter. By suitable selection of space vectors, the rectifying stage of the matrix converter can generate different levels of virtual DC-link voltage. This capability can be exploited for operation of the converter in different ranges of modulation indices for varying machine speeds. This results in lower common mode voltage and improves the harmonic spectrum of the output voltage, without increasing the number of switching transitions as compared to conventional modulation. In summary, the responsibility for formulating output voltages with a particular magnitude and frequency has been transferred solely to the rectifying stage of the IMC. Estimation of the degree of distortion in the three-phase output voltage is another facet discussed in this thesis.
A detailed understanding of the SVPWM technique and the switching sequence of the space vectors makes it possible to estimate the RMS value of the switched output voltage of any converter. This conceivably aids the sizing and design of output passive filters. An analytical estimation method has been presented to achieve this purpose for an IMC. Knowledge of the fundamental component in the output voltage can be utilized to calculate its Total Harmonic Distortion (THD). The effectiveness of the proposed SVPWM algorithms and the analytical estimation technique is substantiated by simulations in MATLAB / Simulink and experiments on a laboratory prototype of the IMC. Comparison plots have been provided to contrast the performance of the proposed methods with the conventional SVPWM method. The behavior of output voltage distortion and CMV with variation in operating parameters like modulation index and output frequency has also been analyzed.
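Computing THD from the spectrum of a switched waveform can be sketched numerically as follows (an illustrative FFT-based check, not the thesis's analytical estimation method; the test waveform is synthetic):

```python
import numpy as np

def thd(signal, fs, f0):
    """THD = sqrt(sum of harmonic amplitude squares) / fundamental amplitude.

    Assumes the record contains an integer number of fundamental cycles so
    that the fundamental and its harmonics fall on exact FFT bins.
    """
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) * 2.0 / n   # one-sided amplitudes
    fund_bin = int(round(f0 * n / fs))
    fund = spec[fund_bin]
    harm_bins = range(2 * fund_bin, len(spec), fund_bin)
    return np.sqrt(sum(spec[k] ** 2 for k in harm_bins)) / fund

# Synthetic "switched" voltage: fundamental plus 3rd and 5th harmonics
fs, f0, n = 10000, 50, 2000          # 10 kHz sampling, 50 Hz, 10 full cycles
t = np.arange(n) / fs
v = (np.sin(2 * np.pi * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))
```

For this waveform the expected THD is sqrt(0.10² + 0.05²) ≈ 0.1118, which the FFT-based estimate reproduces; an analytical method would obtain the same quantity from the known switching sequence instead of a sampled record.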
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungate, Bruce; Pett-Ridge, Jennifer; Blazewicz, Steven
In this project, we developed an innovative and ground-breaking technique, quantitative stable isotope probing (qSIP), which uses density separation of nucleic acids as a quantitative measurement technique. This work is substantial because it advances SIP beyond the qualitative technique that has dominated the field for years. The first methods paper was published in Applied and Environmental Microbiology (Hungate et al. 2015), and this paper describes the mathematical model underlying the quantitative interpretation. A second methods paper (Schwartz et al. 2015) provides a conceptual overview of the method and its application to research problems. A third methods paper was just published (Koch et al. 2018), in which we develop the quantitative model combining sequencing and isotope data to estimate actual rates of microbial growth and death in natural populations. This work has met much enthusiasm in scientific presentations around the world. It has met with equally enthusiastic resistance in the peer-review process, though our record of publication to date argues that people are accepting the merits of the approach. The skepticism and resistance are also potentially signs that this technique is pushing the field forward, albeit with some of the discomfort that accompanies extrapolation. Part of this is a cultural element in the field: the field of microbiology is not accustomed to the assumptions of ecosystem science. Research conducted in this project has pushed the philosophical perspective that major advances can occur when we advocate a sound merger between the traditions of strong inference in microbiology and those of grounded scaling in ecosystem science.
Review of advanced imaging techniques
Chen, Yu; Liang, Chia-Pin; Liu, Yang; Fischer, Andrew H.; Parwani, Anil V.; Pantanowitz, Liron
2012-01-01
Pathology informatics encompasses digital imaging and related applications. Several specialized microscopy techniques have emerged which permit the acquisition of digital images (“optical biopsies”) at high resolution. Coupled with fiber-optic and micro-optic components, some of these imaging techniques (e.g., optical coherence tomography) are now integrated with a wide range of imaging devices such as endoscopes, laparoscopes, catheters, and needles that enable imaging inside the body. These advanced imaging modalities have exciting diagnostic potential and introduce new opportunities in pathology. Therefore, it is important that pathology informaticists understand these advanced imaging techniques and the impact they have on pathology. This paper reviews several recently developed microscopic techniques, including diffraction-limited methods (e.g., confocal microscopy, 2-photon microscopy, 4Pi microscopy, and spatially modulated illumination microscopy) and subdiffraction techniques (e.g., photoactivated localization microscopy, stochastic optical reconstruction microscopy, and stimulated emission depletion microscopy). This article serves as a primer for pathology informaticists, highlighting the fundamentals and applications of advanced optical imaging techniques. PMID:22754737
A Statistical Description of Neural Ensemble Dynamics
Long, John D.; Carmena, Jose M.
2011-01-01
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486
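The idea of weighing prior information against the available ensemble data can be illustrated with a conjugate Beta-Binomial estimator for a bin-wise firing probability (a generic sketch of Bayesian shrinkage, not the paper's exact estimator):

```python
def posterior_mean_rate(spike_bins_active, n_bins, alpha=1.0, beta=1.0):
    """Posterior mean of a bin-wise firing probability under a Beta(alpha, beta)
    prior: with few observations the estimate shrinks toward the prior mean,
    and the data dominate as n_bins grows."""
    return (spike_bins_active + alpha) / (n_bins + alpha + beta)
```

With no data the estimate equals the prior mean; as bins accumulate, poorly estimated regions are exactly those where this shrinkage remains strong, which motivates the adaptive quantization step described above.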
Manns, Braden; McKenzie, Susan Q.; Au, Flora; Gignac, Pamela M.; Geller, Lawrence Ian
2017-01-01
Background: Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. Objective: The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. Design: This study used an analytical model combining Canadian data from various sources. Setting and Patients: This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/m2 and those on dialysis. Measurements: We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. Results: We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay or reduction in severity, or increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments by Can$13.8 million per year within 5 years. 
Limitations: This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments for programs that provide income for individuals without private insurance and who do not qualify for Canada Pension Plan disability payments. Conclusions: Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients, and their families, but also to these other important stakeholders. Improvements in care of individuals with kidney disease could reduce these costs. PMID:28491340
Manns, Braden; McKenzie, Susan Q; Au, Flora; Gignac, Pamela M; Geller, Lawrence Ian
2017-01-01
Many working-age individuals with advanced chronic kidney disease (CKD) are unable to work, or are only able to work at a reduced capacity and/or with a reduction in time at work, and receive disability payments, either from the Canadian government or from private insurers, but the magnitude of those payments is unknown. The objective of this study was to estimate Canada Pension Plan Disability Benefit and private disability insurance benefits paid to Canadians with advanced kidney failure, and how feasible improvements in prevention, identification, and early treatment of CKD and increased use of kidney transplantation might mitigate those costs. This study used an analytical model combining Canadian data from various sources. This study included all patients with advanced CKD in Canada, including those with estimated glomerular filtration rate (eGFR) <30 mL/min/m 2 and those on dialysis. We combined disability estimates from a provincial kidney care program with the prevalence of advanced CKD and estimated disability payments from the Canada Pension Plan and private insurance plans to estimate overall disability benefit payments for Canadians with advanced CKD. We estimate that Canadians with advanced kidney failure are receiving disability benefit payments of at least Can$217 million annually. These estimates are sensitive to the proportion of individuals with advanced kidney disease who are unable to work, and plausible variation in this estimate could mean patients with advanced kidney disease are receiving up to Can$260 million per year. Feasible strategies to reduce the proportion of individuals with advanced kidney disease, either through prevention, delay or reduction in severity, or increasing the rate of transplantation, could result in reductions in the cost of Canada Pension Plan and private disability insurance payments by Can$13.8 million per year within 5 years. 
This study does not estimate how CKD prevention or increasing the rate of kidney transplantation might influence health care cost savings more broadly, and does not include the cost to provincial governments for programs that provide income for individuals without private insurance and who do not qualify for Canada Pension Plan disability payments. Private disability insurance providers and federal government programs incur high costs related to individuals with advanced kidney failure, highlighting the significance of kidney disease not only to patients, and their families, but also to these other important stakeholders. Improvements in care of individuals with kidney disease could reduce these costs.
A test to evaluate the earthquake prediction algorithm, M8
Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.
1992-01-01
A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. 
We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm or conceivably lead to a radically different approach to earthquake prediction.
Mayhew, Terry M; Lucocq, John M
2015-01-01
The terms morphome and morphomics are not new but, recently, a group of morphologists and cell biologists has given them clear definitions and emphasised their integral importance in systems biology. By analogy to other ‘-omes’, the morphome refers to the distribution of matter within 3-dimensional (3D) space. It equates to the totality of morphological features within a biological system (virus, single cell, multicellular organism or populations thereof) and morphomics is the systematic study of those structures. Morphomics research has the potential to generate ‘big data’ because it includes all imaging techniques at all levels of achievable resolution and all structural scales from gross anatomy and medical imaging, via optical and electron microscopy, to molecular characterisation. As with other ‘-omics’, quantification is an important part of morphomics and, because biological systems exist and operate in 3D space, precise descriptions of form, content and spatial relationships require the quantification of structure in 3D. Revealing and quantifying structural detail inside the specimen is achieved currently in two main ways: (i) by some form of reconstruction from serial physical or tomographic slices or (ii) by using randomly-sampled sections and simple test probes (points, lines, areas, volumes) to derive stereological estimates of global and/or individual quantities. The latter include volumes, surfaces, lengths and numbers of interesting features and spatial relationships between them. This article emphasises the value of stereological design, sampling principles and estimation tools as a template for combining with alternative imaging techniques to tackle the ‘big data’ issue and advance knowledge and understanding of the morphome. The combination of stereology, TEM and immunogold cytochemistry provides a practical illustration of how this has been achieved in the sub-field of nanomorphomics. 
Applying these quantitative tools/techniques in a carefully managed study design offers us a deeper appreciation of the spatiotemporal relationships between the genome, metabolome and morphome which are integral to systems biology. PMID:25753334
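The stereological estimators mentioned above (global quantities derived from simple test probes on sampled sections) follow short closed-form rules. For example, the Cavalieri point-counting estimator of volume can be sketched as follows (the numbers in the test are illustrative, not from the article):

```python
def cavalieri_volume(points_per_section, slice_thickness, area_per_point):
    """Cavalieri estimator: V ~= t * (a/p) * sum(P_i), where P_i is the count
    of test points hitting the structure on each systematically sampled
    section, t is section spacing, and a/p is the area per test point."""
    return slice_thickness * area_per_point * sum(points_per_section)
```

The same design logic (systematic random sampling plus a simple probe) underlies the surface, length, and number estimators referred to in the text.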
Seeing the tipping point: Balance perception and visual shape.
Firestone, Chaz; Keil, Frank C
2016-07-01
In a brief glance at an object or shape, we can appreciate a rich suite of its functional properties, including the organization of the object's parts, its optimal contact points for grasping, and its center of mass, or balancing point. However, in the real world and the laboratory, balance perception shows systematic biases whereby observers may misjudge a shape's center of mass by a severe margin. Are such biases simply quirks of physical reasoning? Or might they instead reflect more fundamental principles of object representation? Here we demonstrate systematically biased center-of-mass estimation for two-dimensional (2D) shapes (Study 1) and advance a surprising explanation of such biases. We suggest that the mind implicitly represents ordinary 2D shapes as rich, volumetric, three-dimensional (3D) objects, and that these "inflated" shape representations intrude on and bias perception of the 2D shape's geometric properties. Such "inflation" is a computer-graphics technique for segmenting shapes into parts, and we show that a model derived from this technique best accounts for the biases in center-of-mass estimation in Study 1. Further supporting this account, we show that reducing the need for inflated shape representations diminishes such biases: Center-of-mass estimation improved when cues to shapehood were attenuated (Study 2) and when shapes' depths were explicitly depicted using real-life objects laser-cut from wood (Study 3). We suggest that the technique of shape inflation is actually implemented in the mind; thus, biases in our impressions of balance reflect a more general functional characteristic of object perception.
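The ground-truth 2D center of mass against which such biases are measured is a standard computation for a simple polygon (the shoelace-based centroid; this is generic geometry, not code from the study):

```python
def polygon_centroid(vertices):
    """True 2-D center of mass (centroid) of a simple polygon of uniform
    density, via the shoelace formula over (x, y) vertex pairs."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # signed twice-area contribution
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)
```

Observers' settings can then be compared against this centroid to quantify the bias that the inflation model is meant to explain.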
Lü, Fan; Shao, Li-Ming; Zhang, Hua; Fu, Wen-Ding; Feng, Shi-Jin; Zhan, Liang-Tong; Chen, Yun-Min; He, Pin-Jing
2018-01-01
Bio-stability is a key feature for the utilization and final disposal of biowaste-derived residues, such as aerobic compost or vermicompost of food waste, bio-dried waste, anaerobic digestate or landfilled waste. The present paper reviews conventional methods and advanced techniques used for the assessment of bio-stability. The conventional methods are reclassified into two categories. Advanced techniques, including spectroscopic (fluorescent, ultraviolet-visible, infrared, Raman, nuclear magnetic resonance), thermogravimetric and thermochemolysis analysis, are emphasized for their application in bio-stability assessment in recent years. Their principles, pros and cons are critically discussed. These advanced techniques are found to be convenient in sample preparation and to supply diversified information. However, the viability of these techniques as potential indicators for bio-stability assessment ultimately lies in establishing the relationship of the advanced techniques with the conventional methods, especially with the methods based on biotic response. Furthermore, some misuses in data explanation should be noted.
NASA Astrophysics Data System (ADS)
Cooper, S. D.; Roy, D. P.; Sathyachandran, S. K.
2016-12-01
Quantifying the above ground biomass of grasslands is needed for a number of applications including monitoring grass productivity, wildlife habitat, carbon storage, and fuel bed characteristics. Destructive biomass measurements, although highly accurate, are time consuming and are not easily undertaken on a repeat basis or over large areas. A number of non-destructive techniques have been developed that relate vegetation structural properties to above ground biomass. Conventionally, the disc pasture meter is used for rapid grass biomass estimation and uses the settling height of a disk placed on the grass and allometry. Structure-from-Motion (SfM) photogrammetry and Terrestrial Laser Scanning (TLS) are two technologies that have the potential to yield highly precise three-dimensional (3D) structural measurements of vegetation quite rapidly. Recent advances in computing and data acquisition technologies have led to the successful application of TLS and SfM in woody biomass estimation, but application in grassland systems remains largely untested. The Canopy Biomass Lidar (CBL) is one such advance and is a highly portable and relatively inexpensive TLS unit allowing for rapid and widespread data collection. We investigated the efficacy of a CBL unit as well as SfM in allometric estimation of grassland biomass from volumetric measurements derived from these two technologies, both separately and through the merging of the two independently generated 3D point clouds. The results are compared to biomass estimation from a pasture disc meter. Best use practices for grassland applications of these technologies are also presented.
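Whether the structural measurement is a disc settling height or a TLS/SfM-derived canopy volume, the allometric step is a calibration regression against destructively sampled biomass. A minimal ordinary-least-squares sketch with hypothetical data (the linear form and values are illustrative):

```python
def fit_linear_allometry(xs, biomasses):
    """OLS fit of biomass = a + b * x, where x is a structural measurement
    (e.g. disc settling height or point-cloud volume). Returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(biomasses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, biomasses))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

Comparing the regression fit (and residuals) across the disc meter, TLS, and SfM predictors is essentially the comparison the study describes.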
Estimation of color modification in digital images by CFA pattern change.
Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu
2013-03-10
Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy.
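The intuition behind intermediate value counting is that demosaicked (interpolated) pixels tend to lie between their neighbors, while pixels at the CFA's true sampling positions do not have to. A simplified counting statistic is sketched below; the grid offsets and the horizontal-neighbor choice are illustrative stand-ins for the paper's advanced counting method:

```python
def intermediate_count(channel, offset_row, offset_col):
    """Count pixels NOT on the assumed CFA sampling grid (given by the
    parity offsets) whose value lies between their horizontal neighbors.
    The offset maximizing this count suggests the CFA pattern in use."""
    rows, cols = len(channel), len(channel[0])
    count = 0
    for r in range(rows):
        for c in range(1, cols - 1):
            if (r % 2, c % 2) == (offset_row, offset_col):
                continue  # assumed sensor-sampled position: skip it
            lo, hi = sorted((channel[r][c - 1], channel[r][c + 1]))
            if lo <= channel[r][c] <= hi:
                count += 1
    return count
```

Color modification shifts which offset wins this comparison, which is what lets the extent of the modification be estimated.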
Biological nitrogen fixation: rates, patterns and ecological controls in terrestrial ecosystems
Vitousek, Peter M.; Menge, Duncan N.L.; Reed, Sasha C.; Cleveland, Cory C.
2013-01-01
New techniques have identified a wide range of organisms with the capacity to carry out biological nitrogen fixation (BNF)—greatly expanding our appreciation of the diversity and ubiquity of N fixers—but our understanding of the rates and controls of BNF at ecosystem and global scales has not advanced at the same pace. Nevertheless, determining rates and controls of BNF is crucial to placing anthropogenic changes to the N cycle in context, and to understanding, predicting and managing many aspects of global environmental change. Here, we estimate terrestrial BNF for a pre-industrial world by combining information on N fluxes with 15N relative abundance data for terrestrial ecosystems. Our estimate is that pre-industrial N fixation was 58 (range of 40–100) Tg N fixed yr−1; adding conservative assumptions for geological N reduces our best estimate to 44 Tg N yr−1. This approach yields substantially lower estimates than most recent calculations; it suggests that the magnitude of human alteration of the N cycle is substantially larger than has been assumed.
Remote sensing-based estimation of annual soil respiration at two contrasting forest sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Ni; Gu, Lianhong; Black, T. Andrew
Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this study, we proposed a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest). A version of the Akaike information criterion was used to select the best model from a range of models for annual Rs estimation, based on remotely sensed data products from the Moderate Resolution Imaging Spectroradiometer and a root-zone soil moisture product derived from assimilation of the NASA Advanced Microwave Scanning Radiometer soil moisture products and a two-layer Palmer water balance model. Comprehensively considering both model explanatory power and model complexity, we found that the Arrhenius-type function based on nighttime land surface temperature (LST-night) was the best model at the Missouri Ozark and BC-Campbell River 1949 Douglas-fir sites.
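An Arrhenius-type respiration model driven by a temperature input such as nighttime LST has the general form below. The parameter values in the test are placeholders to be fit per site, not values from the study:

```python
import math

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def arrhenius_respiration(t_kelvin, pre_exponential, activation_energy):
    """Arrhenius-type soil respiration: R_s = A * exp(-Ea / (R * T)),
    with T a remotely sensed land surface temperature in kelvin and
    Ea an apparent activation energy in J mol^-1."""
    return pre_exponential * math.exp(-activation_energy / (R_GAS * t_kelvin))
```

Model selection then compares this form, fit to flux-tower Rs, against alternative temperature and moisture formulations via the information criterion.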
Recent advancements in GRACE mascon regularization and uncertainty assessment
NASA Astrophysics Data System (ADS)
Loomis, B. D.; Luthcke, S. B.
2017-12-01
The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
Measurement of fracture toughness by nanoindentation methods: Recent advances and future challenges
Sebastiani, Marco; Johanns, K. E.; Herbert, Erik G.; ...
2015-04-30
In this study, we describe recent advances and developments for the measurement of fracture toughness at small scales by the use of nanoindentation-based methods including techniques based on micro-cantilever beam bending and micro-pillar splitting. A critical comparison of the techniques is made by testing a selected group of bulk and thin film materials. For pillar splitting, cohesive zone finite element simulations are used to validate a simple relationship between the critical load at failure, the pillar radius, and the fracture toughness for a range of material properties and coating/substrate combinations. The minimum pillar diameter required for nucleation and growth of a crack during indentation is also estimated. An analysis of pillar splitting for a film on a dissimilar substrate material shows that the critical load for splitting is relatively insensitive to the substrate compliance for a large range of material properties. Experimental results from a selected group of materials show good agreement between single cantilever and pillar splitting methods, while a discrepancy of ~25% is found between the pillar splitting technique and double-cantilever testing. It is concluded that both the micro-cantilever and pillar splitting techniques are valuable methods for micro-scale assessment of fracture toughness of brittle ceramics, provided the underlying assumptions can be validated. Although the pillar splitting method has some advantages because of the simplicity of sample preparation and testing, it is not applicable to most metals because their higher toughness prevents splitting, and in this case, micro-cantilever bend testing is preferred.
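The "simple relationship" validated for pillar splitting links the critical load at failure P_c, the pillar radius R, and toughness as K_c = γ·P_c/R^(3/2), with the dimensionless γ calibrated by the cohesive-zone simulations. A sketch (the γ value in the test is illustrative; it depends on the material's modulus-to-hardness ratio and the indenter geometry):

```python
def pillar_splitting_toughness(critical_load, pillar_radius, gamma):
    """Pillar-splitting fracture toughness: K_c = gamma * P_c / R^(3/2).
    gamma is a dimensionless coefficient calibrated by cohesive-zone FEM
    for the material class and indenter geometry; units follow the inputs
    (N and m give Pa*m^0.5)."""
    return gamma * critical_load / pillar_radius ** 1.5
```

The appeal of the method, noted above, is that only the splitting load and pillar radius need to be measured once γ is known.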
Quantitative Aspects of Single Molecule Microscopy
Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally
2015-01-01
Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
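The information-theoretic treatment of localization accuracy referred to above yields a well-known lower bound derived from the Fisher information for an in-focus point source: δ = λ/(2π·n_a·√N), where n_a is the numerical aperture and N the detected photon count. A sketch of that bound (idealized case: no pixelation or background noise):

```python
import math

def localization_limit(wavelength, numerical_aperture, photon_count):
    """Fundamental limit of the localization accuracy of a point source:
    delta = lambda / (2 * pi * n_a * sqrt(N)). Returned in the same
    length unit as the wavelength."""
    return wavelength / (2 * math.pi * numerical_aperture
                         * math.sqrt(photon_count))
```

The 1/√N scaling is why photon budget, not the diffraction limit, sets the practical resolvability of single molecules.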
Irreversible electroporation of locally advanced pancreatic neck/body adenocarcinoma
2015-01-01
Objective: Irreversible electroporation (IRE) of locally advanced pancreatic adenocarcinoma of the neck has been used to palliate appropriate stage 3 pancreatic cancers without evidence of metastasis in patients who have undergone appropriate induction therapy. To date, no standardized technique has been reported for pancreatic mid-body tumors regarding patient selection and intra-operative technique. Patients: Subjects are patients with locally advanced pancreatic adenocarcinoma of the body/neck who have undergone appropriate induction chemotherapy for a reasonable duration. Main outcome measures: The technique of open IRE of locally advanced pancreatic adenocarcinoma of the neck/body is described, with emphasis on intra-operative ultrasound and intra-operative electroporation management. Results: The technique of open IRE of the pancreatic neck/body with bracketing of the celiac axis and superior mesenteric artery, with continuous intraoperative ultrasound imaging and consideration of an intraoperative navigational system, is described. Conclusions: IRE of locally advanced pancreatic adenocarcinoma of the body/neck is feasible for appropriate patients with locally advanced unresectable pancreatic cancer. PMID:26029461
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierre, John W.; Wies, Richard; Trudnowski, Daniel
Time-synchronized measurements provide rich information for estimating a power-system's electromechanical modal properties via advanced signal processing. This information is becoming critical for the improved operational reliability of interconnected grids. A given mode's properties are described by its frequency, damping, and shape. Modal frequencies and damping are useful indicators of power-system stress, usually declining with increased load or reduced grid capacity. Mode shape provides critical information for operational control actions. This project investigated many advanced techniques for power system identification from measured data focusing on mode frequency and damping ratio estimation. Investigators from the three universities coordinated their effort with Pacific Northwest National Laboratory (PNNL). Significant progress was made on developing appropriate techniques for system identification with confidence intervals and testing those techniques on field measured data and through simulation. Experimental data from the western area power system was provided by PNNL and Bonneville Power Administration (BPA) for both ambient conditions and for signal injection tests. Three large-scale tests were conducted for the western area in 2005 and 2006. Measured field PMU (Phasor Measurement Unit) data was provided to the three universities. A 19-machine simulation model was enhanced for testing the system identification algorithms. Extensive simulations were run with this model to test the performance of the algorithms. University of Wyoming researchers participated in four primary activities: (1) Block and adaptive processing techniques for mode estimation from ambient signals and probing signals, (2) confidence interval estimation, (3) probing signal design and injection method analysis, and (4) performance assessment and validation from simulated and field measured data. 
Subspace-based methods have been used to improve on previous results from block-processing techniques. Bootstrap techniques have been developed to estimate confidence intervals for the electromechanical modes from field measured data. Results were obtained using injected signal data provided by BPA. A new probing signal was designed that puts more strength into the signal for a given maximum peak-to-peak swing. Further simulations were conducted on a model based on measured data and with the modifications of the 19-machine simulation model. Montana Tech researchers participated in two primary activities: (1) continued development of the 19-machine simulation test system to include a DC line; and (2) extensive simulation analysis of the various system identification algorithms and bootstrap techniques using the 19-machine model. Researchers at the University of Alaska-Fairbanks focused on the development and testing of adaptive filter algorithms for mode estimation using data generated from simulation models and on data provided in collaboration with BPA and PNNL. Their efforts consisted of pre-processing field data and testing and refining adaptive filter techniques (specifically the Least Mean Squares (LMS), Adaptive Step-size LMS (ASLMS), and Error Tracking (ET) algorithms). They also improved convergence of the adaptive algorithms by using an initial estimate from a block-processing AR method to initialize the weight vector for LMS. Extensive testing was performed on simulated data from the 19-machine model. This project was also extensively involved in the WECC (Western Electricity Coordinating Council) system-wide tests carried out in 2005 and 2006. These tests involved injecting known probing signals into the western power grid. One of the primary goals of these tests was the reliable estimation of electromechanical mode properties from measured PMU data. 
Three types of probing inputs were applied to the system: (1) activation of the Chief Joseph Dynamic Brake, (2) mid-level probing at the Pacific DC Intertie (PDCI), and (3) low-level probing on the PDCI. The Chief Joseph Dynamic Brake is a 1400 MW disturbance to the system and is injected for half a second. For the mid- and low-level probing, the Celilo terminal of the PDCI is modulated with a known probing signal. Similar but less extensive tests were conducted in June of 2000. The low-level probing signals were designed at the University of Wyoming, taking a number of important design factors into consideration. The designed low-level probing signal used in the tests is a multi-sine signal whose frequency content is focused in the range of the inter-area electromechanical modes. The most frequently used of these low-level multi-sine signals had a period of over two minutes, a root-mean-square (rms) value of 14 MW, and a peak magnitude of 20 MW. Up to 15 cycles of this probing signal were injected into the system, resulting in a processing gain of 15. The resulting measured response at points throughout the system was not much larger than the ambient noise present in the measurements.
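The design goal quoted above (more rms strength for a fixed peak swing) is a crest-factor problem. The actual BPA/Wyoming signal design is not given in the abstract; a common textbook device, sketched below under assumed tone counts and sample rates, is a multi-sine built from harmonics of the period with Schroeder phases, which yields a much lower crest factor than zero phases. (The fielded signal did better still: 14 MW rms at a 20 MW peak is a crest factor of about 1.4.)

```python
import numpy as np

def multisine(period, fs, k_tones, schroeder=True):
    """Flat multi-sine from harmonics k/period; Schroeder phases lower the
    crest factor, i.e. more rms power for a fixed peak-to-peak swing."""
    n = np.arange(len(k_tones))
    phases = -np.pi * n * (n + 1) / len(k_tones) if schroeder else np.zeros(len(k_tones))
    t = np.arange(0.0, period, 1.0 / fs)
    x = np.zeros_like(t)
    for k, p in zip(k_tones, phases):
        x += np.sin(2 * np.pi * k / period * t + p)
    return t, x

period, fs = 128.0, 10.0              # ~2-minute period (illustrative numbers)
k_tones = np.arange(13, 129)          # harmonics spanning ~0.1-1.0 Hz,
                                      # the inter-area mode range
t, x_sch = multisine(period, fs, k_tones)
_, x_zero = multisine(period, fs, k_tones, schroeder=False)

x_sch *= 20.0 / np.max(np.abs(x_sch))     # scale to the 20 MW peak in the text
x_zero *= 20.0 / np.max(np.abs(x_zero))

crest_sch = np.max(np.abs(x_sch)) / np.sqrt(np.mean(x_sch ** 2))
crest_zero = np.max(np.abs(x_zero)) / np.sqrt(np.mean(x_zero ** 2))
```

At the same 20 MW peak cap, the Schroeder-phased signal injects several times more probing power than the naive zero-phase version, which is exactly what raises the processing gain against ambient noise.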
Bounded Kalman filter method for motion-robust, non-contact heart rate estimation
Prakash, Sakthi Kumar Arul; Tucker, Conrad S.
2018-01-01
The authors of this work present a method for real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote photoplethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing (i) a blur identification and denoising algorithm for each frame and (ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm for non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state-of-the-art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
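The abstract does not specify the paper's exact filter formulation, but the "bounded" idea can be illustrated with a standard constant-velocity Kalman filter whose innovation is clipped to a physically plausible per-frame motion bound, so that a blur-induced jump in a tracked feature cannot yank the state. All numbers below (frame rate, noise levels, the 15-pixel bound) are assumptions for the sketch.

```python
import numpy as np

def bounded_kf(z, dt=1 / 30, q=50.0, r=4.0, max_step=15.0):
    """Constant-velocity Kalman filter for one feature coordinate, with the
    innovation clipped to a per-frame motion bound (the 'bounded' idea;
    the paper's actual bound and state model are not in the abstract)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    x, P = np.array([z[0], 0.0]), np.eye(2) * 10.0
    out = []
    for zk in z:
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        nu = np.clip(zk - H @ x, -max_step, max_step)  # bounded innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * nu).ravel()                       # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Noisy track of a slowly moving facial feature, plus one blur/jump artifact
rng = np.random.default_rng(1)
t = np.arange(300) / 30
truth = 100 + 20 * np.sin(0.5 * t)      # feature x-coordinate in pixels
z = truth + rng.normal(0, 2.0, t.size)
z[150] += 80                            # a large motion artifact in one frame
est = bounded_kf(z)
```

The clip rejects most of the single-frame artifact while ordinary measurements pass through unchanged, which is the behavior rPPG feature tracking needs under head motion.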
Consistent Partial Least Squares Path Modeling via Regularization
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling technique that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc relative to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present. PMID:29515491
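The core of the ridge remedy can be shown in miniature: path coefficients solved from latent-variable correlations explode when two predictors are nearly collinear, and a small ridge term stabilizes them. The correlation values and the penalty λ = 0.1 below are illustrative, not from the paper's simulation design.

```python
import numpy as np

def path_coefficients(Rxx, rxy, lam=0.0):
    """Path coefficients from latent-variable correlations.
    lam > 0 adds the ridge penalty used by regularized PLSc."""
    return np.linalg.solve(Rxx + lam * np.eye(Rxx.shape[0]), rxy)

# Two nearly collinear latent predictors (correlation 0.99), true paths (0.3, 0.3)
Rxx = np.array([[1.0, 0.99], [0.99, 1.0]])
true_b = np.array([0.3, 0.3])
rxy = Rxx @ true_b + np.array([0.01, -0.01])   # small sampling-error perturbation

b_ols = path_coefficients(Rxx, rxy)            # blows up under collinearity
b_ridge = path_coefficients(Rxx, rxy, lam=0.1) # stays near the true paths
```

With the tiny perturbation, the unregularized solve returns roughly (1.3, -0.7) while the ridge solve stays close to (0.3, 0.3): the bias the penalty introduces is far smaller than the variance it removes, which is the trade-off the simulation study quantifies.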
Thyroid Radiofrequency Ablation: Updates on Innovative Devices and Techniques
Park, Hye Sun; Park, Auh Whan; Chung, Sae Rom; Choi, Young Jun; Lee, Jeong Hyun
2017-01-01
Radiofrequency ablation (RFA) is a well-known, effective, and safe method for treating benign thyroid nodules and recurrent thyroid cancers. Thyroid-dedicated devices and basic techniques for thyroid RFA were introduced by the Korean Society of Thyroid Radiology (KSThR) in 2012. Thyroid RFA has now been adopted worldwide, with subsequent advances in devices and techniques. To optimize the treatment efficacy and patient safety, understanding the basic and advanced RFA techniques and selecting the optimal treatment strategy are critical. The goal of this review is to therefore provide updates and analysis of current devices and advanced techniques for RFA treatment of benign thyroid nodules and recurrent thyroid cancers. PMID:28670156
Remote sensing advances in agricultural inventories
NASA Technical Reports Server (NTRS)
Dragg, J. L.; Bizzell, R. M.; Trichel, M. C.; Hatch, R. E.; Phinney, D. E.; Baker, T. C.
1984-01-01
As the complexity of the world's agricultural industry increases, more timely and more accurate worldwide agricultural information is required to support production and marketing decisions, policy formulation, and technology development. The Inventory Technology Development Project of the AgRISTARS Program has developed new automated technology that uses data sets acquired by spaceborne remote sensors. Research has emphasized the development of multistage, multisensor sampling and estimation techniques for use in global environments where reliable ground observations are not available. This paper presents research results obtained from data sets acquired by four different sensors: Landsat MSS, Landsat TM, Shuttle Imaging Radar, and the AVHRR sensor aboard environmental satellites.
Communications terminal breadboard
NASA Technical Reports Server (NTRS)
1972-01-01
A baseline design is presented of a digital communications link between an advanced manned spacecraft (AMS) and an earth terminal via an Intelsat 4 type communications satellite used as a geosynchronous orbiting relay station. The fabrication, integration, and testing of terminal elements at each end of the link are discussed. In the baseline link design, the information carrying capacity of the link was estimated for both the forward direction (earth terminal to AMS) and the return direction, based upon orbital geometry, relay satellite characteristics, terminal characteristics, and the improvement that can be achieved by the use of convolutional coding/Viterbi decoding techniques.
Advanced Statistics for Exotic Animal Practitioners.
Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G
2017-09-01
Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.
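The review's progression from correlation to regression can be condensed into a generic worked example (the data are simulated; variable names and numbers are purely illustrative, not from the article).

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 200)                  # e.g. a predictor such as body mass
y = 2.0 + 0.5 * x + rng.normal(0, 1, 200)    # outcome with a linear trend + noise

# Pearson correlation: strength and direction of the linear association
r = np.corrcoef(x, y)[0, 1]

# Simple linear regression: best-fit line y = a + b*x, usable for prediction
b, a = np.polyfit(x, y, 1)
y_pred = a + b * 6.0                         # predicted outcome at x = 6
```

Correlation reports only the single number r; the regression additionally recovers the intercept and slope (here near 2.0 and 0.5) and so supports prediction, which is exactly the distinction the article draws.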
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Lin, Y. K.; Zhu, Li-Ping; Fang, Jian-Jie; Cai, G. Q.
1994-01-01
This report supplements a previous report of the same title submitted in June, 1992. It summarizes additional analytical techniques which have been developed for predicting the response of linear and nonlinear structures to noise excitations generated by large propulsion power plants. The report is divided into nine chapters. The first two deal with incomplete knowledge of boundary conditions of engineering structures. The incomplete knowledge is characterized by a convex set, and its diagnosis is formulated as a multi-hypothesis discrete decision-making algorithm with attendant criteria of adaptive termination.
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to considerably enhance the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
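The benefit of replacing binned force estimates with a smooth network fit can be sketched in one dimension. The stand-in below is not the paper's self-regularizing training scheme: it uses a one-hidden-layer tanh network with fixed centers whose output weights are fit by ridge-regularized least squares, and the force profile, grid, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy, discrete generalized-force estimates on a grid of a collective
# variable, standing in for the binned samples an ABF run accumulates
xg = np.linspace(0.0, 1.0, 30)
f_true = lambda x: np.sin(2 * np.pi * x)            # assumed mean-force profile
f_obs = f_true(xg) + rng.normal(0.0, 0.2, xg.size)  # sparse, noisy bin averages

# One-hidden-layer tanh network with fixed centers; ridge least squares fits
# the output weights (a simple surrogate for the paper's training scheme)
centers = np.linspace(0.0, 1.0, 12)

def hidden(x):
    return np.tanh((np.atleast_1d(x)[:, None] - centers) / 0.1)

lam = 1e-3                                          # ridge term smooths the fit
H = hidden(xg)
beta = np.linalg.solve(H.T @ H + lam * np.eye(centers.size), H.T @ f_obs)

def force(x):
    """Continuous, smooth force estimate, defined even between grid points."""
    return hidden(x) @ beta

smooth_err = np.sqrt(np.mean((force(xg) - f_true(xg)) ** 2))
```

Because `force` is defined everywhere, the bias applied to the simulation is continuous rather than piecewise-constant on a grid, which is point (3) in the abstract.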
NASA Astrophysics Data System (ADS)
Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri
2018-01-01
This paper proposes an advanced state-of-health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used for their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from a single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which demonstrates the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation, and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it requires only static charging curves and can be easily implemented in a battery management system (BMS).
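The IC pipeline (differentiate the charge curve, Gaussian-smooth, locate a feature of interest) can be sketched as follows. The charge curve, plateau voltages, and filter width are illustrative, not the paper's NMC data or tuned parameters.

```python
import numpy as np

def gaussian_smooth(y, sigma):
    """Discrete Gaussian filter (a pure-NumPy stand-in for a library call)."""
    k = np.arange(-4 * sigma, 4 * sigma + 1)
    w = np.exp(-k ** 2 / (2.0 * sigma ** 2))
    return np.convolve(y, w / w.sum(), mode="same")

# Synthetic charge curve: capacity (Ah) vs. voltage, with two phase-transition
# steps producing the characteristic IC peaks
v = np.linspace(3.0, 4.2, 600)
q = 1.2 / (1 + np.exp(-(v - 3.5) / 0.02)) + 0.8 / (1 + np.exp(-(v - 3.9) / 0.03))

ic = np.gradient(q, v)                              # incremental capacity dQ/dV
noisy = ic + np.random.default_rng(5).normal(0, 1.0, v.size)
smooth = gaussian_smooth(noisy, sigma=8)            # Gaussian filter removes noise

v_foi = v[np.argmax(smooth)]                        # position of a feature of interest
```

The recovered peak position is the kind of FOI the paper regresses against capacity; as the cell ages, such peaks shift and shrink, and the regression converts that shift into an SoH estimate.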
NASA Astrophysics Data System (ADS)
Sankaran, A.; Chuang, Keh-Shih; Yonekawa, Hisashi; Huang, H. K.
1992-06-01
The imaging characteristics of two chest radiography systems, Advanced Multiple Beam Equalization Radiography (AMBER) and the Konica Direct Digitizer [using a storage phosphor (SP) plate], have been compared. The variables affecting image quality and the computer display/reading systems used are detailed. Utilizing specially designed wedge, geometric, and anthropomorphic phantoms, studies were conducted on: exposure and energy response of detectors; nodule detectability; different exposure techniques; and various look-up tables (LUTs), gray-scale displays, and laser printers. Methods for scatter estimation and reduction were investigated. It is concluded that AMBER, with screen-film and equalization techniques, provides better nodule detectability than SP plates. However, SP plates have other advantages, such as flexibility in the selection of exposure techniques, image processing features, and excellent sensitivity when combined with optimum reader operating modes. The equalization feature of AMBER provides better nodule detectability in the denser regions of the chest. Results of diagnostic accuracy are demonstrated with nodule detectability plots and analysis of images obtained with phantoms.
A new coherent demodulation technique for land-mobile satellite communications
NASA Technical Reports Server (NTRS)
Yoshida, Shousei; Tomita, Hideho
1990-01-01
An advanced coherent demodulation technique is described for land-mobile satellite (LMS) communications. The proposed technique features a combined narrow/wide-band dual open-loop carrier phase estimator, which effectively compensates for the fast carrier phase fluctuations caused by fading, at the cost of an increased phase slip rate. The open-loop structure also enables quick carrier and clock reacquisition after shadowing. Its bit error rate (BER) performance is superior to that of existing detection schemes, showing a BER of 1 x 10(exp -2) at 6.3 dB E sub b/N sub o over a Rician channel with 10 dB C/M and 200 Hz (1/16 modulation rate) fading pitch f sub d for QPSK. The proposed scheme consists of a fast-response carrier recovery and a quick bit timing recovery with interpolation. An experimental terminal model was developed to evaluate its performance under fading conditions. The results are quite satisfactory, giving prospects for future LMS applications.
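The abstract does not detail the dual narrow/wide-band estimator itself, but the open-loop (feedforward) principle it builds on can be illustrated with the classic fourth-power carrier phase estimator for QPSK; the symbol count, noise level, and test phase below are arbitrary. The same mechanism also explains the phase-slip cost mentioned above: the division by four leaves a pi/2 ambiguity.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
bits = rng.integers(0, 4, n)
sym = np.exp(1j * np.pi / 2 * bits)     # QPSK constellation {1, j, -1, -j}
phi = 0.3                               # unknown carrier phase (radians)
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r = sym * np.exp(1j * phi) + noise

# Feedforward 4th-power estimate: raising QPSK to the 4th power strips the
# modulation (sym**4 == 1), leaving the rotation 4*phi in the averaged sample;
# the final /4 carries the pi/2 ambiguity responsible for phase slips.
phi_hat = np.angle(np.mean(r ** 4)) / 4
```

Because no feedback loop has to re-lock, an estimator of this kind reacquires the carrier immediately after a shadowing event, which is the reacquisition advantage the paper exploits.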
Cell mechanics in biomedical cavitation
Wang, Qianxi; Manmi, Kawa; Liu, Kuo-Kang
2015-01-01
Studies on the deformation behaviours of cellular entities, such as coated microbubbles and liposomes subject to a cavitation flow, become increasingly important for the advancement of ultrasonic imaging and drug delivery. Numerical simulations for bubble dynamics of ultrasound contrast agents based on the boundary integral method are presented in this work. The effects of the encapsulating shell are estimated by adapting Hoff's model used for thin-shell contrast agents. The viscosity effects are estimated by including the normal viscous stress in the boundary condition. In parallel, mechanical models of cell membranes and liposomes as well as state-of-the-art techniques for quantitative measurement of viscoelasticity for a single cell or coated microbubbles are reviewed. The future developments regarding modelling and measurement of the material properties of the cellular entities for cutting-edge biomedical applications are also discussed. PMID:26442142
Field development will cost $1 billion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rintoul, B.
1980-10-01
The development of the Belridge property 40 miles west of Bakersfield in the San Joaquin Valley is not only one of the biggest production developments of this decade in California but also one of the most challenging. It will call on advanced expertise and, ultimately, on techniques that are still in the research stage. The program calls for drilling at least 3000 wells and reworking another 2200 wells. In excess of $100 million is being committed in 1980 alone. Since acquiring the property, Shell has increased the estimate of proved developed and undeveloped reserves to approximately 598 million bbl of hydrocarbon liquids and 364 billion cu ft of natural gas. The higher estimates mirror the company's confidence in its capability to recover a larger amount of the in-place oil and gas than previously expected.
The extent of burning in African savanna
NASA Technical Reports Server (NTRS)
Cahoon, D. R. JR.; Levine, J. S.; Cofer, W. R. Iii; Stocks, B. J.
1994-01-01
The temporal and spatial distribution of African savanna grassland fires has been examined, and the areal extent of these fires has been estimated for the subequatorial African continent. African savanna fires have been investigated using remote sensing techniques and imagery collected by low-light sensors on Defense Meteorological Satellite Program (DMSP) satellites and by the Advanced Very High Resolution Radiometer (AVHRR), which is aboard polar-orbiting National Oceanic and Atmospheric Administration (NOAA) satellites. DMSP imagery has been used to map the evolution of savanna burning over all of the African continent, and analysis of AVHRR imagery has been used to estimate the areal extent of the burning in the southern hemispheric African savannas. The work presented primarily reflects the analysis completed for the year 1987. However, comparisons have been made with other years, and the representativeness of the 1987 analysis is discussed.
Imaging flow cytometry for phytoplankton analysis.
Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S
2017-01-01
This review highlights the concepts and instrumentation of imaging flow cytometry technology, and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instruments relevant to the field and currently used for assessment of complex phytoplankton communities' composition and abundance, size structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments. Copyright © 2016 Elsevier Inc. All rights reserved.
Protein self-assembly onto nanodots leads to formation of conductive bio-based hybrids
Hu, Xiao; Dong, Chenbo; Su, Rigu; Xu, Quan; Dinu, Cerasela Zoica
2016-01-01
The next generation of nanowires that could advance the integration of functional nanosystems into synthetic applications, from photocatalysis to optical devices, needs to demonstrate an increased ability to promote electron transfer at their interfaces while ensuring optimum quantum confinement. Herein we used the biological recognition and self-assembly properties of tubulin, a protein involved in building the filaments of cellular microtubules, to create stable, free-standing, and conductive sulfur-doped carbon nanodot-based bio-hybrids. The physical and chemical properties (e.g., composition, morphology, diameter, etc.) of such user-synthesized hybrids were investigated using atomic and spectroscopic techniques, while the electron transfer rate was estimated using peak currents formed during voltammetry scanning. Our results demonstrate the ability to create individual hybrid nanowires capable of reducing energy losses; such hybrids could possibly be used in the future for the advancement and implementation of nanometer-scale functional devices. PMID:27922059
A Historical Perspective on Dynamics Testing at the Langley Research Center
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Kvaternik, Raymond G.; Hanks, Brantley R.
2000-01-01
The experience and advancement of structural dynamics testing for space system applications at the Langley Research Center of the National Aeronautics and Space Administration (NASA) over the past four decades is reviewed. This experience began in the 1960s with the development of a technology base using a variety of physical models to explore dynamic phenomena and to develop reliable analytical modeling capability for space systems. It continued through the 1970s and 80s with the development of rapid, computer-aided test techniques, the testing of low-natural-frequency, gravity-sensitive systems, the testing of integrated structures with active flexible motion control, and orbital flight measurements. It extended into the 1990s, when advanced computerized system identification methods were developed for estimating the dynamic states of complex, lightweight, flexible aerospace systems. The scope of discussion in this paper includes ground and flight tests and summarizes lessons learned in both successes and failures.
Demographic Analysis from Biometric Data: Achievements, Challenges, and New Frontiers.
Sun, Yunlian; Zhang, Man; Sun, Zhenan; Tan, Tieniu
2018-02-01
Biometrics is the technique of automatically recognizing individuals based on their biological or behavioral characteristics. Various biometric traits have been introduced and widely investigated, including fingerprint, iris, face, voice, palmprint, gait, and so forth. Apart from identity, biometric data may convey various other personal information, covering affect, age, gender, race, accent, handedness, height, weight, etc. Among these, analysis of demographics (age, gender, and race) has received tremendous attention owing to its wide real-world applications, with significant efforts devoted and great progress achieved. This survey first presents biometric demographic analysis from the standpoint of human perception, then provides a comprehensive overview of state-of-the-art advances in automated estimation from both academia and industry. Despite these advances, a number of challenging issues continue to inhibit its full potential. We then discuss these open problems, and finally provide an outlook into the future of this very active field of research by sharing some promising opportunities.
The effect of the water on the curcumin tautomerism: A quantitative approach
NASA Astrophysics Data System (ADS)
Manolova, Yana; Deneva, Vera; Antonov, Liudmil; Drakalska, Elena; Momekova, Denitsa; Lambov, Nikolay
2014-11-01
The tautomerism of curcumin has been investigated in ethanol/water binary mixtures by using UV-Vis spectroscopy and advanced quantum-chemical calculations. The spectral changes were processed using an advanced chemometric procedure based on the resolution-of-overlapping-bands technique. As a result, the molar fractions of the tautomers and their individual spectra have been estimated. It has been shown that in ethanol only the enol-keto tautomer is present. The addition of water leads to the appearance of a new spectral band, which was assigned to the diketo tautomeric form. The results show that in 90% water/10% ethanol the diketo form dominates. The observed shift in the equilibrium is explained by the quantum-chemical calculations, which show that water molecules stabilize the diketo tautomer through the formation of stable complexes. To the best of our knowledge, we report for the first time quantitative data on the tautomerism of curcumin and the effect of water.
Zhang, M.; Takahashi, M.; Morin, R.H.; Endo, H.; Esaki, T.; ,
2002-01-01
The accurate hydraulic characterization of low-permeability subsurface environments has important practical significance. In order to examine this issue from the perspective of laboratory-based approaches, we review some recent advancements in the theoretical analyses of three different laboratory techniques specifically applied to low-permeability geologic materials: constant-head, constant flow-rate and transient-pulse permeability tests. Some potential strategies for effectively decreasing the time required to confidently estimate the permeability of these materials are presented. In addition, a new and versatile laboratory system is introduced that can implement any of these three test methods while simultaneously subjecting a specimen to high confining pressures and pore pressures, thereby simulating in situ conditions at great depths. The capabilities and advantages of this innovative system are demonstrated using experimental data derived from Shirahama sandstone and Inada granite, two rock types widely encountered in Japan.
Clinical Application Of Advanced Infrared Thermography (IRT) In Locomotor Diseases
NASA Astrophysics Data System (ADS)
Engel, Joachim-Michael
1983-11-01
Locomotor disease covers a wide range of about 450 different illnesses, with differing pathologies, clinical and prognostic features, and responses to treatment. No single method will be able to cover the whole spectrum of local and systemic signs and symptoms. Nevertheless, there is a need for objective measurements at the site of disease: clinical examination is often dependent on the subjective estimation and personal experience of the clinician. Laboratory tests show only the systemic effect of the disease, such as inflammation. X-rays are restricted to the detection of structural changes appearing late during the pathological process, even when different techniques are used. Here IRT offers several advantages to the clinician as well as to the patient. As a non-invasive method, it monitors the course of disease at the anatomic site of pathology. Quantitative figures calculated from the thermogram, either taken at steady state or during dynamic tests, are essential for differential diagnosis and follow-up. Advanced IRT camera systems fulfill all requirements recently set for medical thermography by the National Bureau of Standards. However, the user should check his system daily with regard to the precision of absolute temperature measurements. Standardization of the recording technique is essential as well to obtain reliable results. Ambient conditions must be adapted to the locomotor disease pathology under study. Advanced IRT systems, e.g. ZEISS-IKOTHERM, together with image processing capability and special software, e.g. the THERMOTOM package, are valuable tools for the rheumatologist in diagnosing and monitoring locomotor diseases.
Market for advanced humanitarian mine detectors
NASA Astrophysics Data System (ADS)
Newnham, Peter; Daniels, David J.
2001-10-01
Uncleared landmines and unexploded ordnance remain a major humanitarian and economic threat in over 60 countries. It is estimated that, worldwide, over US$60 million was spent on mine clearance in 1999. Most of this funding is provided by government aid, often channeled via the UN or European Community. The minefield threat is very varied, with many different types of mine, UXO, terrain, and climate. To cope with this variety a range of demining techniques is used: mechanical techniques such as flails are used for vegetation clearance, but the majority of demining work is still carried out by manual deminers using metal detectors and prodders. Over the last 5 years there has been considerable interest within the scientific and engineering communities in the application of advanced technologies to improve the safety and efficiency of this work. Nevertheless, few new products have been introduced into, and accepted by, the demining community. Despite the high political profile of the landmine problem, very little hard data is available on the real characteristics of the demining equipment market. As part of a European Union supported program to evaluate a multi-sensor handheld mine detector concept, Thales and ERA Technology Ltd have carried out an in-depth assessment of this market. This paper describes the cost-benefits that could accrue to the demining community from the use of advanced equipment under appropriate conditions, and the equipment requirements that result. The dynamics of the demining equipment market and the barriers to entry are discussed.
Piriformis Syndrome and Endoscopic Sciatic Neurolysis.
Knudsen, Joshua S; Mei-Dan, Omer; Brick, Mathew J
2016-03-01
Piriformis syndrome is the compression or irritation of the sciatic nerve by the adjacent piriformis muscle in the buttock, leading to symptoms that include buttock pain, leg pain, and altered neurology in the sciatic nerve distribution. Epidemiological figures for its prevalence are uncertain, with estimates ranging from about 12.2% to 27%. There is no consensus on the diagnostic criteria. Advancement in magnetic resonance imaging allows us to observe unilateral hyperintensity and bowing of the sciatic nerve. The pathophysiology of the disease includes single blunt trauma, overuse causing piriformis hypertrophy, and long-term microtrauma causing scarring. Treatments include physiotherapy, steroid injections, and surgery. Minimally invasive techniques are emerging with the hope that, with less postoperative scar tissue formation, there will be less recurrence of the disease. In this chapter, the senior author describes his technique for endoscopic sciatic neurolysis.
Observations of the Geometry of Horizon-Based Optical Navigation
NASA Technical Reports Server (NTRS)
Christian, John; Robinson, Shane
2016-01-01
NASA's Orion Project has sparked a renewed interest in horizon-based optical navigation(OPNAV) techniques for spacecraft in the Earth-Moon system. Some approaches have begun to explore the geometry of horizon-based OPNAV and exploit the fact that it is a conic section problem. Therefore, the present paper focuses more deeply on understanding and leveraging the various geometric interpretations of horizon-based OPNAV. These results provide valuable insight into the fundamental workings of OPNAV solution methods, their convergence properties, and associated estimate covariance. Most importantly, the geometry and transformations uncovered in this paper lead to a simple and non-iterative solution to the generic horizon-based OPNAV problem. This represents a significant theoretical advancement over existing methods. Thus, we find that a clear understanding of geometric relationships is central to the prudent design, use, and operation of horizon-based OPNAV techniques.
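The geometric core of horizon-based OPNAV can be seen in a deliberately simplified, noiseless case: for a spherical body, the measured limb directions form a cone whose half-angle fixes the range to the body's center in closed form. This sketch is only the spherical special case, not the paper's non-iterative conic solution for an oblate body, and all numbers are illustrative.

```python
import numpy as np

R_moon = 1737.4                       # km, mean lunar radius (spherical model)
d_true = 10_000.0                     # km, true camera-to-center range
rho = np.arcsin(R_moon / d_true)      # apparent angular radius of the disk

# Simulated observation: limb directions in the camera frame, lying on a cone
# of half-angle rho about the nadir direction
nadir = np.array([0.0, 0.0, 1.0])
az = np.linspace(0, 2 * np.pi, 12, endpoint=False)
limb = np.stack([np.sin(rho) * np.cos(az),
                 np.sin(rho) * np.sin(az),
                 np.full(az.size, np.cos(rho))], axis=1)

# Non-iterative range recovery: cone half-angle from the limb rays, then
# d = R / sin(rho)
cos_rho_hat = limb @ nadir
d_hat = R_moon / np.sin(np.arccos(cos_rho_hat.mean()))
```

With noisy limb measurements and an ellipsoidal body, the half-angle step is replaced by a conic fit in the image, but the same closed-form character is what makes the solution non-iterative.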
HALT to qualify electronic packages: a proof of concept
NASA Astrophysics Data System (ADS)
Ramesham, Rajeshuni
2014-03-01
A proof of concept of the Highly Accelerated Life Testing (HALT) technique was explored to assess and optimize electronic packaging designs for long-duration deep space missions over a wide temperature range (-150°C to +125°C). HALT is a custom hybrid suite of testing techniques using environments such as extreme temperatures and dynamic shock step processing from 0g up to 50g of acceleration. The HALT testing used in this study implemented repetitive shock on the test vehicle components at various temperatures to precipitate workmanship and/or manufacturing defects and reveal the weak links of the designs. The purpose is to reduce the product development cycle time for improvements to the packaging design qualification. Test articles were built using advanced electronic package designs and surface-mount technology processes considered useful for a variety of JPL and NASA projects, e.g., surface-mount packages such as ball grid arrays (BGA), plastic ball grid arrays (PBGA), very thin chip array ball grid arrays (CVBGA), quad flat-packs (QFP), and micro-lead-frame (MLF) packages, plus several passive components. Each package was daisy-chained independently, and continuity through each daisy chain was monitored during HALT testing using a data logging system. The HALT technique was then used to predict reliability and assess survivability of these advanced packaging approaches for long-duration deep space missions in much shorter test durations. The boards were tested at shock levels of 40g to 50g at temperatures ranging from +125°C to -150°C; the HALT system can deliver 50g shock levels at room temperature.
Several tests were performed by subjecting the test boards to various g levels ranging from 5g to 50g, test durations of 10 to 60 minutes, hot temperatures of up to +125°C, and cold temperatures down to -150°C. During the HALT test, electrical continuity measurements of the PBGA package showed an open circuit, whereas the BGA, MLF, and QFP packages showed only small variations in electrical continuity measurements. The electrical continuity anomaly of the PBGA occurred within 12 hours of commencing the accelerated test. Similar test boards were assembled, thermally cycled independently from -150°C to +125°C, and monitored for electrical continuity through each package design. The PBGA package on the test board showed anomalous electrical continuity behavior after 959 thermal cycles. Each thermal cycle took around 2.33 hours, so the total test time to failure of the PBGA was 2,237 hours (~3.1 months) under thermal cycling alone. The accelerated technique (thermal cycling + shock) required only 12 hours to cause a failure in the PBGA package. Compared to the thermal-cycle-only test, this is an acceleration of ~186 times (more than two orders of magnitude). This acceleration can save significant time and resources in predicting the life of a package in a given environment, assuming the failure mechanisms are similar in both tests. Further studies are in progress to systematically evaluate the HALT technique on the other advanced electronic packaging components on the test board, using a consistent temperature range for both tests. With this information, one will be able to estimate the number of mission thermal cycles to failure, and hence the time to failure at given thermal and shock levels for a given test board, with a much shorter test program.
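The acceleration factor above is simple arithmetic on the reported numbers, and holds only under the stated assumption that the failure mechanism is the same in both tests:

```python
# Numbers reported in the text for the PBGA failure
cycles_to_failure = 959          # thermal cycles to the continuity anomaly
hours_per_cycle = 2.33           # hours per thermal cycle
halt_hours = 12.0                # hours to the same failure under HALT

thermal_only_hours = cycles_to_failure * hours_per_cycle   # ~2,234 h
                                                           # (text rounds to 2,237)
acceleration = thermal_only_hours / halt_hours             # ~186x
```

The same ratio, applied in reverse, is what lets a short HALT run be converted into an estimated number of mission thermal cycles to failure.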
Systems-Level Synthetic Biology for Advanced Biofuel Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruffing, Anne; Jensen, Travis J.; Strickland, Lucas Marshall
2015-03-01
Cyanobacteria have been shown to be capable of producing a variety of advanced biofuels; however, product yields remain well below those necessary for large-scale production. New genetic tools and high-throughput metabolic engineering techniques are needed to optimize cyanobacterial metabolisms for enhanced biofuel production. Towards this goal, this project advances the development of a multiple promoter replacement technique for systems-level optimization of gene expression in a model cyanobacterial host: Synechococcus sp. PCC 7002. To realize this multiple-target approach, key capabilities were developed, including a high-throughput detection method for advanced biofuels, enhanced transformation efficiency, and genetic tools for Synechococcus sp. PCC 7002. Moreover, several additional obstacles were identified for realization of this multiple promoter replacement technique. The techniques and tools developed in this project will help to enable future efforts in the advancement of cyanobacterial biofuels.
Evolution and Advances in Satellite Analysis of Volcanoes
NASA Astrophysics Data System (ADS)
Dean, K. G.; Dehn, J.; Webley, P.; Bailey, J.
2008-12-01
Over the past 20 years, satellite data used for monitoring and analysis of volcanic eruptions have evolved in terms of timeliness, access, distribution, resolution, and understanding of volcanic processes. Initially, satellite data were used for retrospective analysis, but they have since become the basis of proactive monitoring systems. Timely acquisition of data and the capability to distribute large data files paralleled advances in computer technology and were critical components of near-real-time monitoring. The sharing of these data and the resulting discussions have improved our understanding of eruption processes and, even more importantly, their impact on society. To illustrate this evolution, critical scientific discoveries will be highlighted, including detection of airborne ash and sulfur dioxide, cloud-height estimates, prediction of ash cloud movement, and detection of thermal anomalies as precursor signals to eruptions. The Alaska Volcano Observatory (AVO) has been a leader in implementing many of these advances in an operational setting, including automated eruption detection, database analysis systems, and remotely accessible web-based analysis systems. Finally, limitations resulting from trade-offs in resolution, and how they weaken some detection techniques and hazard assessments, will be presented.
Advanced Small Modular Reactor Economics Model Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Thomas J.
2014-10-01
The US Department of Energy Office of Nuclear Energy's Advanced Small Modular Reactor (SMR) research and development activities focus on four key areas: developing assessment methods for evaluating advanced SMR technologies and characteristics; developing and testing materials, fuels, and fabrication techniques; resolving key regulatory issues identified by the US Nuclear Regulatory Commission and industry; and developing advanced instrumentation and controls and human-machine interfaces. This report focuses on the development of assessment methods to evaluate advanced SMR technologies and characteristics. Specifically, this report describes the expansion and application of the economic modeling effort at Oak Ridge National Laboratory. Analysis of the current modeling methods shows that one of the primary concerns for the modeling effort is the handling of uncertainty in cost estimates. Monte Carlo-based methods are commonly used to handle uncertainty, especially when implemented by a stand-alone script in a language such as Python or MATLAB. However, a script-based model requires each potential user to have access to an interpreter or executable capable of handling the script. Making the model accessible to multiple independent analysts is best accomplished by implementing it in a common computing tool such as Microsoft Excel. Excel is readily available and accessible to most system analysts, but it is not designed for straightforward implementation of a Monte Carlo-based method: using a Monte Carlo algorithm requires in-spreadsheet scripting and statistical analyses, or the use of add-ons such as Crystal Ball. An alternative method uses propagation-of-error calculations in the existing Excel-based system to estimate system cost uncertainty. This method has the advantage of using Microsoft Excel as is, but it requires simplifying assumptions. These assumptions do not necessarily call the analytical results into question.
In fact, the analysis shows that the propagation-of-error method introduces essentially negligible error, especially when compared to the uncertainty associated with some of the estimates themselves. The results of these uncertainty analyses quantify and identify the sources of uncertainty in the overall cost estimation. The obvious generalization, that capital cost uncertainty is the main driver, can be shown to be accurate for the current state of reactor cost analysis. However, the detailed component-by-component analysis helps to demonstrate which components would benefit most from research and development to decrease the uncertainty, as well as which would benefit from research and development to decrease the absolute cost.
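The trade-off described above can be illustrated with a toy cost model: for a sum of independent cost items, propagation of error adds variances, and a Monte Carlo run on the same model should agree closely. The component names and numbers below are invented placeholders, not values from the report:

```python
import math
import random

# Hypothetical cost items: name -> (mean, standard deviation), in $M.
components = {
    "capital": (3000.0, 600.0),
    "fuel": (200.0, 30.0),
    "O&M": (400.0, 80.0),
}

# Propagation of error: for a sum of independent items, variances add,
# so sigma_total = sqrt(sum(sigma_i^2)) -- the spreadsheet-friendly approach.
mean_total = sum(m for m, s in components.values())
sigma_prop = math.sqrt(sum(s ** 2 for m, s in components.values()))

# Monte Carlo estimate of the same model, for comparison.
random.seed(1)
samples = [sum(random.gauss(m, s) for m, s in components.values())
           for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)
mc_sigma = math.sqrt(sum((x - mc_mean) ** 2 for x in samples) / len(samples))
```

For a linear model like this, the two methods agree almost exactly, which is the sense in which the simplifying assumptions introduce negligible error; nonlinear cost relationships are where the propagation-of-error shortcut starts to bend.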
Refinement of NMR structures using implicit solvent and advanced sampling techniques.
Chen, Jianhan; Im, Wonpil; Brooks, Charles L
2004-12-15
NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol of utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures.
An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified force field and then refines these structures with implicit solvent using the REX method. We systematically examine the reliability and efficacy of this protocol using four proteins of various sizes ranging from the 56-residue B1 domain of Streptococcal protein G to the 370-residue Maltose-binding protein. Significant improvement in the structures was observed in all cases when refinement was based on low-redundancy restraint data. The proposed protocol is anticipated to be particularly useful in early stages of NMR structure determination where a reliable estimate of the native fold from limited data can significantly expedite the overall process. This refinement procedure is also expected to be useful when redundant experimental data are not readily available, such as for large multidomain biomolecules and in solid-state NMR structure determination.
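The replica-exchange (REX) idea behind the refinement step can be sketched on a toy energy landscape: several copies of the system are simulated at different temperatures and periodically swapped with a Metropolis criterion, so hot replicas cross barriers while the coldest replica collects low-energy (most native-like) configurations. The one-dimensional double well below stands in for a restraint-plus-force-field energy; all parameters are illustrative:

```python
import math
import random

def energy(x):
    """Toy double-well energy with minima at x = +/-1 and a barrier at x = 0."""
    return (x ** 2 - 1.0) ** 2

random.seed(0)
temps = [0.05, 0.2, 0.8]       # replica temperatures (cold -> hot)
xs = [2.0 for _ in temps]      # all replicas start far from either minimum

for step in range(20000):
    # Metropolis move within each replica.
    for i, T in enumerate(temps):
        trial = xs[i] + random.uniform(-0.3, 0.3)
        dE = energy(trial) - energy(xs[i])
        if dE < 0 or random.random() < math.exp(-dE / T):
            xs[i] = trial
    # Periodically attempt swaps between neighboring temperatures.
    if step % 50 == 0:
        for i in range(len(temps) - 1):
            d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(xs[i + 1]) - energy(xs[i]))
            if d < 0 or random.random() < math.exp(-d):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]

# The coldest replica ends near one of the minima, i.e. a low-energy structure.
print(xs[0], energy(xs[0]))
```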
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in computed tomography (CT) technology have increased the modality's diagnostic capabilities and therefore its utilization, which has in turn increased radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole-body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo simulations incorporating detailed multidetector CT (MDCT) scanner models and specific scan protocols, with patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols that pose a specific concern for radiosensitive organs or regions, including fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung doses were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms, which can be used in simulations estimating organ dose for any patient model.
The generalizable model is a significant contribution of this work, as it lays the foundation for future simulation of TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.
Hochman, Mark N
2007-04-01
This article will review standard techniques for intraligamentary injection and describe the technology and technique behind a new single-tooth anesthesia system. This system and technique represent a technological advancement and a greater understanding of intraligamentary anesthesia.
Brassey, Charlotte A.; Gardiner, James D.
2015-01-01
Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and a low mean squared error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as the resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
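The prediction step of such a volumetric approach is an ordinary log-log regression of body mass on fitted-shape volume across extant calibration species. The sketch below uses invented calibration data (not the paper's), purely to show the mechanics:

```python
import math

# (alpha-shape volume [m^3], body mass [kg]) for extant calibration species.
# These pairs are invented placeholders roughly consistent with tissue density.
calibration = [
    (0.05, 55.0), (0.11, 120.0), (0.48, 510.0), (1.9, 2050.0), (3.4, 3600.0),
]

# Ordinary least squares on log-transformed data: log(m) = a + b * log(v).
xs = [math.log(v) for v, m in calibration]
ys = [math.log(m) for v, m in calibration]
n = len(calibration)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

def predict_mass(volume_m3):
    """Predict body mass [kg] from an alpha-shape volume [m^3]."""
    return math.exp(intercept + slope * math.log(volume_m3))
```

With real data, the same regression yields the predictive equations applied to the mammoth and ground sloth skeletons.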
NASA Technical Reports Server (NTRS)
Redemann, Jens; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; Burton, S.; Livingston, J.;
2014-01-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) measurements for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). We discuss some of the challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed. We also discuss a methodology for using the multi-sensor aerosol retrievals for aerosol type classification based on advanced clustering techniques. The combination of research results permits conclusions regarding the attribution of aerosol radiative forcing to aerosol type.
Robust infrared target tracking using discriminative and generative approaches
NASA Astrophysics Data System (ADS)
Asha, C. S.; Narasimhadhan, A. V.
2017-09-01
The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been achieved in RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve acceptable accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative stage makes use of two complementary methods operating in parallel: a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments were conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision was accomplished compared with existing trackers. Further, a quantitative and qualitative comparison of the proposed approach with state-of-the-art trackers clearly demonstrates an overall increase in performance.
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least-squares estimates, two estimation techniques that can limit its damaging effect are presented. These two techniques, principal components regression and mixed estimation, belong to the class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. Eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least-squares technique.
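The eigensystem diagnosis mentioned above can be sketched with two nearly collinear regressors: after unit-scaling, the eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]] are 1 + r and 1 - r, and their ratio measures ill-conditioning. The data values here are invented for illustration:

```python
import math

# Two regressors that are almost identical (illustrative values).
x1 = [0.10, 0.20, 0.30, 0.40, 0.50]
x2 = [0.11, 0.19, 0.31, 0.41, 0.49]

def unit_scale(x):
    """Scale a regressor column to unit length."""
    norm = math.sqrt(sum(v * v for v in x))
    return [v / norm for v in x]

u1, u2 = unit_scale(x1), unit_scale(x2)
r = sum(a * b for a, b in zip(u1, u2))   # off-diagonal of the scaled X'X

# Eigenvalues of [[1, r], [r, 1]] are 1 + |r| and 1 - |r|; the condition
# index is the square root of their ratio.
lam_max, lam_min = 1 + abs(r), 1 - abs(r)
condition_index = math.sqrt(lam_max / lam_min)
print(f"correlation {r:.4f}, condition index {condition_index:.1f}")
```

A condition index above roughly 30 is a common rule-of-thumb signal of harmful collinearity; principal components regression then discards the directions belonging to the smallest eigenvalues before estimating parameters.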
Wafer-level colinearity monitoring for TFH applications
NASA Astrophysics Data System (ADS)
Moore, Patrick; Newman, Gary; Abreau, Kelly J.
2000-06-01
Advances in thin film head (TFH) designs continue to outpace those in the IC industry. The transition to giant magneto resistive (GMR) designs is underway along with the push toward areal densities in the 20 Gbit/inch2 regime and beyond. This comes at a time when the popularity of the low-cost personal computer (PC) is extremely high, and PC prices are continuing to fall. Consequently, TFH manufacturers are forced to deal with pricing pressure in addition to technological demands. New methods of monitoring and improving yield are required along with advanced head designs. TFH manufacturing is a two-step process. The first is a wafer-level process consisting of manufacturing devices on substrates using processes similar to those in the IC industry. The second half is a slider-level process where wafers are diced into 'rowbars' containing many heads. Each rowbar is then lapped to obtain the desired performance from each head. Variation in the placement of specific layers of each device on the bar, known as a colinearity error, causes a change in device performance and directly impacts yield. The photolithography tool and process contribute to colinearity errors. These components include stepper lens distortion errors, stepper stage errors, reticle fabrication errors, and CD uniformity errors. Currently, colinearity is only very roughly estimated during wafer-level TFH production. An absolute metrology tool, such as a Nikon XY, could be used to quantify colinearity with improved accuracy, but this technique is impractical since TFH manufacturers typically do not have this type of equipment at the production site. More importantly, this measurement technique does not provide the rapid feedback needed in a high-volume production facility. Consequently, the wafer-fab must rely on resistivity-based measurements from slider-fab to quantify colinearity errors. The feedback of this data may require several weeks, making it useless as a process diagnostic. 
This study examines a method of quickly estimating colinearity at the wafer-level with a test reticle and metrology equipment routinely found in TFH facilities. Colinearity results are correlated to slider-fab measurements on production devices. Stepper contributions to colinearity are estimated, and compared across multiple steppers and stepper generations. Multiple techniques of integrating this diagnostic into production are investigated and discussed.
RF Testing Of Microwave Integrated Circuits
NASA Technical Reports Server (NTRS)
Romanofsky, R. R.; Ponchak, G. E.; Shalkhauser, K. A.; Bhasin, K. B.
1988-01-01
Fixtures and techniques are undergoing development. Four test fixtures and two advanced techniques have been developed in continuing efforts to improve RF characterization of MMICs. A finline/waveguide test fixture was developed to test submodules of a 30-GHz monolithic receiver. A universal, commercially manufactured coaxial test fixture was modified to enable characterization of various microwave solid-state devices in the frequency range of 26.5 to 40 GHz. A probe/waveguide fixture is compact, simple, and designed for nondestructive testing of large numbers of MMICs; this nondestructive-testing fixture includes a cosine-tapered ridge to match the impedance of the waveguide to the microstrip. The first advanced technique is microwave wafer probing; the second is electro-optical sampling.
Endoscopic therapy for early gastric cancer: Standard techniques and recent advances in ESD
Kume, Keiichiro
2014-01-01
The technique of endoscopic submucosal dissection (ESD) is now a well-known endoscopic therapy for early gastric cancer. ESD was introduced to resect large specimens of early gastric cancer in a single piece. ESD can provide precision of histologic diagnosis and can also reduce the recurrence rate. However, the drawback of ESD is its technical difficulty, and, consequently, it is associated with a high rate of complications, the need for advanced endoscopic techniques, and a lengthy procedure time. Various advances in the devices and techniques used for ESD have contributed to overcoming these drawbacks. PMID:24914364
Advanced Diffusion-Weighted Magnetic Resonance Imaging Techniques of the Human Spinal Cord
Andre, Jalal B.; Bammer, Roland
2012-01-01
Unlike those of the brain, advances in diffusion-weighted imaging (DWI) of the human spinal cord have been challenged by the more complicated and inhomogeneous anatomy of the spine, the differences in magnetic susceptibility between adjacent air and fluid-filled structures and the surrounding soft tissues, and the inherent limitations of the initially used echo-planar imaging techniques used to image the spine. Interval advances in DWI techniques for imaging the human spinal cord, with the specific aims of improving the diagnostic quality of the images, and the simultaneous reduction in unwanted artifacts have resulted in higher-quality images that are now able to more accurately portray the complicated underlying anatomy and depict pathologic abnormality with improved sensitivity and specificity. Diffusion tensor imaging (DTI) has benefited from the advances in DWI techniques, as DWI images form the foundation for all tractography and DTI. This review provides a synopsis of the many recent advances in DWI of the human spinal cord, as well as some of the more common clinical uses for these techniques, including DTI and tractography. PMID:22158130
Cost-of-illness studies: concepts, scopes, and methods
2014-01-01
Liver diseases are one of the main causes of death, and their ever-increasing prevalence is threatening to cause significant damage both to individuals and society as a whole. This damage is especially serious for the economically active population in Korea. From the societal perspective, it is therefore necessary to consider the economic impacts associated with liver diseases, and identify interventions that can reduce the burden of these diseases. The cost-of-illness study is considered to be an essential evaluation technique in health care. By measuring and comparing the economic burdens of diseases to society, such studies can help health-care decision-makers to set up and prioritize health-care policies and interventions. Using economic theories, this paper introduces various study methods that are generally applicable to most disease cases for estimating the costs of illness associated with mortality, morbidity, disability, and other disease characteristics. It also presents concepts and scopes of costs along with different cost categories from different research perspectives in cost estimations. By discussing the epidemiological and economic grounds of the cost-of-illness study, the reported results represent useful information about several evaluation techniques at an advanced level, such as cost-benefit analysis, cost-effectiveness analysis, and cost-utility analysis. PMID:25548737
Measurement of ROS homeostasis in isolated mitochondria.
Tretter, L; Ambrus, A
2014-01-01
In this chapter, we describe the most advanced methods currently applied for the quantitative assessment of ROS homeostasis inside the mitochondrion. These techniques are of particular interest in the field of oxidative stress. After discussing the importance of quantifying mitochondrial ROS homeostasis, three major aspects of this phenomenon and the pertinent detection methodologies are delineated in detail. First, the most important methods for detecting mitochondrial ROS, based on fluorimetric or spectrophotometric approaches, are described. Elimination of ROS generated inside the mitochondrion is another crucial mechanism that also needs to be quantified accurately to estimate the antioxidant capacity of mitochondria under specific conditions. Since ROS generation and elimination occur in concert, independent methods are needed to estimate the net effect. One such sensitive biochemical marker in the mitochondrion is aconitase, a citric acid cycle enzyme that is highly sensitive to ROS; we describe two procedures for the precise determination of aconitase activity. A few auxiliary techniques and good practices relevant to the successful accomplishment of the more delicate approaches are also mentioned. All other relevant technical considerations, including the advantages and disadvantages of the various methods and the most common artifacts, are also discussed.
Cost-of-illness studies: concepts, scopes, and methods.
Jo, Changik
2014-12-01
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures, since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae: fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique that utilizes importance sampling. The methods are compared for accuracy, efficiency, and the conservatism of the reliability estimation. The methodology for determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
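The simple Monte Carlo formulation can be sketched for the textbook stress-strength case, where the failure surface is g = R - S = 0; for normal variables this case also has a closed form against which FPI-style answers are checked. The distribution parameters are invented for illustration:

```python
import math
import random

# Strength R and stress S as independent normal random variables [MPa]
# (illustrative parameters, not from the report).
mu_R, sd_R = 100.0, 10.0
mu_S, sd_S = 60.0, 15.0

# Simple Monte Carlo: probability of failure = P(R < S).
random.seed(42)
n = 200_000
failures = sum(random.gauss(mu_R, sd_R) < random.gauss(mu_S, sd_S)
               for _ in range(n))
pf_mc = failures / n

# Exact result for this linear limit state: pf = Phi(-beta), with the
# reliability index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
beta = (mu_R - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))
print(pf_mc, pf_exact)
```

Importance sampling improves on simple Monte Carlo by drawing samples concentrated near the failure surface and re-weighting them, which matters when the failure probability is very small and crude sampling would need an enormous n.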
NASA Astrophysics Data System (ADS)
Sujatha, N.; Anand, B. S. Suresh; Nivetha, K. Bala; Narayanamurthy, V. B.; Seshadri, V.; Poddar, R.
2015-07-01
Light-based diagnostic techniques provide a minimally invasive way for selective biomarker estimation when tissues transform from a normal to a malignant state. Spectroscopic techniques based on diffuse reflectance characterize the changes in tissue hemoglobin/oxygenation levels during the tissue transformation process. Recent clinical investigations have shown that changes in tissue oxygenation and microcirculation are observed in diabetic subjects in the initial and progressive stages. In this pilot study, we discuss the potential of diffuse reflectance spectroscopy (DRS) in the visible (Vis) range to differentiate the skin microcirculatory hemoglobin levels between normal and advanced diabetic subjects with and without neuropathy. Average concentration of hemoglobin as well as hemoglobin oxygen saturation within the probed tissue volume is estimated for a total of four different sites in the foot sole. The results indicate a statistically significant decrease in average total hemoglobin and increase in hemoglobin oxygen saturation levels for diabetic foot compared with a normal foot. The present study demonstrates the ability of reflectance spectroscopy in the Vis range to determine and differentiate the changes in tissue hemoglobin and hemoglobin oxygen saturation levels in normal and diabetic subjects.
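The extraction step behind such DRS estimates reduces, in its simplest form, to a two-chromophore Beer-Lambert inversion: attenuation measured at two wavelengths plus extinction coefficients for oxy- and deoxyhemoglobin give a 2x2 linear system for the concentrations, from which oxygen saturation follows. Every number below is an illustrative placeholder, not a calibrated value from this study:

```python
# Extinction coefficients at two wavelengths [cm^-1 per (mol/L)] (illustrative).
e_hbo2 = [600.0, 1200.0]   # oxyhemoglobin
e_hb = [1100.0, 700.0]     # deoxyhemoglobin
path_cm = 1.0              # effective optical path length [cm]
atten = [0.0008, 0.0010]   # measured attenuation at the two wavelengths

# Solve path * (e_hbo2 * c_ox + e_hb * c_de) = atten by Cramer's rule.
a, b = path_cm * e_hbo2[0], path_cm * e_hb[0]
c, d = path_cm * e_hbo2[1], path_cm * e_hb[1]
det = a * d - b * c
c_ox = (atten[0] * d - b * atten[1]) / det   # oxyhemoglobin concentration
c_de = (a * atten[1] - atten[0] * c) / det   # deoxyhemoglobin concentration

st_o2 = c_ox / (c_ox + c_de)                 # hemoglobin oxygen saturation
total_hb = c_ox + c_de                       # average total hemoglobin
```

In practice, DRS analysis also has to account for scattering and the wavelength-dependent path length, which is why real retrievals fit a tissue model rather than inverting two wavelengths directly.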
Urban rainfall estimation employing commercial microwave links
NASA Astrophysics Data System (ADS)
Overeem, Aart; Leijnse, Hidde; Uijlenhoet, Remko; ten Veldhuis, Marie-claire
2015-04-01
Urban areas often lack rainfall information. To increase the number of rainfall observations in cities, microwave links from operational cellular telecommunication networks may be employed. Although this new potential source of rainfall information has been shown to be promising, its quality needs to be demonstrated more extensively. In the Rain Sense kickstart project of the Amsterdam Institute for Advanced Metropolitan Solutions (AMS), sensors and citizens are preparing Amsterdam for future weather. Part of this project is rainfall estimation using new measurement techniques. Innovative sensing techniques will be utilized such as rainfall estimation from microwave links, umbrellas for weather sensing, low-cost sensors at lamp posts and in drainage pipes for water level observation. These will be combined with information provided by citizens in an active way through smartphone apps and in a passive way through social media posts (Twitter, Flickr etc.). Sensor information will be integrated, visualized and made accessible to citizens to help raise citizen awareness of urban water management challenges and promote resilience by providing information on how citizens can contribute in addressing these. Moreover, citizens and businesses can benefit from reliable weather information in planning their social and commercial activities. In the end city-wide high-resolution rainfall maps will be derived, blending rainfall information from microwave links and weather radars. This information will be used for urban water management. This presentation focuses on rainfall estimation from commercial microwave links. Received signal levels from tens of microwave links within the Amsterdam region (roughly 1 million inhabitants) in the Netherlands are utilized to estimate rainfall with high spatial and temporal resolution. Rainfall maps will be presented and compared to a gauge-adjusted radar rainfall data set. Rainfall time series from gauge(s), radars and links will be compared.
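Link-based retrievals commonly rest on the power-law relation between specific attenuation k [dB/km] and rain rate R [mm/h], k = a * R^b, inverted as R = (k/a)^(1/b). The coefficients, baseline level, and wet-antenna correction below are illustrative stand-ins (in practice a and b depend on link frequency and polarization):

```python
# Illustrative link parameters (not from an actual Amsterdam link).
a, b = 0.12, 1.05         # power-law coefficients, k = a * R**b
length_km = 3.0           # link path length [km]
baseline_dbm = -48.0      # median received level in dry weather [dBm]

def rain_rate(rsl_dbm, wet_antenna_db=1.0):
    """Estimate path-average rain rate [mm/h] from a received signal level."""
    excess_db = baseline_dbm - rsl_dbm - wet_antenna_db
    atten_db = max(excess_db, 0.0)           # rain-induced attenuation [dB]
    k = atten_db / length_km                 # specific attenuation [dB/km]
    return (k / a) ** (1.0 / b) if k > 0 else 0.0

print(rain_rate(-55.0))   # 6 dB of rain-induced attenuation over 3 km
```

Mapping then interpolates many such path-average estimates across the network, typically blended with (gauge-adjusted) radar fields as described above.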
Xu, Xiao Wu; Yu, Xin Xiao; Jia, Guo Dong; Li, Han Zhi; Lu, Wei Wei; Liu, Zi Qiang
2017-07-18
The soil-plant-atmosphere continuum (SPAC) is one of the important research objects in the fields of terrestrial hydrology, ecology and global change. The processes of water and carbon cycling, and their coupling mechanisms, are frontier issues. With their tracing, integrating and indicating capabilities, stable isotope techniques contribute to the estimation of the relationship between carbon sequestration and water consumption in ecosystems. In this review, after a brief introduction to stable isotope principles and techniques, the applications of optical stable isotope techniques to water and carbon exchange in the SPAC are explained, including: partitioning of net carbon exchange into photosynthesis and respiration; partitioning of evapotranspiration into transpiration and evaporation; and coupling of the water and carbon cycles at the ecosystem scale. Advanced techniques and methods provide long-term, high-frequency measurements of isotope signals at the ecosystem scale, but issues such as measurement precision and accuracy, partitioning of ecosystem respiration, model applicability under non-steady-state conditions, scaling up, and the coupling mechanism of the water and carbon cycles remain challenging. The main existing research findings, limitations and future research prospects are discussed, which may help guide new research and technology development in the field of stable isotope ecology.
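The evapotranspiration-partitioning application mentioned above is commonly based on a two-end-member isotope mixing model. A minimal sketch with hypothetical delta values (none of the numbers come from the review):

```python
# Sketch: two-end-member isotope mixing model for partitioning
# evapotranspiration (ET) into transpiration (T) and evaporation (E).
# The d18O values (per mil) below are hypothetical illustrative numbers.

def transpiration_fraction(delta_et, delta_e, delta_t):
    """Fraction of ET contributed by transpiration, from isotope signatures."""
    return (delta_et - delta_e) / (delta_t - delta_e)

# Hypothetical d18O signatures: ET flux -10, evaporation -20, transpiration -6
f_t = transpiration_fraction(-10.0, -20.0, -6.0)
print(round(f_t, 3))  # → 0.714, i.e. ~71% of ET attributed to transpiration
```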
NASA Technical Reports Server (NTRS)
Ahamed, Aakash; Bolten, John; Doyle, Colin; Fayne, Jessica
2016-01-01
Floods are the costliest natural disaster, causing approximately 6.8 million deaths in the twentieth century alone. Worldwide economic flood damage estimates in 2012 exceeded $19 billion USD. Extended-duration floods also pose longer-term threats to food security, water, sanitation, hygiene, and community livelihoods, particularly in developing countries. Projections by the Intergovernmental Panel on Climate Change (IPCC) suggest that precipitation extremes, rainfall intensity, storm intensity, and variability are increasing due to climate change. Increasing hydrologic uncertainty will likely lead to unprecedented extreme flood events. As such, there is a vital need to enhance and further develop traditional techniques used to rapidly assess flooding and to extend analytical methods to estimate impacted population and infrastructure. Measuring flood extent in situ is generally impractical, time-consuming, and can be inaccurate. Remotely sensed imagery acquired from space-borne and airborne sensors provides a viable platform for consistent and rapid wall-to-wall monitoring of large flood events through time. Terabytes of freely available satellite imagery are made available online each day by NASA, ESA, and other international space research institutions. Advances in cloud computing and data storage technologies allow researchers to leverage these satellite data and apply analytical methods at scale. Repeat-survey earth observations help provide insight about how natural phenomena change through time, including the progression and recession of floodwaters. In recent years, cloud-penetrating radar remote sensing techniques (e.g., Synthetic Aperture Radar) and high temporal resolution imagery platforms (e.g., MODIS and its 1-day return period), along with high performance computing infrastructure, have enabled significant advances in software systems that provide flood warning, assessments, and hazard reduction potential.
By incorporating social and economic data, researchers can develop systems that automatically quantify the socioeconomic impacts resulting from flood disaster events.
Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper
2017-01-23
Advancements in geographic information systems over the past two decades have increased the specificity by which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally developed set of objective BEMs using three different spatial buffering techniques was used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage,' 'detailed-trimmed,' and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar across each buffering technique.
The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.
Very Small Interstellar Spacecraft
NASA Astrophysics Data System (ADS)
Peck, Mason A.
2007-02-01
This paper considers lower limits of length scale in spacecraft: interstellar vehicles consisting of little more material than found in a typical integrated-circuit chip. Some fundamental scaling principles are introduced to show how the dynamics of the very small can be used to realize interstellar travel with minimal advancements in technology. Our recent study for the NASA Institute for Advanced Concepts provides an example: the use of the Lorentz force that acts on electrically charged spacecraft traveling through planetary and stellar magnetospheres. Schaffer and Burns, among others, have used Cassini and Voyager imagery to show that this interaction is responsible for some of the resonances in the orbital dynamics of dust in Jupiter's and Saturn's rings. The Lorentz force turns out to vary in inverse proportion to the square of this characteristic length scale, making it a more effective means of propelling tiny spacecraft than solar sailing. Performance estimates, some insight into plasma interactions, and some hardware concepts are offered. The mission architectures considered here involve the use of these propellantless propulsion techniques for acceleration within our solar system and deceleration near the destination. We might envision a large number of such satellites with intermittent, bursty communications set up as a one-dimensional network to relay signals across great distances using only the power likely from such small spacecraft. Conveying imagery in this fashion may require a long time because of limited power, but the prospect of imaging another star system close-up ought to be worth the wait.
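The inverse-square scaling claimed above follows from the attainable charge growing roughly linearly with size (a sphere's self-capacitance is proportional to its radius) while mass grows with the cube, so the acceleration a = qvB/m scales as 1/L². A rough numerical sketch under those assumptions; every parameter value is illustrative, not from the study:

```python
# Sketch of the length-scaling argument: charge q = C*V with sphere
# self-capacitance C = 4*pi*eps0*R, mass m ~ rho*(4/3)*pi*R**3, so
# Lorentz acceleration a = q*v*B/m scales as 1/R**2.
# All numerical values below are illustrative assumptions.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def lorentz_accel(radius_m, density_kg_m3, potential_v, speed_m_s, b_tesla):
    charge = 4 * math.pi * EPS0 * radius_m * potential_v        # q = C * V
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3  # solid sphere
    return charge * speed_m_s * b_tesla / mass                  # a = q*v*B/m

# Halving the radius quadruples the acceleration (the 1/L**2 scaling):
a1 = lorentz_accel(0.01, 2000.0, 1e6, 1e4, 4e-7)
a2 = lorentz_accel(0.005, 2000.0, 1e6, 1e4, 4e-7)
print(round(a2 / a1, 6))  # → 4.0
```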
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with the advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in the high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
ERIC Educational Resources Information Center
Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro
2002-01-01
Discusses small-group apprenticeships (SGAs) as a method for introducing cell culture techniques to high school participants. Teaches cell culture practices and introduces advanced imaging techniques to solve various biomedical engineering problems. Clarifies and illuminates the value of small-group laboratory apprenticeships. (Author/KHR)
Advanced techniques to prepare seed to sow
Robert P. Karrfalt
2013-01-01
This paper reviews research on improving the basic technique of cold stratification for tree and shrub seeds. Advanced stratification techniques include long stratification, stratification re-dry, or multiple cycles of warm-cold stratification. Research demonstrates that careful regulation of moisture levels and lengthening the stratification period have produced a...
Fourth NASA Inter-Center Control Systems Conference
NASA Technical Reports Server (NTRS)
1978-01-01
Space vehicle control applications are discussed, along with aircraft guidance, control, and handling qualities. System simulation and identification, engine control, advanced propulsion techniques, and advanced control techniques are also included.
NASA Astrophysics Data System (ADS)
Lough, James D.
The Advanced LIGO detectors will soon be online with enough sensitivity to begin detecting gravitational waves, based on conservative estimates of the rate of neutron star inspirals. These first detections are sure to be significant; however, we will always strive to do better. More questions will be asked about the nature of neutron star material, rates of black hole inspirals, electromagnetic counterparts, etc. To begin to answer all of the questions aLIGO will bring us, we will need even better sensitivity in future gravitational wave detectors. This thesis addresses one aspect that will limit us in the future: angular stability of the test masses. Angular stability in Advanced LIGO uses an active feedback system. We are proposing to replace the active feedback system with a passive one, eliminating sensing noise contributions. This technique uses the radiation pressure of light inside a cavity as a stable optical spring, fundamentally the same as the technique developed by Corbitt et al., with an additional degree of freedom. I will review the theory of the one-dimensional technique and discuss the multidimensional control theory and angular trap setup. I will then present results from the one-dimensional trap, which we have built and tested, and propose improvements for the angular trap experiment. Along the way we have discovered an interesting coupling with thermal expansion due to round-trip absorption in the high-reflectivity coatings. The front-surface HR coating limits our spring stability in this experiment due to the high circulating power and small beam spot size.
Potential public health impact of Age-Related Eye Disease Study results: AREDS report no. 11.
Bressler, Neil M; Bressler, Susan B; Congdon, Nathan G; Ferris, Frederick L; Friedman, David S; Klein, Ronald; Lindblad, Anne S; Milton, Roy C; Seddon, Johanna M
2003-11-01
To estimate the potential public health impact of the findings of the Age-Related Eye Disease Study (AREDS) on reducing the number of persons developing advanced age-related macular degeneration (AMD) during the next 5 years in the United States. The AREDS clinical trial provides estimates of AMD progression rates and of reduction in risk of developing advanced AMD when a high-dose nutritional supplement of antioxidants and zinc is used. These results are applied to estimates of the US population at risk, to estimate the number of people who would potentially avoid advanced AMD during 5 years if those at risk were to take a supplement such as that used in AREDS. An estimated 8 million persons at least 55 years old in the United States have monocular or binocular intermediate AMD or monocular advanced AMD. They are considered to be at high risk for advanced AMD and are those for whom the AREDS formulation should be considered. Of these people, 1.3 million would develop advanced AMD if no treatment were given to reduce their risk. If all of these people at risk received supplements such as those used in AREDS, more than 300,000 (95% confidence interval, 158,000-487,000) of them would avoid advanced AMD and any associated vision loss during the next 5 years. If people at high risk for advanced AMD received supplements such as those suggested by AREDS results, the potential impact on public health in the United States would be considerable during the next 5 years.
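The headline figure can be reproduced with back-of-envelope arithmetic. A reconstruction assuming the roughly 25% relative risk reduction reported by the AREDS trial; this is an illustration, not the study's actuarial model:

```python
# Back-of-envelope reconstruction of the abstract's public-health estimate.
# The ~25% relative risk reduction is an approximation of the AREDS trial
# result; the population figures are taken from the abstract.

at_risk = 8_000_000                 # persons >= 55 at high risk for advanced AMD
cases_without_treatment = 1_300_000 # would develop advanced AMD in 5 years untreated
risk_reduction = 0.25               # assumed ~25% relative risk reduction

cases_avoided = cases_without_treatment * risk_reduction
print(int(cases_avoided))  # → 325000, consistent with "more than 300,000"
```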
NASA Technical Reports Server (NTRS)
Garmestai, H.; Harris, K.; Lourenco, L.
1997-01-01
Representation of the morphology and evolution of the microstructure during processing, and their relation to properties, requires proper experimental techniques. Residual strains, lattice distortion, and texture (micro-texture) at the interface and in the matrix of a layered structure or a functionally gradient material, and their variation, are among the parameters important in materials characterization but hard to measure with present experimental techniques. Current techniques available to measure changes in internal material parameters (residual stress, micro-texture, microplasticity) produce results which are either qualitative or unreliable. This problem becomes even more complicated in the case of a temperature variation. These parameters affect many of the mechanical properties of advanced materials, including the stress-strain relation, ductility, creep, and fatigue. A review of some novel experimental techniques using recent advances in electron microscopy is presented here to measure internal stress, (micro)texture, interfacial strength and (sub)grain formation and realignment. Two of these techniques are combined in the chamber of an Environmental Scanning Electron Microscope to measure strain and orientation gradients in advanced materials. These techniques, which include Backscattered Kikuchi Diffractometry (BKD) and Microscopic Strain Field Analysis, are used to characterize metallic and intermetallic matrix composites and superplastic materials. These techniques are compared with the more conventional x-ray diffraction and indentation techniques.
Multidirectional mobilities: Advanced measurement techniques and applications
NASA Astrophysics Data System (ADS)
Ivarsson, Lars Holger
Today, high noise-and-vibration comfort has become a mark of quality for products in sectors such as the automotive industry, aircraft, components, household appliances and manufacturing. Consequently, already in the design phase of products, tools are required to predict the final vibration and noise levels. These tools have to be applicable over a wide frequency range with sufficient accuracy. During recent decades a variety of tools have been developed, such as transfer path analysis (TPA), input force estimation, substructuring, coupling by frequency response functions (FRFs) and hybrid modelling. While these methods have a well-developed theoretical basis, their application combined with experimental data often suffers from a lack of information concerning rotational DOFs. In order to measure the response in all 6 DOFs (including rotation), a sensor has been developed whose special features are discussed in the thesis. This transducer simplifies the response measurements, although in practice the excitation of moments appears to be more difficult. Several excitation techniques have been developed to enable measurement of multidirectional mobilities. For rapid and simple measurement of the loaded mobility matrix, a MIMO (Multiple Input Multiple Output) technique is used. The technique has been tested and validated on several structures of different complexity. A second technique for measuring the loaded 6-by-6 mobility matrix has been developed. This technique employs a model of the excitation set-up, and with this model the mobility matrix is determined from sequential measurements. Measurements on "real" structures show that both techniques give results of similar quality, and both are recommended for practical use. As a further step, a technique for measuring the unloaded mobilities is presented.
It employs the measured loaded mobility matrix to calculate compensation forces and moments, which are then applied to compensate for the loading of the measurement equipment. The developed measurement techniques have been used in a hybrid coupling of a plate-and-beam structure to study different aspects of the coupling technique. Results show that rotational DOFs are crucial and have to be included in this case. The importance of stiffness residuals when mobilities are estimated from modal superposition is demonstrated. Finally it is shown that proper curve fitting can correct errors from inconsistently measured data.
Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles
NASA Astrophysics Data System (ADS)
Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.
2015-04-01
The knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for planning of water resources and risk assessment. Streamflow characteristics are usually estimated starting from measurements of river discharges at stream gauging stations. However, the lack of observations at the site of interest, as well as measurement inaccuracies, inevitably necessitates the development of predictive models. Regional analysis is a classical approach to estimate river flow characteristics at sites where little or no data exist. Specific techniques are needed to regionalize the hydrological variables over the considered area. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area and the nested nature of catchments. The continuous processes in space defined for the point variables are represented by a variogram. In Top-kriging, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of the Tuscany Region, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also gives an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open source statistical environment R. The results are compared through different error measurement methods. Top-kriging seems to perform better in nested catchments and larger scale catchments, but not for headwater catchments or where there is high variability among neighbouring catchments.
Nonintrusive Load Monitoring Based on Advanced Deep Learning and Novel Signature.
Kim, Jihyun; Le, Thi-Thu-Huong; Kim, Howon
2017-01-01
Monitoring electricity consumption in the home is an important way to help reduce energy usage. Nonintrusive Load Monitoring (NILM) is an existing technique that helps us monitor electricity consumption effectively and at low cost. NILM is a promising approach to obtain estimates of the electrical power consumption of individual appliances from aggregate measurements of voltage and/or current in the distribution system. Among previous studies, Hidden Markov Model (HMM) based models have been studied extensively. However, the increasing number of appliances, the multistate behaviour of appliances, and appliances with similar power consumption are three major open issues in NILM. In this paper, we address these problems through the following contributions. First, we propose a state-of-the-art energy disaggregation approach based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) model and additional advanced deep learning. Second, we propose a novel signature to improve the classification performance of the proposed model in the multistate appliance case. We applied the proposed model to two datasets, UK-DALE and REDD. Our experimental results confirm that our model outperforms the advanced baseline model. Thus, we show that our combination of advanced deep learning and a novel signature can be a robust solution to overcome NILM's issues and improve the performance of load identification.
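The paper's LSTM-RNN is not reproduced here, but the disaggregation task itself can be illustrated with a naive event-based signature-matching baseline: detect step changes in the aggregate power signal and match each step to the nearest known appliance signature. The appliance wattages and threshold are hypothetical:

```python
# Minimal NILM baseline (not the paper's LSTM-RNN): label each significant
# step change in the aggregate power signal with the appliance whose rated
# power is closest to the step size. Wattages below are hypothetical.

SIGNATURES = {"kettle": 2000.0, "fridge": 120.0, "tv": 90.0}

def disaggregate(aggregate_watts, threshold=50.0):
    """Return (time index, appliance, on/off) events from aggregate power."""
    events = []
    for t in range(1, len(aggregate_watts)):
        step = aggregate_watts[t] - aggregate_watts[t - 1]
        if abs(step) < threshold:
            continue  # ignore small fluctuations
        name = min(SIGNATURES, key=lambda k: abs(SIGNATURES[k] - abs(step)))
        events.append((t, name, "on" if step > 0 else "off"))
    return events

reading = [0, 120, 120, 2120, 2120, 120, 30]
print(disaggregate(reading))
# → [(1, 'fridge', 'on'), (3, 'kettle', 'on'), (5, 'kettle', 'off'), (6, 'tv', 'off')]
```

Such a baseline fails exactly where the paper's issues arise (multistate appliances and similar wattages), which is what motivates the learned LSTM model and novel signature.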
NASA Astrophysics Data System (ADS)
Xiao, B.; Haslauer, C. P.; Bohling, G. C.; Bárdossy, A.
2017-12-01
The spatial arrangement of hydraulic conductivity (K) determines water flow and solute transport behaviour in groundwater systems. This presentation demonstrates three advances over commonly used geostatistical methods by integrating measurements from novel measurement techniques and novel multivariate non-Gaussian dependence models: The spatial dependence structure of K was analysed using both data sets of K. Previously encountered similarities were confirmed in low-dimensional dependence. These similarities become less stringent and deviate more from symmetric Gaussian dependence in dimensions larger than two. Measurements of small and large K values are more uncertain than medium K values due to decreased sensitivity of the measurement devices at both ends of the K scale. Nevertheless, these measurements contain useful information that we include in the estimation of the marginal distribution and the spatial dependence structure as "censored measurements" that are estimated jointly without the common assumption of independence. The spatial dependence structure of the two data sets and their cross-covariances are used to infer the spatial dependence and the amount of the bias between the two data sets. By doing so, one spatial model for K is constructed that is used for simulation and that reflects the characteristics of both measurement techniques. The concept of the presented methodology is to use all available information for the estimation of a stochastic model of the primary parameter (K) at the highly heterogeneous Macrodispersion Experiment (MADE) site. The primary parameter has been measured by two independent measurement techniques whose sets of locations do not overlap. This site offers the unique opportunity of large quantities of measurements of K (31,123 direct-push injection logging based measurements and 2,611 flowmeter based measurements).
This improved dependence structure of K will be included into the estimated non-Gaussian dependence models and is expected to reproduce observed solute concentrations at the site better than existing dependence models of K.
NASA Technical Reports Server (NTRS)
Ding, Robert J.
2010-01-01
Four advanced welding techniques and their use in NASA are briefly reviewed in this poster presentation. The welding techniques reviewed are: Solid State Welding, Friction Stir Welding (FSW), Thermal Stir Welding (TSW) and Ultrasonic Stir Welding.
Sensor failure detection system. [for the F100 turbofan engine
NASA Technical Reports Server (NTRS)
Beattie, E. C.; Laprad, R. F.; Mcglone, M. E.; Rock, S. M.; Akhter, M. M.
1981-01-01
Advanced concepts for detecting, isolating, and accommodating sensor failures were studied to determine their applicability to the gas turbine control problem. Five concepts were formulated based upon techniques such as Kalman filters, and a screening process led to the selection of one advanced concept for further evaluation. The selected advanced concept uses a Kalman filter to generate residuals, a weighted sum-squared residuals technique to detect soft failures, likelihood ratio testing of a bank of Kalman filters for isolation, and reconfiguration of the normal-mode Kalman filter by eliminating the failed input to accommodate the failure. The advanced concept was compared to a baseline parameter synthesis technique and was shown to be a viable concept for detecting, isolating, and accommodating sensor failures for gas turbine applications.
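The soft-failure detection step can be sketched as a windowed weighted sum-squared-residuals (WSSR) test: Kalman-filter residuals are normalized by their expected variances and summed over a sliding window, and a failed sensor inflates the statistic above a threshold. A minimal illustration with made-up residuals and an arbitrary threshold, not the report's tuned values:

```python
# Sketch of a weighted sum-squared-residuals (WSSR) soft-failure test.
# Residuals and threshold below are illustrative assumptions.

def wssr(residuals, variances, window=5, threshold=15.0):
    """Return sample indices where the windowed WSSR statistic flags a failure."""
    norm = [r * r / v for r, v in zip(residuals, variances)]  # normalized squares
    alarms = []
    for t in range(window - 1, len(norm)):
        stat = sum(norm[t - window + 1 : t + 1])  # sliding-window WSSR
        if stat > threshold:
            alarms.append(t)
    return alarms

# Healthy unit-variance residuals, then a soft bias failure from sample 10:
res = [0.3, -0.5, 0.2, 0.4, -0.1, 0.3, -0.2, 0.1, 0.5, -0.3,
       2.5, 2.4, 2.6, 2.5, 2.7]
print(wssr(res, [1.0] * len(res)))  # → [12, 13, 14]
```

Note the detection lag: the window must fill with biased residuals before the statistic crosses the threshold, which is the usual trade-off between sensitivity and false-alarm rate in choosing the window length.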
Targeted Muscle Reinnervation for Transradial Amputation: Description of Operative Technique.
Morgan, Emily N; Kyle Potter, Benjamin; Souza, Jason M; Tintle, Scott M; Nanos, George P
2016-12-01
Targeted muscle reinnervation (TMR) is a revolutionary surgical technique that, together with advances in upper extremity prostheses and advanced neuromuscular pattern recognition, allows intuitive and coordinated control in multiple planes of motion for shoulder disarticulation and transhumeral amputees. TMR also may provide improvement in neuroma-related pain and may represent an opportunity for sensory reinnervation as advances in prostheses and haptic feedback progress. Although most commonly utilized following shoulder disarticulation and transhumeral amputations, TMR techniques also represent an exciting opportunity for improvement in integrated prosthesis control and neuroma-related pain improvement in patients with transradial amputations. As there are no detailed descriptions of this technique in the literature to date, we provide our surgical technique for TMR in transradial amputations.
Estimating surface soil moisture from SMAP observations using a Neural Network technique.
Kolassa, J; Reichle, R H; Liu, Q; Alemohammad, S H; Gentine, P; Aida, K; Asanuma, J; Bircher, S; Caldwell, T; Colliander, A; Cosh, M; Collins, C Holifield; Jackson, T J; Martínez-Fernández, J; McNairn, H; Pacheco, A; Thibeault, M; Walker, J P
2018-01-01
A Neural Network (NN) algorithm was developed to estimate global surface soil moisture for April 2015 to March 2017 with a 2-3 day repeat frequency using passive microwave observations from the Soil Moisture Active Passive (SMAP) satellite, surface soil temperatures from the NASA Goddard Earth Observing System Model version 5 (GEOS-5) land modeling system, and Moderate Resolution Imaging Spectroradiometer-based vegetation water content. The NN was trained on GEOS-5 soil moisture target data, making the NN estimates consistent with the GEOS-5 climatology, such that they may ultimately be assimilated into this model without further bias correction. Evaluated against in situ soil moisture measurements, the average unbiased root mean square error (ubRMSE), correlation and anomaly correlation of the NN retrievals were 0.037 m³ m⁻³, 0.70 and 0.66, respectively, against SMAP core validation site measurements and 0.026 m³ m⁻³, 0.58 and 0.48, respectively, against International Soil Moisture Network (ISMN) measurements. At the core validation sites, the NN retrievals have a significantly higher skill than the GEOS-5 model estimates and a slightly lower correlation skill than the SMAP Level-2 Passive (L2P) product. The feasibility of the NN method was reflected by a lower ubRMSE compared to the L2P retrievals as well as a higher skill when ancillary parameters in physically-based retrievals were uncertain. Against ISMN measurements, the skill of the two retrieval products was more comparable. A triple collocation analysis against Advanced Microwave Scanning Radiometer 2 (AMSR2) and Advanced Scatterometer (ASCAT) soil moisture retrievals showed that the NN and L2P retrieval errors have a similar spatial distribution, but the NN retrieval errors are generally lower in densely vegetated regions and transition zones.
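The ubRMSE metric quoted above removes the mean bias between the retrieval and the in situ series before computing the RMSE. A minimal sketch with made-up soil-moisture values:

```python
# Sketch of the unbiased RMSE (ubRMSE) used to score the retrievals:
# subtract the mean retrieval-minus-observation bias, then compute RMSE.
# The soil-moisture values (m^3/m^3) below are made up for illustration.
import math

def ubrmse(retrieved, in_situ):
    n = len(retrieved)
    bias = sum(r - o for r, o in zip(retrieved, in_situ)) / n
    return math.sqrt(
        sum((r - o - bias) ** 2 for r, o in zip(retrieved, in_situ)) / n
    )

ret = [0.20, 0.25, 0.30, 0.28]
obs = [0.18, 0.24, 0.27, 0.29]
print(round(ubrmse(ret, obs), 4))  # → 0.0148
```

Because the bias is removed, ubRMSE isolates random retrieval error, which is why a retrieval trained to a model climatology (as here) can still be scored fairly against in situ sensors with a different absolute calibration.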
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance tradeoff. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance tradeoff to find the optimal scheme.
The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model-term monitoring to estimate stability, and ultimately by high-volume manufacturing tests that monitor OPO using densely measured OVL data.
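The bias-variance tradeoff described in this record can be illustrated with a small simulation. This sketch is not from the paper: the 1-D "wafer signature", noise level, and polynomial model orders are all invented for illustration. Fitting a higher-order model reduces the per-wafer fit residual (bias) but inflates the wafer-to-wafer variation of the model terms (variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-in for a wafer overlay signature:
# a smooth low-order spatial signature plus metrology/process noise.
x = np.linspace(-1.0, 1.0, 25)            # measurement sites across the wafer
true_signature = 0.8 * x - 0.5 * x**3     # "true" overlay signature (a.u.)

def residual_and_term_variance(order, n_wafers=200, noise=0.1):
    """Fit a polynomial of the given order to many noisy wafers.

    Returns the mean per-wafer fit residual (a proxy for model bias) and
    the wafer-to-wafer variance of the leading model term (a proxy for
    model-term instability)."""
    residuals, lead_terms = [], []
    for _ in range(n_wafers):
        y = true_signature + rng.normal(0.0, noise, x.size)
        coeffs = np.polyfit(x, y, order)
        fit = np.polyval(coeffs, x)
        residuals.append(np.sqrt(np.mean((fit - y) ** 2)))
        lead_terms.append(coeffs[0])
    return float(np.mean(residuals)), float(np.var(lead_terms))

res_low, var_low = residual_and_term_variance(1)    # low order: biased fit
res_high, var_high = residual_and_term_variance(9)  # high order: unstable terms
print(res_low, var_low)
print(res_high, var_high)
```

The high-order fit hugs each wafer's data more closely, yet its coefficients swing far more from wafer to wafer, which is exactly the instability the abstract's advanced modeling approach aims to suppress.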
Formulation of the linear model from the nonlinear simulation for the F18 HARV
NASA Technical Reports Server (NTRS)
Hall, Charles E., Jr.
1991-01-01
The F-18 HARV is a modified F-18 aircraft capable of flying in the post-stall regime in order to achieve superagility. The onset of aerodynamic stall, and continued flight into the post-stall region, is characterized by nonlinearities in the aerodynamic coefficients. These coefficients are not expressed as analytic functions but rather as tabular data. The nonlinearities in the aerodynamic coefficients yield a nonlinear model of the aircraft's dynamics. Nonlinear system theory has made many advances, but it is not sufficiently developed for application to this problem, since many of its theorems are existence theorems or require that the systems be composed of analytic functions. Thus, the feedback matrices and the state estimators are obtained with linear system theory techniques. To obtain the correct feedback matrices and state estimators, the linear description of the nonlinear flight dynamics must be as accurate as possible. A nonlinear simulation is run under the Advanced Continuous Simulation Language (ACSL). The ACSL simulation uses FORTRAN subroutines to interface with the look-up tables for the aerodynamic data, and ACSL commands form the linear representation of the system. Other aspects of this investigation are discussed.
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in both accuracy and computation to other popular mode estimation techniques found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation, briefly reviews other common mode estimation techniques, and describes the empirical investigation used to explore the relationship of the BM technique to these other techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
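As a concrete illustration of the MOG-plus-AIC baseline this record mentions, the sketch below fits 1-D Gaussian mixtures of increasing order by EM and selects the order with the smallest AIC. This is not the paper's code: the synthetic bimodal data set, EM details, and initialization are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bimodal data set: two well-separated Gaussian modes.
data = np.concatenate([rng.normal(-4.0, 1.0, 300), rng.normal(4.0, 1.0, 300)])

def gaussian_mixture_aic(x, k, n_iter=200):
    """Fit a 1-D mixture of k Gaussians by EM and return its AIC."""
    n = x.size
    means = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means
    sigmas = np.full(k, x.std())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sigmas) ** 2) \
               / (sigmas * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        sigmas = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk)
        sigmas = np.maximum(sigmas, 1e-3)             # guard against collapse
    # Log-likelihood under the final parameters.
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / sigmas) ** 2) \
           / (sigmas * np.sqrt(2.0 * np.pi))
    log_lik = np.log(dens.sum(axis=1)).sum()
    n_params = 3 * k - 1              # k means, k sigmas, k-1 free weights
    return 2.0 * n_params - 2.0 * log_lik

aic = {k: gaussian_mixture_aic(data, k) for k in (1, 2, 3, 4)}
best_k = min(aic, key=aic.get)
print(best_k)
```

The AIC penalizes the extra parameters of larger mixtures, so the stopping rule halts model growth once additional components no longer buy enough likelihood, which is the role it plays for the MOG and k-means baselines in the paper.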
Advanced magnetic resonance imaging of neurodegenerative diseases.
Agosta, Federica; Galantucci, Sebastiano; Filippi, Massimo
2017-01-01
Magnetic resonance imaging (MRI) is playing an increasingly important role in the study of neurodegenerative diseases, delineating the structural and functional alterations determined by these conditions. Advanced MRI techniques are of special interest for their potential to characterize the signature of each neurodegenerative condition and to aid both the diagnostic process and the monitoring of disease progression. This aspect will become crucial once disease-modifying (personalized) therapies are established. MRI techniques are very diverse, ranging from visual inspection of MRI scans to more complex approaches such as manual and automatic volume measurements, diffusion tensor MRI, and functional MRI. All these techniques allow us to investigate the different features of neurodegeneration. In this review, we summarize the most recent advances concerning the use of MRI in some of the most important neurodegenerative conditions, with an emphasis on the advanced techniques.
Sonic Fatigue Design Techniques for Advanced Composite Aircraft Structures
1980-04-01
AFWAL-TR-80-3019, AD A090553. Sonic Fatigue Design Techniques for Advanced Composite Aircraft Structures, Final Report, Ian Holehouse, Rohr Industries. Contents include: general sonic fatigue theory; composite laminate analysis; preliminary sonic fatigue design guides. These existing design methods have been developed for metal structures. However, recent advanced composite
Wind Turbine Gust Prediction Using Remote Sensing Data
NASA Astrophysics Data System (ADS)
Towers, Paul; Jones, Bryn
2013-11-01
Offshore wind energy is a growing energy source as governments around the world look for environmentally friendly solutions to potential future energy shortages. In order to capture more energy from the wind, larger turbines are being designed, leading to the structures becoming increasingly vulnerable to damage caused by violent gusts of wind. Advance knowledge of such gusts will enable turbine control systems to take preventative action, reducing turbine maintenance costs. We present a system which can accurately forecast the velocity profile of an oncoming wind, given only limited spatial measurements from light detection and ranging (LiDAR) units, which are currently operational in industry. Our method combines nonlinear state estimation techniques with low-order models of atmospheric boundary-layer flows to generate flow-field estimates. We discuss the accuracy of our velocity profile predictions by direct comparison to data derived from large eddy simulations of the atmospheric boundary layer.
NASA Technical Reports Server (NTRS)
Christopher, Sundar A.; Kliche, Donna V.; Chou, Joyce; Welch, Ronald M.
1996-01-01
Collocated measurements from the Advanced Very High Resolution Radiometer (AVHRR) and the Earth Radiation Budget Experiment (ERBE) scanner are used to examine the radiative forcing of atmospheric aerosols generated from biomass burning for 13 images in South America. Using the AVHRR Local Area Coverage (LAC) data, a new technique based on a combination of spectral and textural measures is developed for detecting these aerosols. Then, the instantaneous shortwave, longwave, and net radiative forcing values are computed from the ERBE instantaneous scanner data. Results for the selected samples from 13 images show that the mean instantaneous net radiative forcing for areas with heavy aerosol loading is about -36 W/sq m and that for optically thin aerosols is about -16 W/sq m. These results, although preliminary, provide the first estimates of radiative forcing of atmospheric aerosols from biomass burning using satellite data.
Shearer, Jane; McManners, Joseph
2009-07-01
Innovations in periradicular surgery for failed orthograde root canal treatment have been well documented, but we know of no prospective studies that have compared success rates of conventional methods with these presumed advances. In this prospective randomised trial we compare the use of an ultrasonic retrotip with a microhead bur in the preparation of a retrograde cavity. Outcome was assessed clinically by estimation of pain, swelling, and sinus formation, and radiographically by looking at infill of bone and the retrograde root filling 2 weeks and 6 months postoperatively. Both methods used other surgical techniques, including microinstruments, to place the retrograde root filling. The success rate of the ultrasonic method (all 26 patients) was higher than that of the microhead method (19 of 21 patients). A larger study with longer follow-up is required to consolidate this evidence.
Recent Advances in Synthesis and Characterization of SWCNTs Produced by Laser Oven Process
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram
2004-01-01
Results from the parametric study of the two-laser oven process indicated possible improvements with flow conditions and laser characteristics. Higher flow rates and lower operating pressures, coupled with changes in flow tube material, are found to improve the nanotube yields. The collected nanotube material is analyzed using a combination of characterization techniques, including SEM, TEM, TGA, Raman, and UV-VIS-NIR, to estimate the purity of the samples. In-situ diagnostics of the laser oven process are now extended to include the surface temperature of the target material. Spectral emission from the target surface is compared with black-body-type emission to estimate the temperature. The surface temperature seemed to correlate well with the ablation rate as well as the quality of the SWCNTs. Recent changes to improve the production rate by rastering the target and using a cw laser will be presented.
A practical guide to propensity score analysis for applied clinical research.
Lee, Jaehoon; Little, Todd D
2017-11-01
Observational studies are often the only viable options in many clinical settings, especially when it is unethical or infeasible to randomly assign participants to different treatment regimes. In such cases, propensity score (PS) analysis can be applied to account for possible selection bias and thereby address questions of causal inference. Many PS methods exist, yet few guidelines are available to aid applied researchers in conducting and evaluating a PS analysis. In this article we give an overview of available techniques for PS estimation and application, balance diagnostics, treatment effect estimation, and sensitivity assessment, as well as recent advances. We also offer a tutorial that can be used to emulate the steps of a PS analysis. Our goal is to provide information that will bring PS analysis within the reach of applied clinical researchers and practitioners. Copyright © 2017 Elsevier Ltd. All rights reserved.
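The PS workflow this record surveys can be made concrete with a minimal synthetic sketch. Nothing here is from the article: the data-generating model, the true effect size of 2.0, and the plain gradient-descent logistic fit (standing in for a statistics package) are all invented for illustration of PS estimation followed by inverse-probability weighting (IPW).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic observational data: one confounder x influences both
# treatment assignment and outcome, so a naive comparison is biased.
n = 5000
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-0.8 * x))        # true propensity score
t = rng.random(n) < p_treat                      # treatment indicator
y = 2.0 * t + 1.5 * x + rng.normal(size=n)       # true treatment effect = 2.0

# Naive difference in means is confounded by x.
naive = y[t].mean() - y[~t].mean()

# Step 1: estimate the propensity score with a logistic regression,
# fit here by simple gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):
    ps = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (t - ps) / n

# Step 2: inverse-probability-weighted estimate of the treatment effect,
# with the estimated scores clipped to stabilize the weights.
ps = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 0.01, 0.99)
ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(naive, ipw)
```

Weighting each subject by the inverse of their (estimated) probability of the treatment they actually received balances the confounder across groups, pulling the estimate back toward the true effect that the naive contrast overstates.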
Martian impact crater degradation studies: Implications for localized obliteration episodes
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1992-01-01
Early spacecraft missions to Mars revealed that impact craters display a range of degradational states, but the full range of preservational characteristics was not revealed until the Mariner 9 and Viking missions in the 1970s. Many studies have described the spatial and temporal distribution of obliteration episodes based on qualitative descriptions of crater degradation. Recent advances in photoclinometric techniques have led to improved estimates of crater morphometric characteristics. The present study uses photoclinometry to determine crater profiles and compares these results with the crater geometry expected for pristine craters of identical size. The result is an estimate of the degree of degradation suffered by Martian impact craters in selected regions of the planet. Size-frequency distribution analyses of craters displaying similar degrees of degradation within localized regions may provide information about the timing of obliteration episodes in those regions.
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
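The estimator this record describes can be sketched on a toy discrete space (this is not the authors' GPU implementation; the two distributions are invented stand-ins for a chain's time-t distribution p and its stationary distribution q). With samples from q and pointwise probability ratios, the identity TV(p, q) = ½ E_q[|p(X)/q(X) − 1|] gives a Monte Carlo estimate that avoids summing over the whole state space.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical discrete distributions on a 1000-point state space.
# Values are kept away from zero so the density ratios are well behaved.
n_states = 1000
p = 0.5 + rng.random(n_states); p /= p.sum()
q = 0.5 + rng.random(n_states); q /= q.sum()

# Exact total variation distance (feasible here, not on huge spaces).
tv_exact = 0.5 * np.abs(p - q).sum()

# Monte Carlo estimate: TV(p, q) = 0.5 * E_q[|p(X)/q(X) - 1|],
# which needs only draws from q and pointwise probability ratios.
samples = rng.choice(n_states, size=200_000, p=q)
tv_mc = 0.5 * np.mean(np.abs(p[samples] / q[samples] - 1.0))

print(tv_exact, tv_mc)
```

On a phylogenetic tree space the exact sum is intractable, but the Monte Carlo form only touches the states actually sampled, which is what makes the approach (and its GPU parallelization) attractive.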
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
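The assimilation step at the heart of such a forecast system can be sketched with a toy ensemble Kalman filter update of a minimal SIR model. This is entirely illustrative: the model, parameters, ensemble size, and "observation" are invented, and the paper's framework is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(4)

def sir_step(state, beta=0.6, gamma=0.25, dt=1.0):
    """One Euler step of a simple SIR model; state columns are (S, I)."""
    s, i = state[:, 0], state[:, 1]
    ds = -beta * s * i * dt
    di = (beta * s * i - gamma * i) * dt
    return np.column_stack([s + ds, i + di])

# Ensemble of model states with an uncertain initial infected fraction.
n_ens = 500
ens = np.column_stack([np.full(n_ens, 0.95),
                       10.0 ** rng.uniform(-4, -2, n_ens)])

for _ in range(10):                      # free-running ensemble forecast
    ens = sir_step(ens)

# Noisy "observation" of the infected fraction (e.g. a web-based estimate).
obs, obs_var = 0.05, 1e-4

# EnKF analysis: update every member using the ensemble covariance,
# with perturbed observations to keep the ensemble spread consistent.
C = np.cov(ens.T)                        # 2x2 ensemble covariance
gain_s = C[0, 1] / (C[1, 1] + obs_var)   # gain for the unobserved S
gain_i = C[1, 1] / (C[1, 1] + obs_var)   # gain for the observed I
innov = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens) - ens[:, 1]
ens[:, 0] += gain_s * innov
ens[:, 1] += gain_i * innov

print(ens[:, 1].mean(), ens[:, 1].std())
```

After the update the ensemble mean of the infected fraction is pulled toward the observation and its spread shrinks; repeating forecast-then-update weekly, as in the paper, is what lets the forecast spread serve as a confidence measure.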
Tropical cyclone intensities from satellite microwave data
NASA Technical Reports Server (NTRS)
Vonderhaar, T. H.; Kidder, S. Q.
1980-01-01
Radial profiles of mean 1000 mb to 250 mb temperature from the Nimbus 6 scanning microwave spectrometer (SCAMS) were constructed around eight intensifying tropical storms in the western Pacific. Seven storms showed distinct inward temperature gradients required for intensification; the eighth displayed no inward gradient and was decaying 24 hours later. The possibility that satellite data might be used to forecast tropical cyclone turning motion was investigated using estimates obtained from Nimbus 6 SCAMS data tapes of the mean 1000 mb to 250 mb temperature field around eleven tropical storms in 1975. Analysis of these data shows that for turning storms, in all but one case, the turn was signaled 24 hours in advance by a significant temperature gradient perpendicular to the storm's path, at a distance of 9 deg to 13 deg in front of the storm. A thresholding technique was applied to the North Central U.S. during the summer to estimate precipitation frequency.
ERIC Educational Resources Information Center
Lewis, Jennifer R.; Kotur, Mark S.; Butt, Omar; Kulcarni, Sumant; Riley, Alyssa A.; Ferrell, Nick; Sullivan, Kathryn D.; Ferrari, Mauro
2002-01-01
The purpose of this article is to discuss "small-group apprenticeships (SGAs)" as a method for teaching cell culture techniques to high school participants. The study aimed to teach cell culture practices and to introduce advanced imaging techniques to solve various biomedical engineering problems. Participants designed and completed experiments…
Biswas, Abhijit; Bayer, Ilker S; Biris, Alexandru S; Wang, Tao; Dervishi, Enkeleda; Faupel, Franz
2012-01-15
This review highlights the most significant advances in nanofabrication techniques reported over the past decade, with a particular focus on approaches tailored towards the fabrication of functional nano-devices. The review is divided into two sections: top-down and bottom-up nanofabrication. Under the classification of top-down, special attention is given to technical reports that demonstrate multi-directional patterning capabilities at or below 100 nm. These include recent advances in lithographic techniques, such as optical, electron beam, soft, nanoimprint, scanning probe, and block copolymer lithography. Bottom-up nanofabrication techniques, such as atomic layer deposition, sol-gel nanofabrication, molecular self-assembly, vapor-phase deposition, and DNA scaffolding for nanoelectronics, are also discussed. Specifically, we describe advances in the fabrication of functional nanocomposites and graphene using chemical and physical vapor deposition. Our aim is to provide a comprehensive platform for prominent nanofabrication tools and techniques in order to facilitate the development of new or hybrid nanofabrication techniques leading to novel and efficient functional nanostructured devices. Copyright © 2011 Elsevier B.V. All rights reserved.
Initial experience with a new articulating energy device for laparoscopic liver resection.
Berber, Eren; Akyuz, Muhammet; Aucejo, Federico; Aliyev, Shamil; Aksoy, Erol; Birsen, Onur; Taskin, Eren
2014-03-01
Although significant advances have been made in laparoscopic liver resection (LLR), most techniques still rely on multiple energy devices and staplers, which increase operative costs. The aim of this study was to report the initial results of a new multifunctional energy device (Caiman) for hepatic parenchymal transection. Fourteen patients who underwent LLR using this new device were compared to 20 patients who had LLR using current laparoscopic techniques (CL). Data were collected prospectively. The groups were similar in demographics and in tumor type and size. Although the type of resection was similar between the groups, the parenchymal transection time was less in the Caiman group (32 ± 5 vs. 63 ± 4 min, p = 0.0001). The operative time was similar (194 ± 21 vs. 233 ± 16 min, p = 0.158). There was a reduction in the number of advanced instruments used in the Caiman group, including staplers. Estimated blood loss, size of surgical margin, and hospital stay were similar. There was no mortality, and morbidity was 7% in the Caiman group and 20% in the CL group. This initial study shows that the new device is safe and efficient for LLR. Its main advantage is a shorter hepatic parenchymal transection time, which has implications for increased efficiency and cost savings in LLR.
Snapping hip: imaging and treatment.
Lee, Kenneth S; Rosas, Humberto G; Phancao, Jean-Pierre
2013-07-01
Snapping hip, or coxa saltans, presents as an audible or palpable snapping that occurs around the hip during movement and can be associated with or without pain. The prevalence of snapping hip is estimated to occur in up to 10% of the general population, but it is especially seen in athletes such as dancers, soccer players, weight lifters, and runners. Although the snapping sound can be readily heard, the diagnostic cause may be a clinical challenge. The causes of snapping hip have been divided into two distinct categories: extra-articular and intra-articular. Extra-articular snapping hip can be further subdivided into external and internal causes. Advances in imaging techniques have improved the diagnostic accuracy of the various causes of snapping hip, mainly by providing real-time imaging evaluation of moving structures during the snapping phase. Image-guided treatments have also been useful in the diagnostic work-up of snapping hip given the complexity and multitude of causes of hip pain. We discuss the common and uncommon causes of snapping hip, the advanced imaging techniques that now give us a better understanding of the underlying mechanism, and an image-guided diagnostic and therapeutic algorithm that helps to identify surgical candidates.
Lawson, Peter R.; Poyneer, Lisa; Barrett, Harrison; Frazin, Richard; Caucci, Luca; Devaney, Nicholas; Furenlid, Lars; Gładysz, Szymon; Guyon, Olivier; Krist, John; Maire, Jérôme; Marois, Christian; Mawet, Dimitri; Mouillet, David; Mugnier, Laurent; Pearson, Iain; Perrin, Marshall; Pueyo, Laurent; Savransky, Dmitry
2015-01-01
The direct imaging of planets around nearby stars is exceedingly difficult. Only about 14 exoplanets have been imaged to date that have masses less than 13 times that of Jupiter. The next generation of planet-finding coronagraphs, including VLT-SPHERE, the Gemini Planet Imager, Palomar P1640, and Subaru HiCIAO have predicted contrast performance of roughly a thousand times less than would be needed to detect Earth-like planets. In this paper we review the state of the art in exoplanet imaging, most notably the method of Locally Optimized Combination of Images (LOCI), and we investigate the potential of improving the detectability of faint exoplanets through the use of advanced statistical methods based on the concepts of the ideal observer and the Hotelling observer. We propose a formal comparison of techniques using a blind data challenge with an evaluation of performance using the Receiver Operating Characteristic (ROC) and Localization ROC (LROC) curves. We place particular emphasis on the understanding and modeling of realistic sources of measurement noise in ground-based AO-corrected coronagraphs. The work reported in this paper is the result of interactions between the co-authors during a week-long workshop on exoplanet imaging that was held in Squaw Valley, California, in March of 2012. PMID:26347393
Validating an Air Traffic Management Concept of Operation Using Statistical Modeling
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2013-01-01
Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace, and they can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty quantification (UQ) is the process of quantitatively characterizing and ultimately reducing uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
Deformation Estimation In Non-Urban Areas Exploiting High Resolution SAR Data
NASA Astrophysics Data System (ADS)
Goel, Kanika; Adam, Nico
2012-01-01
Advanced techniques such as the Small Baseline Subset Algorithm (SBAS) have been developed for terrain motion mapping in non-urban areas, with a focus on extracting information from distributed scatterers (DSs). SBAS uses small-baseline differential interferograms (to limit the effects of geometric decorrelation), and these are typically multilooked to reduce phase noise, resulting in a loss of resolution. Various error sources, e.g. phase unwrapping errors, topographic errors, temporal decorrelation, and atmospheric effects, also affect the interferometric phase. The aim of our work is improved deformation monitoring in non-urban areas exploiting high resolution SAR data. The paper provides technical details and a processing example of a newly developed technique that incorporates an adaptive spatial phase filtering algorithm for accurate high resolution differential interferometric stacking, followed by deformation retrieval via the SBAS approach, where we perform the phase inversion using a more robust L1-norm minimization.
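The robustness of an L1-norm phase inversion can be sketched with a toy SBAS-style example (illustrative only: the epochs, interferogram pairs, noise, and the iteratively-reweighted-least-squares solver are all invented here, not taken from the paper). An outlier mimicking a phase-unwrapping error corrupts one interferogram; the L1 fit resists it where ordinary least squares does not.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy SBAS-style setup: small-baseline interferograms observe phase
# (displacement) differences between acquisition epochs.
n_epochs = 12
truth = 0.4 * np.arange(n_epochs)        # steady deformation (arbitrary units)

pairs = [(i, j) for i in range(n_epochs)
         for j in range(i + 1, min(i + 4, n_epochs))]
A_full = np.zeros((len(pairs), n_epochs))
for k, (i, j) in enumerate(pairs):
    A_full[k, i], A_full[k, j] = -1.0, 1.0
obs = A_full @ truth + rng.normal(0.0, 0.05, len(pairs))
obs[3] += 3.0                            # simulated phase-unwrapping error

A = A_full[:, 1:]                        # reference: first epoch fixed at zero

# Ordinary least squares (L2) is dragged off by the outlier.
x_l2 = np.linalg.lstsq(A, obs, rcond=None)[0]

# Approximate L1-norm minimization by iteratively reweighted least squares:
# each pass downweights interferograms with large residuals.
x_l1 = x_l2.copy()
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(obs - A @ x_l1), 1e-4)
    WA = A * w[:, None]
    x_l1 = np.linalg.solve(A.T @ WA, WA.T @ obs)

err_l2 = float(np.max(np.abs(x_l2 - truth[1:])))
err_l1 = float(np.max(np.abs(x_l1 - truth[1:])))
print(err_l2, err_l1)
```

Because the L1 objective grows only linearly with a residual, the single corrupted interferogram barely influences the solution, which is the motivation for preferring L1-norm minimization over least squares in the presence of unwrapping errors.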
Biodiversity informatics: managing and applying primary biodiversity data.
Soberón, Jorge; Peterson, A Townsend
2004-01-01
Recently, advances in information technology and an increased willingness to share primary biodiversity data are enabling unprecedented access to such data. By combining species presence data with electronic cartography via a number of algorithms, estimating the niches of species and their areas of distribution becomes feasible at resolutions one to three orders of magnitude higher than was possible a few years ago. Some examples of the power of this technique are presented. For the method to work, limitations such as the lack of high-quality taxonomic determinations, precise georeferencing of the data, and high-quality, updated taxonomic treatments of the groups must be overcome. These are discussed, together with comments on the potential of these biodiversity informatics techniques not only for fundamental studies but also as a way for developing countries to apply state-of-the-art bioinformatic methods and large quantities of data, in practical ways, to tackle issues of biodiversity management. PMID:15253354
Hyperspectral imaging for non-contact analysis of forensic traces.
Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G
2012-11-30
Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy, to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges are addressed, such as the analysis of traces on backgrounds encountered in casework, concluded by a summary of possible future applications. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Advanced methods of structural and trajectory analysis for transport aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1995-01-01
This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.
NASA Technical Reports Server (NTRS)
Bak, Juseon; Liu, X.; Wei, J.; Kim, J. H.; Chance, K.; Barnet, C.
2011-01-01
An advanced algorithm based on the optimal estimation technique has been developed to derive ozone profiles from GOME UV radiances and has been adapted to OMI UV radiances. The OMI vertical resolution is 7-11 km in the troposphere and 10-14 km in the stratosphere. Satellite ultraviolet measurements (GOME, OMI) contain little vertical information on small-scale ozone structure, especially in the upper troposphere (UT) and lower stratosphere (LS), where a sharp O3 gradient across the tropopause and large ozone variability are observed. Therefore, retrievals depend greatly on the a priori knowledge in the UTLS.
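The optimal estimation step referred to above is, in its linear form, a weighted combination of measurement and a priori information; the two-element "profile" below is a toy illustration, not an actual ozone retrieval:

```python
import numpy as np

def optimal_estimation(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (maximum a posteriori) retrieval:
    combine a measurement y = K x + noise (error covariance S_e)
    with an a priori state x_a (covariance S_a)."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    A = S_hat @ K.T @ S_e_inv @ K      # averaging kernel matrix
    return x_hat, S_hat, A

# Toy two-layer retrieval: near-perfect measurements pull the estimate
# from the zero a priori to the truth, and the averaging-kernel trace
# (degrees of freedom for signal) approaches 2.
x_hat, S_hat, A = optimal_estimation(
    y=np.array([1.0, 2.0]), K=np.eye(2),
    x_a=np.zeros(2), S_a=np.eye(2), S_e=1e-6 * np.eye(2))
```

When the measurement carries little vertical information (large S_e or a smoothing K), the averaging-kernel trace shrinks and the retrieval relaxes toward x_a, which is exactly the UTLS situation the abstract describes.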
Regional Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Edwards, Christopher A.; Moore, Andrew M.; Hoteit, Ibrahim; Cornuelle, Bruce D.
2015-01-01
This article reviews the past 15 years of developments in regional ocean data assimilation. A variety of scientific, management, and safety-related objectives motivate marine scientists to characterize many ocean environments, including coastal regions. As in weather prediction, the accurate representation of physical, chemical, and/or biological properties in the ocean is challenging. Models and observations alone provide imperfect representations of the ocean state, but together they can offer improved estimates. Variational and sequential methods are among the most widely used in regional ocean systems, and there have been exciting recent advances in ensemble and four-dimensional variational approaches. These techniques are increasingly being tested and adapted for biogeochemical applications.
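The analysis step shared by the variational and sequential methods described above can be sketched in a few lines. The matrices below are illustrative toy values, not drawn from any operational ocean system:

```python
import numpy as np

def oi_analysis(x_b, B, H, R, y):
    """One analysis step common to variational and sequential schemes:
    blend a background state x_b (error covariance B) with observations y
    (linear operator H, error covariance R) via the gain matrix."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain
    return x_b + K @ (y - H @ x_b)

# Two-variable "ocean", one observation of the first variable: with equal
# background and observation error variances the analysis splits the
# difference, and the unobserved variable is untouched (B is diagonal).
x_a = oi_analysis(x_b=np.zeros(2), B=np.eye(2),
                  H=np.array([[1.0, 0.0]]), R=np.array([[1.0]]),
                  y=np.array([2.0]))
```

In ensemble methods B is estimated from an ensemble of model states; in 4D-Var the same blending is done implicitly over a time window.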
[Advances in the research on hyperspectral remote sensing in biodiversity and conservation].
He, Cheng; Feng, Zhong-Ke; Yuan, Jin-Jun; Wang, Jia; Gong, Yin-Xi; Dong, Zhi-Hai
2012-06-01
With species loss and habitat destruction becoming increasingly serious, biodiversity conservation has become one of the hottest research topics. Remote sensing, the science of collecting information without physical contact, can provide estimates of biodiversity, model species diversity relationships, and map biodiversity indices, and it has been widely used in the field of biodiversity conservation. The present paper discusses the application of hyperspectral technology to biodiversity conservation from two aspects, remote sensors and remote sensing techniques, and then highlights successful applications. All of these have reference value for the development of biodiversity conservation.
MODEST: A Tool for Geodesy and Astronomy
NASA Technical Reports Server (NTRS)
Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.
2004-01-01
Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. The history of the code parallels the development of the astrometric and geodetic VLBI technique, and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.
Regional ocean data assimilation.
Edwards, Christopher A; Moore, Andrew M; Hoteit, Ibrahim; Cornuelle, Bruce D
2015-01-01
This article reviews the past 15 years of developments in regional ocean data assimilation. A variety of scientific, management, and safety-related objectives motivate marine scientists to characterize many ocean environments, including coastal regions. As in weather prediction, the accurate representation of physical, chemical, and/or biological properties in the ocean is challenging. Models and observations alone provide imperfect representations of the ocean state, but together they can offer improved estimates. Variational and sequential methods are among the most widely used in regional ocean systems, and there have been exciting recent advances in ensemble and four-dimensional variational approaches. These techniques are increasingly being tested and adapted for biogeochemical applications.
Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie
2015-01-01
Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88); the correlation increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation.
Implementation of this technology to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.
Advanced Manufacturing Processes in the Motor Vehicle Industry
DOT National Transportation Integrated Search
1983-05-01
Advanced manufacturing processes, which include a range of automation and management techniques, are helping U.S. motor vehicle manufacturers reduce vehicle costs. This report discusses these techniques in general and their specific applications in...
Battery Calendar Life Estimator Manual Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2012-10-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
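The calendar-life extrapolation idea can be sketched with a toy example. The square-root-of-time fade model and the numbers below are illustrative assumptions, not the statistical models defined in the BLE manual:

```python
import numpy as np

# Synthetic calendar-aging data: relative capacity measured every 4 weeks,
# following an assumed sqrt-of-time fade model C(t) = 1 - a*sqrt(t).
t = np.arange(4, 53, 4, dtype=float)          # weeks on test
a_true = 0.01
cap = 1.0 - a_true * np.sqrt(t)               # noise-free for clarity

# Least-squares fit of the single fade coefficient a: the model is linear
# in a once the regressor is taken as sqrt(t).
phi = np.sqrt(t)
a_hat = phi @ (1.0 - cap) / (phi @ phi)

# Calendar life: time until capacity reaches 80% of its initial value,
# i.e., solve 1 - a*sqrt(t_eol) = 0.8 for t_eol.
t_eol = (0.2 / a_hat) ** 2
```

In practice the manual's approach additionally propagates the statistical uncertainty of the fitted degradation model into a confidence interval on the life estimate, rather than reporting a single point value.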
Battery Life Estimator Manual Linear Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jon P. Christophersen; Ira Bloom; Ed Thomas
2009-08-01
The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.
Advanced wiring technique and hardware application: Airplane and space vehicle
NASA Technical Reports Server (NTRS)
Ernst, H. L.; Eichman, C. D.
1972-01-01
An advanced wiring system is described which achieves the safety and reliability required for present and future airplane and space vehicle applications. Present wiring installation techniques and hardware are also analyzed to establish existing problem areas. An advanced wiring system employing a matrix interconnecting unit and plug-to-plug trunk bundles (FCC or ribbon cable) is outlined, and an installation study is presented. A planned program to develop, lab test, and flight test key features of these techniques and hardware as part of the SST technology follow-on activities is discussed.
Age synthesis and estimation via faces: a survey.
Fu, Yun; Guo, Guodong; Huang, Thomas S
2010-11-01
Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as rerendering a face image aesthetically with natural aging and rejuvenating effects on the individual face. Age estimation is defined as labeling a face image automatically with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers. Large efforts from both academia and industry have been devoted over the last few decades. In this paper, we survey the state-of-the-art techniques in face-image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are also provided with systematic discussions.
Using remotely sensed imagery to estimate potential annual pollutant loads in river basins.
He, Bin; Oki, Kazuo; Wang, Yi; Oki, Taikan
2009-01-01
Land cover changes around river basins have caused serious environmental degradation in global surface water areas, in which the direct monitoring and numerical modeling is inherently difficult. Prediction of pollutant loads is therefore crucial to river environmental management under the impact of climate change and intensified human activities. This research analyzed the relationship between land cover types estimated from NOAA Advanced Very High Resolution Radiometer (AVHRR) imagery and the potential annual pollutant loads of river basins in Japan. Then an empirical approach, which estimates annual pollutant loads directly from satellite imagery and hydrological data, was investigated. Six water quality indicators were examined, including total nitrogen (TN), total phosphorus (TP), suspended sediment (SS), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), and Dissolved Oxygen (DO). The pollutant loads of TN, TP, SS, BOD, COD, and DO were then estimated for 30 river basins in Japan. Results show that the proposed simulation technique can be used to predict the pollutant loads of river basins in Japan. These results may be useful in establishing total maximum annual pollutant loads and developing best management strategies for surface water pollution at river basin scale.
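A common empirical form for relating loads to hydrology is a log-log rating curve, L = a * Q**b, fitted in log space. The sketch below uses synthetic numbers; the model form and values are assumptions for illustration, not the paper's actual regression against AVHRR-derived land cover:

```python
import numpy as np

# Hypothetical rating-curve approach: annual pollutant load L related to
# annual discharge Q by a power law L = a * Q**b.
Q = np.array([120.0, 300.0, 560.0, 900.0, 1500.0])   # annual discharge
L = 0.4 * Q ** 1.2                                    # synthetic TN loads

# Taking logs makes the power law linear: log L = log a + b * log Q,
# so an ordinary degree-1 polynomial fit recovers both coefficients.
logQ, logL = np.log(Q), np.log(L)
b_hat, loga_hat = np.polyfit(logQ, logL, 1)           # slope, intercept
a_hat = np.exp(loga_hat)

# Predict the load for an unmonitored year with Q = 2000.
L_pred = a_hat * 2000.0 ** b_hat
```

In the paper's setting, the coefficients would additionally depend on the land cover composition of each basin estimated from the AVHRR imagery.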
The use of Leptodyctium riparium (Hedw.) Warnst in the estimation of minimum postmortem interval.
Lancia, Massimo; Conforti, Federica; Aleffi, Michele; Caccianiga, Marco; Bacci, Mauro; Rossi, Riccardo
2013-01-01
The estimation of the postmortem interval (PMI) is still one of the most challenging issues in forensic investigations, especially in cases in which advanced transformative phenomena have taken place. The dating of skeletal remains is even more difficult, and sometimes only a rough determination of the PMI is possible. Recent studies suggest that plant analysis can provide a reliable estimation for dating skeletal remains when traditional techniques are not applicable. Forensic Botany is a relatively recent discipline that includes many subdisciplines such as Palynology, Anatomy, Dendrochronology, Limnology, Systematics, Ecology, and Molecular Biology. In a recent study, Cardoso et al. (Int J Legal Med 2010;124:451) used botanical evidence for the first time to establish the PMI of human skeletal remains found in a forested area of northern Portugal from the growth rate of mosses and shrub roots. The present paper deals with a case in which the growth rate of the bryophyte Leptodyctium riparium (Hedw.) Warnst. was used to estimate the PMI of human skeletal remains found in a wooded area near Perugia, in Central Italy. © 2012 American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.
2015-01-01
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.
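The core of Bayesian parameter estimation can be illustrated with a plain Metropolis sampler, a far simpler relative of the MCMC and nested sampling algorithms in LALInference; the one-parameter "signal" below is purely illustrative:

```python
import numpy as np

def metropolis(log_post, x0, n, step, rng):
    """Plain random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, posterior ratio)."""
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Simulated data with one unknown "signal" parameter (its mean); with a
# flat prior the posterior mean should match the sample mean.
rng = np.random.default_rng(1)
data = rng.normal(4.0, 1.0, size=200)
log_post = lambda mu: -0.5 * np.sum((data - mu) ** 2)
chain = metropolis(log_post, x0=0.0, n=5000, step=0.2, rng=rng)
mu_hat = chain[1000:].mean()                 # discard burn-in
```

The credible-interval check described in the abstract amounts to repeating such a recovery on many signals drawn from the prior and verifying that, e.g., the 90% intervals contain the truth 90% of the time.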
Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges
NASA Technical Reports Server (NTRS)
Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam
2014-01-01
As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher-fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. The approach incorporates multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical methods. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source, such that a full vehicle uncertainty analysis is possible even when approaching or beyond loss-of-control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss-of-control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing.
The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.
A framework for interactive visualization of digital medical images.
Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot
2008-10-01
The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key aspect keeping this new technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.
Recent advances in cytochrome c biosensing technologies.
Manickam, Pandiaraj; Kaushik, Ajeet; Karunakaran, Chandran; Bhansali, Shekhar
2017-01-15
This review is a first attempt to describe advancements in sensing technology for cytochrome c (cyt c) detection for point-of-care (POC) applications. Cyt c, a heme-containing metalloprotein, is located in the intermembrane space of mitochondria and released into the bloodstream during pathological conditions. The release of cyt c from mitochondria is a key initial step in the activation of cell death pathways. Circulating cyt c levels represent a novel in-vivo marker of mitochondrial injury after resuscitation from heart failure and chemotherapy. Thus, cyt c detection not only serves as an apoptosis biomarker but is also of great importance for understanding certain diseases at the cellular level. Various existing techniques such as enzyme-linked immunosorbent assays (ELISA), Western blot, high performance liquid chromatography (HPLC), spectrophotometry, and flow cytometry have been used to estimate cyt c. However, the implementation of these techniques for POC application is limited by long analysis times, expensive instruments, and the expertise needed for operation. To overcome these challenges, significant efforts are being made to develop electrochemical biosensing technologies for fast, accurate, selective, and sensitive detection of cyt c. The presented review describes the cutting-edge technologies available in the laboratory to detect cyt c. The recent advancements in the design and development of electrochemical cyt c biosensors for the quantification of cyt c are also discussed. This review also highlights recently developed POC cyt c biosensors, which would prove of interest to biologists and therapists needing real-time information to evaluate death processes, disease progression, therapeutics, and processes related to mitochondrial injury. Copyright © 2016 Elsevier B.V. All rights reserved.
The environmental control and life support system advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.
1991-01-01
The objectives of the ECLSS Advanced Automation project include reducing the risk associated with the integration of new, beneficial software techniques. Demonstrations of this software to baseline engineering and test personnel will show the benefits of these techniques. The advanced software will be integrated into ground testing and ground support facilities, familiarizing key personnel with its usage.
Joubert, Ruan; Steyn, Johan Dewald; Heystek, Hendrik Jacobus; Steenekamp, Jan Harm; Du Preez, Jan Lourens; Hamman, Josias Hendrik
2017-02-01
The assessment of intestinal membrane permeability properties of new chemical entities is a crucial step in the drug discovery and development process and a variety of in vitro models, methods and techniques are available to estimate the extent of oral drug absorption in humans. However, variations in certain physiological and physico-chemical factors are often not reflected in the results and the complex dynamic interplay between these factors is sometimes oversimplified with in vitro models. Areas covered: In vitro models to evaluate drug pharmacokinetics are briefly outlined, while both physiological and physico-chemical factors that may have an influence on these techniques are critically reviewed. The shortcomings identified for some of the in vitro techniques are discussed in conjunction with novel ways to improve and thereby overcome some challenges. Expert opinion: Although conventional in vitro methods and theories are used as basic guidelines to predict drug absorption, critical evaluations have identified some shortcomings. Advancements in technology have made it possible to investigate and understand the role of physiological and physico-chemical factors in drug delivery more clearly, which can be used to improve and refine the techniques to more closely mimic the in vivo environment.
Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging
Gholami, Behnood; Tannenbaum, Allen R.
2011-01-01
Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria, rather than quantifiable and measurable data. This often results in poor quality and inconsistent treatment of patient pain management. Recent advancements in pattern recognition using relevance vector machine (RVM) learning can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent “pure” facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression in a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803
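The intensity-from-posterior idea can be sketched with any probabilistic classifier. For a self-contained example, plain logistic regression stands in for the RVM (the two-dimensional features, cluster means, and training scheme are all hypothetical):

```python
import numpy as np

# Synthetic "facial feature" vectors for two pure expression classes.
rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 0.5, size=(100, 2))       # "no pain" examples
X1 = rng.normal(2.0, 0.5, size=(100, 2))       # "pure pain" examples
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)]

# Logistic regression by batch gradient descent (a stand-in for the RVM:
# both yield a posterior class probability rather than a hard label).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted P(pain | x)
    g = p - y                                  # gradient of the log loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

# The posterior probability itself serves as a graded intensity score.
intensity = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))
mid = intensity(np.array([1.0, 1.0]))          # intermediate expression
```

An expression halfway between the two pure classes receives an intermediate posterior, which is exactly how the paper turns a binary classifier into an intensity estimate.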
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by the means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
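A much-simplified cousin of AMIS can be sketched in a few lines: Gaussian-proposal adaptive importance sampling for a one-dimensional source location. The exponential "dispersion model", sensor layout, and noise level are illustrative assumptions, and the sketch omits AMIS's deterministic-mixture weight recycling:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy source-term problem: sensors at fixed positions read a signal that
# decays with distance from an unknown source location s_true = 3.0.
sensors = np.array([0.0, 2.0, 5.0, 9.0])
model = lambda s: np.exp(-np.abs(sensors - s))   # stand-in dispersion model
obs = model(3.0) + rng.normal(0.0, 0.01, size=4)

def log_post(s):
    # Gaussian measurement likelihood, flat prior on the location.
    return -0.5 * np.sum((obs - model(s)) ** 2) / 0.01 ** 2

# Adaptive importance sampling: at each iteration, refit the Gaussian
# proposal (mu, sig) to the importance-weighted samples.
mu, sig = 5.0, 3.0
for _ in range(5):
    s = mu + sig * rng.normal(size=2000)
    logw = np.array([log_post(si) for si in s])
    logw += 0.5 * ((s - mu) / sig) ** 2 + np.log(sig)  # divide by proposal
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mu = np.sum(w * s)
    sig = max(np.sqrt(np.sum(w * (s - mu) ** 2)), 1e-3)
s_hat = mu
```

Full AMIS additionally reuses the samples from all previous iterations, reweighted against the mixture of all past proposals, which is the recycling step that accelerates its convergence.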
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has long been a topic of active research and remains of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and G. Despite its poor performance in terms of estimation accuracy, LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. Instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier.
To insure the robust performance for the proposed advanced system identifier, it is also further reinforced by a learning system which is implemented (in the outer loop) using neural networks to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environment variations.
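The core multiple-model idea can be sketched with a scalar toy problem. The following is a minimal illustration, not the paper's design: a random-walk "attitude" state with unknown process noise is tracked by a bank of Kalman filters, each assuming a different noise level, and the measurement likelihoods identify the best-matched submodel. The adaptive noise identifier and neural-network outer loop described in the abstract are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated truth: scalar state as a random walk with q_true = 0.1.
q_true, r = 0.1, 0.5
x, zs = 0.0, []
for _ in range(300):
    x += rng.normal(0, np.sqrt(q_true))
    zs.append(x + rng.normal(0, np.sqrt(r)))

# Bank of Kalman filters, one per hypothesized process-noise level.
q_bank = np.array([0.01, 0.1, 1.0])
xh = np.zeros(3)                 # per-filter state estimates
P = np.ones(3)                   # per-filter error variances
prob = np.ones(3) / 3            # model probabilities
for z in zs:
    P_pred = P + q_bank          # predict step (F = 1, scalar state)
    S = P_pred + r               # innovation variance per filter
    innov = z - xh
    like = np.exp(-0.5 * innov ** 2 / S) / np.sqrt(2 * np.pi * S)
    prob = prob * like
    prob /= prob.sum()           # Bayesian model-probability update
    K = P_pred / S               # Kalman gain
    xh = xh + K * innov
    P = (1 - K) * P_pred

best_q = q_bank[np.argmax(prob)]   # identified noise level
fused = prob @ xh                  # probability-weighted fused estimate
```

The filter whose assumed noise statistics match the data accumulates the highest likelihood, which is the mechanism the adaptive scheme generalizes by identifying the noise level on-line instead of from a fixed bank.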
Performance and Weight Estimates for an Advanced Open Rotor Engine
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.; Tong, Michael T.
2012-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies that may enable dramatic reductions in the environmental impact of future generations of subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow this objective to be achieved by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Since that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines, and they are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.
NASA Technical Reports Server (NTRS)
Hoffman, James Patrick; Del Castillo, Linda; Miller, Jennifer; Jenabi, Masud; Hunter, Donald; Birur, Gajanana
2011-01-01
The higher output power densities required of modern radar architectures, such as the proposed DESDynI [Deformation, Ecosystem Structure, and Dynamics of Ice] SAR [Synthetic Aperture Radar] Instrument (or DSI), require increasingly dense high-power electronics. Enabling these higher power densities while maintaining, or even improving, hardware reliability requires advances in integrating advanced thermal packaging technologies into radar transmit/receive (TR) modules. New materials and techniques have been studied and compared to standard technologies.
Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan
2015-01-01
Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition, especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e. modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface-enhanced Raman spectroscopy (SERS) spectra were evaluated. Compared with traditional Wiener estimation, the two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra, but much faster. The wise selection of these methods would enable accurate Raman reconstruction in a simple Raman setup without the function of fluorescence suppression for fast Raman imaging. PMID:26203387
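The traditional Wiener estimation that the advanced methods build on is a linear minimum-mean-square-error reconstruction from training statistics. A minimal sketch with synthetic data (the spectra, band responses, and dimensions below are illustrative; the paper's modified and sequentially weighted variants are not implemented here):

```python
import numpy as np

rng = np.random.default_rng(2)
n_wl, n_bands, n_train = 100, 8, 200
wl = np.linspace(0, 1, n_wl)

def make_spectrum():
    # Synthetic smooth "spectrum": a sum of random Gaussian peaks.
    s = np.zeros(n_wl)
    for _ in range(5):
        c = rng.uniform(0, 1)
        w = rng.uniform(0.05, 0.15)
        a = rng.uniform(0.5, 2.0)
        s += a * np.exp(-0.5 * ((wl - c) / w) ** 2)
    return s

S = np.array([make_spectrum() for _ in range(n_train)])   # training spectra

# Narrow-band filter responses modeled as Gaussian passbands.
centers = np.linspace(0.05, 0.95, n_bands)
M = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.06) ** 2)

# Wiener matrix from training statistics: W = Cov(s, m) Cov(m)^-1.
Msr = S @ M.T                                  # simulated band readings
Sm, Mm = S.mean(0), Msr.mean(0)
Csm = (S - Sm).T @ (Msr - Mm) / n_train
Cmm = (Msr - Mm).T @ (Msr - Mm) / n_train
W = Csm @ np.linalg.inv(Cmm + 1e-8 * np.eye(n_bands))

# Reconstruct a held-out spectrum from its 8 band readings.
s_true = make_spectrum()
s_hat = Sm + W @ (M @ s_true - Mm)
rel_err = np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true)
```

The full spectrum is recovered from only a handful of band measurements because the training covariance encodes how spectral channels co-vary.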
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training in the absence of sufficiently large, pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
Enhanced algorithms for stochastic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishna, Alamuru S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We conclude that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This approach achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to obtain a starting solution and to speed up the algorithm by making use of information from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
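The "cheap piecewise-linear approximation" idea can be illustrated with a control-variate estimator: the expensive recourse function is evaluated on a small sample, while its piecewise-linear surrogate absorbs most of the variance on a large cheap sample. Everything below (the recourse stand-in, knot placement, sample sizes) is illustrative, not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

def recourse(x):
    # Stand-in for an expensive second-stage recourse function.
    return np.maximum(1.5 * x - 1.0, 0.0) + 0.05 * np.sin(5 * x)

# Cheap piecewise-linear approximation g built from a few knots.
knots = np.linspace(-3, 3, 9)
g_vals = np.maximum(1.5 * knots - 1.0, 0.0)

def g(x):
    return np.interp(x, knots, g_vals)

xs = rng.normal(size=2000)            # small "expensive" sample, xi ~ N(0,1)
naive = recourse(xs).mean()           # plain Monte Carlo estimate

# Control-variate identity: E[f] = E[g] + E[f - g].  E[g] is estimated
# on a huge cheap sample; the expensive sample only corrects f - g,
# whose variance is much smaller than that of f itself.
Eg = g(rng.normal(size=200_000)).mean()
cv = Eg + (recourse(xs) - g(xs)).mean()
```

The same 2000 expensive evaluations yield a much lower-variance estimate, mirroring the dissertation's point about moving the work onto the inexpensive piecewise-linear function.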
Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient
Dongaonkar, R. M.; Laine, G. A.; Stewart, R. H.
2011-01-01
Microvascular permeability to water is characterized by the microvascular filtration coefficient (Kf). Conventional gravimetric techniques to estimate Kf rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. The two techniques yield considerably different estimates, and neither accounts for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate Kf estimation techniques by 1) comparing conventional techniques to a novel technique that includes the effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce Kf from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to Kf and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of Kf in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245
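The transient gravimetric technique amounts to dividing the early rate of weight gain by the pressure step, before interstitial storage and lymphatic return flatten the response. A minimal numerical sketch (the parameter values and first-order weight model below are illustrative, not the paper's interstitial fluid balance model):

```python
import numpy as np

# Simulated transient organ-weight response to a microvascular
# pressure step dP: filtration slows as interstitial pressure rises.
Kf_true = 0.02          # g/min/mmHg, assumed filtration coefficient
dP = 10.0               # mmHg, applied microvascular pressure step
tau = 30.0              # min, assumed interstitial storage time constant
t = np.linspace(0, 5, 51)                       # first 5 minutes only
W = Kf_true * dP * tau * (1 - np.exp(-t / tau))  # cumulative weight gain, g

# Transient technique: slope of the early, nearly linear portion of the
# weight record divided by the pressure step.
slope = np.polyfit(t[:10], W[:10], 1)[0]   # g/min over the first minute
Kf_est = slope / dP
```

Because the experiment lasts only minutes while storage acts over tens of minutes, the early slope recovers Kf accurately, which is the mechanism the abstract credits for the transient technique's success.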
Resolving Fast, Confined Diffusion in Bacteria with Image Correlation Spectroscopy.
Rowland, David J; Tuson, Hannah H; Biteen, Julie S
2016-05-24
By following single fluorescent molecules in a microscope, single-particle tracking (SPT) can measure diffusion and binding on the nanometer and millisecond scales. Still, although SPT can at its limits characterize the fastest biomolecules as they interact with subcellular environments, this measurement may require advanced illumination techniques such as stroboscopic illumination. Here, we address the challenge of measuring fast subcellular motion by instead analyzing single-molecule data with spatiotemporal image correlation spectroscopy (STICS) with a focus on measurements of confined motion. Our SPT and STICS analysis of simulations of the fast diffusion of confined molecules shows that image blur affects both STICS and SPT, and we find biased diffusion rate measurements for STICS analysis in the limits of fast diffusion and tight confinement due to fitting STICS correlation functions to a Gaussian approximation. However, we determine that with STICS, it is possible to correctly interpret the motion that blurs single-molecule images without advanced illumination techniques or fast cameras. In particular, we present a method to overcome the bias due to image blur by properly estimating the width of the correlation function by directly calculating the correlation function variance instead of using the typical Gaussian fitting procedure. Our simulation results are validated by applying the STICS method to experimental measurements of fast, confined motion: we measure the diffusion of cytosolic mMaple3 in living Escherichia coli cells at 25 frames/s under continuous illumination to illustrate the utility of STICS in an experimental parameter regime for which in-frame motion prevents SPT and tight confinement of fast diffusion precludes stroboscopic illumination. 
Overall, our application of STICS to freely diffusing cytosolic protein in small cells extends the utility of single-molecule experiments to the regime of fast confined diffusion without requiring advanced microscopy techniques. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Andrew; Davis, R. Ben; LaVerde, Bruce; Jones, Douglas
2012-01-01
The team of authors at Marshall Space Flight Center (MSFC) has been investigating estimating techniques for the vibration response of launch vehicle panels excited by acoustics and/or aero-fluctuating pressures. Validation of the approaches used to estimate these environments based on ground tests of flight-like hardware is of major importance to new vehicle programs. The team at MSFC has recently expanded upon the first series of ground test cases completed in December 2010. The recently completed follow-on tests are intended to illustrate differences in damping that might be expected when cable harnesses are added to the configurations under test. This validation study examines the effect on vibroacoustic response resulting from the installation of cable bundles on a curved orthogrid panel. Of interest are the level of damping provided by the installation of the cable bundles and whether this damping could potentially be leveraged in launch vehicle design. The results of this test are compared with baseline acoustic response tests without cables. Damping estimates from the measured response data are made using a new software tool that employs a finite element model (FEM) of the panel in conjunction with advanced optimization techniques. This paper reports on the "damping trend differences" observed from response measurements for several different configurations of cable harnesses. The data should assist vibroacoustics engineers in making more informed damping assumptions when calculating vibration response estimates with a model-based analysis approach. Achieving conservative estimates with more flight-like accuracy is desired. The paper may also assist analysts in determining how ground test data may relate to expected flight response levels. Empirical response estimates may also need to be adjusted if the measured response used as an input to the study came from a test article without flight-like cable harnesses.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
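The distinction the paper draws between least-squares (equation-error) and output-error estimation is that an output-error estimator simulates the model forward from its own state and fits the simulated output to the data, so measurement noise is not fed back into the regressors. A minimal sketch with a first-order top-oil temperature-rise model (the model form, parameter values, and grid-search fitting below are illustrative, not MIT's model or the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
# First-order top-oil temperature-rise model, Euler-discretized:
#   T[k+1] = T[k] + dt/tau * (K*u[k] - T[k])
dt, tau_true, K_true = 1.0, 20.0, 3.0
u = (np.arange(200) > 20).astype(float)        # load step input
T = np.zeros(201)
for k in range(200):
    T[k + 1] = T[k] + dt / tau_true * (K_true * u[k] - T[k])
z = T[1:] + rng.normal(0, 0.05, 200)           # noisy measured temperature

def simulate(tau, K):
    # Output-error: the model runs on its own state, never on noisy data.
    Ts = np.zeros(201)
    for k in range(200):
        Ts[k + 1] = Ts[k] + dt / tau * (K * u[k] - Ts[k])
    return Ts[1:]

# Minimize the output residual over a parameter grid (a simple stand-in
# for a proper output-error optimizer).
taus = np.linspace(5, 40, 71)
Ks = np.linspace(1, 5, 81)
sse = [[np.sum((z - simulate(t_, K_)) ** 2) for K_ in Ks] for t_ in taus]
i, j = np.unravel_index(np.argmin(sse), (len(taus), len(Ks)))
tau_hat, K_hat = taus[i], Ks[j]
```

Because the residual is formed against a noise-free simulation, the estimates are consistent even with measurement noise, which is the property the paper attributes to the output-error technique.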
Advanced Respiratory Motion Compensation for Coronary MR Angiography
Henningsson, Markus; Botnar, Rene M.
2013-01-01
Despite technical advances, respiratory motion remains a major impediment for a substantial number of patients undergoing coronary magnetic resonance angiography (CMRA). Traditionally, respiratory motion compensation has been performed with a one-dimensional respiratory navigator positioned on the right hemi-diaphragm, using a motion model to estimate and correct for the bulk respiratory motion of the heart. Recent technical advances have allowed direct respiratory motion estimation of the heart, with improved motion compensation performance. Some of these new methods, particularly those using image-based navigators or respiratory binning, allow for more advanced motion correction, enabling CMRA data acquisition throughout most or all of the respiratory cycle and thereby significantly reducing scan time. This review describes the three components typically involved in most motion compensation strategies for CMRA, namely respiratory motion estimation, gating, and correction, and how these processes can be utilized to perform advanced respiratory motion compensation. PMID:23708271
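The traditional navigator-based motion model reduces, in its simplest form, to two rules: accept data only inside a gating window around end-expiration, and shift the imaging slice by a fixed fraction of the measured diaphragm displacement. A minimal sketch (the tracking factor and window values are commonly quoted ballpark figures, used here as assumptions):

```python
# 1-D navigator motion model: bulk superior-inferior motion of the heart
# is modeled as a fixed fraction of the diaphragm displacement.
tracking_factor = 0.6      # assumed SI tracking factor (heart vs diaphragm)
gating_window = 5.0        # mm, assumed acceptance window width

d_diaphragm = 4.2          # mm, navigator-measured diaphragm displacement

# Gating: accept the acquisition only within the window ...
accept = abs(d_diaphragm) <= gating_window
# ... and correction: shift the slice to follow the predicted heart motion.
slice_shift = tracking_factor * d_diaphragm   # mm
```

Image-based navigators and respiratory binning replace the fixed scalar `tracking_factor` with motion estimated from the heart itself, which is what allows data from the whole respiratory cycle to be used.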
Analysis of the influence of advanced materials for aerospace products R&D and manufacturing cost
NASA Astrophysics Data System (ADS)
Shen, A. W.; Guo, J. L.; Wang, Z. J.
2015-12-01
In this paper, we point out the deficiencies of traditional cost estimation models for aerospace product research and development (R&D) and manufacturing, based on an analysis of the widespread use of advanced materials in aviation products. We then propose estimating formulas for cost factors representing the influence of advanced materials on the labor cost rate and the manufacturing materials cost rate. Value ranges for common advanced materials, such as composites and titanium alloys, are presented for both the labor and materials aspects. Finally, we estimate the R&D and manufacturing cost of the F/A-18, F/A-22, B-1B, and B-2 aircraft using both the common DAPCA IV model and the modified model proposed in this paper. The calculation results show that accounting for advanced materials greatly improves the estimation accuracy, indicating that the proposed method is sound and reasonable.
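The material cost-factor idea can be sketched as a mass-fraction-weighted multiplier applied to a baseline cost model. All factor values and costs below are hypothetical placeholders, not the paper's fitted ranges or DAPCA IV coefficients:

```python
# Hypothetical material cost-factor adjustment in a DAPCA-style model:
# each material contributes (labor factor, material factor) multipliers,
# weighted by its share of structural mass.  Values are assumed.
factors = {
    "aluminum":  (1.0, 1.0),   # baseline material
    "composite": (1.8, 2.5),
    "titanium":  (1.5, 2.0),
}
mass_fraction = {"aluminum": 0.5, "composite": 0.3, "titanium": 0.2}

labor_factor = sum(mass_fraction[m] * factors[m][0] for m in factors)
material_factor = sum(mass_fraction[m] * factors[m][1] for m in factors)

# Apply the factors to baseline (all-aluminum) cost estimates, in $M.
base_labor_cost, base_material_cost = 100.0, 40.0
adjusted_cost = (base_labor_cost * labor_factor
                 + base_material_cost * material_factor)
```

An all-aluminum airframe recovers the unmodified baseline (both factors equal 1), while composite- and titanium-heavy designs scale the labor and material cost rates upward, as the paper's modified model intends.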
NASA Astrophysics Data System (ADS)
Dave, Jaydev K.
Ultrasound contrast agents (UCAs) are encapsulated microbubbles that provide a source of acoustic impedance mismatch with the blood, due to the difference in compressibility between the gas contained within these microbubbles and the blood. When insonified by an ultrasound beam, these UCAs act as nonlinear scatterers and enhance the echoes of the incident pulse, resulting in scattering of the incident ultrasound beam and emission of fundamental (f0), subharmonic (f0/2), harmonic (n*f0; n ∈ N) and ultraharmonic (((2n-1)/2)*f0; n ∈ N & n > 1) components in the echo response. A promising approach to monitoring in vivo pressures exploits the fact that the ultrasound transmit and receive parameters can be selected to induce a subharmonic signal whose amplitude depends on the ambient pressure amplitude. This subharmonic signal may be used to estimate the ambient pressure amplitude; this technique is referred to as subharmonic-aided pressure estimation (SHAPE). This project develops and evaluates the feasibility of SHAPE to noninvasively monitor cardiac and hepatic pressures (using commercially available ultrasound scanners and UCAs), because invasive catheter-based pressure measurements are currently used for these applications. Invasive catheter-based pressure measurements pose a risk of introducing infection while the catheter is guided toward the region of interest through a percutaneous incision, pose a risk of death due to structural or mechanical failure of the catheter (which has triggered product recalls by the U.S. Food and Drug Administration), and may potentially modulate the very pressures being measured. Also, catheterization procedures require fluoroscopic guidance to advance the catheter to the site of pressure measurement, and such procedures are not performed in all clinical centers. Thus, a noninvasive technique to obtain ambient pressure values without catheterization is clinically helpful.
While an intravenous injection is required to introduce the UCAs into the body, this procedure is considered noninvasive per the definition provided by the Centers for Medicare and Medicaid Services; invasive procedures include surgical and catheterization procedures, while minor procedures such as drawing blood (which requires a similar approach to injecting UCAs) are considered noninvasive. In vitro results showed that the standard error between catheter pressures and SHAPE results is below 10 mmHg, with a correlation coefficient above 0.9; this experimental error of 10 mmHg is less than the errors associated with other techniques utilizing UCAs for ambient pressure estimation. In vivo results proved the feasibility of SHAPE to noninvasively estimate clinically relevant left and right ventricular (LV and RV) pressures. The maximum error in estimating the LV and RV systolic and diastolic pressures was 3.5 mmHg. Thus, the SHAPE technique may be useful for systolic and diastolic pressure estimation, given that standard recommendations require the errors for these pressure measurements to be within 5 mmHg. The ability of SHAPE to identify induced portal hypertension (PH) was also proved. The changes in the SHAPE data correlated significantly (p < 0.05) with the changes in portal vein (PV) pressures, and the absolute amplitudes of the subharmonic signal also correlated with absolute PV pressures. The SHAPE technique provides the ability to noninvasively obtain in vivo pressures. This technique is applicable not only to critically ill patients, but also to screening symptomatic patients and potentially to other clinical pressure monitoring applications as well.
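Operationally, SHAPE reduces to calibrating the subharmonic amplitude against known pressures and inverting that calibration for new readings. A minimal sketch with synthetic calibration data (the near-linear amplitude-pressure relationship is consistent with the SHAPE premise, but all numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical SHAPE calibration: subharmonic amplitude (dB, relative to
# the 0 mmHg reading) falls roughly linearly with ambient pressure.
cal_pressure = np.array([0.0, 50.0, 100.0, 150.0, 186.0])       # mmHg
cal_subharm = np.array([0.0, -4.6, -9.3, -13.8, -18.1])         # dB

# Fit pressure as a linear function of subharmonic amplitude.
slope, intercept = np.polyfit(cal_subharm, cal_pressure, 1)

def shape_pressure(subharm_db):
    """Invert the calibration: subharmonic amplitude -> pressure (mmHg)."""
    return slope * subharm_db + intercept

p_est = shape_pressure(-9.0)   # pressure estimate for a -9 dB reading
```

Once calibrated, each subharmonic reading maps directly to an ambient pressure without a catheter, which is the clinical appeal described above.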
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface from similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
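The retrieve-then-calibrate pipeline can be sketched as nearest-neighbor retrieval over past radar instances followed by a majority vote over a sliding window of recent estimates. The features, class layout, and window length below are invented for illustration; the paper's similarity function and radar signal representation are not specified here.

```python
import numpy as np

rng = np.random.default_rng(6)
# Labeled past radar instances: 2 features -> road state
# (0 = dry, 1 = wet, 2 = icy); class means separated for illustration.
y = rng.integers(0, 3, 300)
X = rng.normal(size=(300, 2)) + y[:, None] * 3.0

def knn_state(q, k=5):
    # Retrieval step: similarity = negative Euclidean distance.
    d = np.linalg.norm(X - q, axis=1)
    idx = np.argsort(d)[:k]
    return int(np.bincount(y[idx], minlength=3).argmax())

history = []
def calibrated_state(q, w=5):
    # Calibration step: majority over the last w raw estimates
    # suppresses single-frame flickers in the predicted state.
    history.append(knn_state(q))
    recent = history[-w:]
    return max(set(recent), key=recent.count)

# Stream of noisy queries drawn near the "wet" (state 1) class center.
states = [calibrated_state(np.array([3.0, 3.0]) + rng.normal(0, 0.5, 2))
          for _ in range(10)]
```

The moving-average calibration is what gives the method its robustness: an occasional mis-retrieved frame cannot flip the reported surface state.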
Remote Sensing of Cryosphere: Estimation of Mass Balance Change in Himalayan Glaciers
NASA Astrophysics Data System (ADS)
Ambinakudige, Shrinidhi; Joshi, Kabindra
2012-07-01
Glacial changes are an important indicator of climate change, yet our understanding of mass balance change in Himalayan glaciers is limited. This study estimates the mass balance of some major glaciers in the Sagarmatha National Park (SNP) in Nepal using remote sensing applications. Remote sensing techniques for measuring glacier mass balance are an important methodological advance in the highly rugged Himalayan terrain. This study uses ASTER VNIR 3N (nadir view) and 3B (backward view) bands to generate Digital Elevation Models (DEMs) of the SNP area for the years 2002, 2003, 2004, and 2005. Glacier boundaries were delineated using a combination of boundaries available in the Global Land Ice Measurements from Space (GLIMS) database and various band ratios derived from ASTER images. Elevation differences, glacial area, and ice density were used to estimate the change in mass balance. The results indicated that the rate of mass balance change was not uniform across glaciers: while some glaciers showed a decrease in mass balance, others showed an increase. This paper discusses how each glacier in the SNP area varied in its annual mass balance during the study period.
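The geodetic mass-balance calculation described here, combining elevation differences, glacier area, and ice density, can be sketched in a few lines. The toy DEM values, pixel size, and density below are assumptions for illustration (900 kg/m³ is a conventional volume-to-mass conversion density, not the paper's stated value):

```python
import numpy as np

# Geodetic mass balance from DEM differencing:
#   dM = (elevation change) x (pixel area) x (ice density)
dem_2002 = np.array([[5000.0, 5010.0],
                     [5020.0, 5030.0]])                 # m, toy 2x2 DEM
dem_2005 = dem_2002 - np.array([[1.5, 2.0],
                                [1.0, 2.5]])            # m, surface lowering
pixel_area = 30.0 * 30.0     # m^2, assumed DEM posting
rho_ice = 900.0              # kg/m^3, conventional conversion density

dh = dem_2005 - dem_2002                     # elevation change per pixel
volume_change = dh.sum() * pixel_area        # m^3 of ice lost
mass_change = volume_change * rho_ice        # kg

# Specific balance in metres water equivalent over the glacier area.
area = dh.size * pixel_area
b_we = mass_change / (1000.0 * area)         # m w.e. for the period
```

Dividing `b_we` by the number of years between the DEMs would give the annual specific balance that the study reports per glacier.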
State estimation for advanced control of wave energy converters
Coe, Ryan; Bacelli, Giorgio
2017-04-25
A report on state estimation for advanced control of wave energy converters (WECs), with supporting data models and slides from the overview presentation. The methods discussed are intended for use to enable real-time closed loop control of WECs.
Virtual Sensors for Designing Irrigation Controllers in Greenhouses
Sánchez, Jorge Antonio; Rodríguez, Francisco; Guzmán, José Luis; Arahal, Manuel R
2012-01-01
Monitoring greenhouse transpiration for control purposes is currently a difficult task. The absence of affordable sensors that provide continuous transpiration measurements motivates the use of estimators. In the case of tomato crops, the availability of estimators allows the design of automatic fertirrigation (irrigation + fertilization) schemes in greenhouses, minimizing the dispensed water while fulfilling crop needs. This paper shows how system identification techniques can be applied to obtain nonlinear virtual sensors for estimating transpiration. The greenhouse used for this study is equipped with a microlysimeter, which allows the transpiration values to be sampled continuously. While the microlysimeter is an advantageous piece of equipment for research, it is also expensive and requires maintenance. This paper presents the design and development of a virtual sensor to model crop transpiration, hence avoiding the use of this kind of expensive sensor. The resulting virtual sensor is obtained by dynamical system identification techniques based on regressors taken from variables typically found in a greenhouse, such as global radiation and vapor pressure deficit. The virtual sensor is thus based on empirical data. In this paper, some effort has been made to eliminate problems associated with grey-box models: the advance phenomenon and overestimation. The results are tested with real data and compared with other approaches. The best results are obtained with a nonlinear black-box virtual sensor based on global radiation and vapor pressure deficit (VPD) measurements. Predictive results for the three models are presented for comparative purposes. PMID:23202208
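A black-box virtual sensor of this kind is, at its simplest, a regression from the cheap measured variables (radiation, VPD) to the expensive microlysimeter signal. The sketch below uses synthetic data and a linear-in-parameters model with a nonlinear interaction regressor; the coefficients and noise levels are invented, and the paper's actual regressor structure may differ:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
rad = np.clip(rng.normal(400, 150, n), 0, None)    # global radiation, W/m^2
vpd = np.clip(rng.normal(1.2, 0.4, n), 0.1, None)  # vapor pressure deficit, kPa

# Synthetic "microlysimeter" transpiration with a nonlinear interaction.
transp = 0.003 * rad + 0.8 * vpd + 0.002 * rad * vpd + rng.normal(0, 0.1, n)

# Black-box identification: least squares over nonlinear regressors.
Phi = np.column_stack([rad, vpd, rad * vpd, np.ones(n)])
theta, *_ = np.linalg.lstsq(Phi, transp, rcond=None)

def virtual_sensor(rad_now, vpd_now):
    """Estimate transpiration from radiation and VPD alone."""
    return float(np.dot(theta, [rad_now, vpd_now, rad_now * vpd_now, 1.0]))

pred = virtual_sensor(500.0, 1.5)
```

Once `theta` is identified against the microlysimeter during a calibration campaign, the expensive sensor can be removed and the regression used in the irrigation controller.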
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2015-06-01
Modern health management approaches for gas turbine engines (GTEs) aim to precisely estimate the health state of the GTE components to optimize maintenance decisions with respect to both economy and safety. In this research, we propose an advanced framework to identify the most likely degradation state of the turbine section in a GTE for prognostics and health management (PHM) applications. A novel nonlinear thermodynamic model is used to predict the performance parameters of the GTE given the measurements. The ratio between real efficiency of the GTE and simulated efficiency in the newly installed condition is defined as the health indicator and provided at each sequence. The symptom of nonrecoverable degradations in the turbine section, i.e. loss of turbine efficiency, is assumed to be the internal degradation state. A regularized auxiliary particle filter (RAPF) is developed to sequentially estimate the internal degradation state in nonuniform time sequences upon receiving sets of new measurements. The effectiveness of the technique is examined using the operating data over an entire time-between-overhaul cycle of a simple-cycle industrial GTE. The results clearly show the trend of degradation in the turbine section and the occasional fluctuations, which are well supported by the service history of the GTE. The research also suggests the efficacy of the proposed technique to monitor the health state of the turbine section of a GTE by implementing model-based PHM without the need for additional instrumentation.
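The sequential estimation of a hidden degradation state from noisy health-indicator measurements can be sketched with a basic bootstrap particle filter; the regularization jitter at the end of each step gestures at the "regularized" part of the RAPF, but this is a deliberately simplified stand-in, and all model constants are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
# Hidden truth: turbine efficiency ratio (health indicator) drifting down.
T = 100
truth = 1.0 - 0.002 * np.arange(T) - 0.0005 * rng.random(T).cumsum()
meas = truth + rng.normal(0, 0.01, T)        # noisy indicator measurements

n_p = 2000
particles = np.full(n_p, 1.0)                # start from "as-new" condition
est = []
for z in meas:
    # Degradation (state-transition) model: slow stochastic drift.
    particles += rng.normal(-0.002, 0.002, n_p)
    # Weight by measurement likelihood (Gaussian, sigma = 0.01).
    w = np.exp(-0.5 * ((z - particles) / 0.01) ** 2)
    w /= w.sum()
    est.append(w @ particles)                # posterior-mean health estimate
    # Resample (multinomial for brevity) and add regularization jitter
    # to avoid particle impoverishment.
    particles = particles[rng.choice(n_p, n_p, p=w)]
    particles += rng.normal(0, 1e-4, n_p)

rmse = np.sqrt(np.mean((np.array(est) - truth) ** 2))
```

The filtered estimate smooths the measurement noise while following the degradation trend, which is the behavior the paper relies on to expose the turbine section's health trajectory between overhauls.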
The Impact of Advanced Greenhouse Gas Measurement Science on Policy Goals and Research Strategies
NASA Astrophysics Data System (ADS)
Abrahams, L.; Clavin, C.; McKittrick, A.
2016-12-01
In support of the Paris Agreement, accurate characterization of U.S. greenhouse gas (GHG) emissions estimates has been an area of increased scientific focus. Over the last several years, the scientific community has placed significant emphasis on understanding, quantifying, and reconciling measurement and modeling methods that characterize methane emissions from petroleum and natural gas sources. This work has prompted national policy discussions and led to the improvement of regional and national methane emissions estimates. Research campaigns focusing on reconciling atmospheric measurements ("top-down") and process-based emissions estimates ("bottom-up") have sought to identify where measurement technology advances could inform policy objectives. A clear next step is the development and deployment of advanced detection capabilities that could aid U.S. emissions mitigation and verification goals. The breadth of policy-relevant outcomes associated with advances in GHG measurement science is demonstrated by recent improvements in the petroleum and natural gas sector emission estimates in the EPA Greenhouse Gas Inventory, ambitious efforts to apply inverse modeling results to inform or validate the national GHG inventory, and outcomes from federal GHG measurement science technology development programs. In this work, we explore the variety of policy-relevant outcomes impacted by advances in GHG measurement science, with an emphasis on improving GHG inventory estimates, identifying emissions mitigation strategies, and informing technology development requirements.
Neonatal Jaundice Detection System.
Aydın, Mustafa; Hardalaç, Fırat; Ural, Berkan; Karap, Serhat
2016-07-01
Neonatal jaundice is a common condition that occurs in newborn infants in the first week of life. Current detection techniques require blood samples and other clinical testing with special equipment. The aim of this study is to create a non-invasive system that periodically monitors and detects jaundice, helping doctors with early diagnosis. In this work, first, a patient group consisting of jaundiced babies and a control group consisting of healthy babies were prepared; 40 jaundiced and 40 healthy newborns, between 24 and 48 h after birth, were chosen. Second, advanced image processing techniques were applied to images taken with a standard smartphone and a color calibration card. Segmentation, pixel similarity and white balancing were used as image processing techniques, and RGB values and relevant pixel information were extracted. Third, during the feature extraction stage, using colormap transformations and feature calculation, color change values were compared in the RGB plane against the specially designed 8-color calibration card. Finally, in the bilirubin level estimation stage, kNN and SVR machine learning regressions were applied to the dataset obtained from feature extraction. With the control group as the baseline for comparison, jaundice was successfully detected in the 40 jaundiced infants with a success rate of 85%. The bilirubin estimates were consistent with the bilirubin results obtained from the standard blood test, with a compliance rate of 85%.
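The kNN regression step above can be sketched in a few lines: predict a bilirubin level as the mean of the k nearest training samples in RGB feature space. The feature vectors and bilirubin values below are invented for illustration and are not the study's data.

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict bilirubin level as the mean of the k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance in RGB space
    nearest = np.argsort(d)[:k]
    return float(y_train[nearest].mean())

# Illustrative training data: mean skin RGB values -> bilirubin (mg/dL).
X_train = np.array([[200, 180, 120], [210, 190, 110], [180, 170, 150],
                    [220, 200, 100], [190, 180, 140], [205, 185, 115]], float)
y_train = np.array([12.0, 14.0, 6.0, 15.0, 7.0, 13.0])

pred = knn_regress(X_train, y_train, np.array([208.0, 188.0, 112.0]), k=3)
```

In practice the features would come from the calibrated, white-balanced image regions, and k and the distance metric would be tuned by cross-validation.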
Experimental equipment for an advanced ISOL (Isotope Separation On-Line) facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baktash, C.; Lee, I.Y.; Rehm, K.E.
This report summarizes the proceedings and recommendations of the Workshop on Experimental Equipment for an Advanced ISOL Facility, held at Lawrence Berkeley National Laboratory on July 22-25, 1998. The purpose of this workshop was to discuss the performance requirements, manpower and cost estimates, and schedule of the experimental equipment needed to fully exploit the new physics that can be studied at an advanced ISOL facility. An overview of the new physics opportunities that would be provided by such a facility was presented in the White Paper issued following the Columbus Meeting. The reactions and experimental techniques discussed in the Columbus White Paper served as a guideline for the formulation of the detector needs at the Berkeley Workshop. As outlined there, a new ISOL facility with intense, high-quality beams of radioactive nuclei would provide exciting new research opportunities in the areas of: the nature of nucleonic matter; the origin of the elements; and tests of the Standard Model. After an introductory section, the following equipment is discussed: gamma-ray detectors; recoil separators; magnetic spectrographs; particle detectors; targets; and apparatus using non-accelerated beams.
NASA Technical Reports Server (NTRS)
Revell, J. D.; Balena, F. J.; Koval, L. R.
1980-01-01
The acoustical treatment mass penalties required to achieve an interior noise level of 80 dBA for high speed, fuel efficient propfan-powered aircraft are determined. The prediction method used is based on theory developed for the outer shell dynamics, and a modified approach for add-on noise control element performance. The present synthesis of these methods is supported by experimental data. Three different sized aircraft are studied, including a widebody, a narrowbody and a business sized aircraft. Noise control penalties are calculated for each aircraft for two kinds of noise control designs: add-on designs, where the outer wall structure cannot be changed, and advanced designs where the outer wall stiffness level and the materials usage can be altered. For the add-on designs, the mass penalties range from 1.7 to 2.4 percent of the takeoff gross weight (TOGW) of the various aircraft, similar to preliminary estimates. Results for advanced designs show significant reductions of the mass penalties. For the advanced aluminum designs the penalties are 1.5% of TOGW, and for an all composite aircraft the penalties range from 0.74 to 1.4% of TOGW.
The road to NHDPlus — Advancements in digital stream networks and associated catchments
Moore, Richard B.; Dewald, Thomas A.
2016-01-01
A progression of advancements in Geographic Information Systems techniques for hydrologic network and associated catchment delineation has led to the production of the National Hydrography Dataset Plus (NHDPlus). NHDPlus is a digital stream network for hydrologic modeling with catchments and a suite of related geospatial data. Digital stream networks with associated catchments provide a geospatial framework for linking and integrating water-related data. Advancements in the development of NHDPlus are expected to continue to improve the capabilities of this national geospatial hydrologic framework. NHDPlus is built upon the medium-resolution NHD and, like NHD, was developed by the U.S. Environmental Protection Agency and U.S. Geological Survey to support the estimation of streamflow and stream velocity used in fate-and-transport modeling. Catchments included with NHDPlus were created by integrating vector information from the NHD and from the Watershed Boundary Dataset with the gridded land surface elevation as represented by the National Elevation Dataset. NHDPlus is an actively used and continually improved dataset. Users recognize the importance of a reliable stream network and associated catchments. The NHDPlus spatial features and associated data tables will continue to be improved to support regional water quality and streamflow models and other user-defined applications.
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Wang, W.; Ganguly, S.; Li, S.; Michaelis, A.; Higuchi, A.; Takenaka, H.; Nemani, R. R.
2017-12-01
New geostationary sensors such as the AHI (Advanced Himawari Imager on Himawari-8) and the ABI (Advanced Baseline Imager on GOES-16) have the potential to advance ecosystem modeling, particularly of diurnally varying phenomena, through frequent observations. These sensors have channels similar to those of MODIS (MODerate resolution Imaging Spectroradiometer), allowing us to utilize the knowledge and experience gained in MODIS data processing. Here, we developed a sub-hourly Gross Primary Production (GPP) algorithm, leveraging the MODIS MOD17 GPP algorithm. We ran the model at 1-km resolution over Japan and Australia using geo-corrected AHI data. Solar radiation was calculated directly from AHI using a neural network technique; the other necessary climate data were derived from weather stations and other satellite data. The sub-hourly estimates of GPP were first compared with ground-measured GPP at various Fluxnet sites. We also compared the AHI GPP with MOD17 GPP, and analyzed the differences in spatial patterns and the effect of diurnal changes in climate forcing. Because the sub-hourly GPP products require massive storage and strong computational power, we use the NEX (NASA Earth Exchange) facility to produce them. This GPP algorithm can be applied to other geostationary satellites, including GOES-16, in the future.
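The MOD17-style light-use-efficiency calculation behind such a GPP algorithm can be sketched as below: GPP is maximum light-use efficiency scaled down by temperature and VPD stress, times absorbed PAR. The parameter values and ramp end-points here are assumptions for illustration, not the published MOD17 biome parameters.

```python
def ramp(x, lo, hi):
    """Linear ramp scalar: 0 below lo, rising to 1 at hi."""
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def gpp_lue(sw_rad, fpar, tmin_c, vpd_pa,
            eps_max=0.001,                  # max light-use efficiency, kgC/MJ (assumed)
            tmin_lo=-8.0, tmin_hi=8.0,      # temperature ramp, deg C (assumed)
            vpd_lo=650.0, vpd_hi=3100.0):   # VPD ramp, Pa (assumed)
    """Sub-period GPP (kgC/m^2) from shortwave radiation (MJ/m^2), FPAR,
    daily-minimum temperature (deg C) and VPD (Pa), MOD17-style."""
    par = 0.45 * sw_rad                       # PAR as a fraction of shortwave
    f_t = ramp(tmin_c, tmin_lo, tmin_hi)      # cold-temperature scalar
    f_v = 1.0 - ramp(vpd_pa, vpd_lo, vpd_hi)  # high VPD suppresses GPP
    return eps_max * f_t * f_v * fpar * par

g = gpp_lue(sw_rad=12.0, fpar=0.7, tmin_c=10.0, vpd_pa=1000.0)
```

Run sub-hourly on geostationary radiation and VPD fields, this structure is what lets the diurnal cycle of climate forcing show up in the GPP estimates.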
Raman Hyperspectral Imaging of Microfossils: Potential Pitfalls
Olcott Marshall, Alison
2013-01-01
Abstract Initially, Raman spectroscopy was a specialized technique used by vibrational spectroscopists; however, due to rapid advancements in instrumentation and imaging techniques over the last few decades, Raman spectrometers are widely available at many institutions, allowing Raman spectroscopy to become a widespread analytical tool in mineralogy and other geological sciences. Hyperspectral imaging, in particular, has become popular because Raman spectroscopy can quickly delineate crystallographic and compositional differences in 2-D and 3-D at the micron scale. Although this rapid growth of applications to the Earth sciences has provided great insight across the geological sciences, the ease of application as the instruments become increasingly automated, combined with their use by nonspecialists, has resulted in the propagation of errors and misunderstandings throughout the field. For example, the literature now includes misassigned vibration modes, inappropriate spectral processing techniques, incorrectly estimated confocal depth of laser penetration into opaque crystalline solids, and a misconstrued understanding of the anisotropic nature of sp2 carbons. Key Words: Raman spectroscopy—Raman imaging—Confocal Raman spectroscopy—Disordered sp2 carbons—Hematite—Microfossils. Astrobiology 13, 920–931. PMID:24088070
Nonlinear problems in data-assimilation : Can synchronization help?
NASA Astrophysics Data System (ADS)
Tribbia, J. J.; Duane, G. S.
2009-12-01
Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and medium range. The ensemble techniques used are based on linear methods, and have been shown to be a useful indicator of skill in the linear range, where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, such as the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range, an ensemble assimilation and prediction method must be capable of giving probabilistic information for situations where a probability density forecast becomes multi-modal. A prototypical, simplest example of such a situation is the planetary-wave regime transition, where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be consistently treated using the paradigm of synchronization, which views the problems of assimilation and forecasting as that of optimizing the forecast model state with respect to the future evolution of the atmosphere.
NASA Technical Reports Server (NTRS)
Baumgardner, M. F. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The most significant result was the use of the temporal overlay technique where the computer was used to overlay ERTS-1 data from three different dates (9 Oct., 14 Nov., 2 Dec.). The registration of MSS digital data from different dates was estimated to be accurate within one half resolution element. The temporal overlay capability provides a significant advance in machine-processing of MSS data. It is no longer essential to go through the tedious exercise of locating ground observation sites on the digital data from each ERTS-1 overpass. Once the address of a ground observation site has been located on a digital tape from any ERTS-1 overpass, the overlay technique can be used to locate the same address on a digital tape of MSS data from any other ERTS-1 pass over the same area. The temporal overlay technique also adds a valuable dimension for identifying and mapping changes in vegetation, water, and other dynamic surface features.
NASA Technical Reports Server (NTRS)
Bentley, P. B.
1975-01-01
The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
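The velocity estimate at the heart of the CW Doppler technique follows from the standard Doppler equation, v = c·f_d / (2·f_0·cos θ), with c the speed of sound in tissue. The numeric values below are illustrative, not taken from the report.

```python
import math

def doppler_velocity(f_doppler_hz, f0_hz, theta_deg, c=1540.0):
    """Blood velocity (m/s) from a CW Doppler shift.

    c          -- speed of sound in soft tissue, ~1540 m/s
    f0_hz      -- transmitted ultrasound frequency
    theta_deg  -- angle between the beam and the flow direction
    """
    return c * f_doppler_hz / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

# Illustrative numbers: a 1.3 kHz shift at 5 MHz with a 60-degree beam angle.
v = doppler_velocity(f_doppler_hz=1300.0, f0_hz=5e6, theta_deg=60.0)
```

The digital processing improvements the report describes act on the measured spectrum of f_d; the conversion to velocity is this one-line relation.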
Myers, Teresa A.; Maibach, Edward; Peters, Ellen; Leiserowitz, Anthony
2015-01-01
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of the scientific agreement. The second experiment tested the effect of eliciting respondents’ estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an “estimation and reveal” technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested, and demonstrated that messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed. PMID:25812121
NASA Astrophysics Data System (ADS)
Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad
2018-03-01
Geostatistical methods are among the advanced techniques used for interpolation of groundwater quality data. The results obtained from geostatistics will be useful for decision makers in adopting suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used to evaluate groundwater quality parameters. Seven main quality parameters (total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na+), total hardness (TH), chloride (Cl-) and sulfate (SO4 2-)) were analyzed and interpreted by statistical and geostatistical methods. After data normalization by the Nscore method in WinGslib software, variography, a geostatistical tool for characterizing spatial correlation, was carried out, and experimental variograms were plotted in GS+ software. The best theoretical model was then fitted to each variogram based on the minimum residual sum of squares (RSS). Cross validation was used to determine the accuracy of the estimated data. Eventually, estimation maps of groundwater quality were prepared in WinGslib software, and estimation variance and estimation error maps were presented to evaluate the quality of the estimation at each estimated point. Results showed that the kriging method is more accurate than traditional interpolation methods.
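Ordinary kriging itself reduces to solving a small linear system: semivariances between sample points (plus a Lagrange multiplier enforcing that weights sum to one) against semivariances to the estimation point. The sketch below uses a spherical variogram with assumed parameters and invented sample values, not the Varamin plain data.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng_=300.0):
    """Spherical semivariogram model."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, np.where(h == 0, 0.0, g))

def ordinary_kriging(xy, z, x0):
    """Ordinary kriging estimate at x0 from samples at xy with values z."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)          # sample-to-sample semivariances
    A[n, n] = 0.0                     # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)           # weights sum to 1 via the multiplier

# Four wells on a square (coordinates in m) with illustrative EC-like values.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
z0 = ordinary_kriging(xy, z, np.array([50.0, 50.0]))
```

At the center of the square, symmetry forces equal weights, so the estimate is the sample mean; in real use the variogram model and its parameters come from the fitted experimental variogram.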
[Advanced online search techniques and dedicated search engines for physicians].
Nahum, Yoav
2008-02-01
In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.
NASA Technical Reports Server (NTRS)
Ding, Robert J.
2010-01-01
Some of the applications of advanced welding techniques are shown in this poster presentation. Included are brief explanations of the use on the Ares I and Ares V launch vehicle and on the Space Shuttle Launch vehicle. Also included are microstructural views from four advanced welding techniques: Variable Polarity Plasma Arc (VPPA) weld (fusion), self-reacting friction stir welding (SR-FSW), conventional FSW, and Tube Socket Weld (TSW) on aluminum.
Gite, Venkat A; Nikose, Jayant V; Bote, Sachin M; Patil, Saurabh R
2017-07-02
Many techniques have been described to correct anterior hypospadias, with variable results. Anterior urethral advancement as a one-stage technique was first described by Ti Chang Shing in 1984. It has also been used for the repair of strictures and urethrocutaneous fistulae involving the distal urethra. We report our experience using this technique, with some modification, for the repair of anterior hypospadias. Between 2013 and 2015, 20 cases of anterior hypospadias (2 glanular, 3 coronal, 12 subcoronal and 3 distal penile) were treated with the anterior urethral advancement technique. Patients' ages ranged from 18 months to 10 years. Postoperatively, patients passed urine from the tip of the neomeatus with a satisfactory stream during a follow-up period of 6 months to 2 years. There were no major complications in any of our patients except one, who developed meatal stenosis that was treated by periodic dilatation. Threefold urethral mobilization was sufficient in all cases. The anterior urethral advancement technique is a single-stage procedure with good cosmetic results and minimal complications for anterior hypospadias repair in properly selected cases.
Guevara-Oquendo, Víctor H; Zhang, Huihua; Yu, Peiqiang
2018-04-13
To date, advanced synchrotron-based and globar-sourced techniques are almost unknown to food and feed scientists, and there has been little application of these advanced techniques to the study of blend pellet products at a molecular level. This article aims to review recent research on the contributions of advanced synchrotron and globar vibrational molecular spectroscopy to blend pellet product research on molecular structure and molecular nutrition interaction, and on how processing-induced molecular structure changes relate to nutrient availability and utilization of blend pellet products. The review covers: utilization of co-product components for blend pellet products in North America; utilization and benefits of the inclusion of pulse screenings; utilization of additives in blend pellet products; application of pellet processing in blend pellet products; and conventional evaluation techniques and methods for blend pellet products. It focuses on recent applications of cutting-edge vibrational molecular spectroscopy to molecular structure and its association with nutrient utilization in blend pellet products. The information described in this article gives better insight into how advanced molecular (micro)spectroscopy contributes to advances in blend pellet product research on molecular structure and molecular nutrition interaction.
Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E
2016-06-07
An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. 
This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
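The post-stratification step in the study above is, at its core, a population-weighted average of model-based group estimates. The sketch below illustrates that step for one micro area; the age groups, prevalences, and population counts are invented, not taken from the CCHS data.

```python
import numpy as np

def poststratify(prev_by_group, pop_by_group):
    """Population-weighted prevalence for one micro area:
    model-based prevalence per stratum, weighted by census counts."""
    prev = np.asarray(prev_by_group, float)
    pop = np.asarray(pop_by_group, float)
    return float((prev * pop).sum() / pop.sum())

# Illustrative smoking prevalence by age group and the micro area's
# census population per group (assumed values).
prev = [0.28, 0.22, 0.15, 0.10]
pop = [120, 180, 150, 50]

p = poststratify(prev, pop)
```

In the full model the per-stratum prevalences are posterior draws from the Bayesian hierarchical model, so this weighting is applied draw by draw to propagate uncertainty into the micro area estimate.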
NASA Technical Reports Server (NTRS)
Ivanco, Marie L.; Domack, Marcia S.; Stoner, Mary Cecilia; Hehir, Austin R.
2016-01-01
Low Technology Readiness Levels (TRLs) and high levels of uncertainty make it challenging to develop cost estimates of new technologies in the R&D phase. It is however essential for NASA to understand the costs and benefits associated with novel concepts, in order to prioritize research investments and evaluate the potential for technology transfer and commercialization. This paper proposes a framework to perform a cost-benefit analysis of a technology in the R&D phase. This framework was developed and used to assess the Advanced Near Net Shape Technology (ANNST) manufacturing process for fabricating integrally stiffened cylinders. The ANNST method was compared with the conventional multi-piece metallic construction and composite processes for fabricating integrally stiffened cylinders. Following the definition of a case study for a cryogenic tank cylinder of specified geometry, data was gathered through interviews with Subject Matter Experts (SMEs), with particular focus placed on production costs and process complexity. This data served as the basis to produce process flowcharts and timelines, mass estimates, and rough order-of-magnitude cost and schedule estimates. The scalability of the results was subsequently investigated to understand the variability of the results based on tank size. Lastly, once costs and benefits were identified, the Analytic Hierarchy Process (AHP) was used to assess the relative value of these achieved benefits for potential stakeholders. These preliminary, rough order-of-magnitude results predict a 46 to 58 percent reduction in production costs and a 7-percent reduction in weight over the conventional metallic manufacturing technique used in this study for comparison. Compared to the composite manufacturing technique, these results predict cost savings of 35 to 58 percent; however, the ANNST concept was heavier. 
In this study, the investment in equipment required for the ANNST method was predicted to be recouped after the production of ten cryogenic tank barrels, when compared with conventional metallic manufacturing. The AHP study results revealed that decreased final cylinder mass and improved quality assurance were the most valued benefits of cylinder manufacturing methods, emphasizing the relevance of the benefits achieved with the ANNST process for future projects.
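The AHP step used above can be sketched as follows: stakeholder judgments form a pairwise comparison matrix, and the normalized principal eigenvector gives the priority weights (Saaty's method), with a consistency ratio as a sanity check. The criteria and judgment values below are illustrative assumptions, not the comparisons elicited in the study.

```python
import numpy as np

# Pairwise comparisons among three benefit criteria (assumed judgments):
# rows/cols: cylinder mass, quality assurance, production cost.
A = np.array([
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # normalized priority weights

# Consistency check: CI = (lambda_max - n) / (n - 1); RI(3) ~ 0.58.
n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)
cr = ci / 0.58                        # acceptable when below ~0.1
```

With consistent judgments (cr below about 0.1), the weights `w` rank the benefits; here mass would outrank quality assurance, which outranks cost.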
Haghighat, F; Lee, C S; Ghaly, W S
2002-06-01
The measurement and prediction of building material emission rates have been the subject of intensive research over the past decade, resulting in the development of advanced sensory and chemical analysis measurement techniques as well as analytical and numerical models. One of the important input parameters for these models is the diffusion coefficient, and several experimental techniques have been applied to estimate it. An extensive literature review of the techniques used to measure this coefficient was carried out for building materials exposed to volatile organic compounds (VOC). This paper reviews these techniques; it also analyses the results and discusses the possible causes of the differences in the reported data. It was noted that the discrepancies between results were mainly due to the assumptions made in, and the techniques used to analyze, the data. For a given technique, the results show that there can be a difference of up to 700% in the reported data. Moreover, the paper proposes what is referred to as the mass exchanger method, to calculate diffusion coefficients considering both diffusion and convection. The results obtained by this mass exchanger method were compared with those obtained by the existing method considering only diffusion. It was demonstrated that, for porous materials, the convection resistance cannot be ignored when compared with the diffusion resistance.
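The abstract does not detail the mass exchanger method, but the diffusion-plus-convection idea it names can be illustrated with a simple series-resistance sketch: the overall transfer coefficient combines an internal diffusion resistance (L/D) and an external convection resistance (1/h), so ignoring convection underestimates D. All symbols and values below are illustrative assumptions, not the paper's formulation.

```python
def effective_diffusivity(k_overall, h_conv, thickness):
    """Back out a material diffusion coefficient D (m^2/s) from an overall
    transfer coefficient when convection is accounted for, assuming a
    series-resistance model: 1/k = 1/h + L/D  =>  D = L / (1/k - 1/h)."""
    r_diff = 1.0 / k_overall - 1.0 / h_conv   # remaining (diffusion) resistance
    return thickness / r_diff

# Illustrative numbers: overall coefficient 1e-6 m/s, convective coefficient
# 5e-6 m/s, slab thickness 1 cm.
D = effective_diffusivity(k_overall=1e-6, h_conv=5e-6, thickness=0.01)

# Ignoring convection entirely attributes all resistance to diffusion:
D_no_conv = 0.01 * 1e-6   # = L * k, an underestimate in this example
```

With these numbers the diffusion-only estimate is 20% below the convection-corrected one, echoing the paper's point that convection resistance cannot be neglected for porous materials.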
Short-term spatial and temporal variability in greenhouse gas fluxes in riparian zones.
Vidon, P; Marchese, S; Welsh, M; McMillan, S
2015-08-01
Recent research indicates that riparian zones have the potential to contribute significant amounts of greenhouse gases (GHG: N2O, CO2, CH4) to the atmosphere. Yet, the short-term spatial and temporal variability in GHG emission in these systems is poorly understood. Using two transects of three static chambers at two North Carolina agricultural riparian zones (one restored, one unrestored), we show that estimates of the average GHG flux at the site scale can vary by one order of magnitude depending on whether the mean or the median is used as a measure of central tendency. Because the median tends to mute the effect of outlier points (hot spots and hot moments), we propose that both must be reported or that other more advanced spatial averaging techniques (e.g., kriging, area-weighted average) should be used to estimate GHG fluxes at the site scale. Results also indicate that short-term temporal variability in GHG fluxes (a few days) under seemingly constant temperature and hydrological conditions can be as large as spatial variability at the site scale, suggesting that the scientific community should rethink sampling protocols for GHG at the soil-atmosphere interface to include repeated measures over short periods of time at select chambers to estimate GHG emissions in the field. Although recent advances in technology provide tools to address these challenges, their cost is often too high for widespread implementation. Until technology improves, sampling design strategies will need to be carefully considered to balance cost, time, and spatial and temporal representativeness of measurements.
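The mean-versus-median sensitivity the abstract describes is easy to reproduce: one "hot spot" chamber pulls the mean far above the median. The flux values below are invented for illustration (think N2O in ug N per m^2 per h), not field data from the two sites.

```python
import numpy as np

# Five ordinary chambers plus one hot spot.
fluxes = np.array([2.0, 3.0, 1.5, 2.5, 4.0, 95.0])

mean_flux = float(fluxes.mean())       # dominated by the hot spot
median_flux = float(np.median(fluxes)) # mutes the outlier
ratio = mean_flux / median_flux
```

Here the two measures of central tendency differ by roughly a factor of six, which is why the authors recommend reporting both, or using spatial averaging schemes such as kriging or area-weighted means.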
Scholz, Stefan; Mittendorf, Thomas
2014-12-01
Rheumatoid arthritis (RA) is a chronic, inflammatory disease with severe effects on the functional ability of patients. Given a prevalence of 0.5 to 1.0 percent in western countries, new treatment options are a major concern for decision makers with regard to their budget impact. In this context, cost-effectiveness analyses are a helpful tool for evaluating new treatment options for reimbursement schemes. The objective was to analyze and compare decision analytic modeling techniques and to explore their use in RA with regard to their advantages and shortcomings. A systematic literature review was conducted in PubMed, and 58 studies reporting health economic decision models were analyzed with regard to the modeling technique used. Of the 58 reviewed publications, 13 reported decision tree analysis, 25 (cohort) Markov models, 13 individual sampling methods (ISM) and seven discrete event simulations (DES); 26 studies presented independently developed models and 32 presented adaptations. The modeling techniques used were found to differ in their complexity and in the number of treatment options compared. Methodological features are presented in the article and a comprehensive overview of the cost-effectiveness estimates is given in Additional files 1 and 2. Compared to the other modeling techniques, ISM and DES have advantages in covering patient heterogeneity, and DES can additionally model more complex treatment sequences and competing risks in RA patients. Nevertheless, sufficient data must be available to avoid assumptions in ISM and DES exercises that could bias the results. Due to the different settings, time frames and interventions in the reviewed publications, no direct comparison of modeling techniques was possible.
The results from other indications suggest that incremental cost-effectiveness ratios (ICERs) do not differ significantly between Markov and DES models, but DES is able to report more outcome parameters. Given a sufficient data supply, DES is the modeling technique of choice when modeling cost-effectiveness in RA. Otherwise, transparency about the data inputs is crucial for valid results and for informing decision makers about possible biases. With regard to ICERs, Markov models might provide estimates similar to those of more advanced modeling techniques.
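The cohort Markov technique that dominates this literature can be sketched in a few lines. Everything below is illustrative only: the three health states, transition probabilities, per-cycle costs, and utilities are invented, not calibrated to any RA data.

```python
# Minimal three-state cohort Markov model (remission / active / dead)
# with invented transition probabilities, per-cycle costs, and utilities.
def run_markov(trans, cost, utility, cycles=40, start=(1.0, 0.0, 0.0)):
    """Propagate the cohort distribution and accumulate per-patient
    (undiscounted) expected cost and QALYs."""
    dist = list(start)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += sum(d * c for d, c in zip(dist, cost))
        total_qaly += sum(d * u for d, u in zip(dist, utility))
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# Rows: from-state; columns: to-state (remission, active, dead).
comparator = [[0.80, 0.15, 0.05],
              [0.10, 0.85, 0.05],
              [0.00, 0.00, 1.00]]
new_drug   = [[0.90, 0.06, 0.04],   # hypothetical drug retains remission better
              [0.20, 0.76, 0.04],
              [0.00, 0.00, 1.00]]

c0, q0 = run_markov(comparator, cost=[1000, 4000, 0], utility=[0.85, 0.55, 0.0])
c1, q1 = run_markov(new_drug,   cost=[6000, 4000, 0], utility=[0.85, 0.55, 0.0])
icer = (c1 - c0) / (q1 - q0)   # incremental cost per QALY gained
```

ISM and DES generalize this by simulating individual patients (with their own covariates and event times) rather than a homogeneous cohort distribution, which is what lets them capture heterogeneity and complex treatment sequences.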
VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.
Little, Todd D; Wang, Eugene W; Gorrall, Britt K
2017-06-01
This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A Phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the fewest grid points, degrees of freedom, and operations. Schemes of this type automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The Phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The Phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
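The refine-where-the-error-is-large loop at the heart of such h-adaptive schemes can be sketched in one dimension. Everything below is illustrative, not the paper's scheme: a steep tanh profile stands in for a shock, and a simple interpolation-error indicator stands in for the paper's error estimators.

```python
# Toy 1-D h-adaptivity sketch: split any cell whose local error indicator
# exceeds a tolerance, so resolution concentrates where the solution
# varies rapidly (here, a sharp feature near x = 0.5).
import math

def f(x):
    return math.tanh(50.0 * (x - 0.5))   # steep "shock-like" profile

def error_indicator(a, b):
    # Deviation of f from the cell's linear interpolant, sampled at
    # three interior points: a cheap interpolation-error estimate.
    err = 0.0
    for t in (0.25, 0.5, 0.75):
        x = a + t * (b - a)
        lin = f(a) + t * (f(b) - f(a))
        err = max(err, abs(f(x) - lin))
    return err

def adapt(cells, tol=1e-2, max_passes=10):
    for _ in range(max_passes):
        new_cells, refined = [], False
        for a, b in cells:
            if error_indicator(a, b) > tol:
                m = 0.5 * (a + b)
                new_cells += [(a, m), (m, b)]   # refine: split the cell
                refined = True
            else:
                new_cells.append((a, b))        # keep: already accurate
        cells = new_cells
        if not refined:
            break
    return cells

cells = adapt([(0.0, 1.0)])
smallest = min(b - a for a, b in cells)   # tiny cells cluster near x = 0.5
largest = max(b - a for a, b in cells)    # coarse cells survive where f is flat
```

The same accept/split decision, driven by a rigorous a posteriori error estimate and applied to faces and elements of a 3-D mesh, is what delivers a target accuracy with far fewer degrees of freedom than uniform refinement.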
NASA Astrophysics Data System (ADS)
Lin-Liu, Y. R.; Chan, V. S.; Luce, T. C.; Prater, R.
1998-11-01
Owing to the relativistic mass shift in the cyclotron resonance condition, a simple and accurate interpolation formula for estimating the current drive efficiency, such as those of S. C. Chiu et al. [Nucl. Fusion 29, 2175 (1989)] and D. A. Ehst and C. F. F. Karney [Nucl. Fusion 31, 1933 (1991)] commonly used for FWCD, is not available in the case of ECCD. In this work, we model ECCD using adjoint techniques. A semi-analytic adjoint function appropriate for general tokamak geometry is obtained using Fisch's relativistic collision model. Predictions of off-axis ECCD agree qualitatively and semi-quantitatively with those of Cohen [R. H. Cohen, Phys. Fluids 30, 2442 (1987)], currently implemented in the ray-tracing code TORAY. The dependences of the current drive efficiency on the wave launch configuration and the plasma parameters will be presented. Strong absorption of the wave away from the resonance layer is shown to be an important factor in optimizing off-axis ECCD for application to advanced tokamak operations.
Advanced Software V&V for Civil Aviation and Autonomy
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2017-01-01
With the advances in high-performance computing platforms (e.g., advanced graphics processing units or multi-core processors), computationally intensive software techniques such as those used in artificial intelligence or formal methods have given us an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building in safety at design time, as in aircraft engines or software verification and validation, and others can introduce safety benefits during operations as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied ever earlier in the design life cycle and thus also help reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.
Genetics of pediatric obesity.
Manco, Melania; Dallapiccola, Bruno
2012-07-01
The onset of obesity has shifted to increasingly earlier ages, and its prevalence has dramatically increased worldwide over the past decades. Epidemic obesity is mainly attributable to modern lifestyle, but family studies prove the significant role of genes in an individual's predisposition to obesity. Advances in genotyping technologies have raised great hopes and expectations that genetic testing will pave the way to personalized medicine and that complex traits such as obesity will be prevented even before birth. Given the pressing offer of direct-to-consumer genetic testing services from private companies to estimate an individual's risk for complex phenotypes including obesity, the present review offers pediatricians an update on the state of the art in the genomics of obesity in childhood. Discrepancies with respect to the genomics of adult obesity are discussed. After an appraisal of findings from genome-wide association studies in pediatric populations, the rare variant-common disease hypothesis, the theoretical soil for next-generation sequencing techniques, is discussed as opposed to the common disease-common variant hypothesis. Next-generation sequencing techniques are expected to fill the gap of the "missing heritability" of obesity, identifying rare variants associated with the trait and clarifying the role of epigenetics in its heritability. Pediatric obesity emerges as a complex phenotype, modulated by unique gene-environment interactions that occur in specific periods of life and are "permissive" for the programming of adult obesity. With the advent of next-generation sequencing techniques and advances in the field of exposomics, developing sensitive and specific tools to predict obesity risk as early as possible is the challenge for the next decade.
Deb, Dibyendu; Singh, J P; Deb, Shovik; Datta, Debajit; Ghosh, Arunava; Chaurasia, R S
2017-10-20
Determination of the above-ground biomass (AGB) of any forest is a longstanding scientific endeavor, which helps to estimate net primary productivity, carbon stock, and other biophysical parameters of that forest. With the advancement of geospatial technology over the last few decades, AGB estimation can now be done using space-borne and airborne remotely sensed data. It is a well-established, time-saving, and cost-effective technique with high precision and is frequently applied by the scientific community. It involves the development of allometric equations based on correlations of ground-based forest biomass measurements with vegetation indices derived from remotely sensed data. However, selecting the best-fit and most explanatory model of biomass estimation often becomes a difficult proposition with respect to the image data resolution (spatial and spectral) as well as the sensor platform position in space. Using Resourcesat-2 satellite data and the Normalized Difference Vegetation Index (NDVI), this pilot-scale study compared traditional linear and nonlinear models with an artificial-intelligence-based non-parametric technique, the artificial neural network (ANN), to formulate the best-fit model for determining the AGB of forest in the Bundelkhand region of India. The results confirmed the superiority of the ANN over the other models in terms of several statistical significance and reliability assessment measures. Accordingly, this study proposed the use of ANN instead of traditional models for determining AGB and other biophysical parameters of any dry deciduous forest in a tropical sub-humid or semi-arid area. In addition, large numbers of sampling sites with different quadrant sizes for trees, shrubs, and herbs, as well as the application of LiDAR data as a predictor variable, were recommended for very-high-precision ANN modelling in a large-scale study.
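Why a nonlinear learner can beat linear regression on NDVI-AGB data is easy to demonstrate. The sketch below is entirely synthetic: the exponential NDVI-AGB relation, the noise level, and the tiny hand-rolled network are all invented stand-ins for the study's data and ANN.

```python
# Compare a linear least-squares fit against a one-hidden-layer tanh
# network on synthetic NDVI -> AGB data with a nonlinear ground truth.
import numpy as np

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.2, 0.8, 200)
agb = 5.0 * np.exp(3.0 * ndvi) + rng.normal(0.0, 2.0, 200)  # invented truth

# Linear baseline: agb ~ a * ndvi + b
A = np.column_stack([ndvi, np.ones_like(ndvi)])
coef, *_ = np.linalg.lstsq(A, agb, rcond=None)
rmse_lin = float(np.sqrt(np.mean((A @ coef - agb) ** 2)))

# Tiny neural network trained by full-batch gradient descent on
# standardized data (no ML library needed at this scale).
x = (ndvi[:, None] - ndvi.mean()) / ndvi.std()
y = agb[:, None]
ys = (y - y.mean()) / y.std()
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(8000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    g = 2.0 * (pred - ys) / len(ys)       # d(MSE)/d(pred)
    gh = (g @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(0)

pred_agb = (np.tanh(x @ W1 + b1) @ W2 + b2) * y.std() + y.mean()
rmse_ann = float(np.sqrt(np.mean((pred_agb - y) ** 2)))
```

The linear model carries an irreducible bias from the curvature of the true relation, while the network can approach the noise floor, which mirrors the study's finding that the ANN outperformed the traditional models.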
Li, Xin; Ou, Xingtao; Li, Zhi; Wei, Henglu; Zhou, Wei; Duan, Zhemin
2018-01-01
Dynamic thermal management (DTM) mechanisms utilize embedded thermal sensors to collect fine-grained temperature information for monitoring the real-time thermal behavior of multi-core processors. However, embedded thermal sensors are very susceptible to a variety of sources of noise, including environmental uncertainty and process variation. This causes discrepancies between actual temperatures and those observed by on-chip thermal sensors, which seriously affect the efficiency of DTM. In this paper, a smoothing filter-based Kalman prediction technique is proposed to accurately estimate the temperatures from noisy sensor readings. For the multi-sensor estimation scenario, the spatial correlations among different sensor locations are exploited. On this basis, a multi-sensor synergistic calibration algorithm (known as MSSCA) is proposed to improve the simultaneous prediction accuracy of multiple sensors. Moreover, an infrared imaging-based temperature measurement technique is also proposed to capture the thermal traces of an advanced micro devices (AMD) quad-core processor in real time. The acquired real temperature data are used to evaluate our prediction performance. Simulation shows that the proposed synergistic calibration scheme can reduce the root-mean-square error (RMSE) by 1.2 °C and increase the signal-to-noise ratio (SNR) by 15.8 dB (with a very small average runtime overhead) compared with assuming the thermal sensor readings to be ideal. Additionally, the average false alarm rate (FAR) of the corrected sensor temperature readings can be reduced by 28.6%. These results clearly demonstrate that if our approach is used to perform temperature estimation, the response mechanisms of DTM can be triggered to adjust the voltages, frequencies, and cooling fan speeds at more appropriate times. PMID:29393862
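As a generic illustration of the filtering idea (not the paper's MSSCA algorithm), even a scalar Kalman filter with an assumed random-walk temperature model suppresses much of the sensor noise; the temperature trace, noise level, and variances below are invented:

```python
# Scalar Kalman filter de-noising synthetic thermal-sensor readings.
import math, random

random.seed(1)
true_temp = [60.0 + 10.0 * math.sin(0.02 * k) for k in range(400)]  # °C
readings = [t + random.gauss(0.0, 2.0) for t in true_temp]          # noisy sensor

def kalman_filter(zs, q=0.1, r=4.0):
    """Random-walk state model: x_k = x_{k-1} + w, z_k = x_k + v,
    with process variance q and measurement variance r (both assumed)."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new measurement
        p *= (1.0 - k)
        out.append(x)
    return out

est = kalman_filter(readings)
rmse_raw = math.sqrt(sum((z - t) ** 2 for z, t in zip(readings, true_temp)) / 400)
rmse_kf = math.sqrt(sum((x - t) ** 2 for x, t in zip(est, true_temp)) / 400)
```

The paper's contribution layers smoothing and multi-sensor spatial correlation on top of this basic predict/update cycle to calibrate many sensors at once.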
NASA Astrophysics Data System (ADS)
Branson, O.; Vetter, L.; Fehrenbacher, J. S.; Spero, H. J.
2016-12-01
The geochemical variability between individual foraminifera within single core intervals records both palaeoceanographic conditions and ecology. Within the biological context of foraminiferal species, this population variability may be interpreted to provide unparalleled paleoenvironmental information. For example, coupled trace element and stable isotope analyses of single O. universa offer a powerful tool for reconstructing the δ18O of Laurentide Ice Sheet (LIS) meltwater, by calculating the intercept between temperature-corrected δ18O water and Ba/Ca salinity estimates (Vetter et al., in review). This offers valuable insights into the dynamics of ice sheet melting at the end of the last glacial maximum. Here we apply similar coupled single-shell laser ablation (LA-ICP-MS) and isotope ratio mass spectrometry (IRMS) techniques to explore the δ18O of Laurentide meltwater during H4 and bracketing intervals. The application of these methods to down-core samples requires the development of robust LA-ICP-MS data processing techniques to identify primary signals within Ba-contaminated samples, and careful consideration of palaeo Ba/Ca-salinity relationships. Our analyses offer a significant advance in systematic LA-ICP-MS data processing methods, offer constraints on the variability of riverine Ba fluxes, and ultimately provide δ18O estimates of LIS meltwater during H4.
Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2017-12-01
We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
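The train-then-close-the-loop pattern the authors describe can be shown with a minimal echo-state network. All choices below are illustrative (reservoir size 200, spectral radius 0.9, a toy sine input in place of a chaotic signal, and ridge regression for the output weights); the sketch demonstrates the autonomous-reservoir construction, not the Lyapunov-exponent extraction.

```python
# Minimal reservoir-computing sketch: drive a random tanh reservoir with
# a signal, fit linear output weights to predict the next value, then
# feed the predictions back to run the reservoir autonomously.
import numpy as np

rng = np.random.default_rng(42)
N, T_train, T_auto = 200, 1000, 100
u = np.sin(0.1 * np.arange(T_train + 1))           # toy input signal

W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, N)

# Drive the reservoir and record its states.
r = np.zeros(N)
states = np.empty((T_train, N))
for k in range(T_train):
    r = np.tanh(W @ r + W_in * u[k])
    states[k] = r

# Ridge regression for the output weights: state r_k -> next value u[k+1].
beta = 1e-6
R = states[100:]                                   # discard the transient
Y = u[101:T_train + 1]
W_out = np.linalg.solve(R.T @ R + beta * np.eye(N), R.T @ Y)
rmse = float(np.sqrt(np.mean((R @ W_out - Y) ** 2)))   # one-step fit error

# Autonomous (closed-loop) reservoir: predictions become the input.
auto = []
y = u[T_train]
for _ in range(T_auto):
    r = np.tanh(W @ r + W_in * y)
    y = W_out @ r
    auto.append(float(y))
```

In the paper's setting the autonomous reservoir reproduces the attractor's "climate", and because its equations are known in closed form, the Jacobians needed for Lyapunov exponents can be computed directly from W, W_in, and W_out.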
García-Gómez, Joaquín; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor
2018-01-01
Pipeline inspection is a topic of particular interest to companies, and defect sizing is especially important because it allows them to avoid subsequent costly repairs to their equipment. A solution for this issue is using ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that it does not require direct contact with the surface of the material under investigation, which must be conductive. Of particular interest is meander-line-coil-based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal as it passes through the pipeline. Because of this, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and to select the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE). PMID:29518927
Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P
2018-01-01
Advancements in medical research and intelligent modeling techniques have led to developments in anaesthesia management. The present study is targeted at estimating the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and the relative wave energy is then computed from the approximation and detail coefficients of the sub-band signals. Relative wave energy is extracted to find the degree of importance of the different electroencephalogram frequency bands associated with the anaesthetic phases awake, induction, maintenance, and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia, and deep anaesthesia. A novel depth-of-anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system based on a fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth-of-anaesthesia index is compared with a commercially available depth-of-anaesthesia monitor, the Bispectral Index.
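Relative wave energy is simply each sub-band's share of the total signal energy after a multi-level wavelet decomposition. The sketch below uses a hand-rolled Haar transform and a synthetic two-tone signal as stand-ins for the study's mother wavelet and EEG data:

```python
# Relative wavelet energy from a 4-level Haar DWT of a synthetic signal.
import math

def haar_step(x):
    # One level of the orthonormal Haar transform: averages and details.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def relative_wave_energy(x, levels=4):
    energies = []
    a = list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(sum(v * v for v in d))   # detail sub-band energy
    energies.append(sum(v * v for v in a))        # final approximation energy
    total = sum(energies)
    return [e / total for e in energies]          # shares sum to 1 (Parseval)

# Synthetic "EEG": a dominant slow oscillation plus a weaker fast one.
n = 256
sig = [math.sin(2 * math.pi * 4 * i / n) + 0.5 * math.sin(2 * math.pi * 60 * i / n)
       for i in range(n)]
rwe = relative_wave_energy(sig)   # [d1, d2, d3, d4, a4] energy fractions
```

Because the slow component dominates, most of the energy lands in the deepest approximation band; shifts of this energy distribution across bands are exactly what the study's Kruskal-Wallis test probes across anaesthetic states.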
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or better). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are: A) Combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications. B) Quantify the impact of these methods on software reliability. C) Demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level. D) Quantify and justify the reliability estimate for systems developed using these methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Peter; Damiani, Rick R.; Dykes, Katherine
2017-01-09
A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning-and-extrapolation technique currently described by the standard and of the importance sampling (IS) method to estimate load probabilities of exceedance (POEs). Whereas a Monte Carlo (MC) approach would reach the sought level of POE only with a daunting number of simulations, IS-based techniques are promising because they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of the tuning of its subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
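The advantage of importance sampling over crude Monte Carlo for small exceedance probabilities can be shown on a textbook toy problem (a Gaussian tail, not a FAST load channel, and without the stratification or adaptivity of ASIS):

```python
# Estimate the small exceedance probability P(X > 4) for X ~ N(0, 1)
# by crude Monte Carlo and by importance sampling from N(4, 1).
import math, random

random.seed(7)
THRESH = 4.0
N = 20000

# Crude Monte Carlo: with P ~ 3e-5, 20000 samples rarely score a hit.
mc_hits = sum(1 for _ in range(N) if random.gauss(0.0, 1.0) > THRESH)
p_mc = mc_hits / N

# Importance sampling: draw x ~ N(4, 1), which lands in the tail often,
# and reweight by the density ratio phi(x)/phi(x - 4) = exp(8 - 4x).
total = 0.0
for _ in range(N):
    x = random.gauss(THRESH, 1.0)
    if x > THRESH:
        total += math.exp(8.0 - 4.0 * x)
p_is = total / N

p_exact = 0.5 * math.erfc(THRESH / math.sqrt(2))   # ~3.17e-5
```

The IS estimate is within a few percent of the exact value with the same budget that leaves crude MC with essentially no hits; ASIS layers stratification and adaptive placement of the sampling effort on top of this idea.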
Muñoz-Huerta, Rafael F.; Guevara-Gonzalez, Ramon G.; Contreras-Medina, Luis M.; Torres-Pacheco, Irineo; Prado-Olivarez, Juan; Ocampo-Velazquez, Rosalia V.
2013-01-01
Nitrogen (N) plays a key role in the plant life cycle. It is the main plant mineral nutrient needed for chlorophyll production and other plant cell components (proteins, nucleic acids, amino acids). Crop yield is affected by plant N status. Thus, the optimization of nitrogen fertilization has become the object of intense research due to its environmental and economic impact. This article focuses on reviewing current methods and techniques used to determine plant N status. Kjeldahl digestion and Dumas combustion have been used as reference methods for N determination in plants, but they are destructive and time consuming. By using spectroradiometers, reflectometers, imagery from satellite sensors and digital cameras, optical properties have been measured to estimate N in plants, such as crop canopy reflectance, leaf transmittance, chlorophyll and polyphenol fluorescence. High correlation has been found between optical parameters and plant N status, and those techniques are not destructive. However, some drawbacks include chlorophyll saturation, atmospheric and soil interference, and the high cost of instruments. Electrical properties of plant tissue have been used to estimate quality in fruits, and water content in plants, as well as nutrient deficiency, which suggests that they have potential for use in plant N determination. PMID:23959242
Nathan, Lucas M; Simmons, Megan; Wegleitner, Benjamin J; Jerde, Christopher L; Mahon, Andrew R
2014-11-04
The use of molecular surveillance techniques has become popular among aquatic researchers and managers due to their improved sensitivity and efficiency compared to traditional sampling methods. Rapid expansion in the use of environmental DNA (eDNA), paired with the advancement of molecular technologies, has resulted in new detection platforms and techniques. In this study we present a comparison of three eDNA surveillance platforms: traditional polymerase chain reaction (PCR), quantitative PCR (qPCR), and droplet digital PCR (ddPCR), in which water samples were collected over a 24 h period from mesocosm experiments containing a gradient of invasive species densities. All platforms reliably detected the presence of DNA within the first hour, even at low target organism densities. The two quantitative platforms (qPCR and ddPCR) produced similar estimates of DNA concentrations. The analysis completed with ddPCR was faster from sample collection through analysis and cost approximately half as much as qPCR. Although a newer platform for eDNA surveillance of aquatic species, ddPCR was consistent with the more commonly used qPCR and is a cost-effective means of estimating DNA concentrations. Use of ddPCR by researchers and managers should be considered in future eDNA surveillance applications.
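The quantification step in ddPCR rests on Poisson statistics: each droplet receives a random number of template copies, so the concentration can be recovered from the fraction of positive droplets. This is the generic ddPCR calculation, not this study's pipeline, and the droplet counts and nominal droplet volume below are invented for illustration:

```python
# ddPCR concentration from the positive-droplet fraction p:
# mean copies per droplet lambda = -ln(1 - p).
import math

def copies_per_droplet(n_positive, n_total):
    p = n_positive / n_total
    return -math.log(1.0 - p)

def concentration(n_positive, n_total, droplet_volume_nl=0.85):
    # Copies per microlitre, assuming a nominal droplet volume in nL.
    lam = copies_per_droplet(n_positive, n_total)
    return lam / (droplet_volume_nl * 1e-3)

lam = copies_per_droplet(4000, 20000)   # 20% positive droplets
```

Note that lambda exceeds the naive positive fraction (0.223 vs. 0.20 here) because some positive droplets contain more than one copy; this built-in correction is what makes ddPCR absolute quantification possible without a standard curve.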
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure the transit of the vehicle through critical points on the road, such as at-grade intersections and bridges. This article presents a comprehensive procedure for determining the reliability and load-bearing capacity level of existing bridges on highways and roads, using advanced methods of reliability analysis based on simulation techniques of the Monte Carlo type in combination with nonlinear finite element analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by a fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
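LHS itself is a small algorithm: each input variable is split into equal-probability strata, one sample is drawn per stratum, and the strata are randomly paired across variables. The sketch below generates points on the unit hypercube (in a real analysis each coordinate would then be mapped through the inverse CDF of the corresponding random variable, e.g. material strength or load):

```python
# Latin Hypercube Sampling on [0, 1)^n_vars: every variable gets exactly
# one sample in each of its n_samples equal-probability strata.
import random

def lhs(n_samples, n_vars, rng=random.Random(3)):
    cols = []
    for _ in range(n_vars):
        # One uniform draw inside each of the n_samples strata...
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        # ...randomly paired with the other variables' strata.
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

pts = lhs(10, 2)
```

The stratified marginals are why LHS reaches a stable estimate of the capacity distribution with far fewer nonlinear FEM runs than plain Monte Carlo sampling.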
Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure
NASA Astrophysics Data System (ADS)
Jia, Tianxia
2011-12-01
This thesis includes three major parts: (1) body wave analysis of mantle structure under the Calabria slab; (2) Spatial Average Coherency (SPAC) analysis of microtremor to characterize the subsurface structure in urban areas; and (3) surface wave dispersion inversion for shear wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aims are the same: to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms recorded at the Earth's surface. My first project is body wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed the seismograms traversing the Calabrian slab and upper mantle wedge under the southern Tyrrhenian Sea through body wave dispersion, scattering, and attenuation, recorded during the PASSCAL CAT/SCAN experiment. Compressional body waves exhibit dispersion correlating with slab paths, with high-frequency components arriving delayed relative to low-frequency components. Body wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization.
Shear-wave velocity (Vs) information in soil and rock has been recognized as a critical parameter for site-specific ground motion prediction studies, which are highly necessary for urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimation methods, SPAC is noninvasive and does not require any active sources; therefore, it is especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard lead to great public concern. Accurate Vs information, therefore, is critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where the depths of the high-velocity contrast and of the soil-to-bedrock transition differ along the island. The two experiments show that Vs structure can be estimated with good accuracy using the SPAC method, as compared with borehole and other techniques; SPAC proved to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structure from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface wave phase velocity dispersion curves for shear wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints and nonlinear inversion using Monte Carlo and Simulated Annealing algorithms. One innovative way of solving the inverse problem is to make inferences from the ensemble of all acceptable models: the statistical features of the ensemble provide a better way to characterize the Earth model.
Estimation of the mechanical properties of the eye through the study of its vibrational modes
2017-01-01
Measuring the eye's mechanical properties in vivo and with minimally invasive techniques can be the key to individualized solutions for a number of eye pathologies. The development of such techniques largely relies on a computational model of the eyeball, and it optimally requires a synergistic interplay between experimentation and numerical simulation. In astrophysics and geophysics, the remote measurement of structural properties of the systems in their realms is performed on the basis of (helio-)seismic techniques. As a biomechanical system, the eyeball possesses normal vibrational modes encompassing rich information about its structure and mechanical properties. However, the integral analysis of the eyeball vibrational modes has not been performed yet. Here we develop a new finite difference method to compute both the spheroidal and, especially, the toroidal eigenfrequencies of the human eye. Using this numerical model, we show that the vibrational eigenfrequencies of the human eye fall in the interval 100 Hz–10 MHz. We find that compressible vibrational modes may leave a trace in high-frequency changes of the intraocular pressure, while incompressible normal modes could be registered by analyzing the scattering pattern that the motions of the vitreous humour leave on the retina. Existing contact lenses with embedded devices operating at high sampling frequency could be used to register the eyeball-shape microfluctuations we obtain. We suggest that an inverse problem to obtain the mechanical properties of a given eye (e.g., Young's modulus, Poisson ratio) by measuring its normal frequencies is feasible. These measurements can be done using non-invasive techniques, opening very interesting perspectives for estimating the mechanical properties of eyes in vivo. Future research might relate various ocular pathologies with anomalies in measured vibrational frequencies of the eye. PMID:28922351
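The finite-difference route from a continuum vibration problem to a matrix eigenproblem can be shown on the simplest possible analogue. The 1-D vibrating string below is only a toy stand-in for the eyeball's far richer spheroidal/toroidal problem, but the discretize-then-diagonalize step is the same:

```python
# Finite-difference eigenmodes of a fixed-end 1-D string: discretizing
# -c^2 u'' = omega^2 u turns the ODE into a symmetric matrix eigenproblem.
import numpy as np

n, L, c = 200, 1.0, 1.0                  # interior grid points, length, wave speed
h = L / (n + 1)
# Standard second-difference matrix with Dirichlet (fixed) ends.
D2 = (np.diag(np.full(n, -2.0))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
omega2 = np.linalg.eigvalsh(-c**2 * D2)  # eigenvalues are omega^2
freqs = np.sqrt(np.sort(omega2)) / (2 * np.pi)

# Analytic string frequencies for comparison: f_k = k * c / (2 * L).
analytic = np.array([k * c / (2 * L) for k in range(1, 6)])
```

The computed low-order frequencies match the analytic ones to better than 0.1%, which is the kind of convergence check that validates a finite-difference eigensolver before it is applied to the eye geometry.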
Synchrotron IR microspectroscopy for protein structure analysis: Potential and questions
Yu, Peiqiang
2006-01-01
Synchrotron radiation-based Fourier transform infrared microspectroscopy (S-FTIR) has been developed as a rapid, direct, non-destructive bioanalytical technique. This technique takes advantage of synchrotron light's brightness and small effective source size and is capable of exploring the molecular chemical make-up within microstructures of a biological tissue, without destroying inherent structures, at ultra-spatial resolutions within cellular dimensions. To date there has been very little application of this advanced technique to the study of pure protein inherent structure at a cellular level in biological tissues. In this review, a novel approach is introduced to show the potential of the newly developed synchrotron-based analytical technology, which can be used to localize relatively "pure" protein in plant tissues and to reveal protein inherent structure and protein molecular chemical make-up within intact tissue at cellular and subcellular levels. Several complex protein IR spectral data-analysis techniques (Gaussian and Lorentzian multi-component peak modeling, univariate and multivariate analysis, principal component analysis (PCA), and hierarchical cluster analysis (CLA)) are employed to reveal features of protein inherent structure and to distinguish structural differences between varieties/species and treatments in plant tissues. By using a multi-peak modeling procedure, relative estimates (but not exact determinations) of protein secondary structure can be made for comparison purposes. The arguments for and against multi-peak modeling/fitting procedures for relative estimation of protein structure are discussed. By using the PCA and CLA analyses, plant molecular structure can be qualitatively and statistically separated into groups, even when the spectral assignments are not known. The synchrotron-based technology provides a new approach for protein structure research in biological tissues at ultra-spatial resolutions.
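The Gaussian multi-component peak modeling mentioned in this review can be sketched in a few lines. The example below is a hedged illustration only: it fits a synthetic two-component "amide I"-like band with scipy's curve_fit and reports relative component areas; the peak positions, widths, and component labels are assumptions for illustration, not the review's actual spectra or pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    # Sum of Gaussian components; p = (amplitude, center, width) per peak.
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

# Synthetic "amide I" band with two overlapping components (illustrative values).
x = np.linspace(1600, 1700, 500)
true = (1.0, 1655, 8.0,   # alpha-helix-like component (assumed position)
        0.6, 1630, 7.0)   # beta-sheet-like component (assumed position)
y = gaussians(x, *true) + np.random.default_rng(0).normal(0, 0.01, x.size)

# Fit and report RELATIVE areas (amplitude * width), echoing the review's
# caveat that such fits give relative estimates, not exact determinations.
popt, _ = curve_fit(gaussians, x, y, p0=(0.9, 1650, 10, 0.5, 1628, 10))
areas = [popt[i] * popt[i + 2] for i in (0, 3)]
fractions = [a / sum(areas) for a in areas]
print([round(f, 2) for f in fractions])
```

The relative fractions, not the absolute areas, are what allow comparison between varieties or treatments.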
Estimation of the mechanical properties of the eye through the study of its vibrational modes.
Aloy, M Á; Adsuara, J E; Cerdá-Durán, P; Obergaulinger, M; Esteve-Taboada, J J; Ferrer-Blasco, T; Montés-Micó, R
2017-01-01
Measuring the eye's mechanical properties in vivo with minimally invasive techniques can be key to individualized solutions for a number of eye pathologies. The development of such techniques relies largely on computational modelling of the eyeball and optimally requires a synergic interplay between experimentation and numerical simulation. In astrophysics and geophysics, the remote measurement of the structural properties of systems is performed on the basis of (helio-)seismic techniques. As a biomechanical system, the eyeball possesses normal vibrational modes encompassing rich information about its structure and mechanical properties. However, an integral analysis of the eyeball's vibrational modes has not yet been performed. Here we develop a new finite difference method to compute both the spheroidal and, especially, the toroidal eigenfrequencies of the human eye. Using this numerical model, we show that the vibrational eigenfrequencies of the human eye fall in the interval 100 Hz-10 MHz. We find that compressible vibrational modes may leave a trace in high-frequency changes of the intraocular pressure, while incompressible normal modes could be registered by analyzing the scattering pattern that the motions of the vitreous humour leave on the retina. Existing contact lenses with embedded devices operating at high sampling frequency could be used to register the microfluctuations of the eyeball shape we obtain. We suggest that an inverse problem to obtain the mechanical properties of a given eye (e.g., Young's modulus, Poisson ratio) by measuring its normal frequencies is feasible. These measurements can be made using non-invasive techniques, opening very interesting perspectives for estimating the mechanical properties of eyes in vivo. Future research might relate various ocular pathologies to anomalies in the measured vibrational frequencies of the eye.
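The finite-difference eigenfrequency idea can be illustrated with a toy radial problem. The sketch below is not the paper's solver: it treats a homogeneous sphere, uses a clamped (Dirichlet) surface instead of the paper's traction-free boundary, and assumes illustrative material values (shear modulus 10 kPa, density 1000 kg/m^3, radius 12 mm). For a toroidal-like mode of angular degree l, the substitution u = rW reduces the radial equation to u'' + (k^2 - l(l+1)/r^2) u = 0, a symmetric tridiagonal eigenproblem after central differencing.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Toy radial eigenproblem u'' + (k^2 - l(l+1)/r^2) u = 0 on (a, R),
# clamped (Dirichlet) at both ends; all physical values are assumptions.
l = 2
R = 0.012                 # eyeball radius ~ 12 mm (assumed)
a = 1e-4 * R              # small inner cutoff to avoid the r = 0 singularity
mu, rho = 1.0e4, 1.0e3    # shear modulus (Pa) and density (kg/m^3), illustrative
n = 2000
r = np.linspace(a, R, n + 2)[1:-1]   # interior grid points
h = r[1] - r[0]

# Discretized operator -u'' + l(l+1)/r^2 u, whose eigenvalues are k^2.
diag = 2.0 / h**2 + l * (l + 1) / r**2
off = -np.ones(n - 1) / h**2
k2, _ = eigh_tridiagonal(diag, off, select="i", select_range=(0, 4))

freqs = np.sqrt(mu / rho) * np.sqrt(k2) / (2 * np.pi)   # f = c_s * k / (2 pi)
print(np.round(freqs, 1))   # lowest five eigenfrequencies, Hz
```

With these clamped boundaries the regular solutions are r*j_l(kr), so the lowest l = 2 eigenfrequency should sit at kR equal to the first zero of the spherical Bessel function j_2 (about 5.763), i.e. a few hundred Hz for these assumed values, consistent with the low end of the 100 Hz-10 MHz interval quoted above.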
Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 7. System Cost.
DOT National Transportation Integrated Search
1973-02-01
The volume presents estimates of the federal government and user costs for the Satellite-Based Advanced Air Traffic Management System and the supporting rationale. The system configuration is that presented in volumes II and III. The cost estimates a...
Recent advances in vacuum sciences and applications
NASA Astrophysics Data System (ADS)
Mozetič, M.; Ostrikov, K.; Ruzic, D. N.; Curreli, D.; Cvelbar, U.; Vesel, A.; Primc, G.; Leisch, M.; Jousten, K.; Malyshev, O. B.; Hendricks, J. H.; Kövér, L.; Tagliaferro, A.; Conde, O.; Silvestre, A. J.; Giapintzakis, J.; Buljan, M.; Radić, N.; Dražić, G.; Bernstorff, S.; Biederman, H.; Kylián, O.; Hanuš, J.; Miloševič, S.; Galtayries, A.; Dietrich, P.; Unger, W.; Lehocky, M.; Sedlarik, V.; Stana-Kleinschek, K.; Drmota-Petrič, A.; Pireaux, J. J.; Rogers, J. W.; Anderle, M.
2014-04-01
Recent advances in vacuum sciences and applications are reviewed. Novel optical interferometer cavity devices enable pressure measurements with ppm accuracy. The innovative dynamic vacuum standard allows for pressure measurements with temporal resolution of 2 ms. Vacuum issues in the construction of huge ultra-high vacuum devices worldwide are reviewed. Recent advances in surface science and thin films include new phenomena observed in electron transport near solid surfaces as well as novel results on the properties of carbon nanomaterials. Precise techniques for surface and thin-film characterization have been applied in the conservation technology of cultural heritage objects and recent advances in the characterization of biointerfaces are presented. The combination of various vacuum and atmospheric-pressure techniques enables an insight into the complex phenomena of protein and other biomolecule conformations on solid surfaces. Studying these phenomena at solid-liquid interfaces is regarded as the main issue in the development of alternative techniques for drug delivery, tissue engineering and thus the development of innovative techniques for curing cancer and cardiovascular diseases. A review on recent advances in plasma medicine is presented as well as novel hypotheses on cell apoptosis upon treatment with gaseous plasma. Finally, recent advances in plasma nanoscience are illustrated with several examples and a roadmap for future activities is presented.
DOT National Transportation Integrated Search
2010-01-01
The current project, funded by MIOH-UTC for the period 1/1/2009-4/30/2010, is concerned with the development of the framework for a transportation facility inspection system using advanced image processing techniques. The focus of this study is ...
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-01-01
Objective This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. Design We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Results Advanced colorectal neoplasia was detected in 2544 of the 35 918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7–8. Conclusions Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. PMID:24385598
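The score construction described above, converting logistic-regression coefficients into an additive integer score, can be sketched as follows. The coefficients, factor definitions, and the one-point-per-0.3 rounding rule below are hypothetical illustrations, not the published model.

```python
import math

# Hypothetical logistic-regression coefficients (NOT the published values);
# each factor contributes its coefficient to the linear predictor when present.
coef = {
    "intercept": -4.0,
    "age_60_66": 0.8,
    "male": 0.6,
    "family_history_crc": 0.5,
    "current_smoker": 0.7,
    "bmi_30_plus": 0.3,
}

def predicted_risk(factors):
    """Probability of advanced neoplasia from the logistic model."""
    lp = coef["intercept"] + sum(coef[f] for f in factors)
    return 1.0 / (1.0 + math.exp(-lp))

def score(factors):
    """Integer score: one point per ~0.3 of coefficient (a common rounding scheme)."""
    return sum(round(coef[f] / 0.3) for f in factors)

high = ["age_60_66", "male", "current_smoker", "family_history_crc"]
print(score([]), round(predicted_risk([]), 4))
print(score(high), round(predicted_risk(high), 4))
```

The same mechanism underlies the published 0-8 score: a patient's points map monotonically onto the model's predicted likelihood of advanced neoplasia.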
Advanced techniques for determining long term compatibility of materials with propellants
NASA Technical Reports Server (NTRS)
Green, R. L.; Stebbins, J. P.; Smith, A. W.; Pullen, K. E.
1973-01-01
A method for the prediction of propellant-material compatibility for periods of time up to ten years is presented. Advanced sensitive measurement techniques used in the prediction method are described. These include: neutron activation analysis, radioactive tracer technique, and atomic absorption spectroscopy with a graphite tube furnace sampler. The results of laboratory tests performed to verify the prediction method are presented.
2010-09-01
Advancement of Techniques for Modeling the Effects of Atmospheric Gravity-Wave-Induced Inhomogeneities on Infrasound Propagation
Robert G...
A number of infrasound observations indicate that fine-scale atmospheric inhomogeneities contribute to infrasonic arrivals that are not predicted by standard modeling techniques. In particular, gravity waves, or buoyancy waves, are believed to contribute to the multipath nature of infrasound
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
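A minimal sketch of the Bayesian idea behind such fuel-loading estimation is a conjugate normal update: a prior belief about mean loading is combined with transect estimates to yield a posterior. All numbers below (prior, sampling variance, sample values) are illustrative assumptions, not data from the cited study.

```python
import numpy as np

# Known-variance normal-normal conjugate update for mean fuel loading (Mg/ha).
prior_mean, prior_var = 5.0, 4.0     # vague prior belief about the loading
obs_var = 1.0                        # assumed sampling variance of one transect
samples = np.array([3.8, 4.6, 4.1, 5.0, 4.4])   # planar-intercept estimates

n = samples.size
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + samples.sum() / obs_var)
print(round(post_mean, 3), round(post_var, 3))
```

The posterior variance shrinks as transects are added, which is the Bayesian counterpart of the study's finding that precise loading estimates require intensive sampling.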
Advanced imaging in COPD: insights into pulmonary pathophysiology
Milne, Stephen
2014-01-01
Chronic obstructive pulmonary disease (COPD) involves a complex interaction of structural and functional abnormalities. The two have long been studied in isolation. However, advanced imaging techniques allow us to simultaneously assess pathological processes and their physiological consequences. This review gives a comprehensive account of the various advanced imaging modalities used to study COPD, including computed tomography (CT), magnetic resonance imaging (MRI), and the nuclear medicine techniques positron emission tomography (PET) and single-photon emission computed tomography (SPECT). Some more recent developments in imaging technology, including micro-CT, synchrotron imaging, optical coherence tomography (OCT) and electrical impedance tomography (EIT), are also described. The authors identify the pathophysiological insights gained from these techniques, and speculate on the future role of advanced imaging in both clinical and research settings. PMID:25478198
NASA Technical Reports Server (NTRS)
1977-01-01
Results of planetary advanced studies and planning support are summarized. The scope of analyses includes cost estimation research, planetary mission performance, penetrator advanced studies, Mercury mission transport requirements, definition of super solar electric propulsion/solar sail mission discriminators, and advanced planning activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorensen, James; Smith, Steven; Kurz, Bethany
Tight oil formations such as those in the Bakken petroleum system are known to hold hundreds of billions of barrels of oil in place; however, the primary recovery factor for these plays is typically less than 10%. Tight oil formations, including the Bakken Formation, therefore, may be attractive candidates for enhanced oil recovery (EOR) using CO2. Multiphase fluid behavior and flow in fluid-rich shales can vary substantially depending on the size of pore throats, and properties such as fluid viscosity and density are much different in nanoscale pores than in macroscale pores. Thus it is critical to understand the nature and distribution of nano-, micro-, and macroscale pores and fracture networks. To address these issues, the Energy & Environmental Research Center (EERC) has been conducting a research program entitled “Improved Characterization and Modeling of Tight Oil Formations for CO2 Enhanced Oil Recovery Potential and Storage Capacity Estimation.” The objectives of the project are 1) the use of advanced characterization methods to better understand and quantify the petrophysical and geomechanical factors that control CO2 and oil mobility within tight oil formation samples, 2) the determination of CO2 permeation and oil extraction rates in tight reservoir rocks and organic-rich shales of the Bakken, and 3) the integration of the laboratory-based CO2 permeation and oil extraction data and the characterization data into geologic models and dynamic simulations to develop predictions of CO2 storage resource and EOR in the Bakken tight oil formation. A combination of standard and advanced petrophysical characterization techniques was applied to characterize samples of Bakken Formation tight reservoir rock and shales from multiple wells. Techniques included advanced computed tomography (CT) imaging, scanning electron microscopy (SEM) techniques, whole-core and micro x-ray CT imaging, field emission (FE) SEM, and focused ion beam (FIB) SEM. Selected samples were also analyzed for geomechanical properties. X-ray CT imaging yielded information on the occurrence of fractures, bedding planes, fossils, and bioturbation in core, as well as data on bulk density and photoelectric factor logs, which were used to interpret porosity, organic content, and mineralogy. FESEM was used for characterization of nano- and microscale features, including nanoscale pore visualization and micropore and pore throat mineralogy. FIBSEM yielded micro- to nanoscale visualization of fracture networks, porosity and pore-size distribution, connected versus isolated porosity, and distribution of organics. Results from the characterization activities provide insight on nanoscale fracture properties, pore throat mineralogy and connectivity, rock matrix characteristics, mineralogy, and organic content. Laboratory experiments demonstrated that CO2 can permeate the tight matrix of Bakken shale and nonshale reservoir samples and mobilize oil from those samples. Geologic models were created at scales ranging from the core plug to the reservoir, and dynamic simulations were conducted. The data from the characterization and laboratory-based activities were integrated into modeling research activities to determine the fundamental mechanisms controlling fluid transport in the Bakken, which support EOR scheme design and estimation of CO2 storage potential in tight oil formations. Simulation results suggest a CO2 storage resource estimate range of 169 million to 1.5 billion tonnes for the Bakken in North Dakota, possibly resulting in 1.8 billion to 16 billion barrels of incremental oil.
Ultrasound elasticity imaging of human posterior tibial tendon
NASA Astrophysics Data System (ADS)
Gao, Liang
Posterior tibial tendon dysfunction (PTTD) is a common degenerative condition leading to a severe impairment of gait. There is currently no effective method to determine whether a patient with advanced PTTD would benefit from several months of bracing and physical therapy or ultimately require surgery. Tendon degeneration is closely associated with irreversible degradation of its collagen structure, leading to changes to its mechanical properties. If these properties could be monitored in vivo, it could be used to quantify the severity of tendonosis and help determine the appropriate treatment. Ultrasound elasticity imaging (UEI) is a real-time, noninvasive technique to objectively measure mechanical properties in soft tissue. It consists of acquiring a sequence of ultrasound frames and applying speckle tracking to estimate displacement and strain at each pixel. The goals of my dissertation were to 1) use acoustic simulations to investigate the performance of UEI during tendon deformation with different geometries; 2) develop and validate UEI as a potentially noninvasive technique for quantifying tendon mechanical properties in human cadaver experiments; 3) design a platform for UEI to measure mechanical properties of the PTT in vivo and determine whether there are detectable and quantifiable differences between healthy and diseased tendons. First, ultrasound simulations of tendon deformation were performed using an acoustic modeling program. The effects of different tendon geometries (cylinder and curved cylinder) on the performance of UEI were investigated. Modeling results indicated that UEI accurately estimated the strain in the cylinder geometry, but underestimated in the curved cylinder. The simulation also predicted that the out-of-the-plane motion of the PTT would cause a non-uniform strain pattern within incompressible homogeneous isotropic material. 
However, averaging within a small region of interest determined by principal component analysis (PCA) would improve the estimation. Next, UEI was performed on five human cadaver feet mounted in a materials testing system (MTS) while the PTT was attached to a force actuator. A portable ultrasound scanner collected 2D data during loading cycles. Young's modulus was calculated from the strain, loading force and cross-sectional area of the PTT. The average Young's modulus for the five tendons was 0.45 +/- 0.16 GPa using UEI. This was consistent with simultaneous measurements made by the MTS across the whole tendon (0.52 +/- 0.18 GPa). We also calculated the scaling factor (0.12 +/- 0.01) between the load on the PTT and the inversion force at the forefoot, a measurable quantity in vivo. This study suggests that UEI could be a reliable in vivo technique for estimating the mechanical properties of the human PTT. Finally, we built a custom ankle inversion platform for in vivo imaging of human subjects (eight healthy volunteers and nine advanced PTTD patients). We found non-linear elastic properties of the PTT, which could be quantified by the slope between the elastic modulus (E) and the inversion force (F). This slope (DeltaE/DeltaF), or Non-linear Elasticity Parameter (NEP), was significantly different for the two groups: 0.16 +/- 0.20 MPa/N for healthy tendons and 0.45 +/- 0.43 MPa/N for PTTD tendons. A receiver operating characteristic (ROC) curve revealed an area under the curve (AUC) of 0.83 +/- 0.07, indicating a valid classifier. In summary, the acoustic modeling, cadaveric studies, and in vivo experiments together demonstrated that UEI accurately quantifies tendon mechanical properties. As a valuable clinical tool, UEI also has the potential to help guide treatment decisions for advanced PTTD and other tendinopathies.
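The modulus and NEP calculations described above reduce to simple arithmetic: modulus is stress over strain at each load, and NEP is the slope of modulus versus force. The sketch below uses synthetic strain data and an assumed cross-sectional area; it illustrates the computation only, not the dissertation's measured values.

```python
import numpy as np

# Estimate Young's modulus from speckle-tracking strain at several loads,
# then the Non-linear Elasticity Parameter (NEP) as the slope dE/dF.
# All numbers are illustrative assumptions.
area = 12e-6                                      # tendon cross-section, m^2
force = np.array([20.0, 40.0, 60.0, 80.0])        # inversion load, N
strain = np.array([0.004, 0.0065, 0.008, 0.009])  # strain from UEI (synthetic)

E = force / (area * strain)       # modulus at each load, Pa (stress / strain)
nep = np.polyfit(force, E, 1)[0]  # slope dE/dF, Pa per N
print(np.round(E / 1e6, 1))       # moduli in MPa
print(round(nep / 1e6, 3))        # NEP in MPa/N
```

A stiffer-with-load (non-linear) tendon gives a positive NEP, which is the quantity the in vivo study found to separate healthy from PTTD tendons.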
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to the optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties in model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection.
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
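The coupling of a global Particle Swarm search with local Levenberg-Marquardt refinement can be sketched compactly. This is a hedged illustration of the coupling idea only, not the MADS implementation: a small hand-rolled swarm finds a good starting point for scipy's Levenberg-Marquardt least-squares solver on a toy exponential-decay model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def residuals(p, t, y):
    a, b = p
    return a * np.exp(-b * t) - y    # toy two-parameter decay "transport" model

t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t)           # synthetic data (true params: a=2, b=0.7)

# --- crude particle swarm over the box [0,5] x [0,3] ---
n_part, n_iter = 30, 50
pos = rng.uniform([0, 0], [5, 3], (n_part, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([np.sum(residuals(p, t, y) ** 2) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [0, 0], [5, 3])
    f = np.array([np.sum(residuals(p, t, y) ** 2) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

# --- Levenberg-Marquardt polish from the swarm's best point ---
fit = least_squares(residuals, gbest, args=(t, y), method="lm")
print(np.round(fit.x, 4))
```

The swarm supplies global coverage of the parameter box; the gradient-based polish supplies fast local convergence, which is the rationale given for the hybrid approach.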
Expert Systems for Real-Time Volcano Monitoring
NASA Astrophysics Data System (ADS)
Cassisi, C.; Cannavo, F.; Montalto, P.; Motta, P.; Schembra, G.; Aliotta, M. A.; Cannata, A.; Patanè, D.; Prestifilippo, M.
2014-12-01
In the last decade, the capability to monitor and quickly respond to remote detection of volcanic activity has been greatly improved through the use of advanced techniques and semi-automatic software applications installed in most of the 24h control rooms devoted to volcanic surveillance. The ability to monitor volcanoes is being advanced by new technology, such as broad-band seismology, microphone networks mainly recording in the infrasonic frequency band, satellite observations of ground deformation, high-quality video surveillance systems (also in the infrared band), improved sensors for volcanic gas measurements, and advances in computer power and speed, leading to improvements in data transmission, data analysis and modeling techniques. One of the most critical points in the real-time monitoring chain is the evaluation of the volcano state from all the measurements. At present, most of this task is delegated to one or more human experts in volcanology. Unfortunately, volcano state assessment becomes harder when we observe that, due to the coupling of highly non-linear and complex volcanic dynamic processes, the measurable effects can show a rich range of different behaviors. Moreover, due to intrinsic uncertainties and possible failures in some recorded data, precise state assessment is usually not achievable. Hence, the volcano state needs to be expressed in probabilistic terms that take account of uncertainties. In the framework of the project PON SIGMA (Integrated Cloud-Sensor System for Advanced Multirisk Management), we have developed an expert-system approach to estimate the ongoing volcano state from all the available measurements with minimal human interaction. The approach is based on a hidden Markov model and deals with uncertainties and probabilities. We tested the proposed approach on data coming from the Mt. Etna (Italy) continuous monitoring networks for the period 2011-2013.
Results show that this approach can be a valuable tool to aid the operator in volcano real-time monitoring.
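The probabilistic state estimation described here rests on hidden-Markov-model filtering: at each time step the forward algorithm updates the probability of each volcano state given the observations so far. The sketch below is a hedged toy version with three made-up states and illustrative matrices, not the SIGMA project's actual model.

```python
import numpy as np

# Toy 3-state volcano HMM (quiet / unrest / eruptive); matrices are illustrative.
A = np.array([[0.95, 0.05, 0.00],    # state transition probabilities
              [0.10, 0.80, 0.10],
              [0.00, 0.20, 0.80]])
B = np.array([[0.80, 0.15, 0.05],    # P(observation symbol | state)
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
pi = np.array([0.90, 0.09, 0.01])    # initial state probabilities

def filter_states(obs):
    """Return P(state_t | obs_1..t) for each t (normalized forward variables)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

# Observation symbols 0=low, 1=elevated, 2=strong activity (coded measurements)
post = filter_states([0, 1, 1, 2, 2])
print(np.round(post[-1], 3))
```

Expressing the state as a filtered probability vector, rather than a hard label, is exactly how the uncertainty discussed above is carried through to the operator.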
Lidke, Diane S; Lidke, Keith A
2012-06-01
A fundamental goal in biology is to determine how cellular organization is coupled to function. To achieve this goal, a better understanding of organelle composition and structure is needed. Although visualization of cellular organelles using fluorescence or electron microscopy (EM) has become a common tool for the cell biologist, recent advances are providing a clearer picture of the cell than ever before. In particular, advanced light-microscopy techniques are achieving resolutions below the diffraction limit and EM tomography provides high-resolution three-dimensional (3D) images of cellular structures. The ability to perform both fluorescence and electron microscopy on the same sample (correlative light and electron microscopy, CLEM) makes it possible to identify where a fluorescently labeled protein is located with respect to organelle structures visualized by EM. Here, we review the current state of the art in 3D biological imaging techniques with a focus on recent advances in electron microscopy and fluorescence super-resolution techniques.
Swiat, Maciej; Weigele, John; Hurst, Robert W; Kasner, Scott E; Pawlak, Mikolaj; Arkuszewski, Michal; Al-Okaili, Riyadh N; Swiercz, Miroslaw; Ustymowicz, Andrzej; Opala, Grzegorz; Melhem, Elias R; Krejza, Jaroslaw
2009-03-01
To prospectively compare the accuracies of transcranial color-coded duplex sonography (TCCS) and transcranial Doppler sonography (TCD) in the diagnosis of middle cerebral artery (MCA) vasospasm. A prospective, blinded, head-to-head comparison of the TCD and TCCS methods using digital subtraction angiography (DSA) as the reference standard. Department of Radiology in a tertiary university health center in a metropolitan area. Eighty-one consecutive patients (mean age, 53.9 +/- 13.9 years; 48 women). The indication for DSA was subarachnoid hemorrhage in 71 patients (87.6%), stroke or transient ischemic attack in five patients (6.2%), and other reasons in five patients (6.2%). The MCA was graded with DSA as normal, narrowed <50%, or narrowed >50%. The accuracy of the ultrasound methods was estimated by the total area (Az) under the receiver operating characteristic curve. To compare the sensitivities of the ultrasound methods, McNemar's test was used with mean velocity thresholds of 120 cm/sec for the detection of less advanced, and 200 cm/sec for more advanced, MCA narrowing. Angiographic MCA narrowing
Gaze inspired subtitle position evaluation for MOOCs videos
NASA Astrophysics Data System (ADS)
Chen, Hongli; Yan, Mengzhen; Liu, Sijiang; Jiang, Bo
2017-06-01
Online educational resources, such as MOOCs, are becoming increasingly popular, especially in the higher education field. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-positioned subtitles accompanying the videos, in recent years researchers have tried to develop more advanced algorithms to generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Inspired by image-based eye-tracking techniques, this work combines objective gaze-estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.
[Application of computer-assisted 3D imaging simulation for surgery].
Matsushita, S; Suzuki, N
1994-03-01
This article describes trends in the application of various imaging technologies in surgical planning, navigation, and computer-aided surgery. Imaging information is an essential factor for simulation in medicine. It includes three-dimensional (3D) image reconstruction, neurosurgical navigation, the creation of physical models based on 3D imaging data, and so on. These developments depend mostly on 3D imaging techniques, to which recent computer technology has contributed greatly. 3D imaging can offer new, intuitive information to physicians and surgeons, and this method is well suited to mechanical control. By utilizing simulated results, we can obtain more precise surgical orientation, estimation, and operation. For further advancement, automatic, high-speed recognition of medical images is being developed.
Humanity's unsustainable environmental footprint.
Hoekstra, Arjen Y; Wiedmann, Thomas O
2014-06-06
Within the context of Earth's limited natural resources and assimilation capacity, the current environmental footprint of humankind is not sustainable. Assessing land, water, energy, material, and other footprints along supply chains is paramount in understanding the sustainability, efficiency, and equity of resource use from the perspective of producers, consumers, and government. We review current footprints and relate those to maximum sustainable levels, highlighting the need for future work on combining footprints, assessing trade-offs between them, improving computational techniques, estimating maximum sustainable footprint levels, and benchmarking efficiency of resource use. Ultimately, major transformative changes in the global economy are necessary to reduce humanity's environmental footprint to sustainable levels.
2017-04-01
ADVANCED VISUALIZATION AND INTERACTIVE DISPLAY RAPID INNOVATION AND DISCOVERY EVALUATION RESEARCH (VISRIDER) PROGRAM TASK 6: POINT CLOUD ... (OCT 2013 – SEP 2014). The task investigated various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluated their potential use for thick-client desktop platforms.
Advancement of the anterior maxilla by distraction (case report).
Karakasis, Dimitri; Hadjipetrou, Loucia
2004-06-01
Several techniques of distraction osteogenesis have been applied for correction of the compromised midface in patients with clefts of the lip, alveolus, and palate. This article presents a technique of callus distraction applied in a specific case of hypoplasia of a cleft maxilla, in which sagittal advancement of the anterior maxillary segment was achieved without affecting velopharyngeal function. Distraction osteogenesis for advancement of the anterior maxillary segment in cleft patients offers many advantages.
Karki, Sandhya; Elsgaard, Lars; Kandel, Tanka P; Lærke, Poul Erik
2015-03-01
Empirical greenhouse gas (GHG) flux estimates from diverse peatlands are required in order to derive emission factors for managed peatlands. This study on a drained fen peatland quantified the annual GHG balance (carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4), and C exported in crop yield) from spring barley (SB) and reed canary grass (RCG), using static opaque chambers for GHG flux measurements and biomass yield for indirectly estimating gross primary production (GPP). Estimates of ecosystem respiration (ER) and GPP were compared with more advanced but costly and labor-intensive dynamic chamber studies. The annual GHG balance for the two cropping systems was 4.0 ± 0.7 and 8.1 ± 0.2 Mg CO2-Ceq ha(-1) for SB and RCG, respectively (mean ± standard error, n = 3). Annual CH4 emissions were negligible (<0.006 Mg CO2-Ceq ha(-1)), and N2O emissions contributed only 4-13% of the full GHG balance (0.5 and 0.3 Mg CO2-Ceq ha(-1) for SB and RCG, respectively). The statistical significance of the low CH4 and N2O fluxes was evaluated by a simulation procedure, which showed that most of the CH4 fluxes were within the range that could arise from random variation associated with actual zero-flux situations. ER measured by static chamber and dynamic chamber methods was similar, particularly when nonlinear regression techniques were used for flux calculations. A comparison of GPP derived from aboveground biomass and from measured net ecosystem exchange (NEE) showed that GPP estimation from biomass might be useful, or serve as validation, for more advanced flux measurement methods. In conclusion, combining static opaque chambers for measuring ER of CO2 and CH4 and N2O fluxes with biomass yield for GPP estimation worked well in the drained fen peatland cropped to SB and RCG, and presented a valid alternative to estimating the full GHG balance by dynamic chambers.
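The finding that ER agreed best "when using nonlinear regression techniques for flux calculations" can be illustrated with a toy chamber time series: when headspace concentration saturates, a straight-line fit underestimates the initial slope (the flux), while a quadratic fit recovers it. The concentration data and the pure-Python least-squares helper below are invented for the sketch, not taken from the study.

```python
# Sketch: estimating a chamber gas flux from headspace concentration vs. time.
# A linear fit averages the slope over the whole deployment; a quadratic fit
# recovers the slope at t = 0, which is the flux before saturation sets in.

def polyfit(ts, cs, degree):
    """Least-squares polynomial fit via normal equations (pure Python)."""
    n = degree + 1
    # Normal equations X^T X a = X^T y for the Vandermonde matrix X.
    xtx = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    xty = [sum(c * t ** i for t, c in zip(ts, cs)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c2 in range(col, n):
                xtx[r][c2] -= f * xtx[col][c2]
            xty[r] -= f * xty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = xty[r] - sum(xtx[r][c2] * coeffs[c2] for c2 in range(r + 1, n))
        coeffs[r] = s / xtx[r][r]
    return coeffs  # [a0, a1, a2, ...]

# Hypothetical chamber deployment: ppm sampled each minute, curving over time.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
conc = [400.0, 409.5, 418.0, 425.5, 432.0, 437.5]

linear_slope = polyfit(times, conc, 1)[1]    # ppm/min averaged over the run
initial_slope = polyfit(times, conc, 2)[1]   # ppm/min at t = 0 (quadratic fit)
print(round(linear_slope, 2), round(initial_slope, 2))  # → 7.5 10.0
```

The quadratic fit reports a one-third larger flux here, which is the kind of discrepancy the chamber-method comparison in the study is sensitive to.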
Wafer hot spot identification through advanced photomask characterization techniques
NASA Astrophysics Data System (ADS)
Choi, Yohan; Green, Michael; McMurran, Jeff; Ham, Young; Lin, Howard; Lan, Andy; Yang, Richer; Lung, Mike
2016-10-01
As device manufacturers progress through advanced technology nodes, limitations in standard 1-dimensional (1D) mask Critical Dimension (CD) metrics are becoming apparent. Historically, 1D metrics such as Mean to Target (MTT) and CD Uniformity (CDU) have been adequate for end users to evaluate and predict the mask impact on the wafer process. However, the wafer lithographer's process margin is shrinking at advanced nodes to the point that the classical mask CD metrics are no longer adequate to gauge the mask contribution to wafer process error. For example, wafer CDU error at advanced nodes is impacted by mask factors such as 3-dimensional (3D) effects and mask pattern fidelity on subresolution assist features (SRAFs) used in Optical Proximity Correction (OPC) models of ever-increasing complexity. These items are not quantifiable with today's 1D metrology techniques. Likewise, the mask maker needs advanced characterization methods in order to optimize the mask process to meet the wafer lithographer's needs. Such advanced characterization metrics are needed to harmonize mask and wafer processes for enhanced wafer hot spot analysis. In this paper, we study advanced mask pattern characterization techniques and their correlation with modeled wafer performance.
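For readers unfamiliar with the classical 1D metrics the paper argues are no longer sufficient, here is a minimal sketch of MTT and CDU. Conventions vary by fab; this assumes MTT = mean CD minus target and CDU reported as 3 sigma, and the measurement values are hypothetical.

```python
# Sketch of the classical 1D mask CD metrics (MTT, CDU) the text describes.
from math import sqrt

def mask_cd_metrics(cds_nm, target_nm):
    n = len(cds_nm)
    mean = sum(cds_nm) / n
    var = sum((x - mean) ** 2 for x in cds_nm) / (n - 1)  # sample variance
    mtt = mean - target_nm          # Mean to Target: systematic sizing error
    cdu = 3.0 * sqrt(var)           # CD Uniformity, 3-sigma convention
    return mtt, cdu

measurements = [60.2, 59.8, 60.5, 60.1, 59.9, 60.3]  # nm, hypothetical
mtt, cdu = mask_cd_metrics(measurements, target_nm=60.0)
```

Both numbers collapse a spatial CD distribution to scalars, which is exactly why they say nothing about 3D mask effects or SRAF fidelity.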
Improving Agricultural Water Resources Management Using Ground-based Infrared Thermometry
NASA Astrophysics Data System (ADS)
Taghvaeian, S.
2014-12-01
Irrigated agriculture is the largest user of freshwater resources in arid and semi-arid parts of the world. Meeting rapidly growing demands for food, feed, fiber, and fuel while minimizing environmental pollution under a changing climate requires significant improvements in agricultural water management and irrigation scheduling. Although recent advances in remote sensing techniques and hydrological modeling have provided valuable information on agricultural water resources and their management, real improvements will only occur if farmers, the decision makers on the ground, are provided with simple, affordable, and practical tools to schedule irrigation events. This presentation reviews efforts in developing methods based on ground-based infrared thermometry and thermography for day-to-day management of irrigation systems. The results of research studies conducted in Colorado and Oklahoma show that ground-based remote sensing methods can be used effectively to quantify water stress and consequently trigger irrigation events. Crop water use estimates based on stress indices have also been shown to be in good agreement with estimates based on other methods (e.g. surface energy balance, root zone soil water balance, etc.). Major challenges to the adoption of this approach by agricultural producers include reduced accuracy under cloudy and humid conditions and the inability to forecast the irrigation date, which is critical knowledge since many irrigators must decide on irrigation a few days in advance.
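One concrete form of the thermometry-based stress index described above is the empirical Crop Water Stress Index (CWSI), computed from the canopy-air temperature difference against non-stressed and fully stressed baselines. The baseline coefficients and irrigation threshold below are hypothetical placeholders; real values are crop- and site-specific.

```python
# Sketch of an empirical Crop Water Stress Index from infrared thermometry.
# Assumed (hypothetical) baselines: non-stressed dT is linear in vapor
# pressure deficit; fully stressed dT is a constant upper limit.

def cwsi(canopy_temp_c, air_temp_c, vpd_kpa,
         lower_intercept=2.0, lower_slope=-1.9, upper_dt=5.0):
    """CWSI = (dT - dT_lower) / (dT_upper - dT_lower), clipped to [0, 1]."""
    dt = canopy_temp_c - air_temp_c
    dt_lower = lower_intercept + lower_slope * vpd_kpa  # non-stressed baseline
    dt_upper = upper_dt                                 # fully stressed limit
    index = (dt - dt_lower) / (dt_upper - dt_lower)
    return max(0.0, min(1.0, index))

# A hot, dry afternoon: canopy 2 C above air at 3 kPa VPD.
stress = cwsi(canopy_temp_c=32.0, air_temp_c=30.0, vpd_kpa=3.0)
# Trigger irrigation when the index crosses a management threshold (assumed).
irrigate = stress > 0.4
```

The accuracy caveat in the abstract maps directly onto this formula: clouds and humidity perturb both the measured canopy temperature and the VPD-dependent baseline.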
Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C
2007-01-01
More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as forward-weighted CADIS (FW-CADIS).
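The CADIS idea sketched in the abstract, biasing the source toward important regions while adjusting birth weights so tallies stay unbiased, can be shown on a toy mesh. The grid values are illustrative, and the FW-CADIS forward weighting of the adjoint source is omitted for brevity.

```python
# Minimal sketch of CADIS-style source biasing on a 1D mesh. Given the
# unbiased source q and an adjoint-flux (importance) estimate phi_adj per
# cell, the biased source is q_hat ∝ q * phi_adj, and particles are born
# with weight p/q_hat (p = unbiased source pdf) so expectations are unchanged.

source = [1.0, 1.0, 1.0, 1.0]      # unbiased source strength per mesh cell
phi_adj = [8.0, 4.0, 2.0, 1.0]     # adjoint flux estimate (made up)

response = sum(q * a for q, a in zip(source, phi_adj))       # R = <q, phi_adj>
q_hat = [q * a / response for q, a in zip(source, phi_adj)]  # biased pdf
# Birth weights: unbiased pdf (uniform here) divided by the biased pdf.
weights = [(q / sum(source)) / qh for q, qh in zip(source, q_hat)]
```

More particles start in the high-importance cell 0, each carrying weight below one; the weighted expectation over the biased pdf still sums to the unbiased value, which is the unbiasedness property the method relies on.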
Flow Control Research at NASA Langley in Support of High-Lift Augmentation
NASA Technical Reports Server (NTRS)
Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.
2002-01-01
The paper describes efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems, and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control focused on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques, are discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits and issues with implementation of the technology.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
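A minimal sketch of the first technique described above: assume a one-parameter exponential form for the correlation function and update the parameter by a Robbins-Monro stochastic-approximation recursion against noisy point estimates from successive records. The process, gain sequence, and true parameter below are invented for illustration.

```python
# Sketch: recursively fit R(tau) = exp(-a * tau) to noisy point estimates of
# the autocorrelation, one record at a time, via stochastic approximation.
import math
import random

random.seed(1)
true_a = 0.5                      # parameter generating the "records"
lags = [0.0, 1.0, 2.0, 3.0]

a_hat = 1.0                       # initial guess
for n in range(1, 2001):
    # Noisy point estimates of R(tau) from the n-th record (simulated here).
    r_point = [math.exp(-true_a * t) + random.gauss(0.0, 0.05) for t in lags]
    gain = 1.0 / n                # Robbins-Monro gains: sum = inf, sum sq < inf
    # Gradient of sum (exp(-a t) - r)^2 with respect to a.
    grad = sum(2.0 * (math.exp(-a_hat * t) - r) * (-t) * math.exp(-a_hat * t)
               for t, r in zip(lags, r_point))
    a_hat -= gain * grad          # descend toward the least-squares parameter
```

Because the gains decay, the recursion averages out the record-to-record noise while still correcting the initial guess, which is the mean-square-error variant of the two estimators the abstract contrasts.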
Wintermark, M; Sanelli, P C; Anzai, Y; Tsiouris, A J; Whitlow, C T
2015-02-01
Neuroimaging plays a critical role in the evaluation of patients with traumatic brain injury, with NCCT as the first line of imaging and MR imaging recommended in specific settings. Advanced neuroimaging techniques, including MR imaging DTI, blood oxygen level-dependent fMRI, MR spectroscopy, perfusion imaging, PET/SPECT, and magnetoencephalography, are of particular interest for identifying further injury in patients with traumatic brain injury when conventional NCCT and MR imaging findings are normal, as well as for prognostication in patients with persistent symptoms. These advanced neuroimaging techniques are currently under investigation in an attempt to optimize them and substantiate their clinical relevance in individual patients. However, the data currently available confine their use to the research arena for group comparisons, and there remains insufficient evidence at the time of this writing to conclude that these advanced techniques are ready for routine clinical use at the individual patient level. TBI imaging is a rapidly evolving field, and a number of the recommendations presented will be updated in the future to reflect advances in medical knowledge. © 2015 by American Journal of Neuroradiology.
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
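As a stand-in for the kind of transport equation such models discretize, here is an explicit finite-difference solve of a 1D species diffusion equation with a constant flux at one boundary. This is a generic sketch with made-up parameters, not the coupled electrochemical model of the paper.

```python
# Illustrative 1D transport solve: dc/dt = D d2c/dx2 with a constant species
# flux at x = 0 (e.g. a current collector) and a no-flux wall at x = L.
# All parameters are hypothetical.

D = 1.0e-10        # diffusivity, m^2/s
L = 1.0e-4         # domain thickness, m
N = 20             # grid cells
dx = L / N
dt = 0.2 * dx * dx / D     # within the explicit stability limit dx^2 / (2 D)
flux_in = 1.0e-6   # mol/(m^2 s), species flux applied at x = 0

c = [1000.0] * N   # initial concentration, mol/m^3

for _ in range(500):
    new_c = c[:]
    for i in range(N):
        # Ghost node enforcing -D dc/dx = flux_in at the left boundary.
        left = c[i - 1] if i > 0 else c[0] + flux_in * dx / D
        right = c[i + 1] if i < N - 1 else c[N - 1]   # mirror: no-flux wall
        new_c[i] = c[i] + D * dt / (dx * dx) * (left - 2.0 * c[i] + right)
    c = new_c
```

A useful correctness check for any such scheme is discrete mass conservation: the total inventory must grow by exactly the injected flux times elapsed time.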
On the Composition and Temperature of the Terrestrial Planetary Core
NASA Astrophysics Data System (ADS)
Fei, Yingwei
2013-06-01
The existence of liquid cores in terrestrial planets such as the Earth, Mars, and Mercury has been supported by various observations. The liquid state of the core provides a unique opportunity to estimate the temperature of the core if we know the melting temperature of the core materials at core pressures. Dynamic compression by shock waves, laser heating in the diamond-anvil cell, and resistance heating in the multi-anvil device can melt core materials over a wide pressure range. There have been significant advances in both dynamic and static experimental techniques and characterization tools. In this talk, I will review some of the recent advances and results relevant to the composition and thermal state of the terrestrial core. I will also present new developments in analyzing quenched samples recovered from laser-heated diamond-anvil cell experiments using a combination of focused ion beam milling, high-resolution SEM imaging, and quantitative chemical analysis. With precision milling of the laser-heated spot, the melting point and element partitioning between solid and liquid can be precisely determined. It is also possible to reconstruct a 3D image of the laser-heated spot at multi-megabar pressures to better constrain the melting point and understand the melting process. The new techniques allow us to extend precise measurements of melting relations to core pressures, providing better constraints on the temperature of the core. The research is supported by NASA and NSF grants.
Survey of cogeneration: Advanced cogeneration research study
NASA Technical Reports Server (NTRS)
Slonski, M. L.
1983-01-01
The consumption of electricity, natural gas, and fuel oil was surveyed, and the potential electricity that could be generated in the SCE service territory using cogeneration technology was estimated. It was found that an estimated 3700 MWe could potentially be generated in Southern California using cogeneration technology. It is suggested that current technology could provide 2600 MWe and advanced technology an additional 1100 MWe. Approximately 1600 MWt is considered not feasible for electricity production with either current or advanced cogeneration technology.
Automated Training Evaluation (ATE). Final Report.
ERIC Educational Resources Information Center
Charles, John P.; Johnson, Robert M.
The automation of weapons system training presents the potential for significant savings in training costs in terms of manpower, time, and money. The demonstration of the technical feasibility of automated training through the application of advanced digital computer techniques and advanced training techniques is essential before the application…
Neuroprotective "agents" in surgery. Secret "agent" man, or common "agent" machine?
NASA Technical Reports Server (NTRS)
Andrews, R. J.
1999-01-01
The search for clinically effective neuroprotective agents has received enormous support in recent years--an estimated $200 million spent by pharmaceutical companies on clinical trials for traumatic brain injury alone. At the same time, the pathophysiology of brain injury has proved increasingly complex, rendering the likelihood of a single-agent "magic bullet" even more remote. On the other hand, great progress continues with technology that makes surgery less invasive and less risky. One example is the application of endovascular techniques to treat coronary artery stenosis, where both the invasiveness of sternotomy and the significant neurological complication rate (due to microemboli showering the cerebral vasculature) can be eliminated. In this paper we review aspects of intraoperative neuroprotection both present and future. Explanations for the slow progress on pharmacologic neuroprotection during surgery are presented. Examples of technical advances that have had great impact on neuroprotection during surgery are given both from coronary artery stenosis surgery and from surgery for Parkinson's disease. To date, the progress in neuroprotection resulting from such technical advances is an order of magnitude greater than that resulting from pharmacologic agents used during surgery. The progress over the last 20 years in guidance during surgery (CT and MRI image-guidance) and in surgical access (endoscopic and endovascular techniques) will soon be complemented by advances in our ability to evaluate biological tissue intraoperatively in real-time. As an example of such technology, the NASA Smart Probe project is considered. In the long run (i.e., in 10 years or more), pharmacologic "agents" aimed at the complex pathophysiology of nervous system injury in man will be the key to true intraoperative neuroprotection.
In the near term, however, it is more likely that mundane "agents" based on computers, microsensors, and microeffectors will be the major impetus to improved intraoperative neuroprotection.
NASA Astrophysics Data System (ADS)
Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.
2006-03-01
Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a prototype portable, hand-held, hazardous materials acoustic inspection device that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper focuses primarily on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to (1) obtain high signal-to-noise ratios and (2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype measurement technique also provides information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
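The pulse-compression TOF idea can be sketched in a few lines: transmit a long chirp, cross-correlate the received trace with a replica of it (matched filter), and take the correlation peak as the arrival time. The sample rate, chirp band, and delay below are illustrative, and noise is omitted to keep the sketch short; with noise present, the same correlation peak is what delivers the SNR gain.

```python
# Sketch of pulse compression for time-of-flight measurement.
import math

fs = 1.0e6                   # sample rate, Hz (assumed)
t_chirp = 0.5e-3             # chirp duration, s
f0, f1 = 50.0e3, 150.0e3     # chirp start/stop frequency, Hz
n_chirp = int(fs * t_chirp)

def chirp(n):
    t = n / fs
    k = (f1 - f0) / t_chirp                  # linear sweep rate, Hz/s
    return math.cos(2.0 * math.pi * (f0 * t + 0.5 * k * t * t))

replica = [chirp(n) for n in range(n_chirp)]

# Received trace: the chirp attenuated and delayed by 300 samples.
true_delay = 300
received = [0.0] * 3000
for n in range(n_chirp):
    received[true_delay + n] += 0.3 * replica[n]

def xcorr_peak(sig, ref):
    """Matched filter: return the lag with the largest correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(sig) - len(ref) + 1):
        v = sum(sig[lag + k] * ref[k] for k in range(len(ref)))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

tof_samples = xcorr_peak(received, replica)
tof_seconds = tof_samples / fs
```

The compressed peak width scales with the inverse of the chirp bandwidth rather than the chirp duration, which is why a long, low-amplitude excitation can still localize the arrival sharply through attenuative containers.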
Emerging nondestructive inspection methods for aging aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beattie, A; Dahlke, L; Gieske, J
This report identifies and describes emerging nondestructive inspection (NDI) methods that can potentially be used to inspect commercial transport and commuter aircraft for structural damage. The nine categories of emerging NDI techniques are: acoustic emission, x-ray computed tomography, backscatter radiation, reverse geometry x-ray, advanced electromagnetics (including magnetooptic imaging and advanced eddy current techniques), coherent optics, advanced ultrasonics, advanced visual, and infrared thermography. The physical principles, generalized performance characteristics, and typical applications associated with each method are described. In addition, aircraft inspection applications are discussed along with the associated technical considerations. Finally, the status of each technique is presented, with a discussion of when it may be available for use in actual aircraft maintenance programs. It should be noted that this is a companion document to DOT/FAA/CT-91/5, Current Nondestructive Inspection Methods for Aging Aircraft.
Using diurnal temperature signals to infer vertical groundwater-surface water exchange
Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
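A common implementation of the diurnal-signal analysis reviewed above is the amplitude-ratio method (after Hatch et al., 2006): the damping of the daily temperature oscillation between two sensor depths determines the vertical thermal front velocity, recovered below by bisection. The sediment thermal properties are typical assumed values, not site-specific.

```python
# Sketch of the diurnal amplitude-ratio method for vertical flux estimation.
import math

P = 86400.0          # diurnal period, s
kappa = 7.0e-7       # effective thermal diffusivity, m^2/s (assumed)
dz = 0.10            # vertical sensor spacing, m

def amplitude_ratio(v):
    """Predicted A_deep/A_shallow for thermal front velocity v (m/s, + down)."""
    alpha = math.sqrt(v ** 4 + (8.0 * math.pi * kappa / P) ** 2)
    return math.exp(dz / (2.0 * kappa) * (v - math.sqrt((alpha + v * v) / 2.0)))

def solve_velocity(ar_measured, lo=-1e-4, hi=1e-4):
    """Bisection; amplitude_ratio is monotone increasing in v."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if amplitude_ratio(mid) < ar_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: a downward front of 1 m/day is recovered from its own ratio.
v_true = 1.0 / 86400.0
v_est = solve_velocity(amplitude_ratio(v_true))
```

The thermal front velocity converts to a Darcy flux through the ratio of bulk to water heat capacities; the review's point about sensor resolution shows up here because upwelling (negative v) drives the amplitude ratio toward values too small to resolve.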
Bygren, Magnus; Szulkin, Ryszard
2017-07-01
It is common in the context of evaluations that participants have not been selected on the basis of transparent participation criteria, and researchers and evaluators often have to make do with observational data to estimate the effects of job training programs and similar interventions. The techniques developed by researchers in such endeavours are useful not only to researchers narrowly focused on evaluations, but also to social and population science more generally, as observational data are overwhelmingly the norm, and the endogeneity challenges encountered in the estimation of causal effects with such data are not trivial. The aim of this article is to illustrate how register data can be used strategically to evaluate programs and interventions and to estimate causal effects of participation in them. We use propensity score matching on pretreatment-period variables to derive a synthetic control group, and we use this group as a comparison to estimate the effect on employment of participation in a large job-training program. We find the effect of treatment to be small and positive but transient. Our method reveals a strong regression-to-the-mean effect, which would have been extremely easy to misinterpret as a treatment effect had a less advanced design been used (e.g. a within-subjects panel data analysis), and it illustrates one of the unique advantages of using population register data for research purposes.
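The matching design described above can be sketched end to end on simulated data: fit a logistic propensity model on a pretreatment covariate, match each treated unit to its nearest-score control, and compare the naive and matched contrasts. The data-generating process (zero true effect, selection on the covariate) is invented to show how the naive comparison misleads.

```python
# Sketch of propensity score matching on simulated "register" data.
import math
import random

random.seed(7)

# One pretreatment covariate x drives both program selection and the outcome;
# the true treatment effect is zero by construction.
units = []
for _ in range(400):
    x = random.gauss(0.0, 1.0)
    treated_flag = random.random() < 1.0 / (1.0 + math.exp(-1.5 * x))
    outcome = 2.0 * x + random.gauss(0.0, 0.5)
    units.append((x, treated_flag, outcome))

# Logistic regression P(treated | x) by plain gradient ascent.
b0, b1 = 0.0, 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for x, t, _ in units:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += (t - p)
        g1 += (t - p) * x
    b0 += 0.01 * g0 / len(units)
    b1 += 0.01 * g1 / len(units)

def score(x):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

treated = [(score(x), y) for x, t, y in units if t]
controls = [(score(x), y) for x, t, y in units if not t]

# Naive contrast vs. nearest-score matching with replacement.
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in controls) / len(controls))
matched = sum(y - min(controls, key=lambda c: abs(c[0] - s))[1]
              for s, y in treated) / len(treated)
```

The naive contrast inherits the selection gradient in x, while the matched estimate is close to the true zero effect, which is the confound-versus-effect distinction the article's design addresses.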
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are first corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbor, first-order interpolation, and the combination of nearest neighbor and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison. The LSR technique attains the highest accuracy of the four, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
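The core of all four ACF-based estimators is that white noise adds power only at the zero lag of the autocorrelation, so the noise-free ACF(0) can be recovered by extrapolating from the first nonzero lags; the LSR variant does this with a least-squares line. Below is a simplified 1D sketch on a synthetic scan line rather than a full SEM image.

```python
# Sketch: SNR estimation from the autocorrelation of a noisy scan line.
import math
import random

random.seed(3)

# Smooth "image line" plus white noise of known variance for checking.
n = 4000
clean = [math.sin(0.02 * i) + 0.5 * math.sin(0.005 * i) for i in range(n)]
noise_sd = 0.4
noisy = [c + random.gauss(0.0, noise_sd) for c in clean]

def acf(x, k):
    """Biased-sample autocovariance at lag k."""
    m = sum(x) / len(x)
    c = [v - m for v in x]
    return sum(c[i] * c[i + k] for i in range(len(x) - k)) / (len(x) - k)

r0 = acf(noisy, 0)                       # signal power + noise power
lags = [1, 2, 3, 4]
rs = [acf(noisy, k) for k in lags]

# Linear least-squares fit through (lag, ACF), extrapolated back to lag 0.
kbar = sum(lags) / len(lags)
rbar = sum(rs) / len(rs)
slope = (sum((k - kbar) * (r - rbar) for k, r in zip(lags, rs))
         / sum((k - kbar) ** 2 for k in lags))
signal_power = rbar - slope * kbar       # intercept: noise-free ACF(0)
noise_power = r0 - signal_power          # the white-noise spike at lag 0
snr_db = 10.0 * math.log10(signal_power / noise_power)
```

The injected noise variance (0.16) is recovered from the lag-0 spike without access to the clean signal, which is what lets the method grade real SEM images in situ.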
Advances in Modern Botnet Understanding and the Accurate Enumeration of Infected Hosts
ERIC Educational Resources Information Center
Nunnery, Christopher Edward
2011-01-01
Botnets remain a potent threat due to evolving modern architectures, inadequate remediation methods, and inaccurate measurement techniques. In response, this research exposes the architectures and operations of two advanced botnets, techniques to enumerate infected hosts, and pursues the scientific refinement of infected-host enumeration data by…
NASA Technical Reports Server (NTRS)
Davis, Steven B.
1990-01-01
Visual aids are valuable assets to engineers for design, demonstration, and evaluation. Discussed here are a variety of advanced three-dimensional graphic techniques used to enhance the displays of test aircraft dynamics. The new software's capabilities are examined and possible future uses are considered.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-729] Certain Semiconductor Products Made by... the sale within the United States after importation of certain semiconductor products made by advanced lithography techniques and products containing same by reason of infringement of certain claims of U.S. Patent...
Power Management and Distribution (PMAD) Model Development: Final Report
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990s to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early-1990s component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa-2005 components. The models are now about ten years old, and NASA GRC requested a review of them to determine whether they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review, along with the updated power conditioning models and new transmission line models generated to estimate post-2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J.; Elmore, R.; Kennedy, C.
This research illustrates the use of statistical inference techniques to quantify the uncertainty surrounding reliability estimates in a step-stress accelerated degradation testing (SSADT) scenario. SSADT can be used when a researcher is faced with a resource-constrained environment, e.g., limits on chamber time or on the number of units to test. We apply the SSADT methodology to a degradation experiment involving concentrated solar power (CSP) mirrors and compare the results to a more traditional multiple accelerated testing paradigm. Specifically, our work includes: (1) designing a durability testing plan for solar mirrors (3M's new improved silvered acrylic "Solar Reflector Film (SFM) 1100") through the ultra-accelerated weathering system (UAWS), (2) defining degradation paths of optical performance based on the SSADT model, which is accelerated by high UV-radiant exposure, and (3) developing service lifetime prediction models for solar mirrors using advanced statistical inference. We use the method of least squares to estimate the model parameters, and this serves as the basis for the statistical inference in SSADT. Several quantities of interest can be estimated from this procedure, e.g., mean-time-to-failure (MTTF) and warranty time. The methods allow for the estimation of quantities that may be of interest to the domain scientists.
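The least-squares degradation-path step can be sketched as a linear fit of optical performance against UV dose, extrapolated to a failure threshold and converted to a service-life estimate. The reflectance data, threshold, and acceleration factor below are hypothetical, not taken from the study.

```python
# Sketch: fit a degradation path by least squares, extrapolate to failure.

doses = [0.0, 50.0, 100.0, 150.0, 200.0]       # UV-radiant exposure (made up)
reflectance = [94.0, 92.9, 92.1, 91.0, 90.1]   # percent, per stress step

n = len(doses)
xbar = sum(doses) / n
ybar = sum(reflectance) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(doses, reflectance))
         / sum((x - xbar) ** 2 for x in doses))
intercept = ybar - slope * xbar

threshold = 85.0                     # failure: reflectance below 85% (assumed)
dose_to_failure = (threshold - intercept) / slope
acceleration = 40.0                  # accelerated dose per field-year (assumed)
lifetime_years = dose_to_failure / acceleration
```

In the paper's setting, the uncertainty in `slope` and `intercept` is what propagates into confidence statements on MTTF and warranty time; this sketch shows only the point estimate.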
Advanced liner-cooling techniques for gas turbine combustors
NASA Technical Reports Server (NTRS)
Norgren, C. T.; Riddlebaugh, S. M.
1985-01-01
Component research for advanced small gas turbine engines is currently underway at the NASA Lewis Research Center. As part of this program, a basic reverse-flow combustor geometry was maintained while different advanced liner wall cooling techniques were investigated. Performance and liner cooling effectiveness of the experimental combustor configuration, featuring counter-flow film-cooled panels, are presented and compared with two previously reported combustors featuring splash film-cooled liner walls and transpiration-cooled liner walls (Lamilloy).
Wallerian Degeneration Beyond the Corticospinal Tracts: Conventional and Advanced MRI Findings.
Chen, Yin Jie; Nabavizadeh, Seyed Ali; Vossough, Arastoo; Kumar, Sunil; Loevner, Laurie A; Mohan, Suyash
2017-05-01
Wallerian degeneration (WD) is defined as progressive anterograde disintegration of axons and accompanying demyelination after an injury to the proximal axon or cell body. Since the 1980s and 1990s, conventional magnetic resonance imaging (MRI) sequences have been shown to be sensitive to changes of WD in the subacute to chronic phases. More recently, advanced MRI techniques, such as diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI), have demonstrated some of the earliest changes attributed to acute WD, typically on the order of days. In addition, there is increasing evidence of the value of advanced MRI techniques in providing important prognostic information related to WD. This article reviews the utility of conventional and advanced MRI techniques for assessing WD, focusing not only on the corticospinal tract but also on less commonly considered neural tracts, including the corticopontocerebellar tract, the dentate-rubro-olivary pathway, the posterior column of the spinal cord, the corpus callosum, the limbic circuit, and the optic pathway. The basic anatomy of these neural pathways is discussed, followed by a comprehensive review of the existing literature supported by instructive clinical examples. The goal of this review is for readers to become more familiar with both conventional and advanced MRI findings of WD involving important neural pathways, and to illustrate the increasing utility of advanced MRI techniques in providing important prognostic information for various pathologies. Copyright © 2016 by the American Society of Neuroimaging.
Detection and Sizing of Fatigue Cracks in Steel Welds with Advanced Eddy Current Techniques
NASA Astrophysics Data System (ADS)
Todorov, E. I.; Mohr, W. C.; Lozev, M. G.
2008-02-01
Butt-welded specimens were fatigued to produce cracks in the weld heat-affected zone. Advanced eddy current (AEC) techniques were used to detect and size the cracks through a coating. AEC results were compared with magnetic particle and phased-array ultrasonic techniques. Validation through destructive crack measurements was also conducted. Factors such as geometry, surface treatment, and crack tightness interfered with depth sizing. AEC inspection techniques have the potential to provide more accurate and complete flaw sizing data for manufacturing and in-service inspections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie B.; Coles, Garill A.; Ramuhalli, Pradeep
Advanced small modular reactors (aSMRs) can provide the United States with a safe, sustainable, and carbon-neutral energy source. The controllable day-to-day costs of aSMRs are expected to be dominated by operation and maintenance costs. Health and condition assessment coupled with online risk monitors can potentially enhance affordability of aSMRs through optimized operational planning and maintenance scheduling. Currently deployed risk monitors are an extension of probabilistic risk assessment (PRA). For complex engineered systems like nuclear power plants, PRA systematically combines event likelihoods and the probability of failure (POF) of key components with the magnitude of possible adverse consequences to determine risk. Traditional PRA uses population-based POF information to estimate the average plant risk over time. Currently, most nuclear power plants have a PRA that reflects the as-operated, as-modified plant; this model is updated periodically, typically once a year. Risk monitors expand on living PRA by incorporating changes in the day-by-day plant operation and configuration (e.g., changes in equipment availability, operating regime, environmental conditions). However, population-based POF (or population- and time-based POF) is still used to populate fault trees. Health monitoring techniques can be used to establish condition indicators and monitoring capabilities that indicate the component-specific POF at a desired point in time (or over a desired period), which can then be incorporated in the risk monitor to provide a more accurate estimate of the plant risk in different configurations. This is particularly important for active systems, structures, and components (SSCs) proposed for use in aSMR designs. These SSCs may differ significantly from those used in the operating fleet of light-water reactors (or even in LWR-based SMR designs). 
Additionally, the operating characteristics of aSMRs can present significantly different requirements, including the need to operate in different coolant environments, higher operating temperatures, and longer operating cycles between planned refueling and maintenance outages. These features, along with the relative lack of operating experience for some of the proposed advanced designs, may limit the ability to estimate event probability and component POF with a high degree of certainty. Incorporating real-time estimates of component POF may compensate for a relative lack of established knowledge about the long-term component behavior and improve operational and maintenance planning and optimization. The particular eccentricities of advanced reactors and small modular reactors provide unique challenges and needs for advanced instrumentation, control, and human-machine interface (ICHMI) techniques such as enhanced risk monitors (ERM) in aSMRs. Several features of aSMR designs increase the need for accurate characterization of the real-time risk during operation and maintenance activities. A number of technical gaps in realizing ERM exist, and these gaps are largely independent of the specific reactor technology. As a result, the development of a framework for ERM would enable greater situational awareness regardless of the specific class of reactor technology. A set of research tasks are identified in a preliminary research plan to enable the development, testing, and demonstration of such a framework. Although some aspects of aSMRs, such as specific operational characteristics, will vary and are not now completely defined, the proposed framework is expected to be relevant regardless of such uncertainty. The development of an ERM framework will provide one of the key technical developments necessary to ensure the economic viability of aSMRs.
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
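Two of the four techniques the processor evaluates, simple random sampling and proportional allocation with a relative-count estimate, can be sketched as below. The 0/1 pixel labels, cluster layout, and sample sizes are invented for illustration; they are not the processor's actual data structures.

```python
import random

def random_sampling_estimate(pixels, n, seed=0):
    """Technique (1), random sampling: estimate the planted proportion
    from a simple random sample of 0/1 pixel labels (1 = target crop)."""
    rng = random.Random(seed)
    sample = rng.sample(pixels, n)
    return sum(sample) / n

def proportional_allocation_estimate(clusters, n_total, seed=0):
    """Technique (2), proportional allocation, relative-count estimate:
    sample each cluster in proportion to its size and weight the
    per-cluster proportions by cluster size."""
    rng = random.Random(seed)
    total = sum(len(c) for c in clusters)
    estimate = 0.0
    for c in clusters:
        n_c = max(1, round(n_total * len(c) / total))
        sample = rng.sample(c, min(n_c, len(c)))
        estimate += (len(c) / total) * (sum(sample) / len(sample))
    return estimate

pixels = [1] * 300 + [0] * 700        # true proportion: 0.30
clusters = [[1] * 50 + [0] * 50, [0] * 100]   # true proportion: 0.25
```

The Bayesian variants the abstract lists would replace the per-cluster relative counts with posterior means, but the weighting structure is the same.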
Noise levels from a model turbofan engine with simulated noise control measures applied
NASA Technical Reports Server (NTRS)
Hall, David G.; Woodward, Richard P.
1993-01-01
A study of estimated full-scale noise levels based on measured levels from the Advanced Ducted Propeller (ADP) sub-scale model is presented. Testing of this model was performed in the NASA Lewis Low Speed Anechoic Wind Tunnel at a simulated takeoff condition of Mach 0.2. Effective Perceived Noise Level (EPNL) estimates for the baseline configuration are documented, and also used as the control case in a study of the potential benefits of two categories of noise control. The effect of active noise control is evaluated by artificially removing various rotor-stator interaction tones. Passive noise control is simulated by applying a notch filter to the wind tunnel data. Cases with both techniques are included to evaluate hybrid active-passive noise control. The results for EPNL values are approximate because the original source data was limited in bandwidth and in sideline angular coverage. The main emphasis is on comparisons between the baseline and configurations with simulated noise control measures.
Full-envelope aerodynamic modeling of the Harrier aircraft
NASA Technical Reports Server (NTRS)
Mcnally, B. David
1986-01-01
A project to identify a full-envelope model of the YAV-8B Harrier using flight-test and parameter identification techniques is described. As part of the research in advanced control and display concepts for V/STOL aircraft, a full-envelope aerodynamic model of the Harrier is identified, using mathematical model structures and parameter identification methods. A global-polynomial model structure is also used as a basis for the identification of the YAV-8B aerodynamic model. State estimation methods are used to ensure flight data consistency prior to parameter identification. Equation-error methods are used to identify model parameters. A fixed-base simulator is used extensively to develop flight test procedures and to validate parameter identification software. Using simple flight maneuvers, a simulated data set was created covering the YAV-8B flight envelope from about 0.3 to 0.7 Mach and about -5 to 15 deg angle of attack. A singular value decomposition implementation of the equation-error approach produced good parameter estimates based on this simulated data set.
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
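The matched filtering the abstract relies on can be illustrated with a toy white-noise version: correlate the data against a known template and read off the offset of the correlation peak. Real gravitational-wave pipelines first whiten the data by the detector noise spectrum and search a bank of templates over masses and spins; the windowed sinusoid, noise level, and injection below are invented for illustration.

```python
import numpy as np

def matched_filter_offset(data, template):
    """Locate a known waveform in noisy data via the peak of the
    (white-noise) matched-filter output."""
    corr = np.correlate(data, template, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
template = np.sin(2 * np.pi * 5 * t) * np.hanning(200)  # stand-in "waveform"
data = rng.normal(0.0, 0.3, 1000)
data[400:600] += 2.0 * template                         # inject at offset 400
```

Systematic bias of the kind the paper measures would appear here as a template that differs slightly from the injected waveform, shifting the recovered parameters even at high signal-to-noise ratio.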
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for Space Exploration Initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near-term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1996-01-01
In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. Integration of the resulting computer program, PDCYL, has been made into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.
Cammack, J A; Reiskind, M H; Guisewite, L M; Denning, S S; Watson, D W
2017-11-01
In forensic cases involving entomological evidence, establishing the postcolonization interval (post-CI) is a critical component of the investigation. Traditional methods of estimating the post-CI rely on estimating the age of immature blow flies (Diptera: Calliphoridae) collected from remains. However, in cases of delayed discovery (e.g., when remains are located indoors), these insects may have completed their development and be present in the environment as adults. Adult fly collections are often ignored in cases of advanced decomposition because of their presumed lack of relevance to the investigation; herein we present information on how these insects can be of value. In this study we applied an age-grading technique to estimate the age of adults of Chrysomya megacephala (Fabricius), Cochliomyia macellaria (Fabricius), and Phormia regina (Meigen), based on the temperature-dependent accumulation of pteridines in the compound eyes, when reared at temperatures ranging from 5 to 35°C. Age could be estimated for all species × sex × rearing temperature combinations (mean r² ± SE: 0.90 ± 0.01) except P. regina reared at 5.4°C. These models can be used to increase the precision of post-CI estimates for remains found indoors, and the high r² values of 22 of the 24 regression equations indicate that this is a valid method for estimating the age of adult blow flies at temperatures ≥15°C. Copyright © 2017 Elsevier B.V. All rights reserved.
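The age-grading step is a linear calibration: regress pteridine level on known age, then invert the fitted line to age-grade a fly of unknown age. The slope, intercept, and noise level below are synthetic stand-ins for the study's temperature-dependent measurements, not its reported coefficients.

```python
import numpy as np

# Synthetic calibration data: pteridine level vs. known age in days.
rng = np.random.default_rng(0)
age_days = np.arange(1.0, 21.0)
pteridine = 2.0 + 0.8 * age_days + rng.normal(0.0, 0.3, age_days.size)

# Calibration: ordinary least squares of pteridine level on age.
slope, intercept = np.polyfit(age_days, pteridine, 1)
fitted = intercept + slope * age_days
r2 = 1.0 - np.sum((pteridine - fitted) ** 2) / np.sum(
    (pteridine - pteridine.mean()) ** 2)

def estimate_age(level):
    """Age-grade an adult fly of unknown age by inverting the line."""
    return (level - intercept) / slope
```

In practice one calibration is fitted per species, sex, and rearing temperature, which is why the study reports 24 separate regression equations.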
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
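For contrast with the empirical (resampling-based) techniques the study simulates, the classical formula-based approach to shrinkage is a one-line correction of the sample R². Wherry's formula is shown below; the inputs are illustrative.

```python
def wherry_adjusted_r2(r2, n, p):
    """Wherry's shrinkage formula: estimate the population squared
    multiple correlation from the sample R-squared, n cases, and
    p predictors. A classical formula-based alternative to the
    empirical jackknife/bootstrap techniques compared in the study."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

With many predictors relative to cases the correction is substantial: a sample R² of 0.5 with 101 cases and 10 predictors shrinks to about 0.44.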
How Unusual were Hurricane Harvey's Rains?
NASA Astrophysics Data System (ADS)
Emanuel, K.
2017-12-01
We apply an advanced technique for hurricane risk assessment to evaluate the probability of hurricane rainfall of Harvey's magnitude. The technique embeds a detailed computational hurricane model in the large-scale conditions represented by climate reanalyses and by climate models. We simulate 3700 hurricane events affecting the state of Texas, from each of three climate reanalyses spanning the period 1980-2016, and 2000 events from each of six climate models for each of two periods: the period 1981-2000 from historical simulations, and the period 2081-2100 from future simulations under Representative Concentration Pathway (RCP) 8.5. On the basis of these simulations, we estimate that hurricane rain of Harvey's magnitude in the state of Texas would have had an annual probability of 0.01 in the late twentieth century, and will have an annual probability of 0.18 by the end of this century, with remarkably small scatter among the six climate models downscaled. If the event frequency is changing linearly over time, this would yield an annual probability of 0.06 in 2017.
NASA Astrophysics Data System (ADS)
Mohymont, B.; Demarée, G. R.; Faka, D. N.
2004-05-01
The establishment of Intensity-Duration-Frequency (IDF) curves for precipitation remains a powerful tool in the risk analysis of natural hazards. Indeed the IDF-curves allow for the estimation of the return period of an observed rainfall event or conversely of the rainfall amount corresponding to a given return period for different aggregation times. There is a high need for IDF-curves in the tropical region of Central Africa but unfortunately the adequate long-term data sets are frequently not available. The present paper assesses IDF-curves for precipitation for three stations in Central Africa. More physically based models for the IDF-curves are proposed. The methodology used here has been advanced by Koutsoyiannis et al. (1998) and an inter-station and inter-technique comparison is being carried out. The IDF-curves for tropical Central Africa are an interesting tool to be used in sewer system design to combat the frequently occurring inundations in semi-urbanized and urbanized areas of the Kinshasa megapolis.
Shingrani, Rahul; Krenz, Gary; Molthen, Robert
2010-01-01
With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
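The trade-off Tikhonov regularization strikes can be sketched in a few lines: fit the observations while penalizing departure from preferred parameter values. This is a fixed-weight illustration only; PEST itself adjusts the regularization weight internally to meet a target measurement objective function, and the one-observation Jacobian below is invented.

```python
import numpy as np

def tikhonov_solve(J, d, alpha, p_ref):
    """Minimize ||J p - d||^2 + alpha^2 ||p - p_ref||^2: fit the data
    while honoring preferred values p_ref at the pilot points."""
    n = J.shape[1]
    A = J.T @ J + alpha ** 2 * np.eye(n)
    return np.linalg.solve(A, J.T @ d + alpha ** 2 * p_ref)

# One observation, two pilot-point parameters: ill-posed without
# regularization, uniquely solvable with it.
J = np.array([[1.0, 1.0]])
d = np.array([2.0])
p = tikhonov_solve(J, d, alpha=0.1, p_ref=np.zeros(2))
```

With the preferred-value constraint, the underdetermined problem has a unique answer that splits the fit symmetrically between the two parameters instead of letting either wander freely.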
Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.
2015-01-01
Molecular imaging is a very promising technique used for surgical guidance, which requires advancements in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for the depth recovery of the material embedded below the biological tissue. The results confirm significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of emitted light from the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare earth doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas the magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. The confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
A Structured Light Sensor System for Tree Inventory
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong; Zemek, Michael C.
2000-01-01
Tree inventory refers to the measurement and estimation of marketable wood volume in a piece of land or forest for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually with simple optical or mechanical devices, and hence are time-consuming, subjective, and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective, and reliable. This paper describes 3D measurements of tree diameters using a uniquely designed ensemble of two line laser emitters rigidly mounted on a video camera. The proposed laser camera system relies on a fixed distance between two parallel laser planes and projections of laser lines to calculate tree diameters. Performance of the laser camera system is further enhanced by fusion of information induced from structured lighting and that contained in video images. Comparison will be made between the laser camera sensor system and a stereo vision system previously developed for measurements of tree diameters.
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program written in a modular fashion that duplicates the structure of these equations.
Unmixing AVHRR Imagery to Assess Clearcuts and Forest Regrowth in Oregon
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Spanner, Michael A.
1995-01-01
Advanced Very High Resolution Radiometer imagery provides frequent and low-cost coverage of the earth, but its coarse spatial resolution (approx. 1.1 km by 1.1 km) does not lend itself to standard techniques of automated categorization of land cover classes because the pixels are generally mixed; that is, the extent of the pixel includes several land use/cover classes. Unmixing procedures were developed to extract land use/cover class signatures from mixed pixels, using Landsat Thematic Mapper data as a source for the training set, and to estimate fractions of class coverage within pixels. Application of these unmixing procedures to mapping forest clearcuts and regrowth in Oregon indicated that unmixing is a promising approach for mapping major trends in land cover with AVHRR bands 1 and 2. Including thermal bands by unmixing AVHRR bands 1-4 did not lead to significant improvements in accuracy, but experiments with unmixing these four bands did indicate that use of weighted least squares techniques might lead to improvements in other applications of unmixing.
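The unmixing step amounts to solving a small least squares problem per pixel: express the mixed pixel's band values as a weighted sum of class signatures and recover the weights as cover fractions. The two-band signatures below are invented for illustration, not TM-derived endmembers; the clip-and-renormalize step is one simple way to enforce physical fractions.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Estimate per-class cover fractions in a mixed pixel by least
    squares against class signatures, then clip negatives and
    renormalize so fractions are non-negative and sum to one."""
    fractions, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    return fractions / fractions.sum()

# Rows: class reflectance signatures in two bands (illustrative values).
E = np.array([[0.05, 0.30],   # "forest"
              [0.20, 0.25]])  # "clearcut"
mixed = 0.7 * E[0] + 0.3 * E[1]   # pixel that is 70% forest, 30% clearcut
```

The weighted least squares variant the abstract mentions would scale each band's residual by an inverse noise weight before solving, which matters more when bands (such as the thermal ones) have very different noise levels.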
Zhou, Xiao-Rong; Huang, Shui-Sheng; Gong, Xin-Guo; Cen, Li-Ping; Zhang, Cong; Zhu, Hong; Yang, Jun-Jing; Chen, Li
2012-04-01
To construct a performance evaluation and management system for advanced schistosomiasis medical treatment, and to analyze and evaluate the treatment work over the years. Using database management and C++ programming techniques, we entered the information on advanced schistosomiasis cases into the system and comprehensively evaluated the medical treatment work through cost-effect analysis, cost-effectiveness analysis, and cost-benefit analysis. We produced a set of software routines for cost-effect, cost-effectiveness, and cost-benefit analysis. The system has a clear structure, is easy to operate, offers a user-friendly interface, and supports convenient information input and search. It can support the performance evaluation of the province's advanced schistosomiasis medical treatment work. The system satisfies the current needs of advanced schistosomiasis medical treatment work and can readily be widely adopted.
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high-wing, general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
The module provides an overview of general techniques that owners and operators of reporting facilities may use to estimate their toxic chemical releases. It explains the basic release estimation techniques used to determine the chemical quantities reported on the Form R and uses those techniques, along with fundamental chemical or physical principles and properties, to estimate releases of listed toxic chemicals. It converts units of mass, volume, and time. It states the rules governing significant figures and rounding techniques, and references general and industry-specific estimation documents.
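The mass/time unit conversions the module covers are of the following kind; the rate and operating-day count below are illustrative, not values from the module.

```python
LB_TO_KG = 0.45359237  # exact definition of the avoirdupois pound

def annual_release_kg(rate_lb_per_day, operating_days=365):
    """Convert a release rate in lb/day to an annual total in kg/yr,
    the kind of mass/volume/time conversion used on the Form R."""
    return rate_lb_per_day * LB_TO_KG * operating_days
```

A 10 lb/day release over a 365-day year comes to about 1656 kg/yr; the rules on significant figures would then govern how that total is rounded for reporting.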
Kranz, Christine
2014-01-21
In recent years, major developments in scanning electrochemical microscopy (SECM) have significantly broadened the application range of this electroanalytical technique from high-resolution electrochemical imaging via nanoscale probes to large scale mapping using arrays of microelectrodes. A major driving force in advancing the SECM methodology is based on developing more sophisticated probes beyond conventional micro-disc electrodes usually based on noble metals or carbon microwires. This critical review focuses on the design and development of advanced electrochemical probes particularly enabling combinations of SECM with other analytical measurement techniques to provide information beyond exclusively measuring electrochemical sample properties. Consequently, this critical review will focus on recent progress and new developments towards multifunctional imaging.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
Asif, Muhammad Khan; Nambiar, Phrabhakaran; Mani, Shani Ann; Ibrahim, Norliza Binti; Khan, Iqra Muhammad; Sukumaran, Prema
2018-02-01
The methods of dental age estimation and identification of unknown deceased individuals are evolving with the introduction of advanced innovative imaging technologies in forensic investigations. However, assessing small structures like root canal volumes can be challenging in spite of using highly advanced technology. The aim of the study was to investigate which amongst the two methods of volumetric analysis of maxillary central incisors displayed higher strength of correlation between chronological age and pulp/tooth volume ratio for Malaysian adults. Volumetric analysis of pulp cavity/tooth ratio was employed in Method 1 and pulp chamber/crown ratio (up to cemento-enamel junction) was analysed in Method 2. The images were acquired employing CBCT scans and enhanced by manipulating them with the Mimics software. These scans belonged to 56 males and 54 females and their ages ranged from 16 to 65 years. Pearson correlation and regression analysis indicated that both methods used for volumetric measurements had strong correlation between chronological age and pulp/tooth volume ratio. However, Method 2 gave higher coefficient of determination value (R² = 0.78) when compared to Method 1 (R² = 0.64). Moreover, manipulation in Method 2 was less time consuming and revealed higher inter-examiner reliability (0.982) as no manual intervention during 'multiple slice editing phase' of the software was required. In conclusion, this study showed that volumetric analysis of pulp cavity/tooth ratio is a valuable gender-independent technique and the Method 2 regression equation should be recommended for dental age estimation. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Household-level disparities in cancer risks from vehicular air pollution in Miami
NASA Astrophysics Data System (ADS)
Collins, Timothy W.; Grineski, Sara E.; Chakraborty, Jayajit
2015-09-01
Environmental justice (EJ) research has relied on ecological analyses of socio-demographic data from areal units to determine if particular populations are disproportionately burdened by toxic risks. This article advances quantitative EJ research by (a) examining whether statistical associations found for geographic units translate to relationships at the household level; (b) testing alternative explanations for distributional injustices never before investigated; and (c) applying a novel statistical technique appropriate for geographically-clustered data. Our study makes these advances by using generalized estimating equations to examine distributive environmental inequities in the Miami (Florida) metropolitan area, based on primary household-level survey data and census block-level cancer risk estimates of hazardous air pollutant (HAP) exposure from on-road mobile emission sources. In addition to modeling determinants of on-road HAP cancer risk among all survey participants, two subgroup models are estimated to examine whether determinants of risk differ based on disadvantaged minority (Hispanic and non-Hispanic Black) versus non-Hispanic white racial/ethnic status. Results reveal multiple determinants of risk exposure disparities. In the model including all survey participants, renter-occupancy, Hispanic and non-Hispanic black race/ethnicity, the desire to live close to work/urban services or public transportation, and higher risk perception are associated with greater on-road HAP cancer risk; the desire to live in an amenity-rich environment is associated with less risk. Divergent subgroup model results shed light on the previously unexamined role of racial/ethnic status in shaping determinants of risk exposures. 
While lower socioeconomic status and higher risk perception predict significantly greater on-road HAP cancer risk among disadvantaged minorities, the desire to live near work/urban services or public transport predict significantly greater risk among non-Hispanic whites. Findings have important implications for EJ research and practice in Miami and elsewhere.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…
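All three procedures target the same model. A minimal sketch of the three-parameter logistic (3PL) item response function and the response-pattern log-likelihood they work with follows; the item parameters are invented for illustration.

```python
import math

def p_correct(theta, a, b, c):
    """3PL: probability of a correct response for ability theta, with
    discrimination a, difficulty b, and lower asymptote (pseudo-guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a 0/1 response pattern -- the quantity that MML,
    MCMC, and Metropolis-Hastings Robbins-Monro all build on differently."""
    ll = 0.0
    for (a, b, c), x in zip(items, responses):
        p = p_correct(theta, a, b, c)
        ll += math.log(p) if x else math.log(1.0 - p)
    return ll

# Hypothetical item parameters (a, b, c).
items = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 1.0, 0.2)]
mid = p_correct(0.0, 1.0, 0.0, 0.2)  # at theta == b: c + (1 - c)/2
```

At theta equal to the item difficulty, the curve passes through c + (1 - c)/2, and it never drops below the guessing floor c.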
1984-01-01
AD-A141 969: Conference Proceedings on Guidance and Control Techniques for Advanced Sp… (U). Advisory Group for Aerospace Research and Development. …findings of these various planning groups relative to the need for advanced controls technology, and the perceived status of the technology to meet… control of large flexible spacecraft. The program has also involved experimental activities to guide and validate the theoretical work.
Innovative experimental particle physics through technological advances: Past, present and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Harry W.K.; /Fermilab
This mini-course gives an introduction to the techniques used in experimental particle physics, with an emphasis on the impact of technological advances. The basic detector types and particle accelerator facilities will be briefly covered, with examples of their use and with comparisons. The mini-course ends with what can be expected in the near future from current technological advances. It is intended for graduate students and post-docs, and as an introduction to experimental techniques for theorists.
NASA Astrophysics Data System (ADS)
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are both easy to interpret and easy to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters, so that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method, but it is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
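The "linear in the parameters" observation can be made concrete with a small sketch. A PHM output is a sum of convolutions of powers of the input, y(n) = Σ_p Σ_m h_p(m) u(n-m)^p, so stacking delayed input powers as regressor columns turns kernel estimation into a (here plainly Tikhonov-regularized) least-squares solve. The article's regularization is more structured; this is only the basic idea, on an invented noiseless system.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def estimate_phm(u, y, order=2, memory=3, lam=1e-6):
    """Ridge-regularized LS estimate of PHM kernels h_p(m)."""
    cols = [(p, m) for p in range(1, order + 1) for m in range(memory)]
    rows = range(memory - 1, len(u))
    X = [[u[n - m] ** p for p, m in cols] for n in rows]
    t = [y[n] for n in rows]
    k = len(cols)
    # Normal equations with Tikhonov term: (X'X + lam*I) h = X'y
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) + (lam if i == j else 0.0)
            for j in range(k)] for i in range(k)]
    Xty = [sum(X[r][i] * t[r] for r in range(len(X))) for i in range(k)]
    return dict(zip(cols, solve(XtX, Xty)))

# Simulate a known PHM (order 2, memory 3) and recover its kernels.
true_h = {(1, 0): 1.0, (1, 1): 0.5, (1, 2): 0.0,
          (2, 0): 0.3, (2, 1): 0.0, (2, 2): -0.2}
u = [math.sin(0.7 * n) + 0.3 * math.cos(2.1 * n) for n in range(200)]
y = [sum(true_h[(p, m)] * u[n - m] ** p for p in (1, 2) for m in (0, 1, 2))
     if n >= 2 else 0.0 for n in range(200)]
h = estimate_phm(u, y)
```

On noiseless data the recovered kernels match the true ones almost exactly; the regularization term matters once noise enters.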
Fabrication of advanced electrochemical energy materials using sol-gel processing techniques
NASA Technical Reports Server (NTRS)
Chu, C. T.; Chu, Jay; Zheng, Haixing
1995-01-01
Advanced materials play an important role in electrochemical energy devices such as batteries, fuel cells, and electrochemical capacitors, where they are used as both electrodes and electrolytes. Sol-gel processing is a versatile solution technique used in the fabrication of ceramic materials with tailored stoichiometry, microstructure, and properties. The application of sol-gel processing in the fabrication of advanced electrochemical energy materials will be presented. The potential of sol-gel derived materials for electrochemical energy applications will be discussed, along with some examples of successful applications. Sol-gel derived metal oxide electrode materials such as V2O5 cathodes have been demonstrated in solid-state thin film batteries; solid electrolyte materials such as beta-alumina for advanced secondary batteries were prepared by the sol-gel technique long ago; and high surface area transition metal compounds for capacitive energy storage applications can also be synthesized with this method.
Technological advances in perioperative monitoring: Current concepts and clinical perspectives
Chilkoti, Geetanjali; Wadhwa, Rachna; Saxena, Ashok Kumar
2015-01-01
Minimal mandatory monitoring in the perioperative period, as recommended by the Association of Anaesthetists of Great Britain and Ireland and the American Society of Anesthesiologists, is universally acknowledged and has become an integral part of anesthesia practice. The technologies in perioperative monitoring have advanced, and their availability and clinical applications have multiplied exponentially. Newer monitoring techniques include depth-of-anesthesia monitoring, goal-directed fluid therapy, transesophageal echocardiography, advanced neurological monitoring, improved alarm systems, and technological advancement in objective pain assessment. Factors that need to be considered with the use of improved monitoring techniques are their validation data, effect on patient outcome, safety profile, cost-effectiveness, awareness of possible adverse events, knowledge of the underlying technical principles, and ease of routine handling. In this review, we discuss the new monitoring techniques in anesthesia, their advantages, deficiencies, and limitations, their comparison to conventional methods, and their effect on patient outcome, if any. PMID:25788767
Statistical Tests of Reliability of NDE
NASA Technical Reports Server (NTRS)
Baaklini, George Y.; Klima, Stanley J.; Roth, Don J.; Kiser, James D.
1987-01-01
Capabilities of advanced material-testing techniques analyzed. Collection of four reports illustrates statistical method for characterizing flaw-detecting capabilities of sophisticated nondestructive evaluation (NDE). Method used to determine reliability of several state-of-the-art NDE techniques for detecting failure-causing flaws in advanced ceramic materials considered for use in automobiles, airplanes, and space vehicles.
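A hedged sketch of the kind of statistic such reliability studies rest on: a probability of detection (POD) estimated from repeated flaw-detection trials, with a simple normal-approximation lower confidence bound. The counts below are invented for illustration, not taken from the reports.

```python
import math

def pod_with_lower_bound(detected, trials, z=1.645):
    """Point estimate of probability of detection plus a one-sided
    normal-approximation lower bound (z=1.645 gives ~95% confidence)."""
    p = detected / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - half)

# Hypothetical trial: 58 of 60 seeded flaws detected by an NDE technique.
pod, lower = pod_with_lower_bound(detected=58, trials=60)
```

Reliability claims are usually stated via the lower bound rather than the point estimate, since a technique that detected 58 of 60 flaws may still miss more in the field.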
ERIC Educational Resources Information Center
Maurye, Praveen; Basu, Arpita; Biswas, Jayanta Kumar; Bandyopadhyay, Tapas Kumar; Naskar, Malay
2018-01-01
Polyacrylamide gel electrophoresis (PAGE) is the classical technique favored worldwide for the resolution of macromolecules in many biochemistry laboratories, owing to its continual advanced developments and wide range of modifications. These ever-growing advancements in basic laboratory equipment have led to the emergence of many expensive, complex, and tricky…
Distributed and Problem-based Learning Techniques for the Family Communication Course.
ERIC Educational Resources Information Center
LeBlanc, H. Paul, III
Current technological advances have made possible teaching techniques that were previously impossible. Distance and distributed learning technologies have made it possible to instruct outside of the classroom setting. One advantage of this development is the ability to reach students who are unable to relocate to the university. However, there…
We have developed a research program in metabolism that involves numerous collaborators across EPA as well as other federal and academic labs. A primary goal is to develop and apply advanced in vitro techniques to measure, understand and predict the kinetics and mechanisms of xen...
Advanced Marketing Core Curriculum. Test Items and Assessment Techniques.
ERIC Educational Resources Information Center
Smith, Clifton L.; And Others
This document contains duties and tasks, multiple-choice test items, and other assessment techniques for Missouri's advanced marketing core curriculum. The core curriculum begins with a list of 13 suggested textbook resources. Next, nine duties with their associated tasks are given. Under each task appears one or more citations to appropriate…
NASA Technical Reports Server (NTRS)
Atkinson, W. H.; Cyr, M. A.; Strange, R. R.
1988-01-01
The report presents the final results of Tasks 1 and 2, Development of Sensors for Ceramic Components in Advanced Propulsion Systems (NASA program NAS3-25141). During Task 1, an extensive survey was conducted of sensor concepts which have the potential for measuring surface temperature, strain and heat flux on ceramic components for advanced propulsion systems. Each sensor concept was analyzed and evaluated under Task 2; sensor concepts were then recommended for further development. For temperature measurement, both pyrometry and thermographic phosphors are recommended for measurements up to and beyond the melting point of ceramic materials. For lower temperature test programs, the thin-film techniques offer advantages in the installation of temperature sensors. Optical strain measurement techniques are recommended because they offer the possibility of being useful at very high temperature levels. Techniques for the measurement of heat flux are recommended for development based on both a surface mounted sensor and the measurement of the temperature differential across a portion of a ceramic component or metallic substrate.
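The heat-flux technique based on a temperature differential reduces to one-dimensional conduction: measuring the temperature drop across a layer of known conductivity k and thickness d gives q = k (T_hot - T_cold) / d by Fourier's law. A minimal sketch follows; the property values are made-up placeholders, not figures from the report.

```python
def heat_flux(t_hot, t_cold, k, thickness):
    """Steady-state 1-D conductive heat flux in W/m^2 (Fourier's law).
    t_hot, t_cold in K (or degC, only the difference matters);
    k in W/(m*K); thickness in m."""
    return k * (t_hot - t_cold) / thickness

# Hypothetical 50 K drop across a 5 mm ceramic layer with k = 2.5 W/(m*K).
q = heat_flux(t_hot=1400.0, t_cold=1350.0, k=2.5, thickness=0.005)  # W/m^2
```

The accuracy of the method hinges on knowing k at the operating temperature, which is one reason the report pairs the sensor concepts with an evaluation task.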
Cancer drug discovery: recent innovative approaches to tumor modeling.
Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M
2016-09-01
Cell culture models have been at the heart of anti-cancer drug discovery programs for over half a century. Advancements in cell culture techniques have seen the rapid evolution of more complex in vitro cell culture models investigated for use in drug discovery. Three-dimensional (3D) cell culture research has become a strong focal point, as this technique permits the recapitulation of the tumor microenvironment. Biologically relevant 3D cellular models have demonstrated significant promise in advancing cancer drug discovery, and will continue to play an increasing role in the future. In this review, recent advances in 3D cell culture techniques and their application in tumor modeling and anti-cancer drug discovery programs are discussed. The topics include selection of cancer cells, 3D cell culture assays (associated endpoint measurements and analysis), 3D microfluidic systems and 3D bio-printing. Although advanced cancer cell culture models and techniques are becoming commonplace in many research groups, the use of these approaches has yet to be fully embraced in anti-cancer drug applications. Furthermore, limitations associated with analyzing information-rich biological data remain unaddressed.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton; Hultgren, Lennart S.
2015-01-01
The study of noise from a two-shaft contra-rotating open rotor (CROR) is challenging since the shafts are not phase locked in most cases. Consequently, phase averaging of the acoustic data keyed to a single shaft rotation speed is not meaningful. An unaligned spectrum procedure that was developed to estimate a signal coherence threshold and reveal concealed spectral lines in turbofan engine combustion noise is applied to fan and CROR acoustic data in this paper (also available as NASA/TM-2015-218865). The NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject supported the current work. The fan and open rotor data were obtained under previous efforts supported by the NASA Quiet Aircraft Technology (QAT) Project and the NASA Environmentally Responsible Aviation (ERA) Project of the Integrated Systems Research Program in collaboration with GE Aviation, respectively. The overarching goal of the Advanced Air Transport (AATT) Project is to explore and develop technologies and concepts to revolutionize the energy efficiency and environmental compatibility of fixed-wing transport aircraft. These technological solutions are critical in reducing the impact of aviation on the environment even as this industry and the corresponding global transportation system continue to grow.
Torres-Climent, A; Gomis, P; Martín-Mata, J; Bustamante, M A; Marhuenda-Egea, F C; Pérez-Murcia, M D; Pérez-Espinosa, A; Paredes, C; Moral, R
2015-01-01
The objective of this work was to study the co-composting of wastes from the winery and distillery industry with animal manures, using the classical chemical methods traditionally employed in composting studies together with advanced instrumental methods (thermal analysis, FT-IR and CPMAS 13C NMR techniques), to evaluate the development of the process and the quality of the end-products obtained. For this, three piles were prepared by the turning composting system, using as raw materials winery-distillery wastes (grape marc and exhausted grape marc) and animal manures (cattle manure and poultry manure). The classical analytical methods showed a suitable development of the process in all the piles, but these techniques were ineffective for studying the humification process during the composting of this type of material. However, their combination with the advanced instrumental techniques clearly provided more information regarding the turnover of the organic matter pools during the composting process. Thermal analysis allowed estimation of the degradability of the remaining material and a qualitative assessment of the rate of OM stabilization and of recalcitrant C in the compost samples, based on the energy required to achieve the same mass losses. FT-IR spectra mainly showed variations between piles and sampling times in the bands associated with complex organic compounds (mainly at 1420 and 1540 cm-1) and with nitrate and inorganic components (at 875 and 1384 cm-1, respectively), indicating the stability and maturity of the composted material. CPMAS 13C NMR provided a semi-quantitative partition of C compounds and structures during the process, whose variation was especially useful for evaluating the biotransformation of each C pool, in particular the comparison of recalcitrant vs labile C pools, such as the Alkyl/O-Alkyl ratio.
Kailasa, Suresh Kumar; Wu, Hui-Fen
2013-07-01
Recently, mass spectrometry-related techniques have been widely applied for the identification and quantification of neurochemicals and their metabolites in biofluids. This article presents an overview of mass spectrometric techniques applied to the detection of neurological substances and their metabolites in biological samples. In addition, advances in chromatographic methods (LC, GC and CE) coupled with mass spectrometric techniques for the analysis of neurochemicals in pharmaceutical and biological samples are also discussed.
Schaefer, David R; Adams, Jimi; Haas, Steven A
2013-10-01
Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the coevolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking.
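The manipulation the study performs, varying the strength of peer influence and re-simulating, can be caricatured with a deterministic threshold-contagion sketch. This is not the paper's stochastic actor-based model (no tie dynamics, no cessation); it only shows how a peer-influence parameter changes final smoking prevalence on a fixed friendship ring.

```python
def simulate(n_agents, seeds, threshold, steps=20):
    """Threshold contagion on a ring: an agent starts smoking once the
    fraction of smoking friends reaches `threshold` (no cessation here)."""
    smokes = [i in seeds for i in range(n_agents)]
    for _ in range(steps):
        nxt = smokes[:]
        for i in range(n_agents):
            friends = [smokes[(i - 1) % n_agents], smokes[(i + 1) % n_agents]]
            if sum(friends) / len(friends) >= threshold:
                nxt[i] = True  # adopt under sufficient peer pressure
        smokes = nxt
    return sum(smokes) / n_agents

# Strong peer influence: one smoking friend suffices -> full cascade.
strong = simulate(10, {0, 1}, threshold=0.5)
# Weak peer influence: both friends must smoke -> contagion stalls.
weak = simulate(10, {0, 1}, threshold=0.75)
```

Even this toy version shows the nonlinearity the authors report: a modest change in the influence parameter flips the outcome from a contained pocket of smokers to saturation.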
A hybrid approach to estimate the complex motions of clouds in sky images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Zhenzhou; Yu, Dantong; Huang, Dong
Tracking the motion of clouds is essential to forecasting the weather and to predicting short-term solar energy generation. Existing techniques mainly fall into two categories: variational optical flow and block matching. In this article, we summarize recent advances in estimating cloud motion using ground-based sky imagers and quantitatively evaluate state-of-the-art approaches. We then propose a hybrid tracking framework that incorporates the strengths of both block matching and optical flow models. To validate the accuracy of the proposed approach, we introduce a series of synthetic images to simulate cloud movement and deformation, and thereafter comprehensively compare our hybrid approach with several representative tracking algorithms over both simulated and real images collected from various sites/imagers. The results show that our hybrid approach outperforms state-of-the-art models, reducing motion estimation errors by at least 30% relative to the ground-truth motions in most of the simulated image sequences. Furthermore, our hybrid model demonstrates its superior efficiency on several real cloud image datasets, lowering the Mean Absolute Error (MAE) between predicted and ground-truth images by at least 15%.
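One half of such a hybrid framework, exhaustive block matching, is simple enough to sketch: pick a reference block in the previous frame and search a window in the current frame for the integer displacement minimizing the mean absolute error. The synthetic "cloud" below is invented, and the optical-flow half is omitted.

```python
def mae(a, b):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def block(img, r, c, size):
    """Flatten a size x size block with top-left corner (r, c)."""
    return [img[r + i][c + j] for i in range(size) for j in range(size)]

def match(prev_frame, curr_frame, r, c, size=4, search=3):
    """Exhaustive block matching: return the (dr, dc) minimizing MAE."""
    ref = block(prev_frame, r, c, size)
    best = None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            err = mae(ref, block(curr_frame, r + dr, c + dc, size))
            if best is None or err < best[0]:
                best = (err, dr, dc)
    return best[1], best[2]

# Synthetic frames: a 4x4 bright blob shifted by (1, 2) pixels.
H = W = 16
def frame(dy, dx):
    return [[1.0 if 5 + dy <= r < 9 + dy and 5 + dx <= c < 9 + dx else 0.0
             for c in range(W)] for r in range(H)]

prev_frame, curr_frame = frame(0, 0), frame(1, 2)
motion = match(prev_frame, curr_frame, 5, 5)
```

Block matching like this is robust for large coherent displacements but only integer-valued and blind to deformation, which is the gap the optical-flow component of a hybrid model fills.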
Smartphone assessment of knee flexion compared to radiographic standards.
Dietz, Matthew J; Sprando, Daniel; Hanselman, Andrew E; Regier, Michael D; Frye, Benjamin M
2017-03-01
Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC=0.94; 95% CI: 0.91-0.96). Visual estimation was found to be the least reliable method of measurement. The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity.
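The principle such accelerometer apps rely on can be sketched briefly: with the leg static, each sensor reads only gravity, so a segment's tilt follows from the arctangent of the gravity components, and knee flexion is the angle between the femoral and tibial segment tilts. The axis conventions and readings below are illustrative assumptions, not the study's protocol.

```python
import math

def segment_tilt_deg(ax, az):
    """Tilt of a static body segment from vertical, from the gravity
    components read on the sensor's longitudinal (az) and sagittal (ax) axes."""
    return math.degrees(math.atan2(ax, az))

def knee_flexion_deg(femur_acc, tibia_acc):
    """Knee flexion as the angle between the two segment tilts."""
    return abs(segment_tilt_deg(*femur_acc) - segment_tilt_deg(*tibia_acc))

# Hypothetical readings in m/s^2: thigh horizontal (gravity on its x axis),
# shank vertical (gravity on its z axis) -> roughly 90 degrees of flexion.
flexion = knee_flexion_deg((9.81, 0.0), (0.0, 9.81))
```

This also hints at the method's limits the study controls for: any dynamic acceleration during the reading contaminates the gravity estimate, so measurements must be taken with the limb held still.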