Support vector machines for nuclear reactor state estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavaljevski, N.; Gross, K. C.
2000-02-14
Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed from empirical data, so an estimation algorithm that can make predictions from limited data is an important issue. A new machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods such as neural networks is due mainly to the following characteristics of the method: the input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors apply the SVM method to data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising: the combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
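The kernel-based estimation idea common to MSET and its SVM variant can be illustrated with a minimal numpy sketch. The inverse-distance similarity operator below is a generic stand-in for the Argonne kernels, which the abstract does not specify; the memory-matrix recipe follows the commonly published MSET form, with a small ridge term added for numerical stability.

```python
import numpy as np

def similarity(A, B, h=0.5):
    """Inverse-distance similarity operator: a generic stand-in for the
    (unspecified) MSET kernels.  A: (m, d), B: (n, d) -> (m, n)."""
    dist = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return 1.0 / (1.0 + dist / h)

def mset_estimate(D, x, h=0.5):
    """Estimate the full state vector x from a memory matrix D of
    memorized training states (one state per row):
        x_hat = w @ D,   w = (D (*) D^T)^{-1} (D (*) x),
    where (*) is the nonlinear similarity operator."""
    G = similarity(D, D, h)                 # Gram matrix of memorized states
    k = similarity(D, x[None, :], h)[:, 0]  # similarity of x to each state
    # small ridge keeps the solve stable if memorized states are near-duplicates
    w = np.linalg.solve(G + 1e-8 * np.eye(len(G)), k)
    return w @ D

# Memorize states on a simple 2-sensor correlation curve (sensor2 = sensor1**2),
# then check that a memorized state is reproduced almost exactly.
D = np.stack([np.linspace(0.0, 1.0, 50), np.linspace(0.0, 1.0, 50) ** 2], axis=1)
x_hat = mset_estimate(D, D[25])
```

Because the estimate is built from correlations among all channels, the same weights can supply a virtual signal for a distrusted or missing sensor, which is the signal-validation use case described in the abstract.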
Anomaly Monitoring Method for Key Components of Satellite
Fan, Linjun; Xiao, Weidong; Tang, Jun
2014-01-01
This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which combines state estimation based on Multivariate State Estimation Techniques (MSET) with anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of a failure analysis of lithium-ion batteries (LIBs), we divided LIB failures into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, from actual in-orbit telemetry data for the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of the LIBs using MSET state estimation, and from these residual values we detected anomalous states using SPRT. Lastly, we conducted a worked example of AMM for LIBs and, from its results, validated the feasibility and effectiveness of AMM by comparison with a threshold detection method (TDM). PMID:24587703
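The SPRT stage of such a scheme can be sketched in a few lines. This is the textbook Wald test for a Gaussian mean shift applied to a residual stream; the fault magnitude, noise level, and error rates below are illustrative assumptions, not values from the paper.

```python
import math

def sprt(residuals, mu1=1.0, sigma=1.0, alpha=0.001, beta=0.001):
    """Wald's sequential probability ratio test on a residual stream.
    H0: residual ~ N(0, sigma^2) (healthy); H1: residual ~ N(mu1, sigma^2)
    (faulted).  Returns ('H0' | 'H1' | 'undecided', stopping index)."""
    A = math.log((1.0 - beta) / alpha)   # accept H1 when the sum rises above A
    B = math.log(beta / (1.0 - alpha))   # accept H0 when the sum falls below B
    llr = 0.0
    for i, r in enumerate(residuals):
        # Gaussian log-likelihood-ratio increment for one sample
        llr += (mu1 / sigma ** 2) * (r - mu1 / 2.0)
        if llr >= A:
            return 'H1', i
        if llr <= B:
            return 'H0', i
    return 'undecided', len(residuals) - 1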
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Beran; John Christenson; Dragos Nica
2002-12-15
The goal of the project is to enable plant operators to detect, with high sensitivity and reliability, the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.
Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu
2018-05-01
Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with the sine-cosine reference signals again. An experimental study validates the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
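The first-layer step, CCA between an EEG segment and sine-cosine references, can be sketched as follows. This is the standard CCA-based SSVEP detector rather than the full three-layer MCM model, and the sampling rate, harmonic count, and synthetic test signal are illustrative assumptions.

```python
import numpy as np

def cca_maxcorr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y
    (samples in rows), computed via QR + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine-cosine reference signals for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.stack(cols, axis=1)

def classify(eeg, candidate_freqs, fs):
    """Pick the stimulus frequency whose reference set correlates most
    strongly with the (samples x channels) EEG segment."""
    scores = [cca_maxcorr(eeg, ssvep_reference(f, fs, len(eeg)))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

MsetCCA and MCM replace the fixed sine-cosine matrix with reference signals learned from multiple training sets, but the scoring step stays this same maximum-correlation comparison.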
NASA Technical Reports Server (NTRS)
Monroe, Joseph; Kelkar, Ajit
2003-01-01
The NASA PAIR program incorporated NASA-sponsored research into the undergraduate environment at North Carolina Agricultural and Technical State University. The program is designed to significantly improve undergraduate education in mathematics, science, engineering, and technology (MSET) by directly benefiting from the experiences of NASA field centers, affiliated industrial partners, and academic institutions. The three basic goals of the program were enhancing core courses in the MSET curriculum, upgrading core engineering laboratories to complement the upgraded MSET curriculum, and conducting research training for undergraduates in MSET disciplines through a sophomore shadow program and through Research Experience for Undergraduates (REU) programs. Since the inception of the program, nine courses have been modified to include NASA-related topics and research. These courses have impacted over 900 students in the first three years of the program. The Electrical Engineering circuits lab was completely re-equipped with computer-controlled and data acquisition equipment. The Physics lab was upgraded with better sensory data acquisition to enhance students' understanding of course concepts. In addition, a new instrumentation laboratory was developed in the Department of Mechanical Engineering. Research training for A&T students was conducted through four different programs: the Apprentice program, the Developers program, the Sophomore Shadow program, and the Independent Research program. These programs provided opportunities for an average of forty students per semester.
The effects of parameter variation on MSET models of the Crystal River-3 feedwater flow system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miron, A.
1998-04-01
In this paper we develop further the results reported in Reference 1 to include a systematic study of the effects of varying MSET models and model parameters for the Crystal River-3 (CR) feedwater flow system. The study used archived CR process computer files from November 1-December 15, 1993, provided by Florida Power Corporation engineers Fairman Bockhorst and Brook Julias. The results support the conclusion that an optimal MSET model, properly trained and deriving its inputs in real time from no more than 25 of the sensor signals normally provided to a PWR plant process computer, should be able to reliably detect anomalous variations in the feedwater flow venturis of less than 0.1%, and, in the absence of a venturi sensor signal, should be able to generate a virtual signal that will be within 0.1% of the correct value of the missing signal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
An effective risk assessment system is needed to address the threat posed by an active or passive insider who, acting alone or in collusion, could attempt diversion or theft of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) is a self-assessment or inspection tool utilizing probabilistic risk assessment (PRA) methodology to calculate the system effectiveness of a nuclear facility's material protection, control, and accountability (MPC&A) system. The MSET process is divided into four distinct and separate parts: (1) Completion of the questionnaire that assembles information about the operations of every aspect of the MPC&A system; (2) Conversion of questionnaire data into numeric values associated with risk; (3) Analysis of the numeric data utilizing the MPC&A fault tree and the SAPHIRE computer software; and (4) Self-assessment using the MSET reports to perform the effectiveness evaluation of the facility's MPC&A system. The process should lead to confirmation that mitigating features of the system effectively minimize the threat, or it could lead to the conclusion that system improvements or upgrades are necessary to achieve acceptable protection against the threat. If the need for system improvements or upgrades is indicated when the system is analyzed, MSET provides the capability to evaluate potential or actual system improvements or upgrades. A facility's MC&A system can be evaluated at a point in time. The system can be reevaluated after upgrades are implemented or after other system changes occur. The total system or specific subareas within the system can be evaluated. Areas of potential system improvement can be assessed to determine where the most beneficial and cost-effective improvements should be made. Analyses of risk importance factors show that sustainability is essential for optimal performance and reveal where performance degradation has the greatest impact on total system risk.
The risk importance factors show the amount of risk reduction achievable with potential upgrades and the amount of risk reduction achieved after upgrades are completed. Applying the risk assessment tool gives support to budget prioritization by showing where budget support levels must be sustained for MC&A functions most important to risk. Results of the risk assessment are also useful in supporting funding justifications for system improvements that significantly reduce system risk. The functional model, the system risk assessment tool, and the facility evaluation questionnaire are valuable educational tools for MPC&A personnel. These educational tools provide a framework for ongoing dialogue between organizations regarding the design, development, implementation, operation, assessment, and sustainability of MPC&A systems. An organization considering the use of MSET as an analytical tool for evaluating the effectiveness of its MPC&A system will benefit from conducting a complete MSET exercise at an existing nuclear facility.
Renison, Belinda; Ponsford, Jennie; Testa, Renee; Richardson, Barry; Brownfield, Kylie
2012-05-01
Virtual reality (VR) assessment paradigms have the potential to address the limited ecological validity of pen-and-paper measures of executive function (EF) and the pragmatic and reliability issues associated with functional measures. To investigate the ecological validity and construct validity of a newly developed VR measure of EF, three instruments were administered to 30 patients with traumatic brain injury (TBI) and 30 healthy controls: the Virtual Library Task (VLT); a real-life analogous task, the Real Library Task (RLT); and five neuropsychological measures of EF. Significant others for each participant also completed the Dysexecutive Questionnaire (DEX), a behavioral rating scale of everyday EF. Performance on the VLT and the RLT was significantly positively correlated, indicating that VR performance is similar to real-world performance. The TBI group performed significantly worse than the control group on the VLT and the Modified Six Elements Test (MSET), but the other four neuropsychological measures of EF failed to differentiate the groups. Both the MSET and the VLT significantly predicted everyday EF, suggesting that they are both ecologically valid tools for the assessment of EF. The VLT has the advantage over the MSET of providing objective measurement of individual components of EF.
Education and Training Report. Performance Report, FY 1997
NASA Technical Reports Server (NTRS)
1997-01-01
During FY 1997, 152 MUREP education and training projects were conducted at OMU institutions. The institutions conducted precollege and bridge programs, education partnerships with other universities and industry, NRTS, teacher training, and graduate and/or PI undergraduate programs. These programs reached a total of 23,748 participants, predominantly at the precollege level, and achieved the major goals of heightening students' interest in and awareness of career opportunities in MSET fields and of exposing students to the NASA mission, research, and advanced technology through role models, mentors, and participation in research and other educational activities. Also in FY 1997, NASA continued a very meaningful relationship with the Hispanic Association of Colleges and Universities (HACU) through Proyecto Access, a consortium through which HACU links seven HSIs together to conduct 8-week summer programs. OMU institutions reported 4,334 high school students in NASA programs, 3,404 of whom selected college-preparatory MSET courses. Three hundred forty-nine (349) graduated from high school, 343 enrolled in college, and 199 selected MSET majors. There were 130 high school graduates (bridge students) in NASA programs, 57 of whom successfully completed their freshman year. There were 307 teachers in teacher programs, and 48 teachers received certificates. Of the 389 undergraduate students, 75 received undergraduate degrees, and eight students are employed in a NASA-related field.
NASA Astrophysics Data System (ADS)
2001-09-01
The Editor welcomes letters, by e-mail to ped@iop.org or by post to Dirac House, Temple Back, Bristol BS1 6BE, UK. Contents: M-set as metaphor; The abuse of algebra.
M-set as metaphor
'To see a World in a Grain of Sand And a Heaven in a Wild Flower Hold Infinity in the palm of your hand And Eternity in an hour' William Blake's implied relativity of spatial and temporal scales is intriguing and, given the durability of this worlds-within-worlds concept (he wrote in 1803) in art, literature and science, the blurring of distinctions between the very large and the very small must strike some kind of harmonious chord in the human mind. Could this concept apply to the physical world? To be honest, we cannot be absolutely sure. Most cosmological thinking still retains the usual notions of a finite universe and an absolute size scale extending from smallest to largest objects. In the boundless realm of mathematics, however, the story is quite different. The M-set was discovered by the French mathematician Benoit Mandelbrot in 1980, created by just a few simple lines of computer code that are repeated recursively. As in Blake's poem, this 'world' has no bottom; we have an almost palpable archetype for the concept of infinity. I would use the word 'tangible', but one of the defining features of the M-set is that nowhere in the labyrinth can one find a surface smooth enough for a tangent. Upon magnification even surfaces that appeared to be smooth explode with quills and scrolls and lightning bolts and spiral staircases. And there is something more, something truly sublime. Observe a small patch with unlimited magnifying power and, as you observe the M-set on ever-smaller scales, down through literally endless layers of ornate structure, you occasionally come upon a rapidly expanding cortex of dazzling colour with a small black structure at its centre. The black spot appears to be the M-set itself!
There is no end to the hierarchy, no bottom-most level, just endless recursive worlds within worlds within worlds. Scale is no longer fixed and absolute, but is purely relative. These beautiful symmetries convey an immediate aesthetic pleasure and also compel one to think about these strange concepts of self-similarity, infinity and relativity of scale. Our present science tends to favour reductionism. We surmise that the physics of our world has a most fundamental level and all phenomena are built up from these quarks or strings. Mathematics need not be so limited: here the mind is set free to dream of universes with the most exquisite symmetries and infinities. I urge you to explore the M-set. The epiphanies you experience will be worth the effort. Robert L Oldershaw, Physics Department, Amherst College, Amherst, MA 01002, USA (rlolders@unix.amherst.edu). Video copies of The Colors of Infinity are available from Humanities, Inc., Princeton, New Jersey, priced 30. There are also several websites, such as www.softlab.ntua.gr/mandel/mandel.html or tqd.advanced.org/3288.
The abuse of algebra
What a pleasure it is to read the work of students whose reasoning is easy to follow, who observe the rules of grammar in all their writing, and who remember that an algebraic equation is and must be a sentence in their native language, albeit written in a universal shorthand. About thirty years ago the ASE encouraged us all to use 'Quantity Algebra' consistently rather than to muddle on with inconsistent (and therefore incorrect) hybrids of 'Number' and 'Quantity' Algebra. Number Algebra is tedious if used correctly in physics. But Quantity Algebra seems to petrify Maths departments, whose incoherent practices undermine the efforts of Physics teachers to persuade their pupils to reason both logically and clearly. When I read a pupil's work, the final answer (or conclusion) interests me far less than the reasoning that leads to that conclusion.
I want to be able to check the work as I read it, and it helps greatly if units are included when values are substituted for symbols. Textbooks which set out their worked examples in Quantity Algebra are especially appreciated, not only for illustrating the 'good practice' we want to encourage but, of course, for helping the student keep sight of the physics throughout. Physics texts which do not use Quantity Algebra in their worked examples invariably demonstrate faulty logic ... besides hiding the physics. Here is a very simple example:
Good practice: Force = 70 kg × 10 N/kg = 700 N
Bad practice: Force = 70 × 10 = 700 (or Force = 700 N)
The final 'slide-rule' manipulation is of numbers, of course; but we should keep sight of the route to those numbers. Years ago the Head of Maths at a large comprehensive school described how he persuaded all departments to convert to Quantity Algebra. But he ended with an admission: that such an initiative must come from the Head of Maths. That enlightened man understood the problem: his fellow mathematicians. Tim Watson, Worcester
2010-03-20
For Inspiration and Recognition of Science and Technology (FIRST) Robotics Competition 2010 Silicon Valley Regional, held at San Jose State University, San Jose, California (NASA Ames/Mike Dininny sponsored). Cheesy Poofs (Team 254), Bellarmine College Preparatory, CA, robot name Gizmo; Spartan Robotics (Team 971), Mountain View H.S.; and MSET (Team 649), Saratoga H.S. The three teams placed first in the Silicon Valley regional.
Mixed-Signal Electronics Technology for Space (MSETS)
2006-02-16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question.
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees, primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program produces several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values, since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event.
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
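The fault tree arithmetic described above (independent basic events combined through AND/OR gates via Boolean algebra, plus the RRR and RIR importance measures) can be checked by hand on a toy model. The three-event tree and its probabilities below are a hypothetical illustration, not the actual MSET MPC&A fault tree.

```python
def or_gate(*probs):
    """Failure probability of an OR gate over independent basic events."""
    p_ok = 1.0
    for p in probs:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def and_gate(*probs):
    """Failure probability of an AND gate over independent basic events."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def top_event(p):
    """Toy tree: TOP = OR(AND(access_control, inventory), records)."""
    return or_gate(and_gate(p['access_control'], p['inventory']), p['records'])

def risk_reduction_ratio(p, event):
    """Baseline risk divided by risk with `event` made perfect (prob 0)."""
    return top_event(p) / top_event({**p, event: 0.0})

def risk_increase_ratio(p, event):
    """Risk with `event` failed (prob 1) divided by baseline risk."""
    return top_event({**p, event: 1.0}) / top_event(p)

baseline = {'access_control': 0.10, 'inventory': 0.20, 'records': 0.05}
```

In this toy tree the 'records' event has both the highest RRR and the highest RIR because it feeds the OR gate directly: a single-point vulnerability, which is exactly the kind of insight the importance reports are meant to surface.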
NASA Technical Reports Server (NTRS)
Ross, Elizabeth G.
1997-01-01
This document presents findings from a third-year evaluation of Trenholm State (AL) Technical College's National Aeronautics and Space Administration (NASA)-supported High School Science Enrichment Program (HSSEP). HSSEP is an out-of-school program for area students from groups that are underrepresented in the mathematics, science, engineering, and technology (MSET) professions. In addition to gaining insight into scientific careers, HSSEP participants learn about and deliver presentations that focus on mathematics applications, scientific problem solving, and computer programming during a seven-week summer session or a ten-week academic-year Saturday session.
Hydrogen-rich saline attenuates spinal cord hemisection-induced testicular injury in rats.
Ge, Li; Wei, Li-Hua; Du, Chang-Qing; Song, Guo-Hua; Xue, Ya-Zhuo; Shi, Hao-Shen; Yang, Ming; Yin, Xin-Xin; Li, Run-Ting; Wang, Xue-Er; Wang, Zhen; Song, Wen-Gang
2017-06-27
To study how hydrogen-rich saline (HS) promotes the recovery of testicular biological function in a hemi-sectioned spinal cord injury (hSCI) rat model, a right hemisection was performed at T11-T12 of the spinal cord in Wistar rats. Animals were divided into four groups: normal group; vehicle group: sham-operated rats administered saline; hSCI group: subjected to hSCI and administered saline; HRST group: subjected to hSCI and administered HS. Hind limb neurological function, testis index, testicular morphology, mean seminiferous tubular diameter (MSTD) and mean seminiferous epithelial thickness (MSET), the expression of heme oxygenase-1 (HO-1), mitofusin-2 (MFN-2), and high-mobility group box 1 (HMGB-1), cell ultrastructure, and apoptosis of spermatogenic cells were studied. The results indicated that hSCI significantly decreased hind limb neurological function, testis index, MSTD, and MSET, and induced severe testicular morphological injury. The MFN-2 level was decreased, and HO-1 and HMGB-1 were overexpressed in testicular tissues. In addition, hSCI accelerated the apoptosis of spermatogenic cells and the ultrastructural damage of cells in the hypophysis and testis. After HS administration, all these parameters were considerably improved, and the characteristics of hSCI testes were similar to those of normal control testes. Taken together, HS administration can promote the recovery of testicular biological function by anti-oxidative, anti-inflammatory, and anti-apoptotic action. More importantly, HS can inhibit the hSCI-induced ultrastructural changes in gonadotrophs, ameliorate the abnormal regulation of the hypothalamic-pituitary-testis axis, and thereby promote the recovery of testicular injury. HS administration also inhibited the hSCI-induced ultrastructural changes in testicular spermatogenic cells, Sertoli cells, and interstitial cells.
2013-01-01
Background: A recent study of lateral septum (LS) suggested a large number of autism-related genes with altered expression in the postpartum state. However, formally testing the findings for enrichment of autism-associated genes proved to be problematic with existing software. Many gene-disease association databases have been curated which are not currently incorporated in popular, full-featured enrichment tools, and the use of custom gene lists in these programs can be difficult to perform and interpret. As a simple alternative, we have developed the Modular Single-set Enrichment Test (MSET), a minimal tool that enables one to easily evaluate expression data for enrichment of any conceivable gene list of interest.
Results: The MSET approach was validated by testing several publicly available expression data sets for expected enrichment in areas of autism, attention deficit hyperactivity disorder (ADHD), and arthritis. Using nine independent, unique autism gene lists extracted from association databases and two recent publications, a striking consensus of enrichment was detected within gene expression changes in LS of postpartum mice. A network of 160 autism-related genes was identified, representing developmental processes such as synaptic plasticity, neuronal morphogenesis, and differentiation. Additionally, maternal LS displayed enrichment for genes associated with bipolar disorder, schizophrenia, ADHD, and depression.
Conclusions: The transition to motherhood includes the most fundamental social bonding event in mammals and features naturally occurring changes in sociability. Some individuals with autism, schizophrenia, or other mental health disorders exhibit impaired social traits. Genes involved in these deficits may also contribute to elevated sociability in the maternal brain. To date, this is the first study to show a significant, quantitative link between the maternal brain and mental health disorders using large scale gene expression data.
Thus, the postpartum brain may provide a novel and promising platform for understanding the complex genetics of improved sociability that may have direct relevance for multiple psychiatric illnesses. This study also provides an important new tool that fills a critical analysis gap and makes evaluation of enrichment using any database of interest possible with an emphasis on ease of use and methodological transparency. PMID:24245670
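The core of a single-set enrichment test of this kind can be sketched as a randomization procedure: compare the observed overlap between a result gene list and a disease-associated list against the overlaps of random draws of the same size from the background. This is a generic Monte Carlo sketch of the idea, not the published MSET code; the gene names, iteration count, and smoothing choice are illustrative.

```python
import random

def enrichment_p(expressed, disease_genes, background, n_iter=2000, seed=0):
    """Monte Carlo enrichment p-value: fraction of same-size random draws
    from the background whose overlap with the disease list matches or
    beats the observed overlap."""
    rng = random.Random(seed)
    bg = list(background)
    disease = set(disease_genes) & set(bg)
    observed = len(set(expressed) & disease)
    hits = 0
    for _ in range(n_iter):
        draw = rng.sample(bg, len(expressed))
        if len(set(draw) & disease) >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)   # add-one smoothing avoids p = 0
```

Because the disease list is just a set of identifiers, the same routine works unchanged for any curated gene-disease association database, which is the modularity the tool's name refers to.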
Eisinger, Brian E; Saul, Michael C; Driessen, Terri M; Gammie, Stephen C
2013-11-19
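The core of the MSET test described in this abstract is a simple Monte Carlo permutation: draw random gene sets of the same size as the study list from the platform background and count how often their overlap with the disease gene list matches or exceeds the observed overlap. A minimal sketch under that reading (function and variable names are illustrative, not taken from the published tool):

```python
import random

def mset_enrichment_p(study_genes, disease_genes, background, n_sim=10000, seed=0):
    """Monte Carlo enrichment test in the spirit of MSET: compare the observed
    overlap between a study gene list and a disease gene list against overlaps
    of randomly drawn gene sets of the same size from the background."""
    rng = random.Random(seed)
    disease = set(disease_genes) & set(background)
    observed = len(set(study_genes) & disease)
    k = len(study_genes)
    bg = list(background)
    null_ge = 0  # simulations whose overlap is >= the observed overlap
    for _ in range(n_sim):
        sample = rng.sample(bg, k)
        if sum(g in disease for g in sample) >= observed:
            null_ge += 1
    return (null_ge + 1) / (n_sim + 1)  # add-one correction avoids p = 0
```

A small p-value indicates the study list overlaps the disease list more than chance sampling from the background would predict.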
Mission leverage education: NSU/NASA innovative undergraduate model
NASA Technical Reports Server (NTRS)
Chaudhury, S. Raj; Shaw, Paula R. D.
2005-01-01
The BEST Lab (Center for Excellence in Science Education), the Center for Materials Research (CMR), and the Chemistry, Mathematics, Physics, and Computer Science (CS) Departments at Norfolk State University (NSU) joined forces to implement MiLEN(2)IUM - an innovative approach to integrate current and emerging research into the undergraduate curricula and to train students in NASA-related fields. An Earth Observing System (EOS) mission was simulated in which students are educated and trained in many aspects of Remote Sensing: detector physics and spectroscopy; signal processing; data conditioning, analysis, and visualization; and atmospheric science. This model and its continued impact are expected to significantly enhance the quality of the Mathematics, Science, Engineering and Technology (MSET or SMET) educational experience and to inspire students from historically underrepresented groups to pursue careers in NASA-related fields. MiLEN(2)IUM will be applicable to other higher education institutions that are willing to make the commitment to this endeavor in terms of faculty interest and space.
Bao, Weier; Greenwold, Matthew J; Sawyer, Roger H
2017-11-01
Gene co-expression network analysis is a research method widely used for systematically exploring gene function and interaction. Using the Weighted Gene Co-expression Network Analysis (WGCNA) approach to construct a gene co-expression network from a customized 44K microarray transcriptome of chicken epidermal embryogenesis, we have identified two distinct modules that are highly correlated with scale or feather development traits. Signaling pathways related to feather development were enriched in the traditional KEGG pathway analysis, and functional terms relating specifically to embryonic epidermal development were also enriched in the Gene Ontology analysis. Significant enrichment annotations were discovered with customized enrichment tools such as the Modular Single-Set Enrichment Test (MSET) and Medical Subject Headings (MeSH). Hub genes in both trait-correlated modules showed strong specific functional enrichment toward epidermal development. Regulatory elements, such as transcription factors and miRNAs, were also targeted in the significant enrichment results. This work highlights the advantage of this methodology for functional prediction of genes not previously associated with scale- and feather-trait-related modules.
Anomaly Detection in Host Signaling Pathways for the Early Prognosis of Acute Infection.
Wang, Kun; Langevin, Stanley; O'Hern, Corey S; Shattuck, Mark D; Ogle, Serenity; Forero, Adriana; Morrison, Juliet; Slayden, Richard; Katze, Michael G; Kirby, Michael
2016-01-01
Clinical diagnosis of acute infectious diseases during the early stages of infection is critical to administering the appropriate treatment to improve the disease outcome. We present a data-driven analysis of the human cellular response to respiratory viruses, including influenza, respiratory syncytial virus, and human rhinovirus, and compared this with the response to the bacterial endotoxin, Lipopolysaccharides (LPS). Using an anomaly detection framework, we identified pathways that clearly distinguish between asymptomatic and symptomatic patients infected with the four different respiratory viruses and that accurately diagnosed patients exposed to a bacterial infection. Connectivity pathway analysis comparing the viral and bacterial diagnostic signatures identified host cellular pathways that were unique to patients exposed to LPS endotoxin, indicating this type of analysis could be used to identify host biomarkers that can differentiate clinical etiologies of acute infection. We applied the Multivariate State Estimation Technique (MSET) on two human influenza (H1N1 and H3N2) gene expression data sets to define host networks perturbed in the asymptomatic phase of infection. Our analysis identified pathways in the respiratory virus diagnostic signature as prognostic biomarkers that triggered prior to clinical presentation of acute symptoms. These early warning pathways correctly predicted that almost half of the subjects would become symptomatic in less than forty hours post-infection and that three of the 18 subjects would become symptomatic after only 8 hours. These results provide a proof-of-concept for the utility of anomaly detection algorithms to classify host pathway signatures that can identify presymptomatic signatures of acute diseases and differentiate between etiologies of infection. On a global scale, acute respiratory infections cause a significant proportion of human co-morbidities and account for 4.25 million deaths annually.
The development of clinical diagnostic tools to distinguish between acute viral and bacterial respiratory infections is critical to improving patient care and limiting the overuse of antibiotics in the medical community. The identification of prognostic respiratory virus biomarkers provides an early warning system that is capable of predicting which subjects will become symptomatic, expanding our medical diagnostic capabilities and treatment options for acute infectious diseases. The host response to acute infection may be viewed as a deterministic signaling network responsible for maintaining the health of the host organism. We identify pathway signatures that reflect the very earliest perturbations in the host response to acute infection. These pathways provide a means to monitor the health state of the host, using anomaly detection to quantify and predict health outcomes following exposure to pathogens. PMID:27532264
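The Multivariate State Estimation Technique referenced in this and the preceding records reconstructs a query observation as a similarity-weighted combination of learned exemplar states; residuals between the observed and reconstructed signals then feed the anomaly detector. A minimal sketch, with a Gaussian kernel standing in for the actual MSET similarity operators (which this document does not specify):

```python
import numpy as np

def similarity(A, B, h=1.0):
    """Illustrative nonlinear similarity operator: a Gaussian kernel on
    pairwise Euclidean distances (a stand-in, not the Argonne kernels)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2))

def mset_estimate(D, x, h=1.0):
    """D: (m, n) memory matrix of m exemplar observations of n signals.
    Returns an MSET-style reconstruction of the query observation x."""
    G = similarity(D, D, h)                      # exemplar-exemplar similarity
    a = similarity(D, x[None, :], h).ravel()     # exemplar-query similarity
    w = np.linalg.solve(G + 1e-8 * np.eye(len(G)), a)  # small ridge for stability
    w = w / w.sum()                              # normalized combination weights
    return w @ D
```

The anomaly monitor then tracks the residual `x - mset_estimate(D, x)` over time, e.g. with an SPRT as in the satellite-monitoring abstract above.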
DASS: efficient discovery and p-value calculation of substructures in unordered data.
Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas
2007-01-01
Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms mainly developed outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures) that first identifies all substructures in unordered data (DASS(Sub)) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, for sets with at most one element of each type (DASS(P(set))), or for sets with multiple occurrences of elements (DASS(P(mset))). The power and versatility of DASS is demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions, and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit the damaging effects of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators achieved far better accuracy than the ordinary least squares technique.
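Principal components regression, one of the two biased estimators discussed above, can be sketched as follows: regress on the leading principal components of the regressors and map the coefficients back. The retained component count is the bias-variance knob; names here are illustrative, not from the report:

```python
import numpy as np

def pcr(X, y, n_components):
    """Principal components regression: regress y on the leading principal
    components of X, then map the coefficients back to the original variables.
    Discarding small-eigenvalue directions trades bias for variance when the
    regressors are collinear."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T              # retained eigenvector directions
    Z = Xc @ V                           # principal component scores
    gamma = np.linalg.lstsq(Z, yc, rcond=None)[0]
    beta = V @ gamma                     # coefficients in original space
    intercept = y.mean() - X.mean(0) @ beta
    return beta, intercept
```

With all components retained, PCR reduces to ordinary least squares; dropping the smallest-eigenvalue components is what stabilizes collinear data.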
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in both accuracy and computational cost to other popular mode estimation techniques found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).
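The MOG-with-AIC baseline mentioned above can be sketched in one dimension: fit mixtures of increasing order by EM and keep the order that minimizes AIC. This is a simplified illustration under standard assumptions, not the paper's implementation:

```python
import numpy as np

def gmm_loglik_1d(x, k, n_iter=200):
    """Minimal 1-D Gaussian-mixture fit by EM (quantile-initialized);
    returns the final log-likelihood."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
        nk = np.maximum(r.sum(axis=0), 1e-9)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-2  # variance floor
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def estimate_modes(x, k_max=4):
    """Model-order selection: minimize AIC = 2p - 2 logL, p = 3k - 1 parameters."""
    aic = [2 * (3 * k - 1) - 2 * gmm_loglik_1d(x, k) for k in range(1, k_max + 1)]
    return int(np.argmin(aic)) + 1
```

The AIC penalty discourages extra components unless they buy enough log-likelihood, which is what turns mixture fitting into a mode-count estimator.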
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
The questionnaire is the instrument used for recording performance data on the nuclear material protection, control, and accountability (MPC&A) system at a nuclear facility. The performance information provides a basis for evaluating the effectiveness of the MPC&A system. The goal for the questionnaire is to provide an accurate representation of the performance of the MPC&A system as it currently exists in the facility. Performance grades for all basic MPC&A functions should realistically reflect the actual level of performance at the time the survey is conducted. The questionnaire was developed after testing and benchmarking the material control and accountability (MC&A) system effectiveness tool (MSET) in the United States. The benchmarking exercise at the Idaho National Laboratory (INL) proved extremely valuable for improving the content and quality of the early versions of the questionnaire. Members of the INL benchmark team identified many areas of the questionnaire where questions should be clarified and areas where additional questions should be incorporated. The questionnaire addresses all elements of the MC&A system. Specific parts pertain to the foundation for the facility's overall MPC&A system, and other parts pertain to the specific functions of the operational MPC&A system. The questionnaire includes performance metrics for each of the basic functions or tasks performed in the operational MPC&A system. All of those basic functions or tasks are represented as basic events in the MPC&A fault tree. Performance metrics are to be used during completion of the questionnaire to report what is actually being done in relation to what should be done in the performance of MPC&A functions.
Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient
Dongaonkar, R. M.; Laine, G. A.; Stewart, R. H.
2011-01-01
Microvascular permeability to water is characterized by the microvascular filtration coefficient (Kf). Conventional gravimetric techniques to estimate Kf rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates, and neither accounts for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate Kf estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce Kf from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to Kf and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of Kf in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245
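The transient gravimetric technique discussed above reduces, in its simplest form, to reading the initial slope of the organ-weight transient after a step pressure increase and dividing by the pressure step. A schematic sketch (the fit window and units are illustrative choices, not from the paper):

```python
import numpy as np

def kf_transient(t, weight, dP, fit_window=3):
    """Transient gravimetric estimate: Kf is approximated by the initial rate
    of weight gain after a step increase dP in microvascular pressure.
    A linear fit over the first few samples approximates the slope at t = 0,
    before interstitial storage and lymphatic return distort the transient."""
    slope = np.polyfit(t[:fit_window], weight[:fit_window], 1)[0]
    return slope / dP
```

Fitting only the earliest samples is exactly why the technique is least confounded by storage and lymph flow, as the abstract argues.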
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
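The first recursive technique described above — driving a parametric correlation model toward the point estimates supplied by successive records via stochastic approximation — can be sketched for a one-parameter exponential model. The gain schedule and model form are illustrative, not the paper's:

```python
import numpy as np

def fit_acf_param(records, taus, a0=1.0):
    """Stochastic-approximation (Robbins-Monro style) fit of the parametric
    correlation model rho(tau) = exp(-a * tau): each incoming record supplies
    point estimates of the correlation function, and the parameter is nudged
    down the squared-error gradient with a decreasing gain."""
    a = a0
    for n, rho_hat in enumerate(records, start=1):
        model = np.exp(-a * taus)
        # d/da of sum (rho_hat - exp(-a*tau))^2, using d/da exp(-a*tau) = -tau*model
        grad = np.sum(2.0 * (rho_hat - model) * taus * model)
        a -= (0.5 / n) * grad
    return a
```

The decreasing 1/n gain is what makes the recursion settle as records accumulate, in line with the stochastic approximation framework of the abstract.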
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique on SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation function (ACF) of the original and the corrupted SEM images are formed to serve as the reference point to estimate the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques known as nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. The actual and the estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique is able to attain the highest accuracy compared to the other three existing techniques, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
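All of the techniques compared above rest on the same fact: white noise contributes to the ACF only at lag zero, so extrapolating a fit over nonzero lags back to lag 0 recovers the noise-free signal power. A simplified 1-D sketch of the least-squares variant (the lag window is an illustrative choice):

```python
import numpy as np

def acf(x):
    """Biased sample autocorrelation function of a 1-D signal."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / len(x)

def snr_from_acf(noisy, n_lags=5):
    """SNR estimation in the spirit of the LSR technique: fit a least-squares
    line to the ACF over lags 1..n_lags and extrapolate it back to lag 0 to
    estimate the noise-free signal power; the excess at lag 0 is noise power."""
    r = acf(noisy)
    lags = np.arange(1, n_lags + 1)
    slope, intercept = np.polyfit(lags, r[1 : n_lags + 1], 1)
    signal_power = intercept           # extrapolated noise-free ACF at lag 0
    noise_power = r[0] - signal_power  # white noise only inflates lag 0
    return signal_power / noise_power
```

The nearest-neighbourhood and first-order-interpolation techniques differ only in how they extrapolate the ACF to lag 0.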
As-built design specification for proportion estimate software subsystem
NASA Technical Reports Server (NTRS)
Obrien, S. (Principal Investigator)
1980-01-01
The Proportion Estimate Processor evaluates four estimation techniques in order to get an improved estimate of the proportion of a scene that is planted in a selected crop. The four techniques to be evaluated were provided by the techniques development section and are: (1) random sampling; (2) proportional allocation, relative count estimate; (3) proportional allocation, Bayesian estimate; and (4) sequential Bayesian allocation. The user is given two options for computation of the estimated mean square error. These are referred to as the cluster calculation option and the segment calculation option. The software for the Proportion Estimate Processor is operational on the IBM 3031 computer.
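The Bayesian estimates among the four techniques above can be illustrated with a Beta-Binomial model, whose conjugacy also yields the sequential form: the posterior after one batch of pixels serves as the prior for the next. This is a schematic under a standard conjugate-prior assumption, not the processor's actual code:

```python
def bayes_proportion(matches, n_pixels, alpha=1.0, beta=1.0):
    """Bayesian crop-proportion estimate: with a Beta(alpha, beta) prior and
    `matches` crop pixels out of `n_pixels` sampled, the posterior is
    Beta(alpha + matches, beta + n_pixels - matches). Returns the posterior
    mean and variance (the variance serves as a mean-square-error figure)."""
    a = alpha + matches
    b = beta + n_pixels - matches
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var
```

Feeding one batch's posterior parameters in as the next batch's prior reproduces the single-batch answer exactly, which is the essence of sequential Bayesian allocation.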
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
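Empirical shrinkage techniques like the jackknife and bootstrap are conventionally judged against classical analytic corrections; the best known is the Wherry/Ezekiel adjusted R², which deflates the sample R² by degrees of freedom:

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry/Ezekiel shrinkage formula: estimate the population squared
    multiple correlation from the sample R^2, sample size n, and number of
    predictors k. The correction grows as k approaches n."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

For example, a sample R² of 0.5 with n = 30 cases and k = 5 predictors shrinks to about 0.396.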
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high wing, general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are described. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
The module provides an overview of general techniques that owners and operators of reporting facilities may use to estimate their toxic chemical releases. It explains the basic release estimation techniques used to determine the chemical quantities reported on the Form R and uses those techniques, along with fundamental chemical or physical principles and properties, to estimate releases of listed toxic chemicals. It covers conversion of units of mass, volume, and time; states the rules governing significant figures and rounding techniques; and references general and industry-specific estimation documents.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.
Lee, Soojeong; Chang, Joon-Hyuk
2017-11-01
This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty in oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate systolic and diastolic blood pressure (SBP and DBP), the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method can mitigate estimation uncertainty, such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively.
These results indicate that the proposed method enhances the performance by 9.18% and 10.88% compared with the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty of BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
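The third-stage bootstrap/Monte-Carlo CI step described above can be sketched independently of the deep-learning front end: resample the per-subject estimates with replacement, recompute the statistic, and take percentile quantiles. The resample count and the input values here are illustrative:

```python
import numpy as np

def bootstrap_ci(estimates, n_boot=5000, level=0.95, seed=0):
    """Bootstrap percentile CI for the mean of a small set of BP estimates
    (stand-ins for the ensemble members' outputs): resample with replacement,
    recompute the mean, and take empirical quantiles of the resampled means."""
    rng = np.random.default_rng(seed)
    estimates = np.asarray(estimates, dtype=float)
    idx = rng.integers(0, len(estimates), size=(n_boot, len(estimates)))
    boot_means = estimates[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi
```

With only five measurements per subject, as in the paper, this kind of resampling is one of the few ways to attach an uncertainty band to the estimate at all.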
NASA Astrophysics Data System (ADS)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2010-04-01
For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
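The simplest of the single-tone frequency estimators such a comparison builds on is the FFT peak locator; the HIM-based method maps each polynomial-coefficient estimate to exactly this kind of frequency-determination problem. A baseline sketch (finer estimators refine the peak, e.g. by parabolic interpolation):

```python
import numpy as np

def single_tone_freq(x, fs):
    """Coarse single-tone frequency estimate: location of the FFT magnitude
    peak, with resolution fs / len(x). The DC bin is skipped so a constant
    offset cannot masquerade as the tone."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]
```

Subspace methods such as MUSIC or ESPRIT trade this estimator's simplicity for resolution below the FFT bin width, which is the trade-off the comparative analysis above examines.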
Positional estimation techniques for an autonomous mobile robot
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Aggarwal, J. K.
1990-01-01
Techniques for positional estimation of a mobile robot navigation in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation; mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with single visual camera that can be panned and tilted and also a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
Estimation for bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Marcus, S. I.
1974-01-01
Three techniques for the solution of bilinear estimation problems are presented. First, finite dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.
Comparison of five canopy cover estimation techniques in the western Oregon Cascades.
Anne C.S. Fiala; Steven L. Garman; Andrew N. Gray
2006-01-01
Estimates of forest canopy cover are widely used in forest research and management, yet methods used to quantify canopy cover and the estimates they provide vary greatly. Four commonly used ground-based techniques for estimating overstory cover - line-intercept, spherical densiometer, moosehorn, and hemispherical photography - and cover estimates generated from crown...
Simulations of motor unit number estimation techniques
NASA Astrophysics Data System (ADS)
Major, Lora A.; Jones, Kelvin E.
2005-06-01
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
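The kernel estimator with an automatic scaling factor can be illustrated in a few lines. The sketch below uses Silverman's rule of thumb as the automatic bandwidth choice; this is a common default, not necessarily the algorithm proposed in the report, and the test data are invented:

```python
import math
import random

def gaussian_kde(sample, x, h):
    """Evaluate a Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) / norm

def silverman_bandwidth(sample):
    """Rule-of-thumb scaling factor (assumes roughly Gaussian data)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((xi - mean) ** 2 for xi in sample) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
h = silverman_bandwidth(data)
density_at_0 = gaussian_kde(data, 0.0, h)  # should be near the true N(0,1) peak
```

Choosing h too small produces a spiky estimate, too large an oversmoothed one, which is exactly the trade-off the automatic scaling-factor selection addresses.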
Development and application of the maximum entropy method and other spectral estimation techniques
NASA Astrophysics Data System (ADS)
King, W. R.
1980-09-01
This summary report is a collection of four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. The report contains the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, and a description of two new, stable, high-resolution spectral estimation techniques in the final report section. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new, stable, high-resolution spectral estimation techniques discussed in the final report section are named the Wiener-King and the Fourier spectral estimation techniques. The two techniques have a similar derivation based upon the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.
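As an illustration of the MEM/MESA idea, the following sketch implements Burg's method, the standard route to a maximum entropy spectrum, and evaluates the resulting autoregressive spectrum on a noisy sinusoid. The signal parameters and model order are invented for the demo; the report's radar processing chain is not reproduced:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients from forward/backward prediction errors."""
    x = np.asarray(x, float)
    a = np.array([1.0])            # AR polynomial, a[0] = 1
    e = x.dot(x) / len(x)          # prediction error power
    f, b = x.copy(), x.copy()      # forward and backward prediction errors
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * ff.dot(bb) / (ff.dot(ff) + bb.dot(bb))  # reflection coefficient
        f, b = ff + k * bb, bb + k * ff
        a = np.concatenate((a, [0.0])) + k * np.concatenate(([0.0], a[::-1]))
        e *= 1.0 - k * k
    return a, e

def mem_spectrum(a, e, freqs):
    """MEM/MESA power spectral density of the fitted AR model."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

rng = np.random.default_rng(7)
n = np.arange(200)
x = np.sin(2 * np.pi * 0.2 * n) + 0.05 * rng.standard_normal(200)
a, e = burg_ar(x, order=8)
freqs = np.linspace(0.0, 0.5, 2001)
psd = mem_spectrum(a, e, freqs)
peak_freq = freqs[np.argmax(psd)]   # should sit very close to 0.2
```

The sharp AR spectral peak on short records is the "high resolution" property the report exploits for wavenumber estimation.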
Improved Pulse Wave Velocity Estimation Using an Arterial Tube-Load Model
Gao, Mingwu; Zhang, Guanqun; Olivier, N. Bari; Mukkamala, Ramakrishna
2015-01-01
Pulse wave velocity (PWV) is the most important index of arterial stiffness. It is conventionally estimated by non-invasively measuring central and peripheral blood pressure (BP) and/or velocity (BV) waveforms and then detecting the foot-to-foot time delay between the waveforms wherein wave reflection is presumed absent. We developed techniques for improved estimation of PWV from the same waveforms. The techniques effectively estimate PWV from the entire waveforms, rather than just their feet, by mathematically eliminating the reflected wave via an arterial tube-load model. In this way, the techniques may be more robust to artifact while revealing the true PWV in absence of wave reflection. We applied the techniques to estimate aortic PWV from simultaneously and sequentially measured central and peripheral BP waveforms and simultaneously measured central BV and peripheral BP waveforms from 17 anesthetized animals during diverse interventions that perturbed BP widely. Since BP is the major acute determinant of aortic PWV, especially under anesthesia wherein vasomotor tone changes are minimal, we evaluated the techniques in terms of the ability of their PWV estimates to track the acute BP changes in each subject. Overall, the PWV estimates of the techniques tracked the BP changes better than those of the conventional technique (e.g., diastolic BP root-mean-squared-errors of 3.4 vs. 5.2 mmHg for the simultaneous BP waveforms and 7.0 vs. 12.2 mmHg for the BV and BP waveforms (p < 0.02)). With further testing, the arterial tube-load model-based PWV estimation techniques may afford more accurate arterial stiffness monitoring in hypertensive and other patients. PMID:24263016
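The conventional foot-to-foot technique that the proposed methods improve upon can be sketched as follows. The 5%-of-peak foot detector, the synthetic beat shape, the 50 ms transit time, and the 0.5 m path length are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def detect_foot(w, fs, frac=0.05):
    """Foot = first sample where the beat rises above `frac` of its peak
    (a simple surrogate for the intersecting-tangent method used clinically)."""
    idx = np.argmax(w > frac * np.max(w))   # first index exceeding the threshold
    return idx / fs

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
beat = np.maximum(0.0, np.sin(2 * np.pi * t)) ** 2   # crude single-beat pulse shape
delay_true = 0.050                                    # 50 ms transit time
central = beat
peripheral = np.roll(beat, int(delay_true * fs))      # rolled tail is zero, so safe

ptt = detect_foot(peripheral, fs) - detect_foot(central, fs)  # pulse transit time
pwv = 0.5 / ptt   # hypothetical 0.5 m path length -> velocity in m/s
```

The tube-load techniques in the paper instead fit the entire waveforms, so a distorted or noisy foot does not corrupt the delay estimate.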
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques
Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.
2013-01-01
Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
An adaptive technique for estimating the atmospheric density profile during the AE mission
NASA Technical Reports Server (NTRS)
Argentiero, P.
1973-01-01
A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and probe parameters are in a consider mode where their estimates are unimproved but their associated uncertainties are permitted an impact on filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, A H; Kerr, L A; Cailliet, G M
2007-11-04
Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (14C) dating. Age estimate accuracy and the validity of age estimation procedures were evaluated based on the results from each technique. Lead-radium dating proved successful in determining a minimum estimate of lifespan of 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ14C data, which indicated a minimum estimate of lifespan of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.
NASA Astrophysics Data System (ADS)
Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod
2010-06-01
Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
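For context, a single-tone frequency estimator in the spirit of the Fourier-coefficient interpolation family can be sketched as follows. The non-iterative parabolic interpolation on the log-magnitude spectrum is a simplified stand-in for IFEIF, and the signal parameters are invented for the demo:

```python
import numpy as np

def estimate_tone(x, fs):
    """Coarse FFT peak refined by parabolic interpolation on the log-magnitude
    of the three bins around the peak (a simple sub-bin frequency estimator)."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1           # coarse peak, skipping the edges
    a, b, c = np.log(spec[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n

fs = 1000.0
n = 256
t = np.arange(n) / fs
f_true = 123.4
rng = np.random.default_rng(1)
x = np.cos(2 * np.pi * f_true * t + 0.7) + 0.01 * rng.standard_normal(n)
f_hat = estimate_tone(x, fs)   # resolves well below the 3.9 Hz bin spacing
```

The comparison in the abstract is over exactly this kind of sub-bin refinement: ESPRIT and MUSIC use subspace structure, while IFEIF iterates interpolation on the Fourier coefficients.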
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall totals are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores, given by the correlation coefficient, bias, root mean square error and mean absolute error, showed good accuracy of the rainfall estimates from the present technique. Moreover, the rainfall estimates were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The present technique yields a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values, showing that it assigns 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
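The Kalman filter underlying the input-force estimation step can be sketched generically. The constant-velocity demo system below is illustrative and unrelated to the rotor-disk-bearing model in the paper; it only shows the predict/update cycle that the paper combines with recursive least squares:

```python
import numpy as np

def kalman_filter(zs, A, C, Q, R, x0, P0):
    """Standard discrete-time Kalman filter: predict with the state-space
    model, then correct with each response measurement."""
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    estimates = []
    for z in zs:
        x = A @ x                      # predict state
        P = A @ P @ A.T + Q            # predict covariance
        S = C @ P @ C.T + R            # innovation covariance
        K = P @ C.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ (np.atleast_1d(z) - C @ x)
        P = (I - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# demo: constant-velocity motion observed through noisy position measurements
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[0.01]])
rng = np.random.default_rng(0)
true_pos = 2.0 * np.arange(200) * dt             # true velocity = 2.0
zs = true_pos + 0.1 * rng.standard_normal(200)
est = kalman_filter(zs, A, C, Q, R, np.zeros(2), np.eye(2))
vel_hat = est[-1, 1]                              # converges near 2.0
```

In the paper, the same recursion runs on a SEREP-reduced rotor model, and the unknown unbalance force enters as the input to be estimated alongside the state.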
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
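A minimal sketch of the core idea, motion estimation with the structure tensor: spatial and temporal gradients of an image sequence are accumulated into a 2x2 tensor, which is inverted for the velocity. The single global motion vector and the Gaussian test pattern are simplifying assumptions; the paper estimates spatially varying parameter fields:

```python
import numpy as np

def structure_tensor_flow(frame0, frame1):
    """Estimate one global motion vector from two frames by least squares on
    the optical-flow constraint Ix*vx + Iy*vy + It = 0."""
    Ix = np.gradient(frame0, axis=1)
    Iy = np.gradient(frame0, axis=0)
    It = frame1 - frame0
    J = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],     # structure tensor
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(J, b)   # (vx, vy) in pixels per frame

# demo: a smooth blob translated by a known subpixel motion
y, x = np.mgrid[0:64, 0:64]
blob = lambda dx, dy: np.exp(-((x - 32 - dx) ** 2 + (y - 32 - dy) ** 2) / 50.0)
v = structure_tensor_flow(blob(0, 0), blob(0.3, 0.1))   # expect about (0.3, 0.1)
```

Replacing the motion model in the constraint equation with advection-diffusion or decay terms gives the extended parameter estimates described in the abstract.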
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John M.; Herren, Kenneth A.
2008-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation
NASA Technical Reports Server (NTRS)
Rakoczy, John; Herren, Kenneth
2007-01-01
A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.
Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier
NASA Astrophysics Data System (ADS)
Dalhoff, Ernst; Gummer, Anthony W.
2011-11-01
Estimation of the function of the cochlea in humans is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier, and deduce from measurements in humans that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.
Darmawan, M F; Yusuf, Suhaila M; Kadir, M R Abdul; Haron, H
2015-02-01
Sex estimation is used in forensic anthropology to assist the identification of individual remains. However, the estimation techniques tend to be unique and applicable only to a certain population. This paper analyzed sex estimation on living individuals below 19 years of age using the lengths of the 19 bones of the left hand, applied to three classification techniques: Discriminant Function Analysis (DFA), Support Vector Machine (SVM) and Artificial Neural Network (ANN) multilayer perceptron. These techniques were carried out on X-ray images of the left hand taken from an Asian population data set. All 19 bones of the left hand were measured using Free Image software, and all the techniques were performed using MATLAB. The "16-19" and "7-9" year-old age groups could be used for sex estimation, as their average accuracy was above 80%. The ANN model was the best classification technique, with the highest average accuracy in both age groups. The results also show that each classification technique achieves its best accuracy in a different age group. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Dave Gartner; Gregory A. Reams
2001-01-01
As Forest Inventory and Analysis changes from a periodic survey to a multipanel annual survey, a transition will occur where only some of the panels have been resurveyed. Several estimation techniques use data from the periodic survey in addition to the data from the partially completed multipanel data. These estimation techniques were compared using data from two...
USDA-ARS?s Scientific Manuscript database
Recently, an instrument (TEMPO™) has been developed to automate the Most Probable Number (MPN) technique and reduce the effort required to estimate some bacterial populations. We compared the automated MPN technique to traditional microbiological plating methods or Petrifilm™ for estimating the t...
Ariyama, Kaoru; Kadokura, Masashi; Suzuki, Tadanao
2008-01-01
Techniques to determine the geographic origin of foods have been developed for various agricultural and fishery products, and they have used various principles. Some of these techniques are already in use for checking the authenticity of the labeling. Many are based on multielement analysis and chemometrics. We have developed such a technique to determine the geographic origin of onions (Allium cepa L.). This technique, which determines whether an onion is from outside Japan, is designed for onions labeled as having a geographic origin of Hokkaido, Hyogo, or Saga, the main onion production areas in Japan. However, estimations of discrimination errors for this technique have not been fully conducted; they have been limited to those for discrimination models and do not include analytical errors. Interlaboratory studies were conducted to estimate the analytical errors of the technique. Four collaborators each determined 11 elements (Na, Mg, P, Mn, Zn, Rb, Sr, Mo, Cd, Cs, and Ba) in 4 test materials of fresh and dried onions. Discrimination errors in this technique were estimated by summing (1) individual differences within lots, (2) variations between lots from the same production area, and (3) analytical errors. The discrimination errors for onions from Hokkaido, Hyogo, and Saga were estimated to be 2.3, 9.5, and 8.0%, respectively. Those for onions from abroad in determinations targeting Hokkaido, Hyogo, and Saga were estimated to be 28.2, 21.6, and 21.9%, respectively.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of a nonlinear regression function. In Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
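The two estimator families discussed above can be sketched on synthetic exponentially distributed noise power samples. The sample count, threshold, and contamination level are illustrative, not HRMS parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
power = rng.exponential(scale=3.0, size=4096)   # noise power samples, true mean 3.0
power[:40] += 100.0   # a few strong "signal" cells that would bias a mean estimate

# order-statistic estimator: the median of exponential noise equals mean*ln(2),
# so median/ln(2) is a robust noise-power estimate (insensitive to strong signals)
os_est = np.median(power) / np.log(2.0)

# threshold-and-count: count exceedances of a trial threshold T; for exponential
# noise P(X > T) = exp(-T/mean), so mean = T / (-log(fraction exceeding))
T = 5.0
frac = np.mean(power > T)
tc_est = T / -np.log(frac)

mean_est = power.mean()   # naive mean, inflated by the contaminating signals
```

Both the order-statistic and threshold-and-count estimates stay near the true value of 3.0 despite contamination, while the naive mean is pulled upward; running several threshold-and-count stages in parallel at different T values is the dynamic-range extension the abstract describes.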
Three-dimensional ultrasound strain imaging of skeletal muscles
NASA Astrophysics Data System (ADS)
Gijsbertse, K.; Sprengers, A. M. J.; Nillesen, M. M.; Hansen, H. H. G.; Lopata, R. G. P.; Verdonschot, N.; de Korte, C. L.
2017-01-01
In this study, a multi-dimensional strain estimation method is presented to assess local relative deformation in three orthogonal directions in 3D space of skeletal muscles during voluntary contractions. A rigid translation and compressive deformation of a block phantom, mimicking muscle contraction, are used as experimental validation of the 3D technique and to compare its performance with that of a 2D technique. Axial, lateral and (in the 3D case) elevational displacements are estimated using a cross-correlation based displacement estimation algorithm. After transformation of the displacements to a Cartesian coordinate system, strain is derived using a least-squares strain estimator. The performance of both methods is compared by calculating the root-mean-squared error between the estimated displacements and the theoretical displacements of the phantom experiments. We observe that the 3D technique delivers more accurate displacement estimates than the 2D technique, especially in the translation experiment, where out-of-plane motion hampers the 2D technique. In vivo application of the 3D technique in the musculus vastus intermedius shows good correspondence between the measured strain and the force pattern. Similarity of the strain curves across repeated measurements indicates the reproducibility of voluntary contractions. These results indicate that 3D ultrasound is a valuable imaging tool for quantifying complex tissue motion, especially when there is motion in three directions, which produces out-of-plane errors for 2D techniques.
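A 1D sketch of the processing chain described above: cross-correlation gives a displacement per window, and a least-squares fit of displacement versus position gives strain. The synthetic speckle signal, window sizes, and 2% compression are illustrative; the real method operates on 2D/3D ultrasound data:

```python
import numpy as np

def window_shifts(pre, post, win=64, step=64, max_lag=32):
    """Integer displacement per window from the peak of the normalized
    cross-correlation between pre- and post-deformation signals."""
    centers, shifts = [], []
    for start in range(0, len(pre) - win, step):
        ref = pre[start:start + win] - pre[start:start + win].mean()
        best, best_lag = -np.inf, 0
        for lag in range(-max_lag, max_lag + 1):
            s = start + lag
            if s < 0 or s + win > len(post):
                continue
            seg = post[s:s + win] - post[s:s + win].mean()
            cc = seg.dot(ref) / (np.linalg.norm(seg) * np.linalg.norm(ref) + 1e-12)
            if cc > best:
                best, best_lag = cc, lag
        centers.append(start + win // 2)
        shifts.append(best_lag)
    return np.array(centers), np.array(shifts, float)

# synthetic 1D "speckle" signal, then a 2% uniform compression of it
rng = np.random.default_rng(3)
speckle = np.convolve(rng.standard_normal(1500), np.ones(15) / 15, mode="same")
n = 1200
strain_true = 0.02
pre = speckle[:n]
post = np.interp(np.arange(n) * (1 + strain_true),
                 np.arange(len(speckle)), speckle)

centers, shifts = window_shifts(pre, post)
slope = np.polyfit(centers, shifts, 1)[0]   # least-squares strain estimator
strain_est = -slope                          # recovers about 0.02
```

In 3D the same two stages run along three axes, which is why out-of-plane motion that defeats a 2D version is handled naturally.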
Simulation studies of wide and medium field of view earth radiation data analysis
NASA Technical Reports Server (NTRS)
Green, R. N.
1978-01-01
A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.
A photographic technique for estimating egg density of the white pine weevil, Pissodes strobi (Peck)
Roger T. Zerillo
1975-01-01
Compares a photographic technique with visual and dissection techniques for estimating egg density of the white pine weevil, Pissodes strobi (Peck). The relatively high correlations (.67 and .79) between counts from photographs and those obtained by dissection indicate that the non-destructive photographic technique could be a useful tool for...
Comparative evaluation of workload estimation techniques in piloting tasks
NASA Technical Reports Server (NTRS)
Wierwille, W. W.
1983-01-01
Techniques to measure operator workload in a wide range of situations and tasks were examined. The sensitivity and intrusion of a wide variety of workload assessment techniques in simulated piloting tasks were investigated. Four different piloting tasks, representing psychomotor, perceptual, mediational, and communication aspects of piloting behavior, were selected. Techniques to determine relative sensitivity and intrusion were applied. Sensitivity is the relative ability of a workload estimation technique to discriminate statistically significant differences in operator loading. High sensitivity requires discriminable changes in score means as a function of load level and low variation of the scores about the means. Intrusion is an undesirable change in the task for which workload is measured, resulting from the introduction of the workload estimation technique or apparatus.
Michael E. Goerndt; Vicente J. Monleon; Hailemariam Temesgen
2011-01-01
One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques were refined for estimating heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
A comparison of minimum distance and maximum likelihood techniques for proportion estimation
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.
1982-01-01
The estimation of mixing proportions p1, p2, ..., pm in the mixture density f(x) = p1 f1(x) + p2 f2(x) + ... + pm fm(x) is often encountered in agricultural remote sensing problems, in which case the pi usually represent crop proportions. In these remote sensing applications, the component densities fi(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
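For the ML side of the comparison, the mixing proportions of a mixture with known normal components can be obtained with a short EM iteration. This is a standard formulation for the proportions-only problem, not necessarily the exact procedure of the study, and the component parameters below are invented:

```python
import numpy as np

def estimate_proportions(x, means, sds, iters=200):
    """ML estimation of mixing proportions via EM, with the component normal
    densities fixed and known; only the proportions are updated."""
    comps = np.array([np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                      for m, s in zip(means, sds)])      # shape (m, n)
    p = np.full(len(means), 1.0 / len(means))            # start from equal shares
    for _ in range(iters):
        w = p[:, None] * comps
        r = w / w.sum(axis=0)        # E-step: responsibilities
        p = r.mean(axis=1)           # M-step: proportions only
    return p

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 700), rng.normal(3.0, 1.0, 300)])
p_hat = estimate_proportions(x, means=[0.0, 3.0], sds=[1.0, 1.0])  # near (0.7, 0.3)
```

An MD estimator would instead minimize a distance between the empirical distribution and the mixture CDF over the same proportions, which is what gives it robustness to non-normal components.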
NASA Astrophysics Data System (ADS)
GonzáLez, Pablo J.; FernáNdez, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques using a series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
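The Monte Carlo error-propagation step can be illustrated with a toy small-baselines network: a weighted least-squares inversion of interferogram values into displacement increments, with the interferogram errors resampled and pushed through the same inversion to yield displacement uncertainties. The network geometry, increments, and error levels below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy SB network: 4 acquisition dates, 3 displacement increments between them.
# Each interferogram measures the sum of increments between two dates.
A = np.array([[1, 0, 0],   # date0-date1
              [0, 1, 0],   # date1-date2
              [1, 1, 0],   # date0-date2
              [0, 1, 1],   # date1-date3
              [0, 0, 1]])  # date2-date3
m_true = np.array([2.0, -1.0, 0.5])          # true increments
sigma = np.array([0.3, 0.2, 0.4, 0.3, 0.2])  # per-interferogram error std
d_obs = A @ m_true + rng.normal(0, sigma)

W = np.diag(1 / sigma**2)      # weights from the interferogram variances
N = A.T @ W @ A

def wls(d):
    return np.linalg.solve(N, A.T @ W @ d)

# Monte Carlo resampling: perturb the data with its error model and
# propagate each sample through the same weighted inversion.
samples = np.array([wls(d_obs + rng.normal(0, sigma)) for _ in range(2000)])
m_hat = wls(d_obs)
m_std = samples.std(axis=0)    # estimated errors of the increments
print(m_hat.round(2), m_std.round(2))
```

For this linear Gaussian toy the Monte Carlo spread should match the analytic covariance inv(N); the resampling approach carries over to cases where the analytic propagation is impractical.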
Noise estimation for hyperspectral imagery using spectral unmixing and synthesis
NASA Astrophysics Data System (ADS)
Demirkesen, C.; Leloglu, Ugur M.
2014-10-01
Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise ratio model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise. A very comprehensive comparative study on the subject was done by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation from assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising algorithm is based on estimation of the end-members and their fractional abundances using the non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique called NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. Indeed, image de-noising and noise estimation are two sides of the same coin: once we de-noise an image, we can estimate the noise by calculating the difference between the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best-performing method for noise estimation was spectral and spatial de-correlation (SSDC), originally proposed in [3], the proposed method is compared to SSDC.
The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique is robust to changes in its single parameter, namely the number of end-members.
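A minimal sketch of the de-noising-as-noise-estimation idea described above, with the end-member spectra assumed known (standing in for the NFINDR extraction step) and abundances recovered by non-negative least squares; all sizes and noise levels are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, n_end, n_pix = 50, 3, 200
E = rng.uniform(0.1, 1.0, (bands, n_end))   # end-member spectra (assumed known
                                            # here; NFINDR would extract them)
a = rng.dirichlet(np.ones(n_end), n_pix).T  # fractional abundances, sum to 1
clean = E @ a
noisy = clean + rng.normal(0, 0.02, clean.shape)

# Reconstruct each pixel with non-negative least squares, then estimate the
# noise as the difference between the noisy and reconstructed image.
recon = np.column_stack([E @ nnls(E, noisy[:, i])[0] for i in range(n_pix)])
noise_est = noisy - recon
print(f"estimated noise std = {noise_est.std():.4f}")  # injected value was 0.02
```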
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
Estimation of Dynamical Parameters in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark O.
2004-01-01
In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants, or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g., multiple trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
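For the simplest case of pure translation, the structure-tensor estimate of motion reduces to a ratio of averaged derivative products (the full method uses eigenvectors of the space-time structure tensor). A sketch on a synthetic 1D image sequence, with signal shape and velocity invented for illustration:

```python
import numpy as np

# 1D signal translating at a constant velocity v_true (pixels per frame).
v_true = 1.5
x = np.arange(200, dtype=float)
frames = np.array([np.exp(-0.002 * (x - 80 - v_true * t) ** 2)
                   for t in range(20)])

# Derivatives of the space-time volume I(t, x): axis 0 is time, axis 1 space.
It, Ix = np.gradient(frames)

# Least-squares motion estimate from brightness constancy Ix*v + It = 0:
# minimize <(Ix*v + It)^2>  =>  v = -<Ix*It> / <Ix*Ix>.
v_est = -np.sum(Ix * It) / np.sum(Ix * Ix)
print(f"estimated velocity = {v_est:.2f} px/frame")
```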
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
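The quantities involved can be made concrete with a standard importance-sampling sketch for a Gaussian tail probability (a stand-in for a bit-error-rate estimate); the shifted sampling density and sample size are illustrative choices, and the empirical variance computed at the end is exactly the estimation variance the bounds address.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10000
threshold = 4.0                  # P(X > 4) for X ~ N(0,1), about 3.2e-5

# Direct Monte Carlo would need millions of samples to see any hits.
# Importance sampling: draw from a shifted (biased) density g = N(theta, 1).
theta = threshold                # IS parameter; choosing it well is precisely
                                 # the problem the variance bounds address
y = rng.normal(theta, 1, n)
w = norm.pdf(y) / norm.pdf(y, theta, 1)   # likelihood-ratio weights
h = (y > threshold) * w

p_hat = h.mean()
var_hat = h.var(ddof=1) / n      # estimation variance of the IS estimator
print(f"p_hat = {p_hat:.3e}, std = {np.sqrt(var_hat):.1e}")
```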
Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohayai, Tanaz Angelina; Snopok, Pavel; Neuffer, David
The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
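The core observable, that cooling increases phase-space density, can be sketched with a kernel density estimator on a toy two-dimensional beam; the beam parameters are invented, and MICE works with the full transverse phase space rather than this 2D slice.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def core_density(sample):
    """Non-parametric (KDE) estimate of the peak phase-space density."""
    kde = gaussian_kde(sample.T)   # gaussian_kde expects shape (dim, n)
    return kde(sample.T).max()

# Toy 2D (x, px) beam before and after cooling: cooling shrinks the
# occupied phase-space volume, so the core density should increase.
before = rng.multivariate_normal([0, 0], np.diag([4.0, 4.0]), 2000)
after = rng.multivariate_normal([0, 0], np.diag([2.0, 2.0]), 2000)

ratio = core_density(after) / core_density(before)
print(f"core density increase factor ~ {ratio:.2f}")
```

For this halving of the variance in each coordinate, the true peak-density ratio is 2; the KDE recovers it without any parametric beam model, which is the point of the non-parametric approach.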
Development of the One-Sided Nonlinear Adaptive Doppler Shift Estimation
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Singh, Upendra N.; Kavaya, Michael J.; Serror, Judith A.
2009-01-01
The new development of a one-sided nonlinear adaptive Doppler shift estimation technique (NADSET) is introduced. The background of the algorithm and a brief overview of NADSET are presented. The new technique is applied to the wind parameter estimates from a 2-micron wavelength coherent Doppler lidar system called VALIDAR, located at NASA Langley Research Center in Virginia. The new technique enhances wind parameters such as Doppler shift and power estimates in low signal-to-noise-ratio (SNR) regimes using the estimates in high SNR regimes as the algorithm scans the range bins from low to high altitude. The original NADSET utilizes the statistics in both the lower and higher range bins to refine the wind parameter estimates in between. The results of the two different approaches of NADSET are compared.
An angle-dependent estimation of CT x-ray spectrum from rotational transmission measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan; Ramirez-Giraldo, Juan Carlos
2014-06-15
Purpose: Computed tomography (CT) performance, as well as dose and image quality, is directly affected by the x-ray spectrum. However, current assessment approaches for the CT x-ray spectrum require costly measurement equipment and complicated operational procedures, and are often limited to the spectrum corresponding to the center of rotation. In order to address these limitations, the authors propose an angle-dependent estimation technique, where the incident spectra across a wide range of angular trajectories can be estimated accurately with only a single phantom and a single axial scan, in the absence of knowledge of the bowtie filter. Methods: The proposed technique uses a uniform cylindrical phantom, made of ultra-high-molecular-weight polyethylene and positioned in an off-centered geometry. The projection data acquired with an axial scan have a twofold purpose. First, they serve as a reflection of the transmission measurements across different angular trajectories. Second, they are used to reconstruct the cross sectional image of the phantom, which is then utilized to compute the intersection length of each transmission measurement. With each CT detector element recording a range of transmission measurements for a single angular trajectory, the spectrum is estimated for that trajectory. A data conditioning procedure is used to combine information from hundreds of collected transmission measurements to accelerate the estimation speed, to reduce noise, and to improve estimation stability. The proposed spectral estimation technique was validated experimentally using a clinical scanner (Somatom Definition Flash, Siemens Healthcare, Germany) with spectra provided by the manufacturer serving as the comparison standard. Results obtained with the proposed technique were compared against those obtained from a second conventional transmission measurement technique with two materials (i.e., Cu and Al).
After validation, the proposed technique was applied to measure spectra from the clinical system across a range of angular trajectories [−15°, 15°] and spectrum settings (80, 100, 120, 140 kVp). Results: At 140 kVp, the proposed technique was comparable to the conventional technique in terms of the mean energy difference (MED, −0.29 keV) and the normalized root mean square difference (NRMSD, 0.84%) from the comparison standard, compared to 0.64 keV and 1.56%, respectively, with the conventional technique. The average absolute MEDs and NRMSDs across kVp settings and angular trajectories were less than 0.61 keV and 3.41%, respectively, which indicates a high level of estimation accuracy and stability. Conclusions: An angle-dependent estimation technique for CT x-ray spectra from rotational transmission measurements was proposed. Compared with the conventional technique, the proposed method simplifies the measurement procedures and enables incident spectral estimation for a wide range of angular trajectories. The proposed technique is suitable for rigorous research objectives as well as routine clinical quality control procedures.
Estimating propagation velocity through a surface acoustic wave sensor
Xu, Wenyuan; Huizinga, John S.
2010-03-16
Techniques are described for estimating the propagation velocity through a surface acoustic wave sensor. In particular, techniques which measure and exploit a proper segment of phase frequency response of the surface acoustic wave sensor are described for use as a basis of bacterial detection by the sensor. As described, use of velocity estimation based on a proper segment of phase frequency response has advantages over conventional techniques that use phase shift as the basis for detection.
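For an idealized delay line, the phase-frequency response over a proper segment is linear in frequency, and its slope yields the acoustic delay and hence the propagation velocity. A sketch with a hypothetical path length and a typical SAW velocity (both are illustrative values, not from the patent):

```python
import numpy as np

# Idealized SAW delay line: phase response phi(f) = -2*pi*f*L/v over the
# chosen (proper) frequency segment; the slope gives the delay L/v.
L = 4e-3           # acoustic path length in meters (hypothetical)
v_true = 3488.0    # SAW velocity on ST-quartz in m/s (typical value)
f = np.linspace(95e6, 105e6, 201)
phi = -2 * np.pi * f * L / v_true + 0.3   # constant phase offset is irrelevant

slope = np.polyfit(f, phi, 1)[0]          # d(phi)/df = -2*pi*L/v
v_est = -2 * np.pi * L / slope
print(f"estimated velocity = {v_est:.1f} m/s")
```

A mass-loading bacterial layer would shift v and hence the slope, which is why the slope-based estimate can serve as a detection signal.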
We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident
sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector--to-
r...
Forest inventory and stratified estimation: a cautionary note
John Coulston
2008-01-01
The Forest Inventory and Analysis (FIA) Program uses stratified estimation techniques to produce estimates of forest attributes. Stratification must be unbiased and stratification procedures should be examined to identify any potential bias. This note explains simple techniques for identifying potential bias, discriminating between sample bias and stratification bias,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
USDA-ARS?s Scientific Manuscript database
Spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquis...
Spring Small Grains Area Estimation
NASA Technical Reports Server (NTRS)
Palmer, W. F.; Mohler, R. J.
1986-01-01
SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.
Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques
NASA Astrophysics Data System (ADS)
Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.
2016-12-01
Estimations of seeding, harvest, and phenological growth stage of crops are important sources of information for monitoring crop progress and crop yield forecasting. Growth stage has been traditionally estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop- and hybrid-dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time series NDVI provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values, or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing 'greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine both meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
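The NDVI-threshold approach described above can be sketched as smoothing a time series and flagging where the slope first exceeds a green-up threshold; the synthetic series, smoothing window, and threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy NDVI time series (one value per week): logistic green-up centered on
# week 12, a plateau, then senescence near week 28, with observation noise.
t = np.arange(0, 36, dtype=float)
ndvi = 0.2 + 0.6 / (1 + np.exp(-(t - 12))) - 0.6 / (1 + np.exp(-(t - 28)))
ndvi += rng.normal(0, 0.01, t.size)

# Smooth with a moving average, then threshold the slope to flag the start
# of rapid green-up (a proxy for an early growth stage).
kernel = np.ones(5) / 5
smooth = np.convolve(ndvi, kernel, mode="valid")  # drops 2 samples each end
tc = t[2:-2]
slope = np.gradient(smooth, tc)
green_up = tc[np.argmax(slope > 0.05)]  # first week the slope exceeds threshold
print(f"green-up detected around week {green_up:.0f}")
```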
Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.
2010-01-01
Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. Because of the need to have up-to-date peak-flow frequency information in order to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques, regional regression equations and region of influence regression, can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest.
Two methods of selecting streamflow-gaging stations, similarity and proximity, can be used for the region of influence regression technique. The regional regression equation technique is the preferred technique as an estimate of peak flow in all six regions for ungaged sites. The region of influence regression technique is not appropriate for regions C, E, and F because the interrelations of some characteristics of those regions do not agree with the interrelations throughout the rest of the State. Both the similarity and proximity methods for the region of influence technique can be used in the other regions (A, B, and D) to provide additional estimates of peak flow. The peak-flow-frequency estimates and basin characteristics for selected streamflow-gaging stations and regional peak-flow regression equations are included in this report.
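The regional-regression idea, fitting log-linear equations that relate peak flow to basin characteristics at gaged sites and applying them at an ungaged site, can be sketched on synthetic data. The basin characteristics, exponents, and the ungaged site below are invented, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "gaged" basins: drainage area (mi^2) and mean annual
# precipitation (in), with the 100-year peak flow following a power law.
n = 60
area = 10 ** rng.uniform(0.5, 3.3, n)          # up to ~2000 mi^2
precip = rng.uniform(20, 40, n)
q100 = 40 * area**0.75 * precip**0.5 * 10 ** rng.normal(0, 0.05, n)

# Regional regression equations are fit in log space:
#   log Q = b0 + b1*log(A) + b2*log(P)
X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip)])
b, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

# Estimate peak flow at an ungaged site (hypothetical characteristics).
a_u, p_u = 250.0, 30.0
q_est = 10 ** (b @ np.array([1.0, np.log10(a_u), np.log10(p_u)]))
print(f"estimated Q100 at the ungaged site ~ {q_est:.0f} cfs")
```

The region-of-influence variant would refit such an equation using only gaging stations similar (or near) to each ungaged site, rather than one fixed regional equation.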
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
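As background to the estimation problem, a minimal scalar Kalman filter illustrates how filtering trades off process and measurement noise to reduce estimation error. This is only a generic sketch with invented noise variances, not the tuner-selection methodology of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r = 1e-4, 1e-2   # process and measurement noise variances (illustrative)
n = 500

# Simulate a slowly drifting scalar performance parameter and a noisy sensor.
x = np.cumsum(rng.normal(0, np.sqrt(q), n))
z = x + rng.normal(0, np.sqrt(r), n)

# Scalar Kalman filter with a random-walk model.
x_hat, P = 0.0, 1.0
est = []
for zk in z:
    P += q                       # predict: propagate the error variance
    K = P / (P + r)              # Kalman gain
    x_hat += K * (zk - x_hat)    # update with the measurement residual
    P *= 1 - K
    est.append(x_hat)

rmse = float(np.sqrt(np.mean((np.array(est) - x) ** 2)))
print(f"filter RMSE = {rmse:.4f} vs raw sensor RMSE = {np.sqrt(r):.4f}")
```

The steady-state value of P here is the scalar analogue of the theoretical mean squared estimation error that the paper's tuner selection minimizes.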
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle, yet, because of wave reflection, only one PTT value at the diastolic BP level is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge of the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based, or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Estimates of air emissions from asphalt storage tanks and truck loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trumbore, D.C.
1999-12-31
Title V of the 1990 Clean Air Act requires the accurate estimation of emissions from all US manufacturing processes, and places the burden of proof for that estimate on the process owner. This paper is published as a tool to assist in the estimation of air emissions from hot asphalt storage tanks and asphalt truck loading operations. Data are presented on asphalt vapor pressure, vapor molecular weight, and the emission split between volatile organic compounds and particulate emissions that can be used with AP-42 calculation techniques to estimate air emissions from asphalt storage tanks and truck loading operations. Since current AP-42 techniques are not valid in asphalt tanks with active fume removal, a different technique for estimation of air emissions in those tanks, based on direct measurement of vapor space combustible gas content, is proposed. Likewise, since AP-42 does not address carbon monoxide or hydrogen sulfide emissions that are known to be present in asphalt operations, this paper proposes techniques for estimation of those emissions. Finally, data are presented on the effectiveness of fiber bed filters in reducing air emissions in asphalt operations.
Estimation of Heavy Metals Contamination in the Soil of Zaafaraniya City Using the Neural Network
NASA Astrophysics Data System (ADS)
Ghazi, Farah F.
2018-05-01
The aim of this paper is to estimate heavy-metal contamination in soils, which can be used to determine the rate of environmental contamination, using a new technique based on the design of a feedback neural network as an accurate alternative. The network is trained to estimate the concentrations of cadmium (Cd), nickel (Ni), lead (Pb), zinc (Zn), and copper (Cu). To show the accuracy and efficiency of the suggested design, we applied the technique to Al-Zafaraniyah in Baghdad. The results show that the suggested networks can be successfully applied to the rapid and accurate estimation of heavy-metal concentrations.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
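The contrast between a theoretical and a residual-based covariance can be sketched for a small weighted least squares problem. The residual scaling used below is the standard a-posteriori variance-factor approach, shown as a stand-in for (not a reproduction of) the author's reinterpretation; the geometry and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Four measurements of a 2-state quantity through different geometries.
H = np.array([[1.0, 0.5],
              [0.3, 1.0],
              [1.0, 1.0],
              [0.8, 0.2]])
x_true = np.array([2.0, -1.0])
sigma = 0.1
y = H @ x_true + rng.normal(0, sigma, 4)

W = np.eye(4) / sigma**2     # weights from the assumed measurement variances
Ninv = np.linalg.inv(H.T @ W @ H)
x_hat = Ninv @ H.T @ W @ y   # weighted least squares estimate

# Theoretical covariance vs. a residual-based empirical covariance:
# the latter reflects the actual errors in this data set, known or not.
r = y - H @ x_hat
s2 = (r @ W @ r) / (len(y) - len(x_hat))  # unit-weight variance factor
P_theory = Ninv
P_emp = s2 * Ninv
print(np.diag(P_theory), np.diag(P_emp))
```

If the assumed sigma is wrong, P_theory stays fixed while P_emp scales with the observed residuals, which is the motivation for empirical covariances.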
ERIC Educational Resources Information Center
Recchia, Gabriel L.; Louwerse, Max M.
2016-01-01
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley…
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Genie M. Fleming; Joseph M. Wunderle; David N. Ewert; Joseph O'Brien
2014-01-01
Aim: Non-destructive methods for quantifying above-ground plant biomass are important tools in many ecological studies and management endeavours, but estimation methods can be labour intensive and particularly difficult in structurally diverse vegetation types. We aimed to develop a low-cost, but reasonably accurate, estimation technique within early-successional...
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify the parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
NASA Astrophysics Data System (ADS)
Shi, Lei; Guo, Lianghui; Ma, Yawei; Li, Yonghua; Wang, Weilai
2018-05-01
The technique of teleseismic receiver function H-κ stacking is popular for estimating crustal thickness and the Vp/Vs ratio. However, it has large uncertainty or ambiguity when the Moho multiples in the receiver function are difficult to identify. We present an improved technique to estimate the crustal thickness and Vp/Vs ratio under the joint constraints of receiver function and gravity data. The complete Bouguer gravity anomalies, composed of the anomalies due to the relief of the Moho interface and the heterogeneous density distribution within the crust, are associated with the crustal thickness, density and Vp/Vs ratio. Following the relationship formulae presented by Lowry and Pérez-Gussinyé, we invert the complete Bouguer gravity anomalies with a common likelihood-estimation algorithm to obtain the crustal thickness and Vp/Vs ratio, and then use these to constrain the receiver function H-κ stacking result. We verified the improved technique on three synthetic crustal models and evaluated the influence of the selected parameters; the results demonstrate that the new technique reduces the ambiguity and enhances the accuracy of the estimates. A real-data test at two stations in the NE margin of the Tibetan Plateau showed that the improved technique provides reliable estimates of crustal thickness and Vp/Vs ratio.
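Conventional H-κ stacking itself can be sketched as a grid search that stacks receiver-function amplitudes at the arrival times predicted for the Ps conversion and its crustal multiples (the timing equations commonly attributed to Zhu and Kanamori). The velocities, weights, pulse shapes and grids below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hk_arrival_times(H, vp, kappa, p=0.06):
    """Predicted delays (s) after direct P of the Ps, PpPs and PpSs+PsPs
    phases, for crustal thickness H (km), P velocity vp (km/s),
    Vp/Vs ratio kappa and ray parameter p (s/km)."""
    vs = vp / kappa
    eta_s = np.sqrt(vs**-2 - p**2)
    eta_p = np.sqrt(vp**-2 - p**2)
    return H * (eta_s - eta_p), H * (eta_s + eta_p), 2.0 * H * eta_s

def hk_stack(rf, dt, vp=6.2, weights=(0.7, 0.2, 0.1)):
    """Grid search over (H, kappa) maximizing the weighted stack of
    receiver-function amplitudes at the predicted phase times."""
    best, best_hk = -np.inf, None
    for H in np.arange(25.0, 55.0, 0.5):
        for k in np.arange(1.60, 2.00, 0.01):
            t1, t2, t3 = hk_arrival_times(H, vp, k)
            s = (weights[0] * rf[int(round(t1 / dt))]
                 + weights[1] * rf[int(round(t2 / dt))]
                 - weights[2] * rf[int(round(t3 / dt))])  # PpSs+PsPs is negative
            if s > best:
                best, best_hk = s, (H, k)
    return best_hk

# Synthetic receiver function for a crust with H = 40 km, kappa = 1.75
dt, t = 0.1, np.arange(600) * 0.1
rf = np.zeros_like(t)
for tt, amp in zip(hk_arrival_times(40.0, 6.2, 1.75), (1.0, 0.5, -0.5)):
    rf += amp * np.exp(-0.5 * ((t - tt) / 0.3) ** 2)
```

The ambiguity the abstract refers to shows up here as a trade-off ridge in the (H, κ) stack surface when the multiples (the second and third terms) are weak; the gravity constraint is what breaks that trade-off.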
Quantum-classical boundary for precision optical phase estimation
NASA Astrophysics Data System (ADS)
Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo
2017-12-01
Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20% reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4%. We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. Monitoring the status of vast forest landscapes using traditional field methods is arduous and time-consuming. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. A PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height retrieval used a PolInSAR coherence-based modeling approach. Two techniques for forest height estimation, Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI), are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR-based forest height models were weak at identifying forest vegetation, and as a result height values were obtained in river channels and plain areas. Overestimation of forest height was also noticed in several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced for forest area identification and accurate height estimation in non-forested regions.
IWCM-based modeling for forest AGB retrieval showed an R2 value of 0.5, an RMSE of 62.73 t ha-1 and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between the field-measured forest height and the tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggests that the PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
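The core step, estimating displacement from a correlation computed in the transform domain, can be sketched with plain FFT cross-correlation for an integer pixel shift. This is a simplified open-loop illustration; the paper's closed-loop, weighted, rotation-invariant formulation is more elaborate, and the image sizes and noise level below are invented.

```python
import numpy as np

def correlation_shift(reference, received):
    """Estimate the integer (row, col) displacement of `received` relative to
    `reference` from the peak of their cross-correlation, computed in the
    transform (frequency) domain."""
    F = np.fft.fft2(reference)
    G = np.fft.fft2(received)
    xcorr = np.fft.ifft2(np.conj(F) * G).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # convert wrapped indices to signed shifts
    return tuple(i if i <= n // 2 else i - n for i, n in zip(peak, xcorr.shape))

rng = np.random.default_rng(1)
reference = rng.normal(size=(32, 32))                 # known target profile
received = np.roll(reference, (5, -3), axis=(0, 1))   # moved target ...
received += 0.1 * rng.normal(size=(32, 32))           # ... plus pixel noise
```

In a tracking loop, the recovered shift would be fed back to repoint the imager before the next frame is correlated.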
NASA Astrophysics Data System (ADS)
Hamaguchi, Nana; Yamamoto, Keiko; Iwai, Daisuke; Sato, Kosuke
We investigate ambient sensing techniques that recognize a writer's psychological state by measuring the vibrations of handwriting on a desk panel using a piezoelectric contact sensor attached to its underside. In particular, we describe a technique for estimating the subjective difficulty of a question for a student as the ratio of the time spent thinking to the total amount of time spent on the question. Through experiments, we confirm that our technique correctly recognizes from the measured vibration data whether or not a person writes something down on paper with an accuracy of over 80%, and that the order of the computed subjective difficulties of three questions coincides with that reported by the subject in 60% of the experiments. We also propose a technique to estimate a writer's psychological stress by using the standard deviation of the spectrum of the measured vibration. Results of a proof-of-concept experiment show that the proposed technique correctly estimates whether or not the subject feels stress at least 90% of the time.
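The subjective-difficulty measure reduces to a simple ratio once each time sample has been classified as writing or thinking. The threshold-based classifier and the numbers below are assumptions for illustration, not the authors' detector.

```python
def detect_writing(vibration, threshold):
    """Classify each vibration sample as writing (True) when its squared
    amplitude (energy) exceeds a calibrated threshold."""
    return [v * v > threshold for v in vibration]

def subjective_difficulty(writing_flags, dt=1.0):
    """Subjective difficulty = thinking time / total time on the question."""
    total = len(writing_flags) * dt
    thinking = sum(1 for w in writing_flags if not w) * dt
    return thinking / total
```

For a question where the student thinks for three of four sampled seconds, the measure is 0.75; a question answered with continuous writing scores 0.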
High-resolution bottom-loss estimation using the ambient-noise vertical coherence function.
Muzi, Lanfranco; Siderius, Martin; Quijano, Jorge E; Dosso, Stan E
2015-01-01
The seabed reflection loss (shortly "bottom loss") is an important quantity for predicting transmission loss in the ocean. A recent passive technique for estimating the bottom loss as a function of frequency and grazing angle exploits marine ambient noise (originating at the surface from breaking waves, wind, and rain) as an acoustic source. Conventional beamforming of the noise field at a vertical line array of hydrophones is a fundamental step in this technique, and the beamformer resolution in grazing angle affects the quality of the estimated bottom loss. Implementation of this technique with short arrays can be hindered by their inherently poor angular resolution. This paper presents a derivation of the bottom reflection coefficient from the ambient-noise spatial coherence function, and a technique based on this derivation for obtaining higher angular resolution bottom-loss estimates. The technique, which exploits the (approximate) spatial stationarity of the ambient-noise spatial coherence function, is demonstrated on both simulated and experimental data.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentration in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) observer-based estimation; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method appears to have advantages, although it requires more measurements than the other methods. However, the extra measurements required come from instruments commonly employed in an industrial environment. This method is used for developing model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
UAV State Estimation Modeling Techniques in AHRS
NASA Astrophysics Data System (ADS)
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operations. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
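The gyro/accelerometer fusion underlying an AHRS can be illustrated with a scalar Kalman filter on a single attitude angle. This is a deliberately simplified sketch: a real AHRS EKF operates on quaternion or Euler-angle state vectors, and the noise parameters and simulated data here are invented.

```python
import numpy as np

def kf_attitude(gyro_rates, accel_angles, dt=0.01, q=1e-4, r=0.05):
    """Scalar Kalman filter: predict by integrating the gyro rate, then
    correct with the accelerometer-derived angle. Returns angle estimates."""
    theta, P = 0.0, 1.0
    estimates = []
    for omega, z in zip(gyro_rates, accel_angles):
        theta += omega * dt        # predict: integrate gyro rate
        P += q                     # process noise inflates uncertainty
        K = P / (P + r)            # Kalman gain
        theta += K * (z - theta)   # update: blend in accelerometer angle
        P *= 1.0 - K
        estimates.append(theta)
    return estimates

rng = np.random.default_rng(3)
true_angle = 0.3                                        # rad, held constant
gyro = np.zeros(500)                                    # zero rotation rate
accel = true_angle + rng.normal(scale=0.1, size=500)    # noisy angle readings
est = kf_attitude(gyro, accel)
```

The gain K shrinks as the filter converges, so the smooth low-drift gyro dominates short-term behavior while the noisy accelerometer anchors the long-term estimate.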
The effects of missing data on global ozone estimates
NASA Technical Reports Server (NTRS)
Drewry, J. W.; Robbins, J. L.
1981-01-01
The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
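Station skew, the starting point for all four generalized-skew techniques, is the sample skew coefficient of the log-transformed annual peak flows. A minimal sketch of the textbook formula (without the bias correction or mean-square-error weighting that flood-frequency guidelines add):

```python
import math

def station_skew(annual_peaks):
    """Sample skew coefficient of log10 annual peak flows."""
    logs = [math.log10(q) for q in annual_peaks]
    n = len(logs)
    mean = sum(logs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return n * sum((x - mean) ** 3 for x in logs) / ((n - 1) * (n - 2) * s ** 3)
```

Symmetric log-space records give a skew near zero; a single extreme flood pulls the coefficient strongly positive, which is why regional averaging or mapping is used to stabilize the estimate.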
Biomagnetic techniques for evaluating gastric emptying, peristaltic contraction and transit time
la Roca-Chiapas, Jose María De; Cordova-Fraga, Teodoro
2011-01-01
Biomagnetic techniques were used to measure motility in various parts of the gastrointestinal (GI) tract, particularly a new technique for detecting magnetic markers and tracers. A coil was used to enhance the signal from a magnetic tracer in the GI tract and the signal was detected using a fluxgate magnetometer or a magnetoresistor in an unshielded room. Estimates of esophageal transit time were affected by the position of the subject. The reproducibility of estimates derived using the new biomagnetic technique was greater than 85% and it yielded estimates similar to those obtained using scintigraphy. This technique is suitable for studying the effect of emotional state on GI physiology and for measuring GI transit time. The biomagnetic technique can be used to evaluate digesta transit time in the esophagus, stomach and colon, peristaltic frequency and gastric emptying and is easy to use in the hospital setting. PMID:22025978
A visual training tool for the Photoload sampling technique
Violet J. Holley; Robert E. Keane
2010-01-01
This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...
NASA Technical Reports Server (NTRS)
Daly, J. K.
1974-01-01
The programming techniques used to implement the equations and mathematical techniques of the Houston Operations Predictor/Estimator (HOPE) orbit determination program on the UNIVAC 1108 computer are described. Detailed descriptions are given of the program structure, the internal program tables and program COMMON, modification and maintenance techniques, and individual subroutine documentation.
NASA Technical Reports Server (NTRS)
1980-01-01
A plan is presented for a supplemental experiment to evaluate a sample allocation technique for selecting picture elements from remotely sensed multispectral imagery for labeling in connection with a new crop proportion estimation technique. The method of evaluating an improved allocation and proportion estimation technique is also provided.
NASA Technical Reports Server (NTRS)
Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.
1984-01-01
Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products which can be incorporated into existing inventory procedures and automated alternatives to traditional inventory techniques and those which currently employ Landsat imagery.
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking, and when estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data from the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger.
Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
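The drainage-basin area-ratio transfer technique has a simple form: scale the gaged flow statistic by the ratio of drainage areas, optionally raised to a regional exponent. The exponent and numbers below are illustrative, not SWRCB values.

```python
def area_ratio_transfer(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a flow statistic from a gaged to an ungaged site by scaling
    with the drainage-area ratio: Q_u = Q_g * (A_u / A_g) ** exponent."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent
```

With an exponent of 1 the method assumes flow scales linearly with drainage area; regional studies often fit an exponent slightly below 1, which yields proportionally higher flows for the smaller basin.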
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
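The maximum likelihood gain and SNR estimate in additive white Gaussian noise can be sketched as projecting the measured waveform onto the known reference. This is a generic formulation under the stated noise assumption; the delay estimation and bandpass handling described in the abstract are omitted, and the signal below is synthetic.

```python
import numpy as np

def estimate_snr(reference, measured):
    """Estimate the gain and SNR of `measured` relative to the noise-free
    `reference`, assuming additive white Gaussian noise."""
    gain = np.dot(measured, reference) / np.dot(reference, reference)
    noise = measured - gain * reference
    snr = gain ** 2 * np.mean(reference ** 2) / np.mean(noise ** 2)
    return gain, snr

rng = np.random.default_rng(4)
s = rng.normal(size=10_000)              # noise-free reference signal
y = 2.0 * s + rng.normal(size=10_000)    # gain 2, unit-variance noise: SNR = 4
gain, snr = estimate_snr(s, y)
```

For a digital link, the estimated SNR can then be mapped to an error probability through the modulation's known BER curve, as the abstract notes.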
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
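A capacity-factor approximation of the kind compared in the paper can be sketched as the plant's average output, as a fraction of nameplate, over the highest-load hours. The hour count and numbers below are illustrative; the paper's weighting scheme may differ.

```python
def capacity_factor_approximation(load, pv_output, capacity, top_hours):
    """Approximate PV capacity value as the plant's mean capacity factor
    over the `top_hours` highest-load hours (a proxy for the full
    reliability-based calculation)."""
    ranked = sorted(zip(load, pv_output), key=lambda pair: -pair[0])
    top = ranked[:top_hours]
    return sum(pv for _, pv in top) / (top_hours * capacity)

# Four illustrative hours: load (MW) and PV output (MW) for a 4 MW plant
load = [10.0, 40.0, 30.0, 20.0]
pv = [0.0, 3.0, 2.0, 1.0]
cv = capacity_factor_approximation(load, pv, capacity=4.0, top_hours=2)
```

Because system risk concentrates in the highest-load hours, restricting the average to those hours is what lets this cheap approximation track the reliability-based result.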
A proposed technique for the Venus balloon telemetry and Doppler frequency recovery
NASA Technical Reports Server (NTRS)
Jurgens, R. F.; Divsalar, D.
1985-01-01
A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch indexing technique has been devised to decode the constraint-length 6, rate 1/2 convolutional code used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.
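Because the data are prerecorded, the carrier frequency can be estimated noncausally from the whole record at once, for example by locating the periodogram peak. This is a minimal open-loop sketch with an invented tone and sample rate; the proposed system also handles subcarrier demodulation, synchronization and decoding.

```python
import numpy as np

def estimate_carrier_frequency(samples, fs):
    """Noncausal carrier-frequency estimate: peak of the periodogram of the
    entire prerecorded record (no tracking loop required)."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

rng = np.random.default_rng(5)
fs, n = 1000.0, 1000
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * 50.0 * t) + 0.5 * rng.normal(size=n)  # 50 Hz tone
```

The frequency resolution is fs/n per bin; interpolating around the peak, or fitting over segments, refines the estimate when the Doppler drifts during the record.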
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Techniques quoted in the literature for the extraction of stability derivative information from flight test records are reviewed. A recent technique developed at NASA's Langley Research Center was regarded as the most productive yet developed. Results of tests of the sensitivity of this procedure to various types of data noise and to the accuracy of the estimated values of the derivatives are reported. Computer programs for providing these initial estimates are given. The literature review also includes a discussion of flight test measuring techniques, instrumentation, and piloting techniques.
Use of high-order spectral moments in Doppler weather radar
NASA Astrophysics Data System (ADS)
di Vito, A.; Galati, G.; Veredice, A.
Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated: (1) an extension of the pulse-pair technique; (2) fitting the autocorrelation function (ACF) with a least-squares polynomial and differentiating it; and (3) autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computational burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.
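Once a spectrum estimate is available, its skewness and kurtosis follow from normalized spectral moments, treating the power spectrum as a distribution over Doppler frequency. This generic moment computation is common to all three techniques above, which differ only in how the spectrum or ACF is obtained; the Gaussian test spectrum below is synthetic.

```python
import numpy as np

def spectral_moments(freqs, power):
    """Mean, width, skewness and kurtosis of a power spectrum, treating the
    normalized power as a probability distribution over frequency."""
    p = power / power.sum()
    mean = (p * freqs).sum()
    var = (p * (freqs - mean) ** 2).sum()
    skew = (p * (freqs - mean) ** 3).sum() / var ** 1.5
    kurt = (p * (freqs - mean) ** 4).sum() / var ** 2
    return mean, np.sqrt(var), skew, kurt

f = np.linspace(-5.0, 5.0, 201)          # Doppler frequency grid
gaussian = np.exp(-0.5 * f ** 2)         # symmetric Gaussian spectrum
mean, width, skew, kurt = spectral_moments(f, gaussian)
```

A symmetric Gaussian spectrum has skewness 0 and kurtosis 3; departures from these values are what the three estimators are trying to measure reliably in noise.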
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical of PIV measurements is applied to synthetic PIV data extracted from the numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reduction in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller-scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex-dominated laminar and turbulent wake flows.
Ellison, L.E.; O'Shea, T.J.; Neubaum, D.J.; Neubaum, M.A.; Pearce, R.D.; Bowen, R.A.
2007-01-01
We compared conventional capture (primarily mist nets and harp traps) and passive integrated transponder (PIT) tagging techniques for estimating capture and survival probabilities of big brown bats (Eptesicus fuscus) roosting in buildings in Fort Collins, Colorado. A total of 987 female adult and juvenile bats were captured and marked by subdermal injection of PIT tags during the summers of 2001-2005 at five maternity colonies in buildings. Openings to roosts were equipped with PIT hoop-style readers, and exit and entry of bats were passively monitored on a daily basis throughout the summers of 2002-2005. PIT readers 'recaptured' adult and juvenile females more often than conventional capture events at each roost. Estimates of annual capture probabilities for all five colonies were on average twice as high when estimated from PIT reader data (P̂ = 0.93-1.00) than when derived from conventional techniques (P̂ = 0.26-0.66), and as a consequence annual survival estimates were more precisely estimated when using PIT reader encounters. Short-term, daily capture estimates were also higher using PIT readers than conventional captures. We discuss the advantages and limitations of using PIT tags and passive encounters with hoop readers vs. conventional capture techniques for estimating these vital parameters in big brown bats. © Museum and Institute of Zoology PAS.
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress on including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent with the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise ratio between the planets and speckles improves. Having established a purely focal-plane-based wavefront estimation technique, we discuss a sensor fusion concept in which alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
Parrett, Charles; Hull, J.A.
1986-01-01
Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow record at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was + or - 30 percent for the 21 sites measured during the first year of the program and + or - 50 percent for the 19 sites measured during the second year. (USGS)
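The core of the modified concurrent-discharge technique, a log-linear regression between mean seasonal discharge and a percentile discharge, can be sketched as follows. The data below are synthetic; the real equations were fitted to records from nine Montana gaging stations.

```python
import numpy as np

def fit_loglinear(seasonal_q, percentile_q):
    """Least-squares fit of log10(Qp) = a + b*log10(Qs); returns (a, b)."""
    b, a = np.polyfit(np.log10(seasonal_q), np.log10(percentile_q), 1)
    return a, b

def predict_percentile(a, b, seasonal_q):
    """Percentile discharge predicted from a mean seasonal discharge."""
    return 10.0 ** (a + b * np.log10(seasonal_q))

qs = np.array([5.0, 20.0, 80.0, 320.0])   # mean seasonal discharges (gaged)
qp = 2.0 * qs ** 0.9                      # synthetic percentile discharges
a, b = fit_loglinear(qs, qp)
```

At an ungaged site, the mean seasonal discharge is first approximated from the once-monthly measurements, then fed through the fitted relation to estimate the flow-duration percentiles.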
Using Deep Learning for Tropical Cyclone Intensity Estimation
NASA Astrophysics Data System (ADS)
Miller, J.; Maskey, M.; Berendes, T.
2017-12-01
Satellite-based techniques are the primary approach to estimating tropical cyclone (TC) intensity. Tropical cyclone warning centers worldwide still apply variants of the Dvorak technique for such estimations that include visual inspection of the satellite images. The National Hurricane Center (NHC) estimates about 10-20% uncertainty in its post analyses when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to TC intensity. With the ever-increasing quality and quantity of satellite observations of TCs, deep learning techniques designed to excel at pattern recognition have become more relevant in this area of study. In our current study, we aim to provide a fully objective approach to TC intensity estimation by utilizing deep learning in the form of a convolutional neural network trained to predict TC intensity (maximum sustained wind speed) using IR satellite imagery. Large amounts of training data are needed to train a convolutional neural network, so we use GOES IR images from historical tropical storms from the Atlantic and Pacific basins spanning years 2000 to 2015. Images are labeled using a special subset of the HURDAT2 dataset restricted to time periods with airborne reconnaissance data available in order to improve the quality of the HURDAT2 data. Results and the advantages of this technique are to be discussed.
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
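A minimal sketch of the polynomial-fitting idea, using a synthetic slope curve with a known analytic integral (not the solvation data of the study):

```python
import numpy as np

# Synthetic thermodynamic-integration data: dF/dlambda sampled at
# non-equidistant lambda values. The true slope is 3*lambda**2, so the
# exact free energy difference over [0, 1] is 1.0.
lam = np.array([0.0, 0.05, 0.15, 0.35, 0.6, 0.85, 1.0])
dF_dlam = 3.0 * lam**2

# Fit a polynomial to the slope data, then integrate the fitted
# polynomial analytically over [0, 1].
poly = np.poly1d(np.polyfit(lam, dF_dlam, deg=3))
antideriv = np.polyint(poly)
delta_F = antideriv(1.0) - antideriv(0.0)
```

The fitted polynomial is integrated in closed form, so the quadrature error comes only from how well the polynomial represents the slope data — the motivation for careful fitting and non-equidistant λ placement.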
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we have presented two approaches addressing visual target tracking and localization in complex urban environments. The two techniques presented in this paper are: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image pixel to world coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
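A plain (not bias-reduced) maximum-likelihood fit of a cumulative-Gaussian psychometric function can be sketched as below; the grid-search optimizer and all trial data are illustrative assumptions, not the paper's procedure:

```python
import numpy as np
from math import erf, sqrt

def psi(x, mu, sigma):
    """Cumulative-Gaussian psychometric function: P(detect | stimulus x)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_psychometric(stimuli, responses):
    """Maximum-likelihood (mu, sigma) via a coarse grid search."""
    responses = np.asarray(responses, dtype=float)
    best, best_ll = (0.0, 1.0), -np.inf
    for mu in np.linspace(-2.0, 2.0, 41):
        for sigma in np.linspace(0.2, 3.0, 29):
            p = np.clip([psi(s, mu, sigma) for s in stimuli], 1e-9, 1 - 1e-9)
            ll = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

# Simulated observer with threshold 0.5 and spread 1.0, random stimuli.
rng = np.random.default_rng(1)
true_mu, true_sigma = 0.5, 1.0
x = rng.uniform(-3.0, 3.0, size=300)
y = (rng.random(300) < [psi(v, true_mu, true_sigma) for v in x]).astype(int)
mu_hat, sigma_hat = fit_psychometric(x, y)
```

With adaptively sampled (rather than random) stimuli, the same likelihood machinery exhibits the spread-parameter bias the paper addresses; the bias-reduced fit modifies the likelihood, not this basic structure.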
Weighted image de-fogging using luminance dark prior
NASA Astrophysics Data System (ADS)
Kansal, Isha; Kasana, Singara Singh
2017-10-01
In this work, the weighted image de-fogging process based upon dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission by using three colour channels, whereas the luminance dark prior does the same by making use of only the Y component of the YUV colour space. For each pixel in a patch of ? size, the luminance dark prior uses ? pixels, rather than the ? pixels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts during transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even if there are no significant depth disruptions, due to which the de-fogged image looks over-smooth and has low contrast. Apart from this, in some images the weighted transmission still carries faintly visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of de-fogged images. In addition to this, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques. This comparison shows that the proposed technique performs better than the existing techniques.
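The patch-minimum step behind a luminance-only dark prior can be sketched as follows; the patch size, scalar atmospheric light, and test image are illustrative assumptions, not the paper's full pipeline:

```python
import numpy as np

def transmission_luminance_dark(Y, patch=15, A=1.0, omega=0.95):
    """Transmission map from a luminance-only dark prior.

    Y     : 2-D luminance (the Y of YUV), values in [0, 1]
    A     : atmospheric light (a scalar here for simplicity)
    omega : fraction of haze removed, preserving some aerial perspective
    """
    h, w = Y.shape
    r = patch // 2
    padded = np.pad(Y, r, mode="edge")
    dark = np.empty_like(Y)
    for i in range(h):                 # minimum filter over each local patch
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return 1.0 - omega * dark / A

# A brighter (more heavily veiled) region should receive lower transmission.
img = np.full((40, 40), 0.2)
img[:, 20:] = 0.8
t = transmission_luminance_dark(img)
```

Operating on one channel instead of the per-pixel minimum over three is where the claimed speedup comes from; the min-filter structure is otherwise the same as in DCP.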
NASA Astrophysics Data System (ADS)
Bellili, Faouzi; Amor, Souheib Ben; Affes, Sofiène; Ghrayeb, Ali
2017-12-01
This paper addresses the problem of DOA estimation using uniform linear array (ULA) antenna configurations. We propose a new low-cost method of multiple DOA estimation from very short data snapshots. The new estimator is based on the annihilating filter (AF) technique. It is non-data-aided (NDA) and therefore does not impinge on the overall throughput of the system. The noise components are assumed temporally and spatially white across the receiving antenna elements. The transmitted signals are also temporally and spatially white across the transmitting sources. The new method is compared in performance to the Cramér-Rao lower bound (CRLB), the root-MUSIC algorithm, the deterministic maximum likelihood estimator and another Bayesian method developed precisely for the single snapshot case. Simulations show that the new estimator performs well over a wide SNR range. Prominently, the main advantage of the new AF-based method is that it succeeds in accurately estimating the DOAs from short data snapshots and even from a single snapshot, outperforming by far the state-of-the-art techniques in both DOA estimation accuracy and computational cost.
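A noise-free, single-snapshot sketch of the annihilating-filter idea for a half-wavelength ULA (the estimator in the paper handles noise and is more elaborate than this):

```python
import numpy as np

def doa_annihilating_filter(x, n_sources):
    """DOAs from one ULA snapshot via an annihilating filter (noise-free).

    Model: x[n] = sum_k c_k * exp(1j*pi*n*sin(theta_k)) for a
    half-wavelength-spaced array.
    """
    N, K = len(x), n_sources
    # Rows [x[i+K], ..., x[i]] are annihilated by the length-(K+1) filter h.
    H = np.array([x[i:i + K + 1][::-1] for i in range(N - K)])
    _, _, Vh = np.linalg.svd(H)
    h = Vh[-1].conj()                  # null-space vector = filter coefficients
    z = np.roots(h)                    # roots sit at exp(1j*pi*sin(theta_k))
    return np.sort(np.degrees(np.arcsin(np.angle(z) / np.pi)))

# Two sources at -20 and +30 degrees observed by an 8-element ULA.
n = np.arange(8)
x = (1.0 * np.exp(1j * np.pi * n * np.sin(np.radians(-20.0)))
     + 0.8 * np.exp(1j * np.pi * n * np.sin(np.radians(30.0))))
doas = doa_annihilating_filter(x, 2)
```

The snapshot is a sum of complex exponentials, so a short FIR filter whose roots coincide with those exponentials annihilates it; recovering the filter from the null space of a small Hankel matrix is what keeps the cost low.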
Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor
2004-01-01
Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...
A field test of cut-off importance sampling for bole volume
Jeffrey H. Gove; Harry T. Valentine; Michael J. Holmes
2000-01-01
Cut-off importance sampling has recently been introduced as a technique for estimating bole volume to some point below the tree tip, termed the cut-off point. A field test of this technique was conducted on a small population of eastern white pine trees using dendrometry as the standard for volume estimation. Results showed that the differences in volume estimates...
Robert E. Keane; Laura J. Dickinson
2007-01-01
Fire managers need better estimates of fuel loading so they can more accurately predict the potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common surface fuel components (1 hr, 10 hr...
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
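The point-source retrieval step can be illustrated with a simple least-squares inversion over candidate source cells; the kernel matrix below is synthetic, and this is a generic sketch rather than the renormalization formulation of the paper:

```python
import numpy as np

def invert_source(c_meas, kernels):
    """Pick the source cell and intensity best matching the measurements.

    kernels[i, j] : modeled concentration at receptor i from a
                    unit-intensity source in candidate cell j.
    """
    best, best_cost = None, np.inf
    for j in range(kernels.shape[1]):
        a = kernels[:, j]
        q = (a @ c_meas) / (a @ a)          # least-squares intensity for cell j
        cost = np.sum((c_meas - q * a) ** 2)
        if q > 0 and cost < best_cost:
            best, best_cost = (j, q), cost
    return best

# Synthetic check: measurements generated by cell 3 with intensity 2.0.
rng = np.random.default_rng(2)
kernels = rng.random((10, 6)) + 0.1
c_meas = 2.0 * kernels[:, 3]
cell, intensity = invert_source(c_meas, kernels)
```

The adjoint functions in the paper play the role of the kernel columns here; modifying them via the regression step changes the forward operator before this inversion is repeated.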
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2* values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T2* Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T2* in the setting of iron overload. The weighted least squares T2* IDEAL technique improves T2* estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2* assessment with nonlinear exponential fitting, and (ii) a 3D T2* IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2* IDEAL estimation. In cases of severe iron overload, T2* IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T2* compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
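The weighted least-squares idea — down-weighting late, noise-dominated echoes — can be sketched with a mono-exponential log-linear fit; the echo times and signal values are synthetic, and this is a simplified stand-in for the IDEAL reconstruction:

```python
import numpy as np

def fit_t2star_weighted(te, signal):
    """Weighted least-squares mono-exponential fit S = S0 * exp(-TE/T2*).

    The fit is linear in log(S); weighting each echo by its signal level
    down-weights the late, noise-dominated echoes (noise in log(S) grows
    roughly as 1/S).
    """
    slope, intercept = np.polyfit(te, np.log(signal), 1, w=signal)
    return -1.0 / slope, np.exp(intercept)   # (T2*, S0)

# Synthetic multi-echo signal with severe iron overload (short T2*).
te = np.arange(1.0, 9.0)                     # echo times, ms
s0_true, t2s_true = 100.0, 2.0
signal = s0_true * np.exp(-te / t2s_true)
t2s_hat, s0_hat = fit_t2star_weighted(te, signal)
```

With severe iron overload the signal has decayed into the noise floor by the later echoes; unweighted fitting lets those samples bias T2* upward, which is exactly the overestimation the abstract reports.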
Ronald E. McRoberts; Erkki O. Tomppo; Andrew O. Finley; Heikkinen Juha
2007-01-01
The k-Nearest Neighbor (k-NN) technique has become extremely popular for a variety of forest inventory mapping and estimation applications. Much of this popularity may be attributed to the non-parametric, multivariate features of the technique, its intuitiveness, and its ease of use. When used with satellite imagery and forest...
USDA-ARS?s Scientific Manuscript database
Traditional microbiological techniques for estimating populations of viable bacteria can be laborious and time-consuming. The Most Probable Number (MPN) technique is especially tedious, as multiple series of tubes must be inoculated at several different dilutions. Recently, an instrument (TEMPO™) ...
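The MPN computation itself is a small maximum-likelihood problem; a sketch follows (the dilution volumes and tube counts are illustrative, and the coarse log-grid scan stands in for a proper root-finder):

```python
import math

def mpn_estimate(volumes, n_tubes, n_positive):
    """Most Probable Number: maximize the binomial likelihood over density.

    volumes    : inoculum per tube at each dilution (e.g. g or mL)
    n_tubes    : tubes inoculated at each dilution
    n_positive : tubes showing growth at each dilution
    A tube is positive with probability 1 - exp(-density * volume); the
    density is found by scanning a log-spaced grid.
    """
    best_d, best_ll = None, -math.inf
    for k in range(-300, 501):
        d = 10.0 ** (k / 100.0)            # densities from 1e-3 to 1e5
        ll = 0.0
        for v, n, p in zip(volumes, n_tubes, n_positive):
            prob = 1.0 - math.exp(-d * v)
            prob = min(max(prob, 1e-12), 1.0 - 1e-12)
            ll += p * math.log(prob) + (n - p) * math.log(1.0 - prob)
        if ll > best_ll:
            best_d, best_ll = d, ll
    return best_d

# Three-dilution series (0.1, 0.01, 0.001 g per tube; 3 tubes each)
# with a 3-1-0 pattern of positives.
mpn = mpn_estimate([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
```

Published MPN tables are precomputed solutions of this same likelihood; the tedium the abstract mentions is in the bench work of inoculating the tube series, not the arithmetic.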
C. Andrew Dolloff; Holly E. Jennings
1997-01-01
We compared estimates of stream habitat at the watershed scale using the basinwide visual estimation technique (BVET) and the representative reach extrapolation technique (RRET) in three small watersheds in the Appalachian Mountains. Within each watershed, all habitat units were sampled by the BVET; in contrast, three or four 100-m reaches were sampled with the RRET....
Three Different Methods of Estimating LAI in a Small Watershed
NASA Astrophysics Data System (ADS)
Speckman, H. N.; Ewers, B. E.; Beverly, D.
2015-12-01
Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and all often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second technique used the Kaufmann (1982) allometrics from forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first DMS returns, which were used to estimate canopy heights and crown area. LIDAR final returns provided topographical information and were ground-truthed during forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods.
Species composition was driven by slope position and elevation. Ultimately, the three techniques provided very different estimations of LAI, but each had its advantages: estimates from hemispherical photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR was able to quickly and efficiently cover a very large area.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Comparison study on disturbance estimation techniques in precise slow motion control
NASA Astrophysics Data System (ADS)
Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.
2010-08-01
Precise low speed motion control is important for the industrial applications of both micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity, and for estimation of disturbances that include direct-drive motor force ripple, guideway friction, and cutting forces. This paper presents a comparison study on the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For designing time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression performance. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators can give not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters can give unbiased estimates of a slowly varying disturbance and the instantaneous velocity, while the time-delayed estimators cannot. Simulation and experimental results from the X axis of a 2.5-axis prototype micro-milling machine are provided.
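A state-augmented Kalman filter for disturbance estimation can be sketched as follows; the plant, noise levels, and constant disturbance are invented for illustration and are not the machine-tool models of the paper:

```python
import numpy as np

# Plant: position/velocity of a unit mass acted on by an unknown, nearly
# constant disturbance force d. Augmenting the state with d lets the
# filter estimate it alongside position and velocity.
dt = 1e-3
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, dt],     # velocity integrates the disturbance
              [0.0, 0.0, 1.0]])   # disturbance modeled as (nearly) constant
H = np.array([[1.0, 0.0, 0.0]])   # only position is measured
Q = np.diag([0.0, 0.0, 1e-10])    # lets the disturbance estimate drift slowly
R = np.array([[1e-8]])

rng = np.random.default_rng(3)
x_true = np.array([0.0, 0.0, 0.05])      # true disturbance: 0.05
x_hat, P = np.zeros(3), np.eye(3)
for _ in range(2000):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(scale=1e-4, size=1)
    x_hat = F @ x_hat                    # time update
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                  # measurement update
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(3) - K @ H) @ P
d_hat = x_hat[2]
```

The small process-noise entry on the disturbance state is the tuning knob: zero makes the estimate strictly constant, larger values trade steady-state noise for tracking of time-varying disturbances.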
NASA Technical Reports Server (NTRS)
Amis, M. L.; Martin, M. V.; Mcguire, W. G.; Shen, S. S. (Principal Investigator)
1982-01-01
This report describes studies completed in fiscal year 1981 in support of the clustering/classification and preprocessing activities of the Domestic Crops and Land Cover project. The theme throughout the study was the improvement of subanalysis district (usually county level) crop hectarage estimates, as reflected in the following three objectives: (1) to evaluate the current U.S. Department of Agriculture Statistical Reporting Service regression approach to crop area estimation as applied to the problem of obtaining subanalysis district estimates; (2) to develop and test alternative approaches to subanalysis district estimation; and (3) to develop and test preprocessing techniques for use in improving subanalysis district estimates.
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.
NASA Astrophysics Data System (ADS)
Thoonsaengngam, Rattapol; Tangsangiumvisai, Nisachon
This paper proposes an enhanced method for estimating the a priori Signal-to-Disturbance Ratio (SDR) to be employed in the Acoustic Echo and Noise Suppression (AENS) system for full-duplex hands-free communications. The proposed a priori SDR estimation technique is modified based upon the Two-Step Noise Reduction (TSNR) algorithm to suppress the background noise while preserving speech spectral components. In addition, a practical approach to determine accurately the Echo Spectrum Variance (ESV) is presented based upon the linear relationship assumption between the power spectrum of far-end speech and acoustic echo signals. The ESV estimation technique is then employed to alleviate the acoustic echo problem. The performance of the AENS system that employs these two proposed estimation techniques is evaluated through the Echo Attenuation (EA), Noise Attenuation (NA), and two speech distortion measures. Simulation results based upon real speech signals guarantee that our improved AENS system is able to mitigate efficiently the problem of acoustic echo and background noise, while preserving the speech quality and speech intelligibility.
Harding, Brian J; Gehrels, Thomas W; Makela, Jonathan J
2014-02-01
The Earth's thermosphere plays a critical role in driving electrodynamic processes in the ionosphere and in transferring solar energy to the atmosphere, yet measurements of thermospheric state parameters, such as wind and temperature, are sparse. One of the most popular techniques for measuring these parameters is to use a Fabry-Perot interferometer to monitor the Doppler shift and broadening of naturally occurring airglow emissions in the thermosphere. In this work, we present a technique for estimating upper-atmospheric winds and temperatures from images of Fabry-Perot fringes captured by a CCD detector. We estimate instrument parameters from fringe patterns of a frequency-stabilized laser, and we use these parameters to estimate winds and temperatures from airglow fringe patterns. A unique feature of this technique is the model used for the laser and airglow fringe patterns, which fits all fringes simultaneously and attempts to model the effects of optical defects. This technique yields accurate estimates for winds, temperatures, and the associated uncertainties in these parameters, as we show with a Monte Carlo simulation.
Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
Using the Delphi technique in economic evaluation: time to revisit the oracle?
Simoens, S
2006-12-01
Although the Delphi technique has been commonly used as a data source in medical and health services research, its application in economic evaluation of medicines has been more limited. The aim of this study was to describe the methodology of the Delphi technique, to present a case for using the technique in economic evaluation, and to provide recommendations to improve such use. The literature was accessed through MEDLINE focusing on studies discussing the methodology of the Delphi technique and economic evaluations of medicines using the Delphi technique. The Delphi technique can be used to provide estimates of health care resources required and to modify such estimates when making inter-country comparisons. The Delphi technique can also contribute to mapping the treatment process under investigation, to identifying the appropriate comparator to be used, and to ensuring that the economic evaluation estimates cost-effectiveness rather than cost-efficacy. Ideally, economic evaluations of medicines should be based on real-patient data. In the absence of such data, evaluations need to incorporate the best evidence available by employing approaches such as the Delphi technique. Evaluations based on this approach should state the limitations, and explore the impact of the associated uncertainty in the results.
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high in frequency an accurate estimate can be made for a given mean sampling rate. These high frequency estimates are important in obtaining the microscale information of turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, an aliasing problem would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain prefiltering techniques to reduce these variabilities, but at the cost of the low frequency estimates. The prefiltering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far?
During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced up to about 4-5 times the mean sampling frequency by using a suitable prefiltering technique. This increased bandwidth, however, comes at the cost of the lower frequency estimates. The studies further showed that large data sets on the order of 100,000 points or more, high data rates, and Poisson sampling are crucial for obtaining reliable spectral estimates from randomly sampled data, such as LV data. Some of the results of the current study are presented.
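The slotting technique referred to above can be sketched as follows; the slot width, record length, and test signal are illustrative choices, not values from the study:

```python
import numpy as np

def slotted_autocorrelation(t, u, dtau, n_slots):
    """'Slotting' estimate of the autocorrelation of randomly sampled data.

    Each pair of samples is assigned to the lag slot nearest its time
    separation, and products are averaged within each slot.
    """
    sums = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]
        k = np.round(lags / dtau).astype(int)
        ok = k < n_slots
        np.add.at(sums, k[ok], u[i] * u[i:][ok])
        np.add.at(counts, k[ok], 1)
    return sums / np.maximum(counts, 1)

# Randomly sampled sinusoid: the slotted ACF recovers the oscillation
# even though the samples are unevenly spaced in time.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 50.0, size=1500))
u = np.cos(2.0 * np.pi * 1.0 * t)          # 1 Hz test signal
R = slotted_autocorrelation(t, u, dtau=0.05, n_slots=40)
```

The power spectrum then follows from a cosine transform of the slotted autocorrelation; the slot width and record length set the bandwidth and variability trade-offs discussed above.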
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple problem with two observers and measurement errors only.
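In the spirit of the reinterpretation described above, one way to form an empirical covariance is to map each weighted post-fit residual back into state space through the least squares gain and accumulate the outer products. This is a hedged sketch of the idea, not Frisbee's exact formulation; the function and variable names are illustrative.

```python
import numpy as np

def batch_ls_empirical_cov(H, W, y):
    """Weighted batch least squares with an empirical state error
    covariance built from the post-fit residuals."""
    A = np.linalg.inv(H.T @ W @ H)        # inverse information matrix
    x_hat = A @ H.T @ W @ y               # batch state estimate
    resid = y - H @ x_hat                 # post-fit residuals
    G = A @ H.T @ W                       # gain mapping residuals to state space
    contrib = G * resid                   # per-measurement state-error contribution
    P_emp = contrib @ contrib.T           # empirical covariance (all error sources)
    return x_hat, P_emp
```

Because the residuals contain the effect of every error source, known or not, the accumulated outer product inherits that property, which is the attraction of the empirical matrix.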
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is derived by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effect of the channel and integration period on the TOA estimation is evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
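A minimal sketch of the hybrid strategy, hedged: a simple genetic algorithm performs the global search, and a numerical gradient-descent polish stands in for the truncated-Newton step (which would normally come from an optimization library). Population sizes, mutation scales, and the quadratic test objective are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_then_local(f, bounds, pop=30, gens=40, steps=200, lr=0.01):
    """Global search by a simple genetic algorithm, followed by a
    gradient-descent polish standing in for the truncated-Newton step."""
    lo, hi = np.array(bounds, float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        parents = P[np.argsort(fit)[: pop // 2]]      # truncation selection (elitist)
        children = parents + rng.normal(0.0, 0.05 * (hi - lo), parents.shape)
        P = np.vstack([parents, np.clip(children, lo, hi)])
    x = P[np.argmin([f(p) for p in P])].copy()
    eps = 1e-6
    for _ in range(steps):                            # local refinement
        g = np.array([(f(x + eps * np.eye(len(x))[i]) - f(x)) / eps
                      for i in range(len(x))])
        x = np.clip(x - lr * g, lo, hi)
    return x
```

In the paper's setting, f would be the misfit between modeled and observed hydraulic heads; here any smooth objective illustrates the pattern.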
A Rapid Screen Technique for Estimating Nanoparticle Transport in Porous Media
Quantifying the mobility of engineered nanoparticles in hydrologic pathways from point of release to human or ecological receptors is essential for assessing environmental exposures. Column transport experiments are a widely used technique to estimate the transport parameters of ...
Taxi-cabs as Subjects for a Population Study
ERIC Educational Resources Information Center
Bishop, J. A.; Bradley, J. S.
1972-01-01
Describes the use of capture-recapture techniques to estimate the population of taxis in Liverpool and demonstrates the points of similarity to animal population estimation. Considers advantages of studying taxis rather than organisms in introductory studies of the techniques. (AL)
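For readers who want to try the exercise themselves, the Chapman form of the Lincoln-Petersen capture-recapture estimator, a standard choice for such two-sample studies (the article does not prescribe a specific formula), is a one-liner:

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected form of the Lincoln-Petersen
    capture-recapture estimator of population size."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1
```

For example, recording 100 marked taxis, then later sighting 100 taxis of which 20 carry marks, suggests a fleet of roughly 485 taxis.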
Estimating the cost of major ongoing cost plus hardware development programs
NASA Technical Reports Server (NTRS)
Bush, J. C.
1990-01-01
Three approaches are developed for forecasting the cost of major hardware development programs while these programs are in the design and development (C/D) phase: a schedule assessment technique for bottom-line summary cost estimation, a detailed cost estimation approach, and an intermediate cost element analysis procedure. The schedule assessment technique was developed using historical cost/schedule performance data.
Reiter, M.E.; Andersen, D.E.
2008-01-01
Both egg flotation and egg candling have been used to estimate incubation day (often termed nest age) in nesting birds, but little is known about the relative accuracy of these two techniques. We used both egg flotation and egg candling to estimate incubation day for Canada Geese (Branta canadensis interior) nesting near Cape Churchill, Manitoba, from 2000 to 2007. We modeled variation in the difference between estimates of incubation day using each technique as a function of true incubation day, as well as variation in error rates with each technique as a function of true incubation day. We also evaluated the effect of error in the estimated incubation day on estimates of daily survival rate (DSR) and nest success using simulations. The mean difference between concurrent estimates of incubation day based on egg flotation minus egg candling at the same nest was 0.85 ± 0.06 (SE) days. The positive difference in favor of egg flotation and the magnitude of the difference in estimates of incubation day did not vary as a function of true incubation day. Overall, both egg flotation and egg candling overestimated incubation day early in incubation and underestimated incubation day later in incubation. The average difference between true hatch date and estimated hatch date did not differ from zero (days) for egg flotation, but egg candling overestimated true hatch date by about 1 d (true - estimated; days). Our simulations suggested that error associated with estimating the incubation day of nests, and subsequently exposure days, using either egg candling or egg flotation would have minimal effects on estimates of DSR and nest success. Although egg flotation was slightly less biased, both methods provided comparable and accurate estimates of incubation day and subsequent estimates of hatch date and nest success throughout the entire incubation period. © 2008 Association of Field Ornithologists.
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
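The sequential Bayes idea for a proportion can be sketched with a Beta-binomial update, where each analyst label refines the posterior one observation at a time. This is an illustrative reconstruction under a conjugate-prior assumption, not the exact LACIE procedure.

```python
def bayes_sequential_proportion(labels, a=1.0, b=1.0):
    """Beta-binomial sequential update: each analyst label (1 = crop,
    0 = other) updates the Beta(a, b) posterior; returns the posterior
    mean as the proportion estimate."""
    for y in labels:
        a, b = a + y, b + (1 - y)
    return a / (a + b)
```

Starting from a uniform Beta(1, 1) prior, four labels of which three are positive give a posterior mean of 4/6.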
D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
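As a toy illustration of the nonstationary estimation idea (far simpler than the extended and Schmidt-Kalman filters applied to the full TiRe-LII spectroscopic model), a scalar Kalman filter tracking an exponentially decaying signal shows how the estimate and its uncertainty evolve in time. All names and parameter values here are illustrative.

```python
def kalman_decay(z, a, q, r, x0, p0):
    """Scalar Kalman filter for a first-order decay x_{k+1} = a * x_k
    observed with variance-r noise; a linear stand-in for the nonlinear
    LII heat-transfer model, which would require an extended filter."""
    x, p = x0, p0
    out = []
    for zk in z:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (zk - x), (1 - k) * p   # update
        out.append(x)
    return out
```

Even with a poor initial guess, the filter converges onto the decay because the model error covariance p shrinks as evidence accumulates.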
Ruíz, A; Ramos, A; San Emeterio, J L
2004-04-01
An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool to discover internal construction data. This information is necessary in the modelling and simulation of acoustic and electrical behaviour related to ultrasonic systems containing commercial transducers. There is not a general solution for this generic problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, this general problem is briefly analysed for broadband conditions. The viability of application in this field of an artificial intelligence technique supported on the modelling of the transducer internal components is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters, related to two transducers which are working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. Estimation results are experimentally ratified.
Optical rangefinding applications using communications modulation technique
NASA Astrophysics Data System (ADS)
Caplan, William D.; Morcom, Christopher John
2010-10-01
A novel range detection technique combines optical pulse modulation patterns with signal cross-correlation to produce an accurate range estimate from low power signals. The cross-correlation peak is analyzed by a post-processing algorithm such that the phase delay is proportional to the range to target. This technique produces a stable range estimate from noisy signals. The advantage is higher accuracy obtained with relatively low optical power transmitted. The technique is useful for low cost, low power and low mass sensors suitable for tactical use. The signal coding technique allows applications including IFF and battlefield identification systems.
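A sketch of the cross-correlation step described above, assuming a known binary modulation pattern and uniform sampling; the function names and the test geometry are my own, not from the paper.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def estimate_range(code, received, fs):
    """Cross-correlate the known modulation pattern with the received
    signal; the lag of the correlation peak is the two-way time of
    flight, which converts directly to range."""
    corr = np.correlate(received, code, mode="full")
    lag = int(np.argmax(corr)) - (len(code) - 1)   # delay in samples
    tof = lag / fs                                  # time of flight, s
    return C * tof / 2.0                            # halve for the round trip
```

Because the correlation integrates energy over the whole pattern, the peak stands out even when each individual pulse is buried in noise, which is the low-transmit-power advantage claimed above.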
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant, and the overhead of using them is minimal.
A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters
NASA Technical Reports Server (NTRS)
Beattie, J. R.; Garvin, H. L.
1982-01-01
The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: (1) out-of-focus blur estimation via Bayer pattern analysis; (2) image restoration. Blur estimation is based on a block-wise edge detection technique carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images, reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
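One piece of such an estimate, the inflation gas mass, follows directly from the ideal gas law once the inflation pressure, enclosed volume, gas molar mass, and temperature are chosen. The helium numbers in the usage example are illustrative, not values from the EDL-SA study.

```python
R_UNIV = 8.314  # universal gas constant, J/(mol K)

def inflation_gas_mass(p_inflate, volume, molar_mass, temperature):
    """Ideal-gas estimate of inflation gas mass: m = p V M / (R T)."""
    return p_inflate * volume * molar_mass / (R_UNIV * temperature)
```

For instance, inflating a 10 m^3 envelope with helium (M = 0.004 kg/mol) to 1000 Pa at 250 K requires roughly 19 g of gas; the dimensionless parameters in the paper let such a figure be scaled with diameter, dynamic pressure, and gas properties.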
High suspended sediment concentrations (SSCs) from natural and anthropogenic sources are responsible for biological impairments of many streams, rivers, lakes, and estuaries, but techniques to estimate sediment concentrations or loads accurately at the daily temporal resolution a...
Rapid estimation of nutritional elements on citrus leaves by near infrared reflectance spectroscopy.
Galvez-Sola, Luis; García-Sánchez, Francisco; Pérez-Pérez, Juan G; Gimeno, Vicente; Navarro, Josefa M; Moral, Raul; Martínez-Nicolás, Juan J; Nieves, Manuel
2015-01-01
Sufficient nutrient application is one of the most important factors in producing quality citrus fruits. One of the main guides in planning citrus fertilizer programs is direct monitoring of the plant nutrient content. However, this requires analysis of a large number of leaf samples using expensive and time-consuming chemical techniques. Over the last 5 years, it has been demonstrated that it is possible to quantitatively estimate certain nutritional elements in citrus leaves from the spectral reflectance values obtained by near infrared reflectance spectroscopy (NIRS). This technique is rapid, non-destructive, cost-effective and environmentally friendly. Therefore, the estimation of macro and micronutrients in citrus leaves by this method would be beneficial in identifying the mineral status of the trees. However, to be used effectively, NIRS must be evaluated against the standard techniques across different cultivars. In this study, NIRS spectral analysis and subsequent nutrient estimations for N, K, Ca, Mg, B, Fe, Cu, Mn, and Zn concentrations were performed using 217 leaf samples from different citrus tree species. Partial least squares regression and different pre-processing signal treatments were used to generate the best estimation against the current best-practice techniques. A high proficiency was verified in the estimation of N (Rv = 0.99) and Ca (Rv = 0.98), and acceptable estimation was achieved for K, Mg, Fe, and Zn. However, no successful calibrations were obtained for the estimation of B, Cu, and Mn.
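The calibration step can be sketched with a regularized linear regression in place of the partial least squares regression actually used in the study (PLS handles the collinearity of NIRS spectra more gracefully, but the fit-then-predict pattern is the same). The names and the synthetic data in the test are illustrative.

```python
import numpy as np

def calibrate(spectra, concentrations, lam=1e-6):
    """Ridge-regularized linear calibration mapping reflectance spectra
    to nutrient concentrations (a simple stand-in for PLS regression)."""
    X = np.hstack([spectra, np.ones((len(spectra), 1))])  # append intercept
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ concentrations)

def predict(spectra, coef):
    """Apply a fitted calibration vector to new spectra."""
    X = np.hstack([spectra, np.ones((len(spectra), 1))])
    return X @ coef
```

In practice the model would be fit on chemically analyzed reference leaves and validated on held-out samples, mirroring the Rv statistics reported above.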
Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.
Hui, Zhuo; Sankaranarayanan, Aswin C
2017-10-01
This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples for both flight and simulated data cases.
Peeters, Frank; Atamanchuk, Dariia; Tengberg, Anders; Encinas-Fernández, Jorge; Hofmann, Hilmar
2016-01-01
Lake metabolism is a key factor for the understanding of turnover of energy and of organic and inorganic matter in lake ecosystems. Long-term time series on metabolic rates are commonly estimated from diel changes in dissolved oxygen. Here we present long-term data on metabolic rates based on diel changes in total dissolved inorganic carbon (DIC) utilizing an open-water diel CO2-technique. Metabolic rates estimated with this technique and the traditional diel O2-technique agree well in alkaline Lake Illmensee (pH of ~8.5), although the diel changes in molar CO2 concentrations are much smaller than those of the molar O2 concentrations. The open-water diel CO2- and diel O2-techniques provide independent measures of lake metabolic rates that differ in their sensitivity to transport processes. Hence, the combination of both techniques can help to constrain uncertainties arising from assumptions on vertical fluxes due to gas exchange and turbulent diffusion. This is particularly important for estimates of lake respiration rates because these are much more sensitive to assumptions on gradients in vertical fluxes of O2 or DIC than estimates of lake gross primary production. Our data suggest that it can be advantageous to estimate respiration rates assuming negligible gradients in vertical fluxes rather than including gas exchange with the atmosphere but neglecting vertical mixing in the water column. During two months in summer the average lake net production was close to zero suggesting at most slightly autotrophic conditions. However, the lake emitted O2 and CO2 during the entire time period suggesting that O2 and CO2 emissions from lakes can be decoupled from the metabolism in the near surface layer.
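A bare-bones version of the open-water diel calculation, hedged: it assumes regular sampling, constant respiration over the day, and, as discussed above, negligible gradients in vertical fluxes. The same function applied to DIC instead of O2 gives the diel CO2-technique; the signature and names are illustrative.

```python
import numpy as np

def diel_metabolism(o2, dt_hours, daylight):
    """Open-water diel technique: night-time O2 drawdown gives respiration
    (R); the daytime rate of change is net production (NEP); gross primary
    production is GPP = NEP + R. Gas exchange and mixing are neglected."""
    do2 = np.diff(o2) / dt_hours      # rate of O2 change per interval
    day = daylight[:-1]               # flag for each interval
    R = -do2[~day].mean()             # respiration (assumed constant)
    NEP = do2[day].mean()             # daytime net ecosystem production
    GPP = NEP + R
    return GPP, R, NEP
```

With a synthetic day of +0.3 units/h in 12 daylight hours and -0.2 units/h at night, the function recovers R = 0.2 and GPP = 0.5, illustrating why respiration estimates are so sensitive to the neglected flux terms: they rest entirely on the night-time slope.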
Heer, D M; Passel, J F
1987-01-01
This article compares 2 different methods for estimating the number of undocumented Mexican adults in Los Angeles County. The 1st method, the survey-based method, uses a combination of 1980 census data and the results of a survey conducted in Los Angeles County in 1980 and 1981. A sample was selected from babies born in Los Angeles County who had a mother or father of Mexican origin. The survey included questions about the legal status of the baby's parents and certain other relatives. The resulting estimates of undocumented Mexican immigrants are for males aged 18-44 and females aged 18-39. The 2nd method, the residual method, involves comparison of census figures for aliens counted with estimates of legally-resident aliens developed principally with data from the Immigration and Naturalization Service (INS). For this study, estimates by age, sex, and period of entry were produced for persons born in Mexico and living in Los Angeles County. The results of this research indicate that it is possible to measure undocumented immigration with different techniques, yet obtain results that are similar. Both techniques presented here are limited in that they represent estimates of undocumented aliens based on the 1980 census. The number of additional undocumented aliens not counted remains a subject of conjecture. The fact that the survey-based estimates (228,700) are quite similar to the residual estimates (317,800) suggests that the number of undocumented aliens not counted in the census may not be an extremely large fraction of the undocumented population. The survey-based estimates have some significant advantages over the residual estimates. The survey provides tabulations of the undocumented population by characteristics other than the limited demographic information provided by the residual technique.
On the other hand, the survey-based estimates require that a survey be conducted and, if national or regional estimates are called for, they may require a number of surveys. The residual technique, however, also requires a data source other than the census. However, the INS discontinued the annual registration of aliens after 1981. Thus, estimates of undocumented aliens based on the residual technique will probably not be possible for subnational areas using the 1990 census unless the registration program is reinstituted. Perhaps the best information on the undocumented population in the 1990 census will come from an improved version of the survey-based technique described here applied in selected local areas.
Center of pressure based segment inertial parameters validation
Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice; Venture, Gentiane
2017-01-01
By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamic computations that are necessary in multiple research areas. Until today a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic parameters and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth, and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show very distinguishable patterns of the COP movements. Improving BSIP techniques is crucial, since deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, it shows the accuracy of those parameters. PMID:28662090
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less tedious and low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results in terms of testing error, which varies from 0% to 0.86%, when compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and least-squares optimisation algorithms. ANFIS is thus best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
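The abstract's core idea is data-driven power estimation: learn a mapping from circuit features to measured power without modeling the netlist. The sketch below deliberately swaps ANFIS/BPNN for the simplest possible surrogate, ordinary least squares on a single feature, just to show the train-then-predict structure; all numbers are synthetic.

```python
# Data-driven power estimation in the spirit of the abstract, but with a
# deliberately simple stand-in model: simple linear regression on one
# feature (switching activity) instead of ANFIS/BPNN.

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return b, my - b * mx

# Synthetic training data: (toggle activity, measured power in mW)
activity = [0.1, 0.2, 0.3, 0.4, 0.5]
power_mw = [1.1, 2.0, 3.1, 3.9, 5.0]
slope, intercept = fit_line(activity, power_mw)
pred = slope * 0.25 + intercept   # estimate power for an unseen activity
```

A real ANFIS would replace `fit_line` with fuzzy membership functions tuned by the hybrid gradient/least-squares procedure, but the workflow (fit on measured pairs, then query) is the same.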
A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections.
Zhang, You; Yin, Fang-Fang; Segars, W Paul; Ren, Lei
2013-12-01
To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and "ground-truth" onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and "ground-truth" onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm).
Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion in estimated and "ground-truth" onboard images for MM-only, FD-only, and MM-FD techniques were 60.10% (±27.17%)/4.9 mm (±3.0 mm), 96.07% (±31.48%)/12.1 mm (±3.9 mm), and 11.45% (±9.37%)/1.3 mm (±1.3 mm), respectively. For orthogonal-view 30°-each scan angle, the corresponding results were 59.16% (±26.66%)/4.9 mm (±3.0 mm), 75.98% (±27.21%)/9.9 mm (±4.0 mm), and 5.22% (±2.12%)/0.5 mm (±0.4 mm). For single-view scan angles of 3°, 30°, and 60°, the results for the MM-FD technique were 32.77% (±17.87%)/3.2 mm (±2.2 mm), 24.57% (±18.18%)/2.9 mm (±2.0 mm), and 10.48% (±9.50%)/1.1 mm (±1.3 mm), respectively. For projection angular-sampling-intervals of 0.6°, 1.2°, and 2.5° with the orthogonal-view 30°-each scan angle, the MM-FD technique generated similar VPD (maximum deviation 2.91%) and COMS (maximum deviation 0.6 mm), while sparser sampling yielded larger VPD/COMS. With an equal number of projections, the estimation results using a scattered 360° scan angle were slightly better than those using orthogonal-view 30°-each scan angle. The estimation accuracy of the MM-FD technique declined as the noise level increased. The MM-FD technique substantially improves the estimation accuracy of onboard 4D-CBCT using prior planning 4D-CT and limited-angle projections, compared to the MM-only and FD-only techniques. It can potentially be used for inter/intrafractional 4D-localization verification.
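The two evaluation metrics can be sketched on voxelized lesion masks. The definitions below are plausible readings, not necessarily the paper's exact formulas: VPD is taken as the non-overlapping volume of the two masks relative to the reference volume, and COMS as the Euclidean distance between their centers of mass.

```python
# Toy implementations of the VPD and COMS metrics on voxel-coordinate
# sets. These are hedged, illustrative definitions only.

def vpd(ref_voxels, est_voxels):
    """Volume percentage difference between two sets of voxel coordinates."""
    ref, est = set(ref_voxels), set(est_voxels)
    mismatch = len(ref | est) - len(ref & est)   # symmetric difference
    return 100.0 * mismatch / len(ref)

def coms(ref_voxels, est_voxels):
    """Center-of-mass shift (Euclidean) between two voxel sets."""
    def com(vox):
        n = len(vox)
        return tuple(sum(c[i] for c in vox) / n for i in range(3))
    a, b = com(ref_voxels), com(est_voxels)
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
est = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (2, 1, 0)]  # shifted by ~one voxel
```

A perfectly estimated lesion gives VPD = 0 and COMS = 0; the one-voxel shift above yields a 50% VPD and a sub-voxel center-of-mass shift.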
Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo Chirici
2011-01-01
Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...
2011-01-01
...landscapes makes remote sensing an attractive technique for estimating LAI. Many vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), tend to saturate at... little or no improvement over NDVI. Furthermore, indirect ground-sampling techniques often used to evaluate the potential of vegetation indices also...
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
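The naive percentile bootstrap mentioned above can be sketched for a sample mean. The paper applies it to Wald-type tests; this shows only the resampling mechanic, with invented data.

```python
# Percentile bootstrap confidence interval: resample with replacement,
# recompute the statistic, and take empirical quantiles of the results.
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Naive percentile bootstrap (1 - alpha) CI for stat(data)."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

sample = [4.1, 5.3, 4.8, 5.9, 5.1, 4.6, 5.4, 5.0, 4.9, 5.2]
ci_lo, ci_hi = percentile_bootstrap_ci(sample, mean)
```

The bias-corrected and accelerated (BCa) variant adjusts these quantiles for bias and skewness; the percentile form above is the simplest member of the family.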
Digital synchronization and communication techniques
NASA Technical Reports Server (NTRS)
Lindsey, William C.
1992-01-01
Information on digital synchronization and communication techniques is given in viewgraph form. Topics covered include phase shift keying, modems, characteristics of open loop digital synchronizers, an open loop phase and frequency estimator, and a digital receiver structure using an open loop estimator in a decision directed architecture.
NASA Astrophysics Data System (ADS)
Trirongjitmoah, Suchin; Iinaga, Kazuya; Sakurai, Toshihiro; Chiba, Hitoshi; Sriyudthsak, Mana; Shimizu, Koichi
2016-04-01
Quantification of small, dense low-density lipoprotein (sdLDL) cholesterol is clinically significant. We propose a practical technique to estimate the amount of sdLDL cholesterol using dynamic light scattering (DLS). An analytical solution in a closed form has newly been obtained to estimate the weight fraction of one species of scatterers in the DLS measurement of two species of scatterers. Using this solution, we can quantify the sdLDL cholesterol amount from the amounts of the low-density lipoprotein cholesterol and the high-density lipoprotein (HDL) cholesterol, which are commonly obtained through clinical tests. The accuracy of the proposed technique was confirmed experimentally using latex spheres with known size distributions. The applicability of the proposed technique was examined using samples of human blood serum. The possibility of estimating the sdLDL amount using the HDL data was demonstrated. These results suggest that the quantitative estimation of sdLDL amounts using DLS is feasible for point-of-care testing in clinical practice.
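The paper's actual closed-form solution for the weight fraction in a two-species DLS measurement is not reproduced here. As an illustrative stand-in, assume a measured mixture property is a linear blend of the two pure-species values; that assumption (ours, not the paper's) inverts to a one-line closed form for the fraction.

```python
# Illustrative two-species inversion: if g_mix = w*g1 + (1-w)*g2 for a
# measured mixture quantity g_mix and pure-species values g1, g2, then
# the weight fraction w of species 1 has a closed form. The linear-mixing
# assumption is a simplification for illustration only.

def weight_fraction(g_mix, g1, g2):
    """Solve g_mix = w*g1 + (1 - w)*g2 for w."""
    if g1 == g2:
        raise ValueError("species are indistinguishable")
    return (g_mix - g2) / (g1 - g2)

# Forward check: blend with a known fraction, then recover it
w_true = 0.3
g1, g2 = 120.0, 40.0     # e.g. decay rates of small vs. large scatterers
g_mix = w_true * g1 + (1 - w_true) * g2
w_est = weight_fraction(g_mix, g1, g2)
```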
NASA Astrophysics Data System (ADS)
Piñero, G.; Vergara, L.; Desantes, J. M.; Broatch, A.
2000-11-01
The knowledge of the particle velocity fluctuations associated with acoustic pressure oscillation in the exhaust system of internal combustion engines may represent a powerful aid in the design of such systems, from the point of view of both engine performance improvement and exhaust noise abatement. However, usual velocity measurement techniques, even if applicable, are not well suited to the aggressive environment existing in exhaust systems. In this paper, a method to obtain a suitable estimate of velocity fluctuations is proposed, which is based on the application of spatial filtering (beamforming) techniques to instantaneous pressure measurements. Making use of simulated pressure-time histories, several algorithms have been checked by comparison between the simulated and the estimated velocity fluctuations. Then, problems related to the experimental procedure and associated with the proposed methodology are addressed, making application to measurements made in a real exhaust system. The results indicate that, if proper care is taken when performing the measurements, the application of beamforming techniques gives a reasonable estimate of the velocity fluctuations.
Estimation of Stratospheric Age Spectrum from Chemical Tracers
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Douglass, Anne R.; Polansky, Brian
2005-01-01
We have developed a technique to diagnose the stratospheric age spectrum and estimate the mean age of air using the distributions of at least four constituents with different photochemical lifetimes. We demonstrate that the technique works using a 3D CTM and then apply the technique to UARS CLAES January 1993 observations of CFC11, CFC12, CH4, and N2O. Our results are generally in agreement with mean age of air estimates from the chemical model and from observations of SF6 and CO2; however, the mean age estimates show an intrusion of very young tropical air into the mid-latitude stratosphere. This feature is consistent with mixing of high-N2O air out of the tropics during the westerly phase of the QBO.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
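The optimal-estimator idea above can be sketched with the histogram (binning) technique in one dimension: the optimal estimator of a target quantity given a set of input parameters is the conditional mean, and the irreducible error is the residual variance about it. This toy reproduces the mechanic only; the paper's point is that binning degrades badly as the number of input parameters grows.

```python
# Histogram-based optimal estimator analysis in 1-D: bin the input x,
# take the per-bin mean of the target q as the optimal estimator
# E[q|x], and measure the mean squared residual (irreducible error).

def irreducible_error(xs, qs, n_bins=10):
    """Mean squared residual about the binned conditional mean E[q|x]."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    idx = [min(int((x - lo) / width), n_bins - 1) for x in xs]
    for i, q in zip(idx, qs):
        sums[i] += q
        counts[i] += 1
    cond_mean = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return sum((q - cond_mean[i]) ** 2 for i, q in zip(idx, qs)) / len(qs)

# q depends on x deterministically, so the irreducible error should be
# near zero (up to finite bin width); a crude constant model has a much
# larger total error.
xs = [i / 100.0 for i in range(100)]
qs = [2.0 * x for x in xs]
err = irreducible_error(xs, qs)
total = sum((q - 1.0) ** 2 for q in qs) / len(qs)  # constant model q_hat = 1
```

The gap between `total` and `err` is the functional error of the constant model; `err` itself is the floor no model with input `x` can beat.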
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second technique removes an along-track once-per-revolution sine wave over about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
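The first orbit-error reduction technique amounts to fitting and removing a degree-1 polynomial (bias plus tilt) from sea-surface heights along a track. The sketch below uses synthetic numbers; the actual processing works on roughly 5000 km arcs.

```python
# Along-track degree-1 detrending: least-squares line h ~ a + b*s is
# removed from heights h at along-track positions s. A linear orbit
# error is removed exactly; part of the large-scale ocean signal that
# projects onto a line is removed with it (the trade-off the abstract
# describes).

def detrend_linear(s, h):
    """Remove the least-squares line from along-track heights."""
    n = len(s)
    ms, mh = sum(s) / n, sum(h) / n
    b = sum((x - ms) * (y - mh) for x, y in zip(s, h)) / sum(
        (x - ms) ** 2 for x in s)
    a = mh - b * ms
    return [y - (a + b * x) for x, y in zip(s, h)]

s_km = [0, 1000, 2000, 3000, 4000, 5000]           # along-track distance
ocean = [0.10, -0.05, 0.12, -0.08, 0.03, -0.02]    # "true" signal (m)
orbit_err = [0.50 + 2e-4 * x for x in s_km]        # bias + tilt error (m)
h = [o + e for o, e in zip(ocean, orbit_err)]
residual = detrend_linear(s_km, h)
```

After detrending, the metre-scale orbit error is gone and the residual is the ocean signal minus its own best-fit line.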
Congestion estimation technique in the optical network unit registration process.
Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk
2016-07-01
We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
Software for the grouped optimal aggregation technique
NASA Technical Reports Server (NTRS)
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum-variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
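The core mechanic can be sketched with a pooled ratio estimator: within a group of strata that have direct satellite estimates, the ratio of satellite-based to historical acreage is pooled and then applied to the historical total to fill in unsampled strata. This is a hedged illustration of the idea only, not the program's full optimal-weighting computation.

```python
# Grouped ratio estimation sketch: pool the satellite/historical ratio
# over sampled strata, then scale the historical total for all strata.

def ratio_extrapolate(direct_sampled, hist_sampled, hist_total):
    """Pooled ratio from sampled strata applied to the historical total."""
    r = sum(direct_sampled) / sum(hist_sampled)
    return r * hist_total

# Direct satellite estimates exist for 3 of 5 strata; extrapolate.
direct = [110.0, 205.0, 98.0]       # satellite acreage, sampled strata
hist_s = [100.0, 200.0, 100.0]      # historical acreage, same strata
hist_total = 700.0                  # historical acreage, all 5 strata
acreage = ratio_extrapolate(direct, hist_s, hist_total)
```

Pooling the ratio over a group, rather than per stratum, is what stabilizes the variance when individual strata are small or noisy.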
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. Using traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size. So far, without special refreshment samples, results have only been available for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data but are also subject to greater statistical uncertainties because of their model assumptions. In the present article, exemplary regionalized results based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010, and 2012 are compared with respect to the self-rated health status of the respondents. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of these techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable across different samples. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.
NASA Astrophysics Data System (ADS)
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
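The k-nearest-neighbors regression at the heart of the TOC mapping can be sketched in a few lines: predict the value at a new location as the mean over the k closest observations in predictor space. The toy below uses invented 2-D predictors (imagine scaled bathymetry and distance from coast) and invented TOC values.

```python
# Minimal k-nearest-neighbors regression: Euclidean distance in
# predictor space, prediction = mean of the k nearest targets.

def knn_predict(train_x, train_y, query, k=3):
    """Mean of the k nearest training targets."""
    d2 = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in d2[:k]) / k

train_x = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (1.1, 1.0)]
train_y = [1.0, 1.2, 1.1, 4.0, 4.2]     # e.g. TOC in weight percent
toc_near_origin = knn_predict(train_x, train_y, (0.05, 0.05), k=3)
```

The "inexperience" measure described in the abstract corresponds to the distance to the single nearest neighbor: a query far from all training points (large first element of `d2[0]`) is one where new data would help most.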
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.
1984-01-01
The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.
NASA Technical Reports Server (NTRS)
Green, R. N.
1981-01-01
The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W m⁻² to 13.5 W m⁻².
Image enhancement and advanced information extraction techniques for ERTS-1 data
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.
1975-01-01
The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm, designed to overcome problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS, was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high-contrast targets such as lakes and ponds.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
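A drastically simplified version of the idea can be shown for summation: for left-to-right summation of n doubles, a standard a-priori round-off bound is (n-1)·u·Σ|x_i| to first order, with unit roundoff u = 2⁻⁵³. PRECiSA derives much tighter, formally certified bounds per expression; this sketch conveys only the flavor of bounding an error from above.

```python
# First-order a-priori round-off bound for recursive (left-to-right)
# summation of doubles, compared against the actual error measured with
# math.fsum (exactly rounded summation). Generic numerical-analysis
# material, not PRECiSA's semantics.
import math

def naive_sum_bound(xs, u=2.0 ** -53):
    """First-order error bound (n-1)*u*sum(|x_i|) for recursive summation."""
    return (len(xs) - 1) * u * sum(abs(x) for x in xs)

xs = [0.1, 0.2, 0.3, 1e16, -1e16, 0.4]   # catastrophic absorption case
naive = 0.0
for x in xs:
    naive += x
actual_err = abs(naive - math.fsum(xs))
bound = naive_sum_bound(xs)
```

The large intermediate magnitudes make the bound loose but honest: the actual error (the small terms absorbed by 1e16) stays below it.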
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
NASA Technical Reports Server (NTRS)
Cornish, C. R.
1983-01-01
Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed so as to obtain desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.
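One standard way such facilities parameterize a Doppler spectrum (signal power, mean Doppler shift, spectral width) is the method of moments after subtracting an estimated noise floor; the sketch below is a generic illustration of that idea, not any particular facility's implementation:

```python
import numpy as np

def spectral_moments(freqs, power, noise_level):
    """Parameterize a Doppler spectrum by its low-order moments after
    noise-floor subtraction: returns (signal power, mean Doppler shift,
    spectral width). A generic method-of-moments sketch."""
    s = np.clip(np.asarray(power, dtype=float) - noise_level, 0.0, None)
    m0 = s.sum()                                          # zeroth moment: power
    fd = (freqs * s).sum() / m0                           # first moment: shift
    width = np.sqrt(((freqs - fd) ** 2 * s).sum() / m0)   # second moment: width
    return m0, fd, width

# Synthetic echo: Gaussian line at +5 Hz (width 2 Hz) on a unit noise floor
f = np.linspace(-50.0, 50.0, 1001)
spec = 10.0 * np.exp(-0.5 * ((f - 5.0) / 2.0) ** 2) + 1.0
p_sig, doppler, width = spectral_moments(f, spec, noise_level=1.0)
```

In practice the noise level itself must be estimated (e.g., by Hildebrand-Sekhon-style methods), which is one of the points the review addresses.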
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization and the consistence of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted with Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2). 
This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content between growth seasons of cereal crops. The new REP extraction technique provides new insight into the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
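A minimal sketch of a WREP-style extraction, assuming a Mexican-hat mother wavelet, a single decomposition scale, and a 1 nm sampling grid (all assumptions of this illustration, not the paper's exact formulation), is:

```python
import numpy as np

def wrep(wavelengths, reflectance, scale=5.0):
    """Zero-crossing REP extraction from a wavelet-transformed spectrum.

    Convolves the spectrum with a Mexican-hat wavelet at one scale and
    returns the zero-crossing wavelength in the red edge (680-760 nm),
    refined by linear interpolation. Single-scale sketch of the idea."""
    step = wavelengths[1] - wavelengths[0]
    t = np.arange(-8.0 * scale, 8.0 * scale + step, step) / scale
    psi = (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)   # Mexican-hat kernel
    coeffs = np.convolve(reflectance, psi, mode="same")
    band = (wavelengths >= 680.0) & (wavelengths <= 760.0)
    w, c = wavelengths[band], coeffs[band]
    i = np.where(np.diff(np.sign(c)) != 0)[0][0]   # first sign change
    # linear interpolation between the two bracketing samples
    return w[i] - c[i] * (w[i + 1] - w[i]) / (c[i + 1] - c[i])

# Synthetic red-edge spectrum with its inflection point at 720 nm
wl = np.arange(400.0, 901.0, 1.0)
refl = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 720.0) / 15.0))
```

Because the Mexican hat behaves like a smoothed second derivative, its zero crossing marks the inflection point, which is exactly the REP definition given above.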
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. The approach combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations. This allows for sequential accumulation of matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
A solar energy estimation procedure using remote sensing techniques. [watershed hydrologic models
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
The objective of this investigation is to design a remote sensing-aided procedure for daily location-specific estimation of solar radiation components over the watershed(s) of interest. This technique has been tested on the Spanish Creek Watershed, Northern California, with successful results.
A new slit lamp-based technique for anterior chamber angle estimation.
Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A
2014-06-01
To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE were 0.81 (p < 0.001) and 0.79 (p < 0.001) for each slit lamp. Sensitivity values of 100 and 87.5% and specificity values of 75 and 81.2%, depending on the slit lamp used, were obtained for the SLACE technique as compared with gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.
Recent Improvements in Estimating Convective and Stratiform Rainfall in Amazonia
NASA Technical Reports Server (NTRS)
Negri, Andrew J.
1999-01-01
In this paper we present results from the application of a satellite infrared (IR) technique for estimating rainfall over northern South America. Our main objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. We apply the technique of Anagnostou et al (1999). In simple functional form, the estimated rain area A(sub rain) may be expressed as: A(sub rain) = f(A(sub mode),T(sub mode)), where T(sub mode) is the mode temperature of a cloud defined by 253 K, and A(sub mode) is the area encompassed by T(sub mode). The technique was trained by a regression between coincident microwave estimates from the Goddard Profiling (GPROF) algorithm (Kummerow et al, 1996) applied to SSM/I data and GOES IR (11 microns) observations. The apportionment of the rainfall into convective and stratiform components is based on the microwave technique described by Anagnostou and Kummerow (1997). The convective area from this technique was regressed against an IR structure parameter (the Convective Index) defined by Anagnostou et al (1999). Finally, rain rates are assigned within A(sub mode) proportional to (253 - temperature), with different rates for the convective and stratiform components.
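The final rain-rate assignment step can be sketched as follows; the proportionality constants c_conv and c_strat are placeholders, since the actual values come from the microwave regression training described above:

```python
import numpy as np

def assign_rain_rates(temps_k, convective_mask, c_conv=0.1, c_strat=0.02):
    """Assign rain rates proportional to (253 - T) within the cloud mode
    area, with separate proportionality constants for convective and
    stratiform pixels. Constants are illustrative placeholders."""
    t = np.asarray(temps_k, dtype=float)
    warm = np.clip(253.0 - t, 0.0, None)            # no rain for T >= 253 K
    coeff = np.where(convective_mask, c_conv, c_strat)
    return coeff * warm

# Three pixels: one convective, two stratiform (one warmer than 253 K)
rates = assign_rain_rates([250.0, 240.0, 260.0], np.array([True, False, False]))
```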
Technique for estimating depth of 100-year floods in Tennessee
Gamble, Charles R.; Lewis, James G.
1977-01-01
Preface: A method is presented for estimating the depth of the 100-year flood in four hydrologic areas in Tennessee. Depths at 151 gaging stations on streams that were not significantly affected by man-made changes were related to basin characteristics by multiple regression techniques. Equations derived from the analysis can be used to estimate the depth of the 100-year flood if the size of the drainage basin is known.
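A regression of this kind is typically fit in log space; the following sketch, with entirely synthetic basin characteristics and an assumed true relation (not the report's Tennessee equations), shows the mechanics:

```python
import numpy as np

# Synthetic basin characteristics and an assumed "true" depth relation;
# the fit recovers the coefficients of
#   log10(depth) = b0 + b1*log10(area) + b2*log10(slope)
rng = np.random.default_rng(0)
area = 10.0 ** rng.uniform(0.0, 3.0, 151)    # drainage area (synthetic)
slope = 10.0 ** rng.uniform(-2.0, 0.0, 151)  # channel slope (synthetic)
depth = 2.0 * area ** 0.3 * slope ** -0.1    # assumed relation, no noise

X = np.column_stack([np.ones(area.size), np.log10(area), np.log10(slope)])
coef, *_ = np.linalg.lstsq(X, np.log10(depth), rcond=None)

def predict_depth(a, s):
    """Flood depth predicted by the fitted log-space equation."""
    return 10.0 ** (coef[0] + coef[1] * np.log10(a) + coef[2] * np.log10(s))
```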
1998-03-01
benefit estimation techniques used to monetize the value of flood hazard reduction in the City of Roanoke. Each method was then used to estimate...behavior. This framework justifies interpreting people’s choices to infer and then monetize their preferences. If individuals have well-ordered and...Journal of Agricultural Economics. 68 (1986) 2: 280-290. Soule, Don M. and Claude M. Vaughn, "Flood Protection Benefits as Reflected in Property
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie
2013-09-01
Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -0.56, -6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: -0.14, -3.24). 
The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted -2.81 (95% CI: -0.26, -5.35) using NavTEQ, or 2.08 (95% CI -4.63, 0.47, p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.
NASA Astrophysics Data System (ADS)
Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.
2011-12-01
Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, …, 2100 hours UTC) for rainfall estimation, in preparation for INSAT-3D. After the temperatures of all the pixels in a grid are known, they are distributed to generate a three-hourly 24-class histogram of brightness temperatures of IR (10.5-12.5 μm) images for a 1.0° × 1.0° latitude/longitude box. The daily, monthly, and seasonal rainfall has been estimated using these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the validation of monthly and seasonal rainfall estimates has been carried out using the Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for the large-scale precipitation estimation qualitatively as well as quantitatively. The results also suggest that the simple IR-based estimation technique can be used to estimate rainfall for tropical areas at a larger temporal scale for climatological applications.
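The underlying GOES Precipitation Index maps cold-cloud coverage to rainfall with a fixed threshold (235 K) and conditional rain rate (3 mm/h); a minimal sketch for one grid box, with the input array as an assumption, is:

```python
import numpy as np

def gpi_rain_mm(tb_kelvin, period_hours=3.0, threshold_k=235.0, rate_mm_per_h=3.0):
    """GOES Precipitation Index for one grid box: mean rain (mm) equals
    the conditional rain rate (3 mm/h) times the fraction of IR pixels
    colder than 235 K times the accumulation period."""
    frac_cold = float(np.mean(np.asarray(tb_kelvin) < threshold_k))
    return rate_mm_per_h * frac_cold * period_hours

# Half of the box covered by cold cloud for 3 h -> 4.5 mm
tb = np.array([220.0, 230.0, 280.0, 290.0])
```

In the study's processing chain, the pixel temperatures come from the 24-class brightness-temperature histogram accumulated per 1.0° box, and the three-hourly totals are summed into daily, monthly, and seasonal estimates.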
Halford, K.J.; Mayer, G.C.
2000-01-01
Ground water discharge and recharge frequently have been estimated with hydrograph-separation techniques, but the critical assumptions of the techniques have not been investigated. The critical assumptions are that the hydraulic characteristics of the contributing aquifer (recession index) can be estimated from stream-discharge records; that periods of exclusively ground water discharge can be reliably identified; and that stream-discharge peaks approximate the magnitude and timing of recharge events. The first assumption was tested by estimating the recession index from stream-discharge hydrographs, ground water hydrographs, and hydraulic diffusivity estimates from aquifer tests in basins throughout the eastern United States and Montana. The recession index frequently could not be estimated reliably from stream-discharge records alone because many of the estimates of the recession index were greater than 1000 days. Stream discharge during baseflow periods was two to 36 times greater than the maximum expected range of ground water discharge at 12 of the 13 field sites. The identification of the ground water component of stream-discharge records was ambiguous because drainage from bank storage, wetlands, surface water bodies, soils, and snowpacks frequently exceeded ground water discharge and also decreased exponentially during recession periods. The timing and magnitude of recharge events could not be ascertained from stream-discharge records at any of the sites investigated because recharge events were not directly correlated with stream peaks. When used alone, the recession-curve-displacement method and other hydrograph-separation techniques are poor tools for estimating ground water discharge or recharge because the major assumptions of the methods are commonly and grossly violated. Multiple, alternative methods of estimating ground water discharge and recharge should be used because of the uncertainty associated with any one technique.
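The recession index in such analyses is the time for discharge to decline one log cycle; a minimal sketch of estimating it from a recession limb (synthetic data, not the paper's basins, and without the careful windowing real records require) is:

```python
import numpy as np

def recession_index_days(t_days, discharge):
    """Recession index: days required for discharge to decline one log
    cycle, from a straight-line fit of log10(Q) versus time over a
    recession period. Illustrative sketch only."""
    slope, _intercept = np.polyfit(t_days, np.log10(discharge), 1)
    return -1.0 / slope

# Synthetic recession limb declining one log cycle every 60 days
t = np.arange(0.0, 120.0, 1.0)
q = 100.0 * 10.0 ** (-t / 60.0)
```

The paper's point is precisely that such stream-based estimates are often unreliable, because the fitted limb may reflect bank storage or other drainage rather than ground water discharge.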
Alternative Strategies for Pricing Home Work Time.
ERIC Educational Resources Information Center
Zick, Cathleen D.; Bryant, W. Keith
1983-01-01
Discusses techniques for measuring the value of home work time. Estimates obtained using the reservation wage technique are contrasted with market alternative estimates derived with the same data set. Findings suggest that the market alternative cost method understates the true value of a woman's home time to the household. (JOW)
A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS
While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...
Estimation of Target Angular Position Under Mainbeam Jamming Conditions,
1995-12-01
technique, Multiple Signal Classification (MUSIC), is used to estimate the target Direction Of Arrival (DOA) from the processed data vectors. The model...used in the MUSIC technique takes into account the fact that the jammer has been cancelled in the target data vector. The performance of this algorithm
We compare biomass burning emissions estimates from four different techniques that use satellite based fire products to determine area burned over regional to global domains. Three of the techniques use active fire detections from polar-orbiting MODIS sensors and one uses detec...
A nonparametric clustering technique which estimates the number of clusters
NASA Technical Reports Server (NTRS)
Ramey, D. B.
1983-01-01
In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.
ERIC Educational Resources Information Center
Stapleton, Laura M.
2008-01-01
This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Swain, P. H. (Principal Investigator); Vardeman, S. B.
1981-01-01
A key input to a statistical classification algorithm, which exploits the tendency of certain ground cover classes to occur more frequently in some spatial context than in others, is a statistical characterization of the context: the context distribution. An unbiased estimator of the context distribution is discussed which, besides having the advantage of statistical unbiasedness, has the additional advantage over other estimation techniques of being amenable to an adaptive implementation in which the context distribution estimate varies according to local contextual information. Results from applying the unbiased estimator to the contextual classification of three real LANDSAT data sets are presented and contrasted with results from non-contextual classifications and from contextual classifications utilizing other context distribution estimation techniques.
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
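At its simplest, disparity-based ranging reduces to small-angle triangulation between two viewpoints a known baseline apart; the COMPU-EYE reconstruction solves a much richer inverse problem over overlapping receptive fields, so the sketch below is only the underlying geometric idea:

```python
def range_from_disparity(baseline_mm, disparity_rad):
    """Small-angle triangulation: two ommatidia a known baseline apart
    view the same point along directions differing by the disparity
    angle, so range ~ baseline / disparity. A simplified stand-in for
    the final geometric step, not the COMPU-EYE algorithm itself."""
    return baseline_mm / disparity_rad

# A 0.01 rad disparity across a 1 mm baseline implies a range of ~100 mm
```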
Technique for estimation of streamflow statistics in mineral areas of interest in Afghanistan
Olson, Scott A.; Mack, Thomas J.
2011-01-01
A technique for estimating streamflow statistics at ungaged stream sites in areas of mineral interest in Afghanistan using drainage-area-ratio relations of historical streamflow data was developed and is documented in this report. The technique can be used to estimate the following streamflow statistics at ungaged sites: (1) 7-day low flow with a 10-year recurrence interval, (2) 7-day low flow with a 2-year recurrence interval, (3) daily mean streamflow exceeded 90 percent of the time, (4) daily mean streamflow exceeded 80 percent of the time, (5) mean monthly streamflow for each month of the year, (6) mean annual streamflow, and (7) minimum monthly streamflow for each month of the year. Because they are based on limited historical data, the estimates of streamflow statistics at ungaged sites are considered preliminary.
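The drainage-area-ratio relation itself is simple; a sketch (with the exponent treated as a tunable assumption, since the report derives its relations from historical records) is:

```python
def area_ratio_estimate(stat_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a streamflow statistic from a gaged to an ungaged site
    with the drainage-area-ratio relation Q_u = Q_g * (A_u/A_g)**b.
    b = 1 is the simplest form; regional studies may fit b to data."""
    return stat_gaged * (area_ungaged / area_gaged) ** exponent

# An ungaged basin half the size of the gaged one, with b = 1
```

Any statistic in the report's list (7-day low flows, exceedance flows, mean monthly or annual streamflow) can be transferred this way, subject to the stated caveat that the results are preliminary.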
A New Approach to Estimate Forest Parameters Using Dual-Baseline Pol-InSAR Data
NASA Astrophysics Data System (ADS)
Bai, L.; Hong, W.; Cao, F.; Zhou, Y.
2009-04-01
In POL-InSAR applications using the ESPRIT technique, it is assumed that there exist stable scattering centres in the forest. However, the observations in forest severely suffer from volume and temporal decorrelation, and the forest scatterers are not as stable as assumed, so the obtained interferometric information is not as accurate as expected. Besides, ESPRIT techniques cannot identify which interferometric phases correspond to the ground and the canopy, and they provide multiple estimates of the height between two scattering centres due to phase unwrapping. Therefore, estimation errors are introduced into the forest height results. To suppress the two types of errors, we use dual-baseline POL-InSAR data to estimate forest height. Dual-baseline coherence optimization is applied to obtain interferometric information of stable scattering centres in the forest. From the interferometric phases for different baselines, estimation errors caused by phase unwrapping are resolved, and other estimation errors can be suppressed as well. Experiments are performed on the ESAR L-band POL-InSAR data. Experimental results show the proposed method provides more accurate forest height than the ESPRIT technique.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
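A toy, batch version of residual-based noise estimation can be sketched as follows: run a scalar filter with a guessed measurement variance, then exploit the relation E[nu^2] = P_prior + R to back out R from the residuals. Jazwinski's technique is sequential and also estimates process noise, so this conveys only the flavor of the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
true_R, n = 4.0, 5000
z = 2.0 + rng.normal(0.0, np.sqrt(true_R), n)  # constant state + noise

x_est, P, R_guess = 0.0, 10.0, 1.0             # filter begins with the wrong R
residuals, priors = [], []
for zk in z:
    nu = zk - x_est                    # measurement residual (innovation)
    residuals.append(nu)
    priors.append(P)
    K = P / (P + R_guess)              # gain computed under the assumed R
    x_est += K * nu
    P *= 1.0 - K                       # covariance update (no process noise)

# For a well-tuned filter E[nu^2] = P_prior + R, so the residuals yield
# an estimate of the measurement-noise variance:
R_hat = float(np.mean(np.square(residuals)) - np.mean(priors))
```

The estimate R_hat recovers the true variance despite the filter's wrong guess; the sequential form corrects the tuning parameters on the fly instead of in a batch afterward.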
Characterizing Detrended Fluctuation Analysis of multifractional Brownian motion
NASA Astrophysics Data System (ADS)
Setty, V. A.; Sharma, A. S.
2015-02-01
The Hurst exponent (H) is widely used to quantify long range dependence in time series data and is estimated using several well known techniques. Recognizing its ability to remove trends, the Detrended Fluctuation Analysis (DFA) is used extensively to estimate a Hurst exponent in non-stationary data. Multifractional Brownian motion (mBm) broadly encompasses a set of models of non-stationary data exhibiting time varying Hurst exponents, H(t), as against a constant H. Recently, there has been a growing interest in the time dependence of H(t), and sliding window techniques have been used to estimate a local time average of the exponent. This brought to the fore the ability of DFA to estimate scaling exponents in systems with time varying H(t), such as mBm. This paper characterizes the performance of DFA on mBm data with linearly varying H(t) and further tests the robustness of the estimated time average with respect to data and technique related parameters. Our results serve as a benchmark for using DFA as a sliding window estimator to obtain H(t) from time series data.
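A compact sketch of first-order DFA itself, assuming linear detrending and a fixed set of dyadic scales (both choices are this sketch's assumptions), is:

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128, 256)):
    """First-order DFA scaling exponent: integrate the series into a
    profile, remove a linear trend in each window of size s, and fit
    the log-log slope of RMS fluctuation versus s. For white noise the
    exponent is ~0.5; for fractional noise it tracks H."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # the profile
    flucts = []
    for s in scales:
        t = np.arange(s)
        f2 = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)    # local detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

The sliding-window estimator discussed in the paper applies this computation to successive segments of the series to trace out a local, time-averaged H(t).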
Estimating Environmental Compliance Costs for Industry (1981)
The paper discusses the pros and cons of existing approaches to compliance cost estimation such as ex post survey estimation and ex ante estimation techniques (input cost accounting methods, engineering process models, and econometric models).
Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.
Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L
2015-03-01
Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
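The Bland-Altman analysis used for the agreement assessment reduces to a bias and 95% limits of agreement on paired differences; a generic sketch (the paper's placental-volume data are not reproduced here) is:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired measurements from
    two methods: the mean difference (bias) and the 95% limits of
    agreement, bias +/- 1.96 * SD of the differences."""
    d = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A systematic positive bias of the 2-D estimates over the 3-D estimates, as reported above, would show up directly in the first returned value.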
Estimating GATE rainfall with geosynchronous satellite images
NASA Technical Reports Server (NTRS)
Stout, J. E.; Martin, D. W.; Sikdar, D. N.
1979-01-01
A method of estimating GATE rainfall from either visible or infrared images of geosynchronous satellites is described. Rain is estimated from cumulonimbus cloud area by the equation R = a0 A + a1 dA/dt, where R is volumetric rainfall, A cloud area, t time, and a0 and a1 are constants. Rainfall, calculated from 5.3 cm ship radar, and cloud area are measured from clouds in the tropical North Atlantic. The constants a0 and a1 are fit to these measurements by the least-squares method. Hourly estimates by the infrared version of this technique correlate well (correlation coefficient of 0.84) with rain totals derived from composited radar for an area of 100,000 sq km. The accuracy of this method is described and compared to that of another technique using geosynchronous satellite images. It is concluded that this technique provides useful estimates of tropical oceanic rainfall on a convective scale.
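Fitting the two constants is an ordinary least-squares problem in the two regressors A and dA/dt; a sketch with synthetic, noise-free inputs (the paper fits radar-derived rain volumes) is:

```python
import numpy as np

# Synthetic cloud-area history; the regressors are A and dA/dt, and the
# "true" constants are assumptions used only to check the fit.
t = np.arange(0.0, 24.0, 0.5)                             # hours
A = 1.0e5 * (1.0 + 0.5 * np.sin(2.0 * np.pi * t / 24.0))  # cloud area
dAdt = np.gradient(A, t)
a0_true, a1_true = 0.02, 0.5
R = a0_true * A + a1_true * dAdt                          # volumetric rain

X = np.column_stack([A, dAdt])
(a0_fit, a1_fit), *_ = np.linalg.lstsq(X, R, rcond=None)
```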
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow for the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human's neuromuscular and visual responses in cases where the classic method fails.
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
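Of the four indices compared above, the moving-average type-token ratio is the simplest to state explicitly. The sketch below is an illustrative minimal implementation, not the software evaluated in the study; the default window length is a common convention, not taken from the paper.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: the mean type-token ratio (TTR)
    over every sliding window of fixed length, which removes most of the
    dependence of the raw TTR on sample length."""
    if len(tokens) < window:                 # fall back to plain TTR
        return len(set(tokens)) / len(tokens)
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)
```

For example, `mattr(["a", "a", "b"], window=2)` averages the window TTRs 0.5 and 1.0 to give 0.75, whereas the raw TTR of the whole sample would be 2/3.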
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1992-01-01
Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, image quality will be degraded in both the system impulse response function and the geometric fidelity. Two techniques for resolution of the DCE ambiguity for spaceborne SAR are presented: a brief review of the range cross-correlation technique and a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity using PRFs of arbitrary numerical values. The performance of this multiple-PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.
A comparison of techniques for assessing farmland bumblebee populations.
Wood, T J; Holland, J M; Goulson, D
2015-04-01
Agri-environment schemes have been implemented across the European Union in order to reverse declines in farmland biodiversity. To assess the impact of these schemes for bumblebees, accurate measures of their populations are required. Here, we compared bumblebee population estimates on 16 farms using three commonly used techniques: standardised line transects, coloured pan traps and molecular estimates of nest abundance. There was no significant correlation between the estimates obtained by the three techniques, suggesting that each technique captured a different aspect of local bumblebee population size and distribution in the landscape. Bumblebee abundance as observed on the transects was positively influenced by the number of flowers present on the transect. The number of bumblebees caught in pan traps was positively influenced by the density of flowers surrounding the trapping location and negatively influenced by wider landscape heterogeneity. Molecular estimates of the number of nests of Bombus terrestris and B. hortorum were positively associated with the proportion of the landscape covered in oilseed rape and field beans. Both direct survey techniques are strongly affected by floral abundance immediately around the survey site, potentially leading to misleading results if attempting to infer overall abundance in an area or on a farm. In contrast, whilst the molecular method suffers from an inability to detect sister pairs at low sample sizes, it appears to be unaffected by the abundance of forage and thus is the preferred survey technique.
Energy Measurement Studies for CO2 Measurement with a Coherent Doppler Lidar System
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Vanvalkenburg, Randal L.; Yu, Jirong; Singh, Upendra N.; Kavaya, Michael J.
2010-01-01
Accurate measurement of energy is critical in the application of lidar systems to CO2 measurement. Different techniques for estimating the energy of the online and offline pulses are investigated for post-processing of lidar returns. The cornerstone of these techniques is accurate estimation of the spectrum of the lidar signal and background noise. Since the background noise is not ideal white Gaussian noise, a simple estimate of the average noise level is not well suited to the energy estimation of lidar signal and noise. A brief review of the methods is presented in this paper.
Techniques for estimating flood hydrographs for ungaged urban watersheds
Stricker, V.A.; Sauer, V.B.
1984-01-01
The Clark Method, modified slightly, was used to develop a synthetic, dimensionless hydrograph which can be used to estimate flood hydrographs for ungaged urban watersheds. Application of the technique results in a typical (average) flood hydrograph for a given peak discharge. Input necessary to apply the technique is an estimate of basin lagtime and the recurrence-interval peak discharge. Equations for this purpose were obtained from a recent nationwide study on flood frequency in urban watersheds. A regression equation was developed which relates flood volumes to drainage area size, basin lagtime, and peak discharge. This equation is useful where storage of floodwater may be a part of the design of flood prevention. (USGS)
Peak-picking fundamental period estimation for hearing prostheses.
Howard, D M
1989-09-01
A real-time peak-picking fundamental period estimation device is described which is used in advanced hearing prostheses for the totally and profoundly deafened. The operation of the peak picker is compared with three well-established fundamental frequency estimation techniques: the electrolaryngograph (used as a "standard"), hardware implementations of the cepstral technique, and the Gold/Rabiner parallel processing algorithm. These comparisons illustrate and highlight some of the important advantages and disadvantages that characterize the operation of these techniques. The special requirements of hearing prostheses are discussed with respect to the operation of each device, and the choice of the peak picker is found to be felicitous in this application.
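The core idea of peak picking is easy to sketch: detect prominent waveform peaks and read the fundamental period from their spacing. The toy estimator below is illustrative only (Howard's device is a hardware implementation with additional logic); the threshold fraction and test waveform are assumptions.

```python
import numpy as np

def peak_pick_f0(x, fs, threshold_frac=0.5):
    """Crude peak-picking fundamental frequency estimate: find local
    maxima above a fraction of the global maximum, then take the median
    spacing between successive peaks as the fundamental period."""
    thresh = threshold_frac * np.max(x)
    interior = x[1:-1]
    is_peak = (interior > x[:-2]) & (interior >= x[2:]) & (interior > thresh)
    idx = np.flatnonzero(is_peak) + 1        # sample indices of peaks
    if len(idx) < 2:
        return 0.0                           # not enough peaks to estimate
    period_s = np.median(np.diff(idx)) / fs
    return 1.0 / period_s

# A 100 Hz voiced-speech-like waveform sampled at 8 kHz
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
f0 = peak_pick_f0(x, fs)
```

The median spacing makes the estimate robust to an occasional missed or spurious peak, which is one reason simple peak pickers work tolerably well on quasi-periodic voiced speech.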
Doppler centroid estimation ambiguity for synthetic aperture radars
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Curlander, J. C.
1989-01-01
A technique for estimation of the Doppler centroid of an SAR in the presence of large uncertainty in antenna boresight pointing is described. Also investigated is the image degradation resulting from data processing that uses an ambiguous centroid. Two approaches for resolving ambiguities in Doppler centroid estimation (DCE) are presented: the range cross-correlation technique and the multiple-PRF (pulse repetition frequency) technique. Because other design factors control the PRF selection for SAR, a generalized algorithm is derived for PRFs not containing a common divisor. An example using the SIR-C parameters illustrates that this algorithm is capable of resolving the C-band DCE ambiguities for antenna pointing uncertainties of about 2-3 deg.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make compressed channel estimation more feasible for practical applications, it is investigated from the perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as the large time delay in estimating the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to conventional compressed channel estimation techniques.
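As a concrete baseline for compressed channel estimation, a sparse tap vector can be recovered from a small number of pilot projections. The sketch below uses orthogonal matching pursuit, a standard greedy recovery algorithm, as a stand-in for the article's Bayesian learning approach; all dimensions and tap values are illustrative.

```python
import numpy as np

def omp(Phi, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then refit on the support."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    h = np.zeros(Phi.shape[1])
    h[support] = coef
    return h

# Sparse 64-tap channel observed through 32 random pilot projections
rng = np.random.default_rng(0)
h_true = np.zeros(64)
h_true[[3, 17, 40]] = [1.0, -0.7, 0.4]
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
h_hat = omp(Phi, Phi @ h_true, n_nonzero=3)  # typically recovers h_true
```

With 32 measurements of a 3-sparse, 64-tap channel, recovery typically succeeds; the Bayesian methods in the article additionally provide uncertainty estimates and avoid fixing the sparsity level in advance.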
The Highly Adaptive Lasso Estimator
Benkeser, David; van der Laan, Mark
2017-01-01
Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions, nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits that have variation norm bounded by a constant. Using empirical process theory, we establish a fast rate of convergence of our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data-generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111
Sixth Annual Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Lefferts, E. (Editor)
1981-01-01
Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed, and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satellite-borne instruments due to attitude instabilities. Image processing and correction techniques are reported for geodetic measurements and mapping.
NASA Technical Reports Server (NTRS)
Davis, P. A.; Penn, L. M. (Principal Investigator)
1981-01-01
A technique is developed for the estimation of total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm is constrained in application by the infrequent daytime polar-orbiter coverage.
Development and evaluation of the photoload sampling technique
Robert E. Keane; Laura J. Dickinson
2007-01-01
Wildland fire managers need better estimates of fuel loading so they can accurately predict potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents the development and evaluation of a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common...
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are used to interpret the storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided for quantitative estimation of material properties.
Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks
NASA Technical Reports Server (NTRS)
Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.
2018-01-01
Estimating tropical cyclone intensity by just using satellite image is a challenging problem. With successful application of the Dvorak technique for more than 30 years along with some modifications and improvements, it is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived using the original Dvorak technique. However, these techniques suffer from subjective bias as evident from the most recent estimations on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: The Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison - Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post analysis when only satellite based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to tropical cyclone intensity. This study aims to utilize deep learning, the current state of the art in pattern recognition and image recognition, to address the need for an automated and objective tropical cyclone intensity estimation. Deep learning is a multi-layer neural network consisting of several layers of simple computational units. It learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on convolutional neural network (CNN), a deep learning algorithm, to develop an objective tropical cyclone intensity estimation. 
CNN is a supervised learning algorithm requiring a large number of training data. Since archives of intensity data and tropical-cyclone-centric satellite images are openly available for use, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
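Under the Poisson model that the analysis favors, the effect of the open intervals on the estimated mean return time reduces to a censored-data maximum-likelihood calculation. The sketch below is a simplified illustration that ignores age-dating uncertainty (which the paper's likelihood and Monte Carlo methods account for); the numbers are hypothetical.

```python
import numpy as np

def exp_return_time(event_times, t_start, t_end):
    """MLE mean return time for an exponential renewal (Poisson) model,
    treating the open intervals before the first and after the last
    event as right-censored observations: rate = (number of complete
    intervals) / (total observed time)."""
    times = np.sort(np.asarray(event_times, dtype=float))
    closed = np.diff(times)                          # complete intervals
    open_total = (times[0] - t_start) + (t_end - times[-1])
    rate = len(closed) / (closed.sum() + open_total)
    return 1.0 / rate

# Three dated deposits in a 10 kyr record
mean_rt = exp_return_time([2.0, 5.0, 9.0], 0.0, 10.0)  # -> 5.0 kyr
```

Note the contrast with a naive arithmetic mean of the closed intervals (3.5 kyr here): including the open intervals in the denominator is what removes the downward bias the abstract refers to.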
Estimating index of refraction from polarimetric hyperspectral imaging measurements.
Martin, Jacob A; Gross, Kevin C
2016-08-08
Current material identification techniques rely on estimating reflectivity or emissivity, which vary with viewing angle. As off-nadir remote sensing platforms become increasingly prevalent, techniques robust to changing viewing geometries are desired. A technique leveraging polarimetric hyperspectral imaging (P-HSI) to estimate complex index of refraction, N̂(ν̃), an inherent material property, is presented. The imaginary component of N̂(ν̃) is modeled using a small number of "knot" points and interpolation at in-between frequencies ν̃. The real component is derived via the Kramers-Kronig relationship. P-HSI measurements of blackbody radiation scattered off of a smooth quartz window show that N̂(ν̃) can be retrieved to within 0.08 RMS error between 875 cm-1 ≤ ν̃ ≤ 1250 cm-1. P-HSI emission measurements of a heated smooth Pyrex beaker also enable successful N̂(ν̃) estimates, which are invariant to object temperature.
Evaluation of a technique for satellite-derived area estimation of forest fires
NASA Technical Reports Server (NTRS)
Cahoon, Donald R., Jr.; Stocks, Brian J.; Levine, Joel S.; Cofer, Wesley R., III; Chung, Charles C.
1992-01-01
The advanced very high resolution radiometer (AVHRR) has been found useful for locating and monitoring both smoke and fires because of its daily observations, large geographical coverage, spectral characteristics, and spatial resolution. This paper discusses the application of AVHRR data to assess the geographical extent of burning. Methods have been developed to estimate the area burned by analyzing the surface area affected by fire in AVHRR imagery. Characteristics of the AVHRR instrument, its orbit, field of view, and archived data sets are discussed relative to the unique surface area of each pixel. The errors associated with this surface-area estimation technique are determined using AVHRR-derived area estimates of target regions with known sizes. The technique is used to evaluate the area burned during the Yellowstone fires of 1988.
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor (L-band multi-polarization and 40 deg incidence) configuration. This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
A Fourier approach to cloud motion estimation
NASA Technical Reports Server (NTRS)
Arking, A.; Lo, R. C.; Rosenfield, A.
1977-01-01
A Fourier technique is described for estimating cloud motion from pairs of pictures using the phase of the cross-spectral density. The method allows motion estimates to be made for individual spatial frequencies, which are related to cloud pattern dimensions. Results obtained are presented and compared with the results of a Fourier-domain cross-correlation scheme. Tests using both artificial and real cloud data show that the technique is relatively sensitive to the presence of mixtures of motions, changes in cloud shape, and edge effects.
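For a single rigid displacement, the phase-of-the-cross-spectrum idea reduces to phase correlation: the inverse FFT of the phase-normalized cross spectrum peaks at the shift. The sketch below runs on a synthetic random field standing in for cloud imagery; it recovers only whole-pixel shifts, whereas the per-frequency approach in the abstract can separate mixed motions.

```python
import numpy as np

def phase_shift_estimate(f, g):
    """Estimate the integer displacement taking image f to image g from
    the phase of the cross-spectral density (phase correlation)."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = np.conj(F) * G
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > f.shape[0] // 2:                 # map to signed shifts
        dy -= f.shape[0]
    if dx > f.shape[1] // 2:
        dx -= f.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
f = rng.standard_normal((32, 32))            # stand-in "cloud" field
g = np.roll(f, (5, -3), axis=(0, 1))         # displaced copy
dy, dx = phase_shift_estimate(f, g)          # -> (5, -3)
```

Normalizing away the spectral magnitude is what makes the peak sharp: all the motion information lives in the phase, which is exactly the quantity the abstract's technique examines frequency by frequency.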
WAATS: A computer program for Weights Analysis of Advanced Transportation Systems
NASA Technical Reports Server (NTRS)
Glatt, C. R.
1974-01-01
A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed and sufficient data is presented to estimate weights for a large spectrum of flight vehicles including horizontal and vertical takeoff aircraft, boosters and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems) embracing the techniques discussed has been written and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration System) system.
Reconciling Estimates of Cell Proliferation from Stable Isotope Labeling Experiments
Drylewicz, Julia; Elemans, Marjet; Zhang, Yan; Kelly, Elizabeth; Reljic, Rajko; Tesselaar, Kiki; de Boer, Rob J.; Macallan, Derek C.; Borghans, José A. M.; Asquith, Becca
2015-01-01
Stable isotope labeling is the state of the art technique for in vivo quantification of lymphocyte kinetics in humans. It has been central to a number of seminal studies, particularly in the context of HIV-1 and leukemia. However, there is a significant discrepancy between lymphocyte proliferation rates estimated in different studies. Notably, deuterated 2H2-glucose (D2-glucose) labeling studies consistently yield higher estimates of proliferation than deuterated water (D2O) labeling studies. This hampers our understanding of immune function and undermines our confidence in this important technique. Whether these differences are caused by fundamental biochemical differences between the two compounds and/or by methodological differences in the studies is unknown. D2-glucose and D2O labeling experiments have never been performed by the same group under the same experimental conditions; consequently a direct comparison of these two techniques has not been possible. We sought to address this problem. We performed both in vitro and murine in vivo labeling experiments using identical protocols with both D2-glucose and D2O. This showed that intrinsic differences between the two compounds do not cause differences in the proliferation rate estimates, but that estimates made using D2-glucose in vivo were susceptible to difficulties in normalization due to highly variable blood glucose enrichment. Analysis of three published human studies made using D2-glucose and D2O confirmed this problem, particularly in the case of short term D2-glucose labeling. Correcting for these inaccuracies in normalization decreased proliferation rate estimates made using D2-glucose and slightly increased estimates made using D2O; thus bringing the estimates from the two methods significantly closer and highlighting the importance of reliable normalization when using this technique. PMID:26437372
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. Several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with a specific measurement error. It is a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique, as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, which estimates a single parameter set valid for the whole period, and sequential Monte Carlo techniques, in which parameter sets evolve from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. To validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
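The dual state-parameter idea can be illustrated with a plain SIR particle filter (the filtering baseline the abstract's smoother improves on): each particle carries both a state and a parameter, so resampling on the observation likelihood updates them jointly. The scalar toy model, jitter size, and all settings below are assumptions for illustration, not the lysimeter model.

```python
import numpy as np

def sir_dual_estimate(y, n_particles=2000, q=0.1, r=0.5, seed=1):
    """Dual state-parameter estimation with a basic SIR particle filter
    on the toy model x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).
    Each particle carries a state x and a parameter a."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)
    a = rng.uniform(0.0, 1.0, n_particles)     # prior over the parameter
    for obs in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        w = np.exp(-0.5 * (obs - x) ** 2 / r) + 1e-300        # likelihood
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x, a = x[idx], a[idx]                  # joint resampling
        a = a + rng.normal(0.0, 0.01, n_particles)  # jitter vs. collapse
    return x.mean(), a.mean()

# Synthetic observations from the model with a = 0.8
rng = np.random.default_rng(0)
xs, y = 0.0, []
for _ in range(100):
    xs = 0.8 * xs + rng.normal(0.0, np.sqrt(0.1))
    y.append(xs + rng.normal(0.0, np.sqrt(0.5)))
x_est, a_est = sir_dual_estimate(np.array(y))
```

A smoother, as proposed in the abstract, would instead weight each particle by the likelihood of a window of observations, so that a single outlying observation cannot dominate the resampling step.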
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique for quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighborhood, first-order interpolation and the original PCHIP, are compared with the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
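The underlying idea, extrapolating the autocorrelation function back to lag zero to separate signal power from the white-noise spike, can be sketched as follows (using simple linear extrapolation as a stand-in for the PCHIP/ATPCHIP interpolators; the function name is illustrative):

```python
def estimate_snr(signal):
    """Estimate SNR by extrapolating the autocorrelation to lag 0.
    White noise only contributes at lag 0, so the extrapolated value
    approximates the noise-free signal power."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]

    def acf(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag)) / n

    r0, r1, r2 = acf(0), acf(1), acf(2)
    # linear extrapolation from lags 1 and 2 back to lag 0
    s0 = 2 * r1 - r2
    noise_power = max(r0 - s0, 1e-12)
    return s0 / noise_power
```

The PCHIP-based methods in the paper replace the two-point linear extrapolation with a shape-preserving cubic fit over several lags.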
Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology
NASA Technical Reports Server (NTRS)
Darnell, W. L.; Gupta, S. K.; Staylor, W. F.
1986-01-01
An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Thompson, L.G.
1997-02-01
Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar-estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using regression analysis to generate the ATI versus rain volume relation; this relation is then employed to compute rain volume from the satellite-derived ATI. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
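The ATI workflow reduces to a one-coefficient regression. A minimal sketch (illustrative numbers, not the 1981 dataset):

```python
def fit_ati_relation(ati_values, rain_volumes):
    """Least-squares fit of rain volume = a * ATI (regression through
    the origin), following the area-time-integral approach."""
    num = sum(a * v for a, v in zip(ati_values, rain_volumes))
    den = sum(a * a for a in ati_values)
    return num / den

def estimate_rain_volume(a, ati):
    """Apply the fitted ATI-rain volume relation to a new cluster."""
    return a * ati
```

Once the coefficient is calibrated against radar, rain volume for a new cluster requires only its echo-area coverage integrated over the cluster lifetime.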
Wavelet Analyses of Oil Prices, USD Variations and Impact on Logistics
NASA Astrophysics Data System (ADS)
Melek, M.; Tokgozlu, A.; Aslan, Z.
2009-07-01
This paper concerns temporal variations of historical oil prices and of the dollar and euro in Turkey. Daily data based on OECD and Central Bank of Turkey records beginning from 1946 have been considered. 1D continuous wavelet and wavelet packet analysis techniques have been applied to the data. Wavelet techniques help to detect abrupt changes and increasing or decreasing trends in data. Estimation of variables is presented using linear regression techniques. The results of this study have been compared for small- and large-scale effects. Truck transportation costs show a variation similar to fuel prices. The second part of the paper concerns estimation of imports, exports, costs, total number of vehicles and annual variations by considering the temporal variation of oil prices and the dollar currency in Turkey. Wavelet techniques offer a user-friendly methodology to interpret some local effects on the increasing trend of imports and exports data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
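For reference, the Hvorslev analysis recommended above amounts to fitting the log-linear head recovery and applying a shape-factor formula. A minimal sketch (assuming the common form for a screen of length L and radius R with casing radius r_c; the variable names are ours, not from the paper):

```python
import math

def hvorslev_k(r_casing, r_screen, screen_length, times, heads, h0):
    """Hvorslev slug-test analysis: fit ln(H/H0) versus t to get the
    basic time lag T0, then K = r_c^2 * ln(L/R) / (2 * L * T0)."""
    # least-squares slope of ln(H/H0) versus time; the intercept is
    # forced to zero because ln(H/H0) = 0 at t = 0 by construction
    y = [math.log(h / h0) for h in heads]
    slope = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
    t0 = -1.0 / slope  # time for the head to recover to 37% of H0
    return r_casing ** 2 * math.log(screen_length / r_screen) / (2 * screen_length * t0)
```

Real analyses would also check that the ln(H/H0) data are actually linear before trusting the fitted slope.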
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. The method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained from the pumping test carried out at the Kodana well. The proposed alternative well logging methodology therefore seems promising and could be applied in basaltic environments for estimating the hydraulic conductivity parameter. However, more detailed research is still required before the proposed technique can be considered fully established in basaltic environments.
NASA Astrophysics Data System (ADS)
Nakamura, T. K. M.; Nakamura, R.; Varsani, A.; Genestreti, K. J.; Baumjohann, W.; Liu, Y.-H.
2018-05-01
A remote sensing technique to infer the local reconnection electric field based on in situ multipoint spacecraft observation at the reconnection separatrix is proposed. In this technique, the increment of the reconnected magnetic flux is estimated by integrating the in-plane magnetic field during the sequential observation of the separatrix boundary by multipoint measurements. We tested this technique by applying it to virtual observations in a two-dimensional fully kinetic particle-in-cell simulation of magnetic reconnection without a guide field and confirmed that the estimated reconnection electric field indeed agrees well with the exact value computed at the X-line. We then applied this technique to an event observed by the Magnetospheric Multiscale mission when crossing an energetic plasma sheet boundary layer during an intense substorm. The estimated reconnection electric field for this event is nearly 1 order of magnitude higher than a typical value of magnetotail reconnection.
Application of remote sensing techniques for identification of irrigated crop lands in Arizona
NASA Technical Reports Server (NTRS)
Billings, H. A.
1981-01-01
Satellite imagery was used in a project developed to demonstrate remote sensing methods of determining irrigated acreage in Arizona. The Maricopa water district, west of Phoenix, was chosen as the test area. Band ratioing and unsupervised categorization were used to perform the inventory. For both techniques the irrigation district boundaries and section lines were digitized, and irrigated acreage was calculated and displayed by section. Both estimation techniques were quite accurate in estimating irrigated acreage in the 1979 growing season.
NASA Astrophysics Data System (ADS)
Jin, Minquan; Delshad, Mojdeh; Dwarakanath, Varadarajan; McKinney, Daene C.; Pope, Gary A.; Sepehrnoori, Kamy; Tilburg, Charles E.; Jackson, Richard E.
1995-05-01
In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations.
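The method of moments mentioned above estimates NAPL saturation from the retardation of the partitioning tracer relative to the conservative tracer. A minimal sketch of the standard relation S_n = (R - 1)/(R - 1 + K), where R is the ratio of mean arrival times and K the tracer's NAPL-water partition coefficient (values below are illustrative, not from the column experiments):

```python
def mean_arrival_time(times, concentrations):
    """First temporal moment of a tracer breakthrough curve."""
    num = sum(t * c for t, c in zip(times, concentrations))
    den = sum(concentrations)
    return num / den

def napl_saturation(t_conservative, t_partitioning, k_partition):
    """Method of moments: retardation factor R = t_p / t_c, then
    S_n = (R - 1) / (R - 1 + K)."""
    r = t_partitioning / t_conservative
    return (r - 1.0) / (r - 1.0 + k_partition)
```

A strongly partitioning tracer (large K) gives a measurable delay even for small residual saturations, which is what makes the PITT sensitive.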
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, M.; Delshad, M.; Dwarakanath, V.
1995-05-01
In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations. 43 refs., 10 figs., 1 tab.
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R
2012-01-01
This study presents and validates a time-frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is highly robust to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
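A toy version of the feature-plus-classifier pipeline compared in the paper might look like this (a nearest-centroid stand-in for the machine learning techniques actually evaluated; the features and labels are illustrative):

```python
import statistics

def features(window):
    """Simple statistical features from one window of accelerometer counts."""
    return (statistics.mean(window), statistics.pstdev(window),
            max(window) - min(window))

def nearest_centroid(train_windows, labels, sample):
    """Toy activity-type classifier: one feature centroid per class,
    assign the sample to the closest centroid."""
    per_class = {}
    for w, lab in zip(train_windows, labels):
        per_class.setdefault(lab, []).append(features(w))
    centroids = {lab: tuple(sum(f[i] for f in fs) / len(fs) for i in range(3))
                 for lab, fs in per_class.items()}
    f = features(sample)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(centroids[lab], f)))
```

High-variance windows (walking) and near-constant windows (sitting) separate cleanly on even these three features, which is why feature choice mattered more for activity identification than for energy cost estimation.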
A Comparison of Anthropogenic Carbon Dioxide Emissions Datasets: UND and CDIAC
NASA Astrophysics Data System (ADS)
Gregg, J. S.; Andres, R. J.
2005-05-01
Using data from the Department of Energy's Energy Information Administration (EIA), a technique is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual carbon dioxide emissions from fossil-fuel use for all states in the union. The University of North Dakota (UND) results are compared to those published by Carbon Dioxide Information Analysis Center (CDIAC) at the Oak Ridge National Laboratory (ORNL). Recently, annual emissions per U.S. state (Blasing, Broniak, Marland, 2004a) as well as monthly CO2 emissions for the United States (Blasing, Broniak, Marland, 2004b) have been added to the CDIAC website. To determine the success of this technique, the individual state results are compared to the annual state totals calculated by CDIAC. In addition, the monthly country totals are compared with those produced by CDIAC. In general, the UND technique produces estimates that are consistent with those available on the CDIAC Trends website. Comparing the results from these two methods permits an improved understanding of the strengths and shortcomings of both estimation techniques. The primary advantages of the UND approach are its ease of implementation, the improved spatial and temporal resolution it can produce, and its universal applicability.
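The core of the UND technique, apportioning an annual emissions total over months in proportion to monthly fuel sales, can be sketched as:

```python
def monthly_emissions(annual_total, monthly_sales):
    """Distribute an annual CO2 emissions total over the twelve months
    in proportion to monthly fuel sales (the UND apportionment idea)."""
    total_sales = sum(monthly_sales)
    return [annual_total * s / total_sales for s in monthly_sales]
```

By construction the monthly values sum back to the annual total, so the technique refines temporal resolution without changing the annual inventory.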
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.
2003-01-01
Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
NASA Astrophysics Data System (ADS)
Tran, H.; Mansfield, M. L.; Lyman, S. N.; O'Neil, T.; Jones, C. P.
2015-12-01
Emissions from produced-water treatment ponds are poorly characterized sources in oil and gas emission inventories that play a critical role in studying elevated winter ozone events in the Uintah Basin, Utah, U.S. Information gaps include unquantified amounts and compositions of gases emitted from these facilities. The emitted gases are often volatile organic compounds (VOCs) which, besides nitrogen oxides (NOx), are major precursors for ozone formation in the near-surface layer. Field measurement campaigns using the flux-chamber technique have been performed to measure VOC emissions from a limited number of produced-water ponds in the Uintah Basin of eastern Utah. Although the flux chamber provides accurate measurements at the point of sampling, it covers just a limited area of the ponds and is prone to altering environmental conditions (e.g., temperature, pressure). This fact raises the need to validate flux-chamber measurements. In this study, we apply an inverse-dispersion modeling technique with evacuated canister sampling to validate the flux-chamber measurements. This modeling technique applies an initial, arbitrary emission rate to estimate pollutant concentrations at pre-defined receptors, and adjusts the emission rate until the estimated pollutant concentrations approximate the measured concentrations at the receptors. The derived emission rates are then compared with flux-chamber measurements and differences are analyzed. Additionally, we investigate the applicability of the WATER9 wastewater emission model for the estimation of VOC emissions from produced-water ponds in the Uintah Basin. WATER9 estimates the emission of each gas based on properties of the gas, its concentration in the wastewater, and the characteristics of the influent and treatment units. Results of VOC emission estimations using inverse-dispersion and WATER9 modeling techniques will be reported.
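The inverse-dispersion step can be sketched as a proportional update of the emission rate (assuming, as Gaussian-plume-type models do, that receptor concentration is linear in source strength; the model function below is a placeholder, not a real dispersion model):

```python
def invert_emission_rate(model_concentration, q0, c_observed,
                         tol=1e-6, max_iter=50):
    """Inverse-dispersion sketch: scale the emission rate until the
    modeled receptor concentration matches the measurement. Because
    dispersion is linear in source strength, the proportional update
    converges in one step for an exactly linear model."""
    q = q0
    for _ in range(max_iter):
        c_model = model_concentration(q)
        if abs(c_model - c_observed) < tol:
            break
        q *= c_observed / c_model
    return q
```

In practice the dispersion model also depends on meteorology at sampling time, so the retrieved rate is only as good as the wind and stability inputs.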
NASA Astrophysics Data System (ADS)
Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi
2011-03-01
To achieve 3D kinematic analysis of total knee arthroplasty (TKA), 2D/3D registration techniques, which use X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant, have attracted attention in recent years. These techniques can provide information regarding the movement of the radiopaque femoral and tibial components but not of the radiolucent polyethylene insert, because the insert silhouette does not appear clearly on X-ray images. Therefore, it has been difficult to obtain 3D kinematics of the polyethylene insert, particularly a mobile-bearing insert that moves on the tibial component. This study presents a technique, and its accuracy, for 3D kinematic analysis of the mobile-bearing insert in TKA using X-ray fluoroscopy, and finally reports clinical applications. For 3D pose estimation of the mobile-bearing insert in TKA using X-ray fluoroscopy, tantalum beads and a CAD model incorporating the beads are utilized, and the 3D pose of the insert model is estimated using a feature-based 2D/3D registration technique. To validate the accuracy of the present technique, experiments including a computer simulation test were performed. The results showed the pose estimation accuracy was sufficient for analyzing mobile-bearing TKA kinematics (RMS error: about 1.0 mm, 1.0 degree). In the clinical applications, seven patients with mobile-bearing TKA in deep knee bending motion were studied and analyzed. Consequently, the present technique enables us to better understand mobile-bearing TKA kinematics, and this type of evaluation should be helpful for improving implant design and optimizing TKA surgical techniques.
England, M L; Broderick, G A; Shaver, R D; Combs, D K
1997-11-01
Ruminally undegraded protein (RUP) values of blood meal (n = 2), hydrolyzed feather meal (n = 2), fish meal (n = 2), meat and bone meal, and soybean meal were estimated using an in situ method, an inhibitor in vitro method, and an inhibitor in vitro technique applying Michaelis-Menten saturation kinetics. Degradation rates for in situ and inhibitor in vitro methods were calculated by regression of the natural log of the proportion of crude protein (CP) remaining undegraded versus time. Nonlinear regression analysis of the integrated Michaelis-Menten equation was used to determine maximum velocity, the Michaelis constant, and degradation rate (the ratio of maximum velocity to the Michaelis constant). A ruminal passage rate of 0.06/h was assumed in the calculation of RUP. The in situ and inhibitor in vitro techniques yielded similar estimates of ruminal degradation. Mean RUP estimated for soybean meal, blood meal, hydrolyzed feather meal, fish meal, and meat and bone meal were, respectively, 28.6, 86.0, 77.4, 52.9, and 52.6% of CP by the in situ method and 26.4, 86.1, 76.0, 59.6, and 49.5% of CP by the inhibitor in vitro technique. The Michaelis-Menten inhibitor in vitro technique yielded more rapid CP degradation rates and decreased estimates of RUP. The inhibitor in vitro method required less time and labor than did the other two techniques to estimate the RUP values of animal by-product proteins. Results from in vitro incubations with pepsin.HCl suggested that low postruminal digestibility of hydrolyzed feather meal may impair its value as a source of RUP.
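The regression-based degradation rate and the RUP calculation with the assumed passage rate of 0.06/h can be sketched as:

```python
import math

def degradation_rate(times, fraction_remaining):
    """First-order degradation rate kd: negative slope of the
    regression of ln(fraction of CP remaining) versus time."""
    y = [math.log(f) for f in fraction_remaining]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

def rup_percent(kd, kp=0.06):
    """RUP (% of CP) = 100 * kp / (kp + kd), with ruminal passage
    rate kp = 0.06/h as assumed in the paper."""
    return 100.0 * kp / (kp + kd)
```

With this model, a slowly degraded protein (small kd) escapes the rumen largely intact, giving a high RUP value.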
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: (1) direct focal depth estimation from the depth-phase arrival times picked via augmented cepstral processing; (2) hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001); and (3) surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
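The "unified" combination of the three depth estimates weighted by their standard errors is, in its simplest form, an inverse-variance weighted mean (a sketch of the idea, not the authors' exact statistical model):

```python
def unified_depth(estimates, std_errors):
    """Combine independent depth estimates by inverse-variance
    weighting; returns the unified depth and its standard error."""
    weights = [1.0 / s ** 2 for s in std_errors]
    depth = sum(w * d for w, d in zip(weights, estimates)) / sum(weights)
    stderr = (1.0 / sum(weights)) ** 0.5
    return depth, stderr
```

A poorly constrained method (large standard error) gets a small weight, so adding it cannot degrade the combined estimate much, which matches the robustness noted in the abstract.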
NASA Astrophysics Data System (ADS)
Gaona Garcia, J.; Lewandowski, J.; Bellin, A.
2017-12-01
Groundwater-stream water interactions in rivers determine water balances, but also chemical and biological processes in the streambed at different spatial and temporal scales. Because gaining, neutral and losing conditions are difficult to identify and quantify, it is necessary to combine techniques with complementary capabilities and scale ranges. We applied this concept to a study site at the River Schlaube, East Brandenburg, Germany, a sand-bed stream with intense sediment heterogeneity and complex environmental conditions. In our approach, point techniques such as temperature profiles of the streambed together with vertical hydraulic gradients provide data for the estimation of fluxes between groundwater and surface water with the numerical model 1DTempPro. Among distributed techniques, fiber-optic distributed temperature sensing identifies the spatial patterns of neutral, downwelling and upwelling areas through analysis of changes in the thermal patterns at the streambed interface under given flow conditions. The study finally links point and surface temperatures to provide a method for upscaling of fluxes. Point techniques provide point flux estimates with the depth detail needed to infer streambed structures, but the results hardly represent the spatial distribution of fluxes caused by the heterogeneity of streambed properties. Fiber optics proved capable of providing spatial thermal patterns with enough resolution to observe distinct hyporheic thermal footprints at multiple scales. Relating the thermal footprint patterns and their temporal behavior to flux results from point techniques enabled spatial flux estimation. The lack of detailed information on the spatial distribution of the physical drivers restricts the spatial flux estimation to the T-proxy method, whose highly uncertain results mainly provide coarse spatial flux estimates.
The study concludes that the upscaling of groundwater-stream water interactions using thermal measurements with combined point and distributed techniques requires the integration of physical drivers because of the heterogeneity of the flux patterns. Combined experimental and modeling approaches may help to obtain a more reliable understanding of groundwater-surface water interactions at multiple scales.
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we develop an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003), "Simulation of ground motion using the stochastic method", Pure Appl. Geophys. 160:635-676. The aim of the extended stochastic technique is to simulate ground motion in order to minimize future earthquake consequences. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources. This is followed by estimating the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied the developed technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion. It was also applied at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, 0.5, 1, 5, 10 and 20% damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration value at bedrock conditions is found at Suez city, 44 cm s-2. The acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. This agrees with, and is comparable to, the results of previous seismic hazard studies in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
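The attenuation-plus-site-amplification step can be sketched as follows (the coefficients in this generic ln(PGA) = a + b·M - c·ln(R) relationship are purely illustrative, not those used in the paper):

```python
import math

def pga_bedrock(magnitude, distance_km, a=-1.0, b=0.5, c=1.2):
    """Generic empirical attenuation relationship:
    ln(PGA) = a + b*M - c*ln(R). Coefficients are illustrative."""
    return math.exp(a + b * magnitude - c * math.log(distance_km))

def pga_site(magnitude, distance_km, amplification):
    """Apply a site amplification factor to the bedrock PGA estimate,
    as in the final step of the hazard calculation."""
    return amplification * pga_bedrock(magnitude, distance_km)
```

For a real study the coefficients would come from a regionally calibrated ground-motion model, and the amplification factor from the local soil profile.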
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, namely, a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and the extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
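The contrast between the ordinary and generalized least squares estimators discussed above can be sketched on a synthetic linear-in-parameters system H·θ = b; the design matrix and noise covariance below are illustrative stand-ins for the head/parameter relationship, not the aquifer model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 3))                 # synthetic "known heads" design matrix
theta_true = np.array([2.0, -1.0, 0.5])      # parameters to recover
cov = np.diag(np.linspace(0.5, 2.0, 50))     # heteroscedastic noise covariance
b = H @ theta_true + rng.multivariate_normal(np.zeros(50), cov)

# OLS: (H'H)^-1 H'b, ignores the noise structure
theta_ols = np.linalg.solve(H.T @ H, H.T @ b)

# GLS: weight by the inverse noise covariance, (H'WH)^-1 H'Wb
W = np.linalg.inv(cov)
theta_gls = np.linalg.solve(H.T @ W @ H, H.T @ W @ b)

print("OLS:", theta_ols)
print("GLS:", theta_gls)
```

The two-stage and three-stage estimators of the paper extend this pattern with instruments and cross-equation covariance, which the sketch omits.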
Longitudinal Factor Score Estimation Using the Kalman Filter.
ERIC Educational Resources Information Center
Oud, Johan H.; And Others
1990-01-01
How longitudinal factor score estimation--the estimation of the evolution of factor scores for individual examinees over time--can profit from the Kalman filter technique is described. The Kalman estimates change more cautiously over time, have lower estimation error variances, and reproduce the LISREL program latent state correlations more…
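The cautious-updating behaviour described above can be seen in a minimal scalar sketch, assuming an AR(1) factor process and illustrative variance settings (the LISREL/Kalman setup in the study is multivariate):

```python
import numpy as np

def kalman_factor_scores(y, phi=0.9, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Filtered factor scores and their error variances for observations y."""
    x, p = x0, p0
    scores, variances = [], []
    for obs in y:
        # predict: factor follows an AR(1) transition x_t = phi * x_{t-1}
        x, p = phi * x, phi * phi * p + q
        # update with measurement y_t = x_t + noise (variance r)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (obs - x)
        p = (1.0 - k) * p
        scores.append(x)
        variances.append(p)
    return np.array(scores), np.array(variances)

y = np.array([0.2, 0.5, 0.9, 1.1, 1.0])      # one examinee's noisy indicator
scores, variances = kalman_factor_scores(y)
```

Because the gain k is always below 1, each filtered score moves only part of the way toward the new observation, which is exactly the "change more cautiously" property the abstract reports.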
A spline-based parameter and state estimation technique for static models of elastic surfaces
NASA Technical Reports Server (NTRS)
Banks, H. T.; Daniel, P. L.; Armstrong, E. S.
1983-01-01
Parameter and state estimation techniques for an elliptic system arising in a developmental model for the antenna surface in the Maypole Hoop/Column antenna are discussed. A computational algorithm based on spline approximations for the state and elastic parameters is given and numerical results obtained using this algorithm are summarized.
B. Lane Rivenbark; C. Rhett Jackson
2004-01-01
Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...
Estimating abundance of Sitka black-tailed deer using DNA from fecal pellets
Todd J. Brinkman; David K. Person; F. Stuart Chapin; Winston Smith; Kris J. Hundertmark
2011-01-01
Densely vegetated environments have hindered collection of basic population parameters on forest-dwelling ungulates. Our objective was to develop a mark-recapture technique that used DNA from fecal pellets to overcome constraints associated with estimating abundance of ungulates in landscapes where direct observation is difficult. We tested our technique on Sitka black...
Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA)...
Comparing techniques for estimating flame temperature of prescribed fires
Deborah K. Kennard; Kenneth W. Outcalt; David Jones; Joseph J. O'Brien
2005-01-01
A variety of techniques that estimate temperature and/or heat output during fires are available. We assessed the predictive ability of metal and tile pyrometers, calorimeters of different sizes, and fuel consumption to time-temperature metrics derived from thick and thin thermocouples at 140 points distributed over 9 management-scale burns in a longleaf pine forest in...
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle fiber conduction velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. To account for the non-stationarity of the data during dynamic contraction (the most common situation in daily life), estimation methods have to consider that the MFCV changes over time, which induces time-varying delays (TVDs), and that the data are non-stationary (changing power spectral density, PSD). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy, solved by a deterministic optimization technique (Newton) and by a stochastic optimization technique called simulated annealing (SA); the performance of the two techniques is compared. We also derive two appropriate Cramér-Rao lower bounds (CRLBs), one for the estimated TVD model parameters and one for the TVD waveforms. Monte Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
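As a rough illustration of the polynomial TVD idea (not the authors' MLE, which optimizes the likelihood over the polynomial coefficients directly), the sketch below estimates per-window delays by cross-correlation and then fits a polynomial model to them. The signals, the linear delay law, and the window length are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, win = 4000, 400
s = rng.normal(size=n + 50)                  # broadband "source" signal

def true_delay(t):                           # linear TVD in samples, t in [0, 1)
    return 5 + 10 * t

x1 = s[:n] + 0.05 * rng.normal(size=n)       # channel 1: undelayed copy
x2 = np.empty(n)                             # channel 2: time-varying delayed copy
for i in range(n):
    d = int(round(true_delay(i / n)))
    x2[i] = s[i + d]
x2 += 0.05 * rng.normal(size=n)

centers, delays = [], []
for start in range(0, n - win, win):
    a, b = x1[start:start + win], x2[start:start + win]
    xc = np.correlate(a, b, mode="full")     # peak index gives the dominant lag
    delays.append(np.argmax(xc) - (win - 1))
    centers.append((start + win / 2) / n)

coeffs = np.polyfit(centers, delays, deg=1)  # fit the polynomial TVD model
print("estimated TVD polynomial (samples):", coeffs)
```

The windowed estimates are quantized to whole samples, which is one reason the paper prefers a direct parametric MLE over this two-step shortcut.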
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus also deliver angular rate information when attitude determination is not possible, such as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.
Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-01-01
An optical flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus also deliver angular rate information when attitude determination is not possible, such as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components. PMID:24072023
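One plausible least-squares core of such a scheme (omitting the calibration and Poisson-equation steps the paper describes) is the kinematic relation for a star direction r in a rotating sensor frame, dr/dt = r × ω, which is linear in ω. The star directions and rate below are synthetic test values.

```python
import numpy as np

def skew(v):
    """Cross-product matrix so that skew(r) @ w == np.cross(r, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def estimate_omega(dirs, flows):
    """Stack dr/dt = skew(r) @ omega for all stars and solve for omega."""
    A = np.vstack([skew(r) for r in dirs])
    b = np.concatenate(flows)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

rng = np.random.default_rng(2)
omega_true = np.array([0.01, -0.02, 0.005])           # rad/s
dirs = [v / np.linalg.norm(v) for v in rng.normal(size=(20, 3))]
flows = [np.cross(r, omega_true) for r in dirs]       # noise-free apparent drift
print(estimate_omega(dirs, flows))                    # recovers omega_true
```

With a narrow field of view, all r are nearly parallel to the boresight and the stacked system becomes poorly conditioned along that axis, which is consistent with the reduced boresight accuracy the abstract reports.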
AMT-200S Motor Glider Parameter and Performance Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.
2011-01-01
Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
Computerized technique for recording board defect data
R. Bruce Anderson; R. Edward Thomas; Charles J. Gatchell; Neal D. Bennett; Neal D. Bennett
1993-01-01
A computerized technique for recording board defect data has been developed that is faster and more accurate than manual techniques. The lumber database generated by this technique is a necessary input to computer simulation models that estimate potential cutting yields from various lumber breakdown sequences. The technique allows collection of detailed information...
Czarnecki, John B.; Stannard, David I.
1997-01-01
Franklin Lake playa is one of the principal discharge areas of the ground-water-flow system associated with Yucca Mountain, Nevada, the potential site of a high-level nuclear-waste repository. By using the energy-budget eddy-correlation technique, measurements made between June 1983 and April 1984 to estimate evapotranspiration were found to range from 0.1 centimeter per day during winter months to about 0.3 centimeter per day during summer months; the annual average was 0.16 centimeter per day. These estimates were compared with evapotranspiration estimates calculated from six other methods.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM]; Ma, Tian J [Albuquerque, NM]
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
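A hedged sketch of the idea: form the raw difference frame and give each pixel an error estimate driven by the local spatial intensity gradient, so that sub-pixel jitter near sharp edges is not mistaken for change. The gradient weighting and the 3-sigma test below are illustrative choices, not the patented formula.

```python
import numpy as np

def difference_with_error(reference, current, jitter_px=0.5):
    """Difference frame plus per-pixel spatial error estimates."""
    diff = current.astype(float) - reference.astype(float)
    gy, gx = np.gradient(reference.astype(float))
    # a sub-pixel shift of ~jitter_px moves intensity by ~|grad| * jitter_px
    spatial_error = jitter_px * np.hypot(gx, gy)
    significant = np.abs(diff) > 3.0 * spatial_error   # simple 3-sigma test
    return diff, spatial_error, significant

ref = np.zeros((8, 8)); ref[:, 4:] = 100.0             # scene with a sharp edge
cur = ref.copy(); cur[2, 1] = 40.0                     # a genuine change in a flat region
diff, err, sig = difference_with_error(ref, cur)
```

Pixels on the edge carry large error estimates and are effectively de-weighted, while the change in the flat region survives the test.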
Predicting the long tail of book sales: Unearthing the power-law exponent
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2010-06-01
The concept of the long tail has recently been used to explain the phenomenon in e-commerce where the total volume of sales of the items in the tail is comparable to that of the most popular items. In the case of online book sales, the proportion of tail sales has been estimated using regression techniques on the assumption that the data obeys a power-law distribution. Here we propose a different technique for estimation based on a generative model of book sales that results in an asymptotic power-law distribution of sales, but which does not suffer from the problems related to power-law regression techniques. We show that the proportion of tail sales predicted is very sensitive to the estimated power-law exponent. In particular, if we assume that the power-law exponent of the cumulative distribution is closer to 1.1 rather than to 1.2 (estimates published in 2003, calculated using regression by two groups of researchers), then our computations suggest that the tail sales of Amazon.com, rather than being 40% as estimated by Brynjolfsson, Hu and Smith in 2003, are actually closer to 20%, the proportion estimated by its CEO.
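The sensitivity claim above can be illustrated numerically with a sales-by-rank power law s_r ∝ r^(-b): the share of sales beyond a head cutoff moves substantially with small changes in b. The catalogue size, cutoff rank, and range of b below are illustrative assumptions, not the paper's calibrated values, and the mapping from the cumulative exponent to b is left aside.

```python
import numpy as np

def tail_share(b, n_titles, head_rank):
    """Fraction of total sales contributed by ranks beyond head_rank."""
    ranks = np.arange(1, n_titles + 1, dtype=float)
    sales = ranks ** (-b)
    return sales[head_rank:].sum() / sales.sum()

N, R = 2_000_000, 40_000           # hypothetical catalogue and head cutoff
for b in (0.8, 0.9, 1.0):
    print(f"b = {b}: tail share = {tail_share(b, N, R):.2%}")
```

Even a shift of 0.1 in the rank exponent changes the tail share by tens of percentage points, which is the fragility of regression-based exponent estimates that the paper emphasizes.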
Winter bird population studies and project prairie birds for surveying grassland birds
Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.
2008-01-01
We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area-searches that have long been used in a variety of habitats, whereas Project Prairie Bird surveys employ active-flushing techniques on strip-transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared the resultant estimates of species richness and relative abundance. These techniques did not yield similar estimates of avian populations. We found Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of avian abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient and recommend that distance-based survey methods providing species-specific estimates of detection probabilities be incorporated into these survey methods.
De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W
2017-12-01
The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs; therefore, a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. The aim of this study was to develop such a staging technique and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported into a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval for the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One hundred sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement, kappa ranged from 0.80 to 0.89. However, two-stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years, respectively.
The performance to discern minors from adults was better for males than for females, with specificities of 96% and 73% respectively. Age estimations based on the proposed staging method for third molars on MRI showed comparable reproducibility and performance as the established methods based on radiographs.
Nagwani, Naresh Kumar; Deo, Shirish V
2014-01-01
Understanding the compressive strength of concrete is important for activities such as construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are widely used for prediction tasks in which the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression-based prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster-regression technique is applied to estimate the compressive strength of concrete, and a new approach is proposed for predicting concrete compressive strength. The objective is to demonstrate that clustering combined with regression yields smaller prediction errors when estimating concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied to these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy clustering algorithm C-means performs better than the K-means algorithm.
Nagwani, Naresh Kumar; Deo, Shirish V.
2014-01-01
Understanding the compressive strength of concrete is important for activities such as construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are widely used for prediction tasks in which the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression-based prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster-regression technique is applied to estimate the compressive strength of concrete, and a new approach is proposed for predicting concrete compressive strength. The objective is to demonstrate that clustering combined with regression yields smaller prediction errors when estimating concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied to these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering combined with regression gives the smallest errors for predicting the compressive strength of concrete, and that the fuzzy clustering algorithm C-means performs better than the K-means algorithm. PMID:25374939
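The two-stage cluster-then-regress scheme can be sketched with scikit-learn (assumed available). The features and target below are synthetic stand-ins for the concrete mix variables; KMeans is shown for simplicity, although the paper found fuzzy C-means to perform better.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, 4))                   # mix proportions (synthetic)
y = 20 + 15 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 1, 300)  # "strength"

# Stage 1: group mixes with similar characteristics.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: fit one regression model per cluster.
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}

def predict(x):
    c = km.predict(x.reshape(1, -1))[0]          # route the sample to its cluster
    return models[c].predict(x.reshape(1, -1))[0]

print(predict(np.array([0.5, 0.5, 0.5, 0.5])))
```

Fitting separate local models lets each regression track its own portion of the feature space, which is the mechanism behind the reduced prediction error the paper reports.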
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely, the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
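The moment idea at the core of such a design can be checked by simulation: a control group reports a Poisson count, the treatment group adds 1 when the respondent carries the sensitive trait, and the difference of means estimates the sensitive proportion. The paper's actual estimator and variance formula are more refined; this sketch only demonstrates the core mechanism, with illustrative parameter values.

```python
import random
import statistics

random.seed(4)

def poisson(lam):
    """Poisson sampler (Knuth's method, adequate for small lam)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

pi_true, lam, n = 0.3, 2.0, 20000
control = [poisson(lam) for _ in range(n)]
treated = [poisson(lam) + (1 if random.random() < pi_true else 0)
           for _ in range(n)]
pi_hat = statistics.mean(treated) - statistics.mean(control)
print(round(pi_hat, 3))
```

Because every respondent's reported total is a noisy count, no individual answer reveals the sensitive status, which is the privacy property the design targets.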
Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K
2016-04-01
Sex estimation is considered one of the essential parameters in forensic anthropology casework and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain important in the identification process despite the advent and success of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive, as well as revise, the available population data. These methods are, however, less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches, namely morphological, metric, molecular and radiographic methods, in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones, and hence such direct methods of sex estimation are considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation, as well as re-evaluation of the existing ones, will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-12
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis.
Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong
2016-01-01
The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis. PMID:28787839
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.
2016-03-01
Optical coherence elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and in experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator to generate a continuous shear wave propagation (SWP) or a tone-burst propagation (TBP) of 400 Hz over the gelatin phantom. The other techniques make use of an additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the two actuators, a crawling wave (CrW) pattern is generated, which propagates more slowly than a shear wave and is therefore suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
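A toy version of GP smoothing for a two-dimensional rate map follows. The paper works with a point-process likelihood, adaptive smoothness, and efficient solvers; this sketch keeps only the underlying GP machinery, treating binned rate samples as Gaussian observations under a fixed RBF kernel with illustrative hyperparameters.

```python
import numpy as np

def rbf(a, b, length=0.1, variance=5.0):
    """Squared-exponential kernel between two sets of 2D points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(5)
X = rng.uniform(size=(200, 2))                        # visited positions

def true_rate(p):                                     # one Gaussian "place field"
    return 5.0 * np.exp(-((p - 0.5) ** 2).sum(-1) / 0.02)

y = true_rate(X) + rng.normal(0.0, 0.5, 200)          # noisy rate samples

K = rbf(X, X) + 0.25 * np.eye(200)                    # kernel + noise variance
alpha = np.linalg.solve(K, y)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), axis=-1).reshape(-1, 2)
rate_map = rbf(grid, X) @ alpha                       # posterior mean on the grid
```

The posterior mean interpolates the data where sampling is dense and relaxes toward the prior elsewhere; the paper's adaptive hyperparameter fitting automates the choice of `length` and `variance` that are fixed by hand here.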
Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S
2013-01-01
Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
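The interpolation-of-averaged-current-drains estimate criticized above reduces to dividing usable capacity by an averaged drain at the programmed settings. All numbers below are illustrative, not device data, and the sketch deliberately omits the chemistry, impedance, and self-discharge effects the authors identify as error sources.

```python
def estimated_longevity_years(capacity_ah, avg_current_ma):
    """Naive battery-life estimate: capacity / averaged current drain."""
    hours = (capacity_ah * 1000.0) / avg_current_ma
    return hours / (24 * 365)

# Hypothetical IPG: 4 Ah usable capacity, 60 microamp averaged drain
years = estimated_longevity_years(4.0, 0.060)
print(round(years, 1))
```

Because each re-estimation feeds the next interpolation, any bias in the averaged drain compounds over time, which is why the authors pair the estimator with a symptom-driven clinical algorithm.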
Survival of European mouflon (Artiodactyla: Bovidae) in Hawai'i based on tooth cementum lines
Hess, S.C.; Stephens, R.M.; Thompson, T.L.; Danner, R.M.; Kawakami, B.
2011-01-01
Reliable techniques for estimating age of ungulates are necessary to determine population parameters such as age structure and survival. Techniques that rely on dentition, horn, and facial patterns have limited utility for European mouflon sheep (Ovis gmelini musimon), but tooth cementum lines may offer a useful alternative. Cementum lines may not be reliable outside temperate regions, however, because lack of seasonality in diet may affect annulus formation. We evaluated the utility of tooth cementum lines for estimating age of mouflon in Hawai'i in comparison to dentition. Cementum lines were present in mouflon from Mauna Loa, island of Hawai'i, but were less distinct than in North American sheep. The two age-estimation methods provided similar estimates for individuals aged ≤3 yr by dentition (the maximum age estimable by dentition), with exact matches in 51% (18/35) of individuals, and an average difference of 0.8 yr (range 0-4). Estimates of age from cementum lines were higher than those from dentition in 40% (14/35) and lower in 9% (3/35) of individuals. Discrepancies in age estimates between techniques and between paired tooth samples estimated by cementum lines were related to certainty categories assigned by the clarity of cementum lines, reinforcing the importance of collecting a sufficient number of samples to compensate for samples of lower quality, which in our experience, comprised approximately 22% of teeth. Cementum lines appear to provide relatively accurate age estimates for mouflon in Hawai'i, allow estimating age beyond 3 yr, and they offer more precise estimates than tooth eruption patterns. After constructing an age distribution, we estimated annual survival with a log-linear model to be 0.596 (95% CI 0.554-0.642) for this heavily controlled population. © 2011 by University of Hawai'i Press.
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
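The weighted least squares projection at the heart of such forward modeling can be sketched minimally as follows; the basin patterns, weights, and observations below are hypothetical toy inputs, not the GRACE processing chain itself:

```python
import numpy as np

def forward_model(obs, basins, weights):
    """Project gridded observations onto basin patterns by weighted
    least squares: minimise ||W^(1/2) (A x - y)||^2, where each
    column of A is one basin's sensitivity pattern."""
    w = np.sqrt(weights)
    aw = basins * w[:, None]          # scale each grid-point row
    yw = obs * w
    x, *_ = np.linalg.lstsq(aw, yw, rcond=None)
    return x                          # one mass estimate per basin

# Two non-overlapping "basins" on a 4-point grid (toy example)
A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
y = np.array([2.0, 2.0, -1.0, -1.0])  # synthetic observations
est = forward_model(y, A, np.ones(4))
print(est)                            # → approximately [2, -1]
```

In practice the weights would reflect the GRACE error covariance, and the basin patterns would be smoothed to the satellite resolution; both are placeholders here.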
NASA Astrophysics Data System (ADS)
Qarib, Hossein; Adeli, Hojjat
2015-12-01
In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification (MUSIC), matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation; consequently, the time domain signal is not distorted by forward and inverse frequency domain transformations.
Robb, Matthew L; Böhning, Dankmar
2011-02-01
Capture–recapture techniques have been used for a considerable time to predict population size. Estimators usually rely on frequency counts of the numbers of trappings; however, these may not be available for a particular problem, for example if the original data set has been lost and only a summary table remains. Here, we investigate techniques for specific examples; the motivating example is an epidemiology study by Mosley et al., which focussed on a cholera outbreak in East Pakistan. To demonstrate the wider range of the technique, we also look at a study for predicting the long-term outlook of the AIDS epidemic using information on the number of sexual partners. A new estimator is developed here which uses the EM algorithm to impute unobserved values and then uses these values in a similar way to the existing estimators. The results show that a truncated approach, mimicking the Chao lower bound approach, gives an improved estimate when population homogeneity is violated.
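The Chao lower bound mimicked in that truncated approach can be sketched from frequency-of-frequencies counts; the counts below are synthetic, not the study's data:

```python
def chao_lower_bound(freq):
    """Chao lower-bound estimate of population size.
    freq[k] = number of units captured exactly k+1 times,
    so freq[0] is the singleton count f1 and freq[1] is f2."""
    n = sum(freq)                       # distinct units observed
    f1, f2 = freq[0], freq[1]
    if f2 == 0:
        return n + f1 * (f1 - 1) / 2.0  # bias-corrected variant
    return n + f1 * f1 / (2.0 * f2)

# e.g. 50 singletons, 20 doubletons, 10 units seen three times
print(chao_lower_bound([50, 20, 10]))   # → 80 + 2500/40 = 142.5
```

The estimator adds f1²/(2 f2) unseen units to the observed count; it is a lower bound precisely because heterogeneous capture probabilities can only push the true size higher.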
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ is used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, and the time to failure of a specific experimental unit may be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model via PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results for the data set indicated that the parametric Cox model was superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to the estimation of the survival function.
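The imputation step for interval-censored times can be sketched as below; the intervals and the follow-on exponential fit are illustrative assumptions, not the paper's breast cancer data or its Cox model:

```python
def impute_interval(left, right, how="midpoint"):
    """Impute a single failure time for an observation known
    only to lie in the interval (left, right]."""
    if how == "midpoint":
        return 0.5 * (left + right)
    if how == "left":
        return left
    if how == "right":
        return right
    raise ValueError(how)

# hypothetical interval-censored failure times
intervals = [(0.0, 2.0), (1.0, 3.0), (4.0, 6.0)]
times = [impute_interval(l, r) for l, r in intervals]

# a parametric step on the imputed times, e.g. exponential MLE
rate = len(times) / sum(times)
print(times, rate)          # → [1.0, 2.0, 5.0] 0.375
```

After imputation the data behave like exact failure times, so any standard parametric likelihood (here a simple exponential, as a stand-in for the Cox-type model) can be maximised directly.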
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas with zinc or uranium metal for determining the D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both the time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method, to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to that of similar methods. The data demonstrated that the Zn-reduction method can be replaced by the Pt-equilibration method when TEE is estimated using the "multi-point" technique. Furthermore, D equilibration time was significantly reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambie, F.W.; Yee, S.N.
The purpose of this and a previous project was to examine the feasibility of estimating intermediate grade uranium (0.01 to 0.05% U3O8) on the basis of existing, sparsely drilled holes. All data are from the Powder River Basin in Wyoming. DOE makes preliminary estimates of endowment by calculating an Average Area of Influence (AAI) based on densely drilled areas, multiplying that by the thickness of the mineralization, and then dividing by a tonnage factor. The resulting tonnage of ore is then multiplied by the average grade of the interval to obtain the estimate of U3O8 tonnage. Total endowment is the sum of these values over all mineralized intervals in all wells in the area. In regions where wells are densely drilled and approximately regularly spaced, this technique approaches the classical polygonal estimation technique used to estimate ore reserves and should be fairly reliable. The method is conservative because: (1) in sparsely drilled regions a large fraction of the area is not considered to contribute to endowment; (2) there is a bias created by the different distributions of point grades and mining block grades. A conservative approach may be justified for purposes of ore reserve estimation, where large investments may hinge on local forecasts. But for estimates of endowment over areas as large as 1° by 2° quadrangles, or the nation as a whole, errors in local predictions are not critical as long as they tend to cancel, and a less conservative estimation approach may be justified. One candidate, developed for this study and described here, is called the contoured thickness technique. A comparison of estimates based on the contoured thickness approach with DOE calculations for five areas of Wyoming roll-fronts in the Powder River Basin is presented. The sensitivity of the technique to well density is examined, and the question of predicting intermediate grade endowment from data on higher grades is discussed.
Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.
Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A
2011-11-01
Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. 
Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification of normal or pathological gait. Improved techniques to allow non-uniform scaling of generic models to more accurately reflect subject-specific bone geometries and anatomical reference frames may reduce differences between bone pose estimation techniques and allow for comparison across gait analysis platforms.
Rath, Hemamalini; Rath, Rachna; Mahapatra, Sandeep; Debta, Tribikram
2017-01-01
The age of an individual can be assessed by a plethora of widely available tooth-based techniques, among which radiological methods prevail. The Demirjian's technique of age assessment based on tooth development stages has been extensively investigated in different populations of the world. The present study assesses the applicability of Demirjian's modified 8-teeth technique in age estimation of a population of East India (Odisha), utilizing Acharya's Indian-specific cubic functions. One hundred and six pretreatment orthodontic radiographs of patients in an age group of 7-23 years, with representation from both genders, were assessed for eight left mandibular teeth and scored as per the Demirjian's 9-stage criteria for tooth development stages. Age was calculated on the basis of Acharya's Indian formula. Statistical analysis was performed to compare the estimated and actual age. All data were analyzed using SPSS 20.0 (SPSS Inc., Chicago, Illinois, USA) and the MS Excel package. The results revealed that the mean absolute error (MAE) in age estimation of the entire sample was 1.3 years, with 50% of the cases having an error rate within ± 1 year. The MAE in males and females (7-16 years) was 1.8 and 1.5, respectively. Likewise, the MAE in males and females (16.1-23 years) was 1.1 and 1.3, respectively. The low error rate in estimating age justifies the application of this modified technique and Acharya's Indian formulas in the present East Indian population.
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
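The check (pinball) loss that underlies QR can be illustrated directly: minimising it over a constant recovers the sample quantile. This is a toy sketch of the loss only, not the paper's kernel-based SNQR estimator:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss; its minimiser over constants is
    the tau-th sample quantile of y."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
y = rng.normal(size=2001)
grid = np.linspace(-3.0, 3.0, 6001)
losses = [pinball_loss(y, q, 0.9) for q in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(y, 0.9))   # the two values nearly coincide
```

Fitting several tau levels independently with this loss is what produces the crossing problem the paper addresses; the SNQR constraints on kernel coefficients are what rule those crossings out.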
1982-02-01
For these data elements, Initial Milestone 11 values were established as the Planning Estimate (PE), with the Development Estimate (DE) to be based ...development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. As this is a continuing program, the above...overseas bases), and continue development of improved forensic collection techniques for Naval Investigative Agents on ships and overseas bases. 4. (U) FY
Surface albedo from bidirectional reflectance
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Irons, J. R.; Daughtry, C. S. T.
1991-01-01
The validity of integrating over discrete wavelength bands is examined to estimate total shortwave bidirectional reflectance of vegetated and bare soil surfaces. Methods for estimating albedo from multiple angle, discrete wavelength band radiometer measurements are studied. These methods include a numerical integration technique and the integration of an empirically derived equation for bidirectional reflectance. It is concluded that shortwave albedos estimated through both techniques agree favorably with the independent pyranometer measurements. Absolute rms errors are found to be 0.5 percent or less for both grass sod and bare soil surfaces.
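The numerical integration technique mentioned above amounts to an irradiance-weighted integral over the discrete bands; the band centres, reflectances, and irradiance weights below are hypothetical stand-ins, not the study's radiometer data:

```python
import numpy as np

def band_integral(wl, f):
    """Trapezoid-rule integral of f sampled at the band centres wl."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl)))

def shortwave_albedo(wl, reflectance, irradiance):
    """Irradiance-weighted numerical integration of discrete-band
    reflectances to approximate broadband shortwave albedo."""
    return band_integral(wl, reflectance * irradiance) / band_integral(wl, irradiance)

wl = np.array([0.45, 0.55, 0.65, 0.85, 1.65])    # band centres, micrometres (hypothetical)
refl = np.array([0.05, 0.08, 0.10, 0.40, 0.25])  # vegetation-like band reflectances
irr = np.array([1.9, 1.9, 1.6, 1.0, 0.25])       # rough solar irradiance weights
alb = shortwave_albedo(wl, refl, irr)
print(round(alb, 3))                              # → 0.239
```

The weighting matters because a band's contribution to albedo scales with the solar energy it carries, which is why unweighted averaging of band reflectances would overweight the near-infrared.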
Quality assessment and control of finite element solutions
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Babuska, Ivo
1987-01-01
Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive improvement practical are identified.
Theoretical and simulated performance for a novel frequency estimation technique
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1993-01-01
A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset for digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNR's) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
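A delay-multiply-average estimator of this kind can be sketched in a few lines: delay by one sample, multiply by the conjugate, average, and read the offset from the phase. The Mth-power step stands in for the modulation-removing nonlinearity; the signal below is a clean synthetic carrier, not the paper's MPSK simulation:

```python
import numpy as np

def dma_frequency_estimate(z, fs, M=1):
    """Delay-multiply-average frequency-offset estimator.
    Raising to the M-th power strips M-PSK modulation (use M=1
    for an unmodulated carrier); the phase of the averaged
    one-sample product z[n] z*[n-1] gives the offset.
    Unambiguous for offsets below fs / (2 M)."""
    zm = z ** M
    r = np.mean(zm[1:] * np.conj(zm[:-1]))   # delay, multiply, average
    return np.angle(r) * fs / (2 * np.pi * M)

fs, f0 = 1000.0, 37.0
n = np.arange(1000)
z = np.exp(2j * np.pi * f0 * n / fs)         # clean complex carrier
print(dma_frequency_estimate(z, fs))          # close to 37.0
```

The averaging is what buys the near-CRLB variance at adequate SNR: each one-sample product is a noisy rotation by the same angle, and the complex mean suppresses the noise before the single phase extraction.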
NASA Technical Reports Server (NTRS)
Kotoda, K.; Nakagawa, S.; Kai, K.; Yoshino, M. M.; Takeda, K.; Seki, K.
1985-01-01
In a humid region like Japan, it seems that the radiation term in the energy balance equation plays a more important role for evapotranspiration than does the vapor pressure difference between the surface and the lower atmospheric boundary layer. A Priestley-Taylor type equation (equilibrium evaporation model) is used to estimate evapotranspiration. Net radiation, soil heat flux, and surface temperature data are obtained. Only temperature data obtained by remotely sensed techniques are used.
ESTIMATING CHLOROFORM BIOTRANSFORMATION IN F-344 RAT LIVER USING IN VITRO TECHNIQUES AND PHARMACOKINETIC MODELING
Linskey, C.F.1; Harrison, R.A.2; Zhao, G.3; Barton, H.A.; Lipscomb, J.C.4; and Evans, M.V.2; 1UNC, ESE, Chapel Hill, NC; 2USEPA, ORD, NHEERL, RTP, NC; 3 UN...
Regional distribution of forest height and biomass from multisensor data fusion
Yifan Yu; Sassan Saatch; Linda S. Heath; Elizabeth LaPoint; Ranga Myneni; Yuri Knyazikhin
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM...
Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques
NASA Technical Reports Server (NTRS)
Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.
1990-01-01
Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.
NASA Technical Reports Server (NTRS)
Schkolnik, Gerard S.
1993-01-01
The application of an adaptive real-time measurement-based performance optimization technique is being explored for a future flight research program. The key technical challenge of the approach is parameter identification, which uses a perturbation-search technique to identify changes in performance caused by forced oscillations of the controls. The controls on the NASA F-15 highly integrated digital electronic control (HIDEC) aircraft were perturbed using inlet cowl rotation steps at various subsonic and supersonic flight conditions to determine the effect on aircraft performance. The feasibility of the perturbation-search technique for identifying integrated airframe-propulsion system performance effects was successfully shown through flight experiments and postflight data analysis. Aircraft response and control data were analyzed postflight to identify gradients and to determine the minimum drag point. Changes in longitudinal acceleration as small as 0.004 g were measured, and absolute resolution was estimated to be 0.002 g or approximately 50 lbf of drag. Two techniques for identifying performance gradients were compared: a least-squares estimation algorithm and a modified maximum likelihood estimator algorithm. A complementary filter algorithm was used with the least squares estimator.
Strategies for Estimating Discrete Quantities.
ERIC Educational Resources Information Center
Crites, Terry W.
1993-01-01
Describes the benchmark and decomposition-recomposition estimation strategies and presents five techniques to develop students' estimation ability. Suggests situations involving quantities of candy and popcorn in which the teacher can model those strategies for the students. (MDH)
Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy
2013-03-01
Multiple third molar development registration techniques exist. Therefore the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique showed the highest R(2) (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but differences with other techniques were small in magnitude. The number of stages used in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
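The procedure described above, a log-linear fit for the nominal estimates followed by iterated Taylor-series corrections, can be sketched for a single-exponential model y = a·exp(b·t); the decay data below are synthetic, and the model form is a simplification of the general case:

```python
import numpy as np

def fit_decay(t, y, iters=15):
    """Least-squares fit of y = a * exp(b * t) by Taylor-series
    linearisation (Gauss-Newton), seeded with a log-linear fit
    for the initial nominal estimates."""
    b, loga = np.polyfit(t, np.log(y), 1)   # linear fit of log(y)
    a = np.exp(loga)
    for _ in range(iters):
        f = a * np.exp(b * t)
        # Jacobian columns: df/da and df/db at the current estimate
        J = np.column_stack([f / a, t * f])
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]    # correction step
    return a, b

t = np.linspace(0.0, 4.0, 40)
a, b = fit_decay(t, 3.0 * np.exp(-0.7 * t))  # noiseless synthetic decay
print(round(a, 3), round(b, 3))              # → 3.0 -0.7
```

Each pass solves the linearised normal equations for a correction vector, exactly the "correction matrix applied to the nominal estimate" structure of the abstract; a convergence test on the residual norm would serve as the predetermined stopping criterion.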
Nonlinear Spectral Mixture Modeling to Estimate Water-Ice Abundance of Martian Regolith
NASA Astrophysics Data System (ADS)
Gyalay, Szilard; Chu, Kathryn; Zeev Noe Dobrea, Eldar
2017-10-01
We present a novel technique to estimate the abundance of water-ice in the Martian permafrost using Phoenix Surface Stereo Imager multispectral data. In previous work, Cull et al. (2010) estimated the abundance of water-ice in trenches dug by the Mars Phoenix lander by modeling the spectra of the icy regolith using the radiative transfer methods described in Hapke (2008) with optical constants for Mauna Kea palagonite (Clancy et al., 1995) as a substitute for unknown Martian regolith optical constants. Our technique, which uses the radiative transfer methods described in Shkuratov et al. (1999), seeks to eliminate the uncertainty that stems from not knowing the composition of the Martian regolith by using observations of the Martian soil before and after the water-ice has sublimated away. We use observations of the desiccated regolith sample to estimate its complex index of refraction from its spectrum. This removes any a priori assumptions of Martian regolith composition, limiting our free parameters to the estimated real index of refraction of the dry regolith at one specific wavelength, ice grain size, and regolith porosity. We can then model mixtures of regolith and water-ice, fitting to the original icy spectrum to estimate the ice abundance. To constrain the uncertainties in this technique, we performed laboratory measurements of the spectra of known mixtures of water-ice and dry soils as well as those of soils after desiccation with controlled viewing geometries. Finally, we applied the technique to Phoenix Surface Stereo Imager observations and estimated water-ice abundances consistent with pore-fill in the near-surface ice. This abundance is consistent with atmospheric diffusion, which has implications to our understanding of the history of water-ice on Mars and the role of the regolith at high latitudes as a reservoir of atmospheric H2O.
Techniques for estimating flood-peak discharges from urban basins in Missouri
Becker, L.D.
1986-01-01
Techniques are defined for estimating the magnitude and frequency of future flood peak discharges of rainfall-induced runoff from small urban basins in Missouri. These techniques were developed from an initial analysis of flood records of 96 gaged sites in Missouri and adjacent states. Final regression equations are based on a balanced, representative sampling of 37 gaged sites in Missouri. This sample included 9 statewide urban study sites, 18 urban sites in St. Louis County, and 10 predominantly rural sites statewide. Short-term records were extended on the basis of long-term climatic records and use of a rainfall-runoff model. Linear least-squares regression analyses were used with log-transformed variables to relate flood magnitudes of selected recurrence intervals (dependent variables) to selected drainage basin indexes (independent variables). For gaged urban study sites within the State, the flood peak estimates are from the frequency curves defined from the synthesized long-term discharge records. Flood frequency estimates are made for ungaged sites by using regression equations that require determination of the drainage basin size and either the percentage of impervious area or a basin development factor. Alternative sets of equations are given for the 2-, 5-, 10-, 25-, 50-, and 100-yr recurrence interval floods. The average standard errors of estimate range from about 33% for the 2-yr flood to 26% for the 100-yr flood. The techniques for estimation are applicable to flood flows that are not significantly affected by storage caused by manmade activities. Flood peak discharge estimating equations are considered applicable for sites on basins draining approximately 0.25 to 40 sq mi. (Author's abstract)
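The log-transformed least-squares step behind such regional equations can be sketched as a log-log power-law fit; the basin areas and discharges below are synthetic, and the recovered coefficients are illustrative, not the report's:

```python
import numpy as np

def fit_flood_regression(area, q_peak):
    """Log-log least-squares fit of Q = c * A^b, the functional
    form of regional flood-frequency equations (one equation per
    recurrence interval in practice)."""
    b, logc = np.polyfit(np.log10(area), np.log10(q_peak), 1)
    return 10.0 ** logc, b

# synthetic basins following Q = 250 * A^0.75 (areas in sq mi)
A = np.array([0.5, 1.0, 4.0, 10.0, 25.0, 40.0])
Q = 250.0 * A ** 0.75
c, b = fit_flood_regression(A, Q)
print(round(c, 1), round(b, 3))    # → 250.0 0.75
```

A real application would add the second predictor (impervious-area percentage or basin development factor) as another log-transformed column in a multiple regression, which is a one-line extension via np.linalg.lstsq.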
Khan, I.; Hawlader, Sophie Mohammad Delwer Hossain; Arifeen, Shams El; Moore, Sophie; Hills, Andrew P.; Wells, Jonathan C.; Persson, Lars-Åke; Kabir, Iqbal
2012-01-01
The aim of this study was to investigate the validity of the Tanita TBF 300A leg-to-leg bioimpedance analyzer for estimating fat-free mass (FFM) in Bangladeshi children aged 4-10 years and to develop novel prediction equations for use in this population, using deuterium dilution as the reference method. Two hundred Bangladeshi children were enrolled. The isotope dilution technique with deuterium oxide was used for estimation of total body water (TBW). FFM estimated by Tanita was compared with results of deuterium oxide dilution technique. Novel prediction equations were created for estimating FFM, using linear regression models, fitting child's height and impedance as predictors. There was a significant difference in FFM and percentage of body fat (BF%) between methods (p<0.01), Tanita underestimating TBW in boys (p=0.001) and underestimating BF% in girls (p<0.001). A basic linear regression model with height and impedance explained 83% of the variance in FFM estimated by deuterium oxide dilution technique. The best-fit equation to predict FFM from linear regression modelling was achieved by adding weight, sex, and age to the basic model, bringing the adjusted R2 to 89% (standard error=0.90, p<0.001). These data suggest Tanita analyzer may be a valid field-assessment technique in Bangladeshi children when using population-specific prediction equations, such as the ones developed here. PMID:23082630
NASA Technical Reports Server (NTRS)
Scaife, Bradley James
1999-01-01
In any satellite communication, the Doppler shift associated with the satellite's position and velocity must be calculated in order to determine the carrier frequency. If the satellite state vector is unknown then some estimate must be formed of the Doppler-shifted carrier frequency. One elementary technique is to examine the signal spectrum and base the estimate on the dominant spectral component. If, however, the carrier is spread (as in most satellite communications) this technique may fail unless the chip rate-to-data rate ratio (processing gain) associated with the carrier is small. In this case, there may be enough spectral energy to allow peak detection against a noise background. In this thesis, we present a method to estimate the frequency (without knowledge of the Doppler shift) of a spread-spectrum carrier assuming a small processing gain and binary-phase shift keying (BPSK) modulation. Our method relies on an averaged discrete Fourier transform along with peak detection on spectral match filtered data. We provide theory and simulation results indicating the accuracy of this method. In addition, we will describe an all-digital hardware design based around a Motorola DSP56303 and high-speed A/D which implements this technique in real-time. The hardware design is to be used in NMSU's implementation of NASA's demand assignment, multiple access (DAMA) service.
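The averaged-DFT-plus-peak-detection core of that scheme can be sketched as follows; the tone, sample rate, and segment length are arbitrary choices for illustration, and the spectral match filtering and DSP56303 hardware path are omitted:

```python
import numpy as np

def averaged_dft_peak(x, fs, seg_len=256):
    """Average the magnitude-squared spectra of consecutive
    segments (periodogram averaging), then estimate the carrier
    frequency from the dominant bin."""
    n_seg = len(x) // seg_len
    segs = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    spec = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0)
    k = int(np.argmax(spec[:seg_len // 2]))   # positive bins only
    return k * fs / seg_len                   # bin index to Hz

fs = 8000.0
t = np.arange(4096) / fs
rng = np.random.default_rng(1)
x = np.cos(2 * np.pi * 1000.0 * t) + 0.1 * rng.normal(size=4096)
print(averaged_dft_peak(x, fs))               # close to 1000.0
```

Averaging several short spectra rather than taking one long DFT trades frequency resolution for variance reduction, which is what lets the peak stand out against noise when the processing gain is small.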
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
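Two of the estimators listed above, regressing phase on a quadratic and taking the simple mean of the first difference of frequency, can be sketched side by side; the drift value and noiseless data below are synthetic, chosen so both estimators recover the same rate:

```python
import numpy as np

def drift_from_phase(t, phase):
    """Regress phase on a quadratic; the drift rate is twice
    the quadratic coefficient."""
    c2, c1, c0 = np.polyfit(t, phase, 2)
    return 2.0 * c2

def drift_from_freq_diff(freq, tau):
    """Simple mean of the first difference of fractional frequency."""
    return np.mean(np.diff(freq)) / tau

tau = 1.0                              # sample interval (days)
t = np.arange(100.0) * tau
D = 1e-10                              # drift per day (synthetic)
phase = 0.5 * D * t ** 2 + 3e-9 * t    # quadratic phase model
freq = D * t + 3e-9                    # its time derivative
print(drift_from_phase(t, phase))      # ≈ 1e-10
print(drift_from_freq_diff(freq, tau)) # ≈ 1e-10
```

On noiseless data the estimators agree; the abstract's point is that on real oscillator noise their confidence intervals differ sharply, because correlated residuals violate the independence assumption baked into the standard analysis of variance.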
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R.
2012-01-01
This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher than second order systems with a non-parametrical approach. The technique proposed here is very impervious to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases. PMID:22448233
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km2) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates. 
Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
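For reference, the mean field bias adjustment used as a comparison baseline reduces to a single scaling factor applied to the whole radar field; the rainfall values here are invented:

```python
import numpy as np

def mean_field_bias_adjust(radar_field, radar_at_gauges, gauge_obs):
    """Classic gauge-based mean-field-bias (MFB) adjustment: scale the
    whole radar field by the ratio of gauge totals to radar totals."""
    bias = np.sum(gauge_obs) / np.sum(radar_at_gauges)
    return bias * radar_field

radar = np.array([[2.0, 4.0], [1.0, 3.0]])     # radar rainfall field, mm
radar_pix = np.array([2.0, 4.0])               # radar pixels at gauge sites
gauges = np.array([3.0, 5.0])                  # co-located gauge totals, mm
adj = mean_field_bias_adjust(radar, radar_pix, gauges)
```

Because every pixel is multiplied by the same factor, MFB preserves the spatial pattern of the radar field, including any smoothing of the singularities that the proposed singularity-sensitive merging is designed to retain.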
A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2001-01-01
In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
Remote sensing investigations of wetland biomass and productivity for global biosystems research
NASA Technical Reports Server (NTRS)
Hardisky, M.; Klemas, V.
1983-01-01
Monitoring biomass of wetlands ecosystems can provide information on net primary production and on the chemical and physical status of wetland soils relative to anaerobic microbial transformation of key elements. Multispectral remote sensing techniques successfully estimated macrophytic biomass in wetlands systems. Regression models developed from ground spectral data for predicting Spartina alterniflora biomass over an entire growing season include seasonal variations in biomass density and illumination intensity. An independent set of biomass and spectral data were collected and the standing crop biomass and net primary productivity were estimated. The improved spatial, radiometric and spectral resolution of the LANDSAT-4 Thematic Mapper over the LANDSAT MSS can greatly enhance multispectral techniques for estimating wetlands biomass over large areas. These techniques can provide the biomass data necessary for global ecology studies.
Application of cokriging techniques for the estimation of hail size
NASA Astrophysics Data System (ADS)
Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier
2018-01-01
There are primarily two ways of estimating hail size: the first is the direct interpolation of point observations, and the second is the transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations as regards generating the resultant map of hail damage. This paper presents a new methodology that combines the above mentioned techniques in an attempt to minimise the limitations and take advantage of the benefits of interpolation and the use of remote sensing data. The methodology was tested for several episodes with good results being obtained for the estimation of hail size at practically all the points analysed. The study area presents a large database of hail episodes, and for this reason, it constitutes an optimal test bench.
Channel Estimation for Filter Bank Multicarrier Systems in Low SNR Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driggs, Jonathan; Sibbett, Taylor; Moradi, Hussein
Channel estimation techniques are crucial for reliable communications. This paper is concerned with channel estimation in a filter bank multicarrier spread spectrum (FBMC-SS) system. We explore two channel estimator options: (i) a method that makes use of a periodic preamble and mimics the channel estimation techniques that are widely used in OFDM-based systems; and (ii) a method that stays within the traditional realm of filter bank signal processing. For the case where the channel noise is white, both methods are analyzed in detail and their performance is compared against their respective Cramer-Rao Lower Bounds (CRLB). Advantages and disadvantages of the two methods under different channel conditions are given to provide insight to the reader as to when one will outperform the other.
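The OFDM-style preamble estimator (option i) reduces to a per-bin least-squares division in the frequency domain. A noiseless numpy sketch, with an invented 3-tap channel; the training sequence is defined with unit-magnitude frequency bins so every bin is safely invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
P = np.exp(2j * np.pi * rng.random(N))     # unit-magnitude training bins
preamble = np.fft.ifft(P)                  # periodic time-domain preamble
h = np.array([0.8, 0.4, 0.2])              # example multipath channel taps

# A periodic preamble makes the channel act as a circular convolution
rx = np.fft.ifft(np.fft.fft(preamble) * np.fft.fft(h, N))

# Frequency-domain least-squares estimate, one division per bin
H_hat = np.fft.fft(rx) / P
h_hat = np.fft.ifft(H_hat).real[:3]        # recovered impulse response
```

With channel noise present, each bin estimate would carry noise scaled by 1/|P[k]|, which is why constant-magnitude training bins are the usual choice in OFDM practice.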
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
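The resampling part of the procedure is a standard percentile bootstrap. A self-contained sketch; the decay values below are invented stand-ins for normalized inactivation slopes, not the poliovirus data:

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for any statistic:
    resample with replacement, recompute the statistic, take percentiles."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

decay = np.array([-0.10, -0.12, -0.09, -0.11, -0.13, -0.10, -0.12, -0.08])
lo, hi = bootstrap_ci(decay, np.mean)
```

In the paper's setting, stat would be the nonlinear fit returning a drift coefficient rather than a simple mean, but the percentile-interval mechanics are the same.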
Estimating numbers of greater prairie-chickens using mark-resight techniques
Clifton, A.M.; Krementz, D.G.
2006-01-01
Current monitoring efforts for greater prairie-chicken (Tympanuchus cupido pinnatus) populations indicate that populations are declining across their range. Monitoring the population status of greater prairie-chickens is based on traditional lek surveys (TLS) that provide an index without considering detectability. Estimators, such as the immigration-emigration joint maximum-likelihood estimator from a hypergeometric distribution (IEJHE), can account for detectability and provide reliable population estimates based on resightings. We evaluated the use of mark-resight methods using radiotelemetry to estimate population size and density of greater prairie-chickens on 2 sites at a tallgrass prairie in the Flint Hills of Kansas, USA. We used average distances traveled from lek of capture to estimate density. Population estimates and confidence intervals at the 2 sites were 54 (CI 50-59) on 52.9 km2 and 87 (CI 82-94) on 73.6 km2. The TLS performed at the same sites resulted in population ranges of 7-34 and 36-63 and always produced a lower population index than the mark-resight population estimate with a larger range. Mark-resight simulations with varying male:female ratios of marks indicated that this ratio was important in designing a population study on prairie-chickens. Confidence intervals for estimates when no marks were placed on females at the 2 sites (CI 46-50, 76-84) did not overlap confidence intervals when 40% of marks were placed on females (CI 54-64, 91-109). Population estimates derived using this mark-resight technique were apparently more accurate than traditional methods and would be more effective in detecting changes in prairie-chicken populations. Our technique could improve prairie-chicken management by providing wildlife biologists and land managers with a tool to estimate the population size and trends of lekking bird species, such as greater prairie-chickens.
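The IEJHE estimator itself is involved, but the flavor of mark-resight estimation can be shown with the simpler Chapman (bias-corrected Lincoln-Petersen) estimator; the counts below are hypothetical, not the Kansas data:

```python
def chapman_estimate(n_marked, n_sighted, n_marked_sighted):
    """Chapman's bias-corrected Lincoln-Petersen estimator, a basic
    mark-resight population estimate (the paper's IEJHE additionally
    models immigration and emigration between surveys)."""
    return ((n_marked + 1) * (n_sighted + 1)) / (n_marked_sighted + 1) - 1

# e.g. 20 radio-marked birds; 30 birds seen in a resight survey, 12 marked
N_hat = chapman_estimate(20, 30, 12)
```

The estimate grows as the marked fraction of resightings shrinks, which is why the abstract's finding that the male:female marking ratio matters translates directly into the width of the resulting confidence intervals.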
Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten
2018-01-01
Structural equation modeling using partial least squares (PLS-SEM) has become a main-stream modeling approach in various disciplines. Nevertheless, prior literature still lacks a practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis solely allow to assess differences between parameters that are estimated for different subpopulations, the study at hand introduces a technique that allows to also assess whether two parameter estimates that are derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we particularly refer to a reduced version of the well-established technology acceptance model.
Trends in shuttle entry heating from the correction of flight test maneuvers
NASA Technical Reports Server (NTRS)
Hodge, J. K.
1983-01-01
A new technique was developed to systematically expand the aerothermodynamic envelope of the Space Shuttle Thermal Protection System (TPS). The technique required transient flight test maneuvers, which were performed on the second, fourth, and fifth Shuttle reentries. Kalman filtering and parameter estimation were used for the reduction of embedded thermocouple data to obtain best estimates of aerothermal parameters. Difficulties in reducing the data were overcome or minimized. Thermal parameters were estimated to minimize uncertainties, and heating rate parameters were estimated to correlate with angle of attack, sideslip, deflection angle, and Reynolds number changes. Heating trends from the maneuvers allow for the rapid and safe envelope expansion needed for future missions, except in some local areas.
NASA Astrophysics Data System (ADS)
Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki
We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By analyzing the geometrical relationships between image pixels and their intersection volumes in the real world quantitatively, a foreground image directly indicates the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it can estimate the number of people in an a priori manner, so it needs no ground-truth data unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.
Transit Project Planning Guidance : Estimation of Transit Supply Parameters
DOT National Transportation Integrated Search
1984-04-01
This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping constant the arterial resistance. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
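A minimal sketch of the Monte Carlo idea, using a two-element Windkessel for brevity (the paper uses three elements and fixes arterial resistance from measured flow); all numbers are synthetic:

```python
import numpy as np

def windkessel_pressure(R, C, Q, dt, P0=80.0):
    """Euler integration of a two-element Windkessel: C dP/dt = Q - P/R."""
    P = np.empty(Q.size)
    P[0] = P0
    for k in range(1, Q.size):
        P[k] = P[k - 1] + dt * (Q[k - 1] / C - P[k - 1] / (R * C))
    return P

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0, 2.0, dt)
Q = np.maximum(np.sin(2 * np.pi * t), 0.0) * 400.0   # pulsatile inflow
P_meas = windkessel_pressure(1.0, 1.5, Q, dt)        # synthetic "measured" data

# Monte Carlo search: draw random (R, C) pairs, keep the lowest-error pair
best, best_err = None, np.inf
for _ in range(3000):
    R, C = rng.uniform(0.2, 3.0), rng.uniform(0.2, 3.0)
    err = np.mean((windkessel_pressure(R, C, Q, dt) - P_meas) ** 2)
    if err < best_err:
        best, best_err = (R, C), err
```

The random search needs no gradients and no linearization of the model, which is the computational-simplicity argument the abstract makes for the Monte Carlo approach.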
Preliminary evaluation of spectral, normal and meteorological crop stage estimation approaches
NASA Technical Reports Server (NTRS)
Cate, R. B.; Artley, J. A.; Doraiswamy, P. C.; Hodges, T.; Kinsler, M. C.; Phinney, D. E.; Sestak, M. L. (Principal Investigator)
1980-01-01
Several of the projects in the AgRISTARS program require crop phenology information, including classification, acreage and yield estimation, and detection of episodal events. This study evaluates several crop calendar estimation techniques for their potential use in the program. The techniques, although generic in approach, were developed and tested on spring wheat data collected in 1978. There are three basic approaches to crop stage estimation: historical averages for an area (normal crop calendars), agrometeorological modeling of known crop-weather relationships (agrometeorological, or agromet, crop calendars), and interpretation of spectral signatures (spectral crop calendars). In all, 10 combinations of planting and biostage estimation models were evaluated. Dates of stage occurrence are estimated with biases between -4 and +4 days, while root mean square errors range from 10 to 15 days. Results are inconclusive as to the superiority of any of the models, and further evaluation of the models with the 1979 data set is recommended.
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Ar+ and CuBr laser-assisted chemical bleaching of teeth: estimation of whiteness degree
NASA Astrophysics Data System (ADS)
Dimitrov, S.; Todorovska, Roumyana; Gizbrecht, Alexander I.; Raychev, L.; Petrov, Lyubomir P.
2003-11-01
This work presents the adaptation of objective methods for color determination, aimed at developing techniques for estimating the whiteness degree of human teeth that are sufficiently handy for common use in clinical practice. To test and illustrate the techniques, standards of teeth colors were used, as well as model and naturally discolored human teeth treated with two chemical bleaching compositions, each activated by three light sources: Ar+ and CuBr lasers, and a standard halogen photopolymerization lamp. Typical reflection and fluorescence spectra of some samples are presented; the sample colors were estimated by standard computer processing in RGB and B coordinates. The results of the applied spectral and colorimetric techniques are in good agreement with those of the standard computer processing of the corresponding digital photographs and comply with the visually estimated degree of teeth whiteness judged according to the standard reference scale commonly used in aesthetic dentistry.
Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †
Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco
2016-01-01
Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of estimation of real-time kinetics information such as forces and torques during the motion capture. In this paper, we present a novel approach for a human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker based methods, inertial and contact force sensor modalities and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present experimental analysis on subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetics quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394
Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.
C Zapico, Sara; Ubelaker, Douglas H
2013-03-01
Age-at-death estimation is one of the main challenges in forensic sciences since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. However, in adults this methodology is less accurate and requires population specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces different biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate the age in adults based on these changes. Chemical approaches imply modifications in molecules or accumulation of some products. Molecular biology approaches analyze the modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take into account the other techniques because the forensic context and the human remains available will determine the possibility to apply one or another methodology. Copyright © 2013 Elsevier B.V. All rights reserved.
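The racemization approach mentioned above reduces, in practice, to a linear calibration between a transform of the measured D/L aspartate ratio and known age, which is then inverted for unknown remains. The calibration values below are invented for illustration and are not real dental reference data:

```python
import numpy as np

# Hypothetical reference teeth of known age with measured D/L aspartate ratios
ages = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
dl = np.array([0.025, 0.035, 0.045, 0.055, 0.065])

# The usual linearizing transform: ln((1 + D/L) / (1 - D/L)) vs. age
ratio = np.log((1 + dl) / (1 - dl))
slope, intercept = np.polyfit(ages, ratio, 1)

def estimate_age(dl_unknown):
    """Invert the linear calibration to estimate age-at-death."""
    return (np.log((1 + dl_unknown) / (1 - dl_unknown)) - intercept) / slope

age = estimate_age(0.050)
```

As the review stresses, such calibrations are tissue- and protocol-specific, so a real application would rebuild the reference line from appropriate known-age samples rather than reuse coefficients.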
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
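The core of such a convergence criterion is the standard relation between sample count and the confidence-interval half-width on a mean. A sketch with made-up velocity fluctuation numbers; in the framework described, sigma would come from the measured fluctuations plus the per-vector PIV uncertainty estimates:

```python
import math

def samples_for_converged_mean(sigma_u, target_ci, z=1.96):
    """Independent samples needed so the 95% confidence half-width on the
    mean velocity falls below target_ci: N >= (z * sigma_u / target_ci)**2."""
    return math.ceil((z * sigma_u / target_ci) ** 2)

# e.g. 0.5 m/s velocity fluctuations, want the mean known to within 0.05 m/s
n = samples_for_converged_mean(sigma_u=0.5, target_ci=0.05)
```

Correlated snapshots (sampling faster than the flow decorrelates) reduce the effective N, so a practical criterion would divide by an integral-timescale factor before comparing against this bound.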
NASA Astrophysics Data System (ADS)
Liu, Di; Mishra, Ashok K.; Yu, Zhongbo
2016-07-01
This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved to be efficient in improving SVM estimates with observed data either at each time step or at flexible time steps. The EnKF technique can reach its maximum efficiency when the updating ensemble size approaches a certain threshold. It was observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
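The EnKF analysis step that folds an observation into the model state can be sketched for a scalar soil moisture state in the perturbed-observation form; the ensemble size, moisture values, and error levels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens = 100
x_f = 0.25 + 0.05 * rng.standard_normal(n_ens)   # forecast ensemble (m3/m3)
obs, obs_std = 0.30, 0.02                        # observation and its error

# EnKF analysis step for a directly observed scalar state (H = 1)
P_f = np.var(x_f, ddof=1)                        # forecast error variance
K = P_f / (P_f + obs_std ** 2)                   # Kalman gain
obs_ens = obs + obs_std * rng.standard_normal(n_ens)   # perturbed observations
x_a = x_f + K * (obs_ens - x_f)                  # analysis ensemble
```

The analysis ensemble is pulled toward the observation and its spread shrinks, and in the dual scheme described in the paper an analogous update is applied to the model parameters alongside the state.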
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed technique for enhanced data reduction provides an improved procedure that allows least squares minimization to become possible between data sets with an unequal number of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
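The idea of least-squares fitting integrated acceleration to sparse displacement readings can be sketched as follows, with synthetic data standing in for the CETA accelerometer and Hall-effect sensor records:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 5.0, dt)
a = 0.2 * np.sin(0.8 * t)                      # measured acceleration

# Double-integrate the acceleration (trapezoidal); v0 and x0 still unknown
v_part = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2) * dt))
x_part = np.concatenate(([0.0], np.cumsum((v_part[1:] + v_part[:-1]) / 2) * dt))

# Sparse displacement samples (synthesized here from known x0, v0)
x0_true, v0_true = 0.5, 1.2
idx = np.arange(0, t.size, 50)
d_obs = x0_true + v0_true * t[idx] + x_part[idx]

# Least-squares fit of the initial conditions: d ~ x0 + v0 * t + x_part
A = np.column_stack([np.ones(idx.size), t[idx]])
x0_hat, v0_hat = np.linalg.lstsq(A, d_obs - x_part[idx], rcond=None)[0]

v = v0_hat + v_part                            # reconstructed velocity profile
```

Because the unknown initial conditions enter linearly, the fit is a small linear least-squares problem even though the displacement observations and the integrated acceleration have different numbers of points.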
NASA Astrophysics Data System (ADS)
Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej
2013-07-01
This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on the pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigation of the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways to reduce it are also discussed. The results of this study give insight into the process kinetics; not only are they helpful for better process understanding, but they may also serve as parameters in phenomenological model development for predictive modelling of etching for ultimate CMOS topography simulation.
Modeling, simulation, and estimation of optical turbulence
NASA Astrophysics Data System (ADS)
Formwalt, Byron Paul
This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) value of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
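ACLDR itself fits an autoregressive signal model via the Levinson-Durbin recursion; the simpler nearest-neighbour autocorrelation baseline it is compared against can be sketched directly. The image and noise level below are synthetic:

```python
import numpy as np

def autocorr_snr(img):
    """Classical autocorrelation SNR estimate for an image with additive
    white noise: the zero-lag autocorrelation holds signal + noise power,
    while the lag-one value approximates the (spatially correlated)
    signal power alone."""
    x = img - img.mean()
    r0 = np.mean(x * x)                       # signal power + noise power
    r1 = np.mean(x[:, :-1] * x[:, 1:])        # horizontal lag-one estimate
    return r1 / (r0 - r1)

rng = np.random.default_rng(0)
row = np.sin(np.linspace(0, 4 * np.pi, 256))
clean = np.tile(row, (256, 1))                # smooth, strongly correlated scene
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
snr = autocorr_snr(noisy)
```

For this synthetic image the true SNR is 0.5 / 0.04 = 12.5; the lag-one approximation slightly underestimates the signal power on rapidly varying scenes, which is the gap the interpolation-based and ACLDR estimators aim to close.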
Leurs, G; O'Connell, C P; Andreotti, S; Rutzen, M; Vonk Noordegraaf, H
2015-06-01
This study employed a non-lethal measurement tool, which combined an existing photo-identification technique with a surface, parallel laser photogrammetry technique, to accurately estimate the size of free-ranging white sharks Carcharodon carcharias. Findings confirmed the hypothesis that surface laser photogrammetry is more accurate than crew-based estimations that utilized a shark cage of known size as a reference tool. Furthermore, field implementation also revealed that the photographer's angle of reference and the shark's body curvature could greatly influence technique accuracy, exposing two limitations. The findings showed minor inconsistencies with previous studies that examined pre-caudal to total length ratios of dead specimens. This study suggests that surface laser photogrammetry can successfully increase length estimation accuracy and illustrates the potential utility of this technique for growth and stock assessments on free-ranging marine organisms, which will lead to an improvement of the adaptive management of the species. © 2015 The Fisheries Society of the British Isles.
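The photogrammetric scaling itself is a one-line proportion between the known laser spacing and its apparent size in the image; the spacing and pixel counts here are invented:

```python
def laser_photogrammetry_length(length_px, laser_dots_px, laser_spacing_m=0.10):
    """Convert a pixel measurement to metres using two parallel laser dots
    of known real-world spacing projected onto the subject."""
    metres_per_px = laser_spacing_m / laser_dots_px
    return length_px * metres_per_px

# Dots 10 cm apart appear 25 px apart; the shark spans 800 px in the frame
L = laser_photogrammetry_length(800, 25)
```

The limitations the study identifies map directly onto this formula: an oblique camera angle or body curvature changes the effective metres-per-pixel along the animal, biasing the length estimate.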
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495
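For context, the classical range model that such RSSI-based learners approximate is the log-distance path-loss relation; the reference power and path-loss exponent below are typical indoor values, not figures from the paper:

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    =>  d = d0 * 10 ** ((RSSI(d0) - RSSI) / (10 * n))."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * n))

d = rssi_to_distance(-67.0)   # estimated range in metres
```

Multipath in a velodrome makes the effective exponent n vary with position, which is precisely why the paper trains ANFIS/ANN models on measured RSSI instead of fixing a single closed-form curve.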
Eigenvector of gravity gradient tensor for estimating fault dips considering fault type
NASA Astrophysics Data System (ADS)
Kusumoto, Shigekazu
2017-12-01
The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimates of the area over which disaster may occur. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and investigate its properties through numerical simulations. The simulations show that the maximum eigenvector of the tensor points to the high-density causative body, and that the dip of the maximum eigenvector closely follows the dip of a normal fault. They also show that the minimum eigenvector points to the low-density causative body and that its dip closely follows the dip of a reverse fault. In other words, which eigenvector of the gravity gradient tensor should be used for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
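The eigenvector idea can be illustrated in two dimensions: for a symmetric profile tensor with components g_xx, g_xz, g_zz, the dip follows from the orientation of the maximum (normal fault) or minimum (reverse fault) eigenvector. A small sketch of that idea with a synthetic tensor, not the author's full workflow:

```python
import numpy as np

def dip_from_tensor(gxx, gxz, gzz, use_max=True):
    """Dip (degrees from horizontal) of the maximum or minimum eigenvector
    of the 2-D gravity gradient tensor [[gxx, gxz], [gxz, gzz]]."""
    T = np.array([[gxx, gxz], [gxz, gzz]])
    vals, vecs = np.linalg.eigh(T)         # eigh returns ascending eigenvalues
    v = vecs[:, -1] if use_max else vecs[:, 0]
    # Fold the direction into the first quadrant before taking the angle.
    return float(np.degrees(np.arctan2(abs(v[1]), abs(v[0]))))

dip_deg = dip_from_tensor(1.0, 0.0, 3.0)   # synthetic tensor -> 90 deg
```

For a real profile one would evaluate this at each station and, per the study, pick the maximum or minimum eigenvector according to the suspected fault type.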
Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements
NASA Astrophysics Data System (ADS)
Jakub, Thomas D.
Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution set by the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS), which enables more precise ocean current estimates, the option of finer resolution, and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed, along with a corresponding error and sensitivity analysis to help identify the conditions under which errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high-frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long, compared to the usual 12- to 24-hour ship tracks. A Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements combined with ship drift can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
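The core ship-drift calculation is the difference between the displacement observed from successive AIS position fixes and the displacement predicted by dead reckoning from the ship's heading and speed through the water. A simplified flat-earth sketch (positions in metres; a real implementation would work in geographic coordinates):

```python
import numpy as np

def drift_current(p0, p1, heading_deg, stw_mps, dt_s):
    """Estimate the ocean current as (observed displacement between two AIS
    fixes p0 -> p1) minus (displacement dead-reckoned from heading and speed
    through the water), divided by the elapsed time."""
    heading = np.radians(heading_deg)            # nautical: 0 deg = north
    dead_reckon = stw_mps * dt_s * np.array([np.sin(heading), np.cos(heading)])
    observed = np.asarray(p1, float) - np.asarray(p0, float)
    return (observed - dead_reckon) / dt_s       # (east, north) current, m/s

# Ship steams due north at 5 m/s for 600 s but is set 120 m to the east.
cur = drift_current((0.0, 0.0), (120.0, 3000.0), 0.0, 5.0, 600.0)
# -> 0.2 m/s eastward current, zero northward component
```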
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the usable frequency range of the spectral estimates can be extended up to about five times the mean sampling rate.
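The correlation-based slotting technique mentioned above bins every sample-pair product by its time lag and averages each bin, which works even when samples arrive at random times. A compact Python sketch of the basic estimator (the original codes were FORTRAN; the prefiltering refinements are omitted):

```python
import numpy as np

def slotted_autocorrelation(t, u, max_lag, n_slots):
    """Slotted autocorrelation for randomly sampled data: every product
    u(ti)*u(tj) is accumulated into a slot chosen by the lag tj - ti,
    then each slot is averaged and normalized by the zero-lag slot."""
    t = np.asarray(t, float)
    u = np.asarray(u, float)
    u = u - u.mean()                       # work with velocity fluctuations
    dt_slot = max_lag / n_slots
    sums = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t)):
        for j in range(i, len(t)):         # t assumed sorted in time
            lag = t[j] - t[i]
            if lag >= max_lag:
                break
            k = int(lag / dt_slot)
            sums[k] += u[i] * u[j]
            counts[k] += 1
    acf = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return acf / acf[0]

# Randomly sampled cosine: the slotted ACF should recover cos(2*pi*lag).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 400))
u = np.cos(2 * np.pi * t)
acf = slotted_autocorrelation(t, u, max_lag=1.0, n_slots=20)
```

The power spectral density then follows from a cosine transform of the slotted autocorrelation.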
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.
Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois
2018-03-01
Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
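As an illustration of the Monte Carlo approach the paper advocates, consider biological dosimetry with a linear-quadratic dicentric dose-response Y = c + alpha*D + beta*D^2: resampling both the observed count (Poisson) and the calibration coefficients (Gaussian) propagates both sources of uncertainty into a dose interval. All numbers below are hypothetical and for illustration only, not a calibrated curve:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical linear-quadratic calibration: yield Y = c + alpha*D + beta*D^2.
c, alpha, beta = 0.001, 0.03, 0.06     # dicentrics per cell (illustrative)
alpha_se, beta_se = 0.005, 0.008       # assumed calibration standard errors
n_cells, n_dic = 500, 40               # scored cells and observed dicentrics

def invert_dose(y, a, b):
    """Positive root of b*D^2 + a*D + (c - y) = 0."""
    return (-a + np.sqrt(a * a + 4.0 * b * (y - c))) / (2.0 * b)

n_trials = 20000
doses = np.empty(n_trials)
for k in range(n_trials):
    y = rng.poisson(n_dic) / n_cells   # resample the observed yield
    a = rng.normal(alpha, alpha_se)    # resample the calibration coefficients
    b = rng.normal(beta, beta_se)
    doses[k] = invert_dose(y, a, b)

dose_hat = float(np.median(doses))
ci_lo, ci_hi = np.percentile(doses, [2.5, 97.5])   # 95% uncertainty interval
```

The spread of the resampled doses gives the uncertainty interval directly, without Gaussian error-propagation approximations.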
Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.
Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison
2016-11-01
MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
77 FR 15004 - Updating of Employer Identification Numbers
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... the IRS, including whether the information will have practical utility; The accuracy of the estimated... techniques or other forms of information technology; and Estimates of capital or start-up costs and costs of... respondents are persons that have an EIN. Estimated total annual reporting burden: 403,177 hours. Estimated...
Hybrid estimation of complex systems.
Hofbaur, Michael W; Williams, Brian C
2004-10-01
Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman filter. Multiple-model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe, manner.
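A multiple-model scheme in its simplest form runs one Kalman filter per candidate mode and updates a discrete mode belief from each filter's innovation likelihood. The toy sketch below (a static scalar state whose sensor noise degrades mid-run; all parameters invented) shows the mechanism MM estimation builds on, not the authors' focused-search algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two sensor modes for the same static state x = 1.0:
# mode 0 = nominal (noise var 0.1), mode 1 = degraded (noise var 2.0).
rs = [0.1, 2.0]
q = 0.01                                   # small process noise
meas = [1.0 + rng.normal(0, np.sqrt(0.1 if k < 50 else 2.0))
        for k in range(100)]               # sensor degrades at step 50

# Bank of one Kalman filter per mode, plus a discrete mode belief.
xs = np.zeros(2)
Ps = np.ones(2)
probs = np.full(2, 0.5)
for z in meas:
    like = np.empty(2)
    for m, r in enumerate(rs):
        Ps[m] += q                         # predict (state is static)
        S = Ps[m] + r                      # innovation variance under mode m
        innov = z - xs[m]
        like[m] = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
        K = Ps[m] / S                      # Kalman gain, measurement update
        xs[m] += K * innov
        Ps[m] *= 1 - K
    probs = probs * like + 1e-12           # Bayes update with a small floor
    probs /= probs.sum()

best_mode = int(np.argmax(probs))          # should flag the degraded mode
fused_x = float(probs @ xs)                # probability-weighted state estimate
```

With thousands of modes this exhaustive bank becomes infeasible, which is exactly the scaling problem the paper's focused search addresses.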
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE), which uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE), which uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayes Estimator (BSA/BE), which uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be retained, particularly the PA/BE, because it provides the greatest precision.
Scatter and veiling glare corrections for quantitative digital subtraction angiography
NASA Astrophysics Data System (ADS)
Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin
1994-05-01
In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearity in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate the SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate the SVG intensity by predicting the total thickness for every pixel in the image. Corrections were also made for the variation of the SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, an anthropomorphic chest phantom, a head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage errors of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
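A convolution-based SVG estimate of the general kind described can be sketched as a scatter fraction times the image blurred with a broad spread kernel. The Gaussian kernel and fixed scatter fraction below are placeholders; the paper's technique additionally adapts the estimate to exposure parameters, beam energy, and field size:

```python
import numpy as np

def svg_estimate(image, scatter_fraction=0.3, kernel_sigma=15.0):
    """Scatter/veiling-glare field modeled as a scatter fraction times the
    image convolved with a broad Gaussian spread kernel (FFT convolution)."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Gaussian kernel expressed in the frequency domain, unit gain at DC.
    H = np.exp(-2.0 * (np.pi * kernel_sigma) ** 2 * (fx ** 2 + fy ** 2))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
    return scatter_fraction * blurred

# Sanity check: a uniform field scatters uniformly, SVG = fraction everywhere.
svg = svg_estimate(np.ones((32, 32)))
corrected = np.ones((32, 32)) - svg        # SVG-corrected image
```

Subtracting the estimated SVG field before taking logarithms is what restores the linearity that videodensitometric quantitation relies on.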
Schwarz, L.K.; Runge, M.C.
2009-01-01
Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then applied to Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.
NASA Astrophysics Data System (ADS)
Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.
2016-12-01
Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical, as well as physical, conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas, and subsequently gas hydrate, in marine sediments. The prediction (more precisely, guided interpolation) of the key parameters used in this study is a K-nearest-neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors, and minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction results with seafloor heat flux to estimate temperature as a function of depth and geologic age, which, with estimates of organic carbon and models of methanogenesis, yields limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
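A guided interpolation of the kind described can be sketched with a plain inverse-distance-weighted KNN over geospatial predictor vectors. The predictor names and the choice of k below are arbitrary placeholders, not the GEMS configuration:

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=3):
    """K-nearest-neighbor interpolation: each query point receives the
    inverse-distance-weighted mean of its k closest training targets."""
    train_X = np.atleast_2d(np.asarray(train_X, float))
    train_y = np.asarray(train_y, float)
    query_X = np.atleast_2d(np.asarray(query_X, float))
    preds = np.empty(len(query_X))
    for i, qpt in enumerate(query_X):
        d = np.linalg.norm(train_X - qpt, axis=1)
        nn = np.argsort(d)[:k]            # indices of the k nearest samples
        w = 1.0 / (d[nn] + 1e-9)          # inverse-distance weights
        preds[i] = np.sum(w * train_y[nn]) / np.sum(w)
    return preds

# Hypothetical predictors (say, sedimentation rate and heat flux, rescaled)
# at four sampled sites, with porosity as the interpolated target.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])
poro_mid = knn_predict(X, y, [[0.5, 0.5]], k=4)[0]   # mean of all four -> 1.5
```

Because the prediction is a local weighted average of observed values, the output stays within the range of the data, which is one reason KNN behaves well as a guided interpolator.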
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source-level in real-time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG work flow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
Stegman, Kelly J; Park, Edward J; Dechev, Nikolai
2012-07-01
The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity for purposes of controlling a prosthetic device. This feasibility study aims to determine whether the proposed Doppler ultrasound technique can accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with a tendon-mimicking experiment using two different tendon-mimicking materials, together with a commercial ultrasound scanner and a reference linear-motion-stage set-up. Audio-based output signals are acquired from the ultrasound scanner and processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective, and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. Having established feasibility, future testing will include cadaver-based studies to test the technique on human arm tendon anatomy, and later live human test subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
This report presents a collection of practical techniques for a modeling methodology based on Radial Basis Function networks. These techniques include efficient methods for parameter estimation and pruning, a pointwise prediction-error estimator, and a methodology for controlling data collection. Applying the methodology often amounts to a careful consideration of the interplay between model complexity and reliability, which are recurrent themes throughout the report.
ERIC Educational Resources Information Center
Papadopoulos, Ioannis
2010-01-01
The issue of the area of irregular shapes is absent from the modern mathematical textbooks in elementary education in Greece. However, there exists a collection of books written for educational purposes by famous Greek scholars dating from the eighteenth century, which propose certain techniques concerning the estimation of the area of such…
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data were evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
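The Eulerian route from velocity data to pressure can be illustrated in one dimension for a steady flow: estimate the material acceleration u*du/dx with a central-difference scheme, then integrate dp/dx = -rho*Du/Dt along the profile. This is a deliberately minimal sketch of the approach, not the paper's two-dimensional PIV solver:

```python
import numpy as np

rho = 1000.0                               # fluid density, kg/m^3
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # synthetic steady 1-D velocity field

# Eulerian material acceleration Du/Dt = du/dt + u*du/dx (steady: du/dt = 0),
# with the spatial derivative from a central-difference estimator.
dudx = np.gradient(u, dx)
accel = u * dudx

# Pressure gradient from the inviscid momentum balance, integrated by the
# trapezoidal rule along the profile (p set to zero at the first point).
dpdx = -rho * accel
p = np.concatenate(([0.0], np.cumsum(0.5 * (dpdx[1:] + dpdx[:-1]) * dx)))

# Analytic check: steady Bernoulli gives p = p(0) + rho/2 * (u(0)^2 - u^2).
p_exact = 0.5 * rho * (u[0] ** 2 - u ** 2)
```

On real planar PIV data the same two ingredients (acceleration estimator and spatial integration) appear, but with noise, truncation error, and out-of-plane motion driving the optimal resolutions that the study quantifies.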
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces the error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was then tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimates by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that, by accounting for the changing shape of the limb segment, a substantial improvement in estimates of in vivo skeletal movement can be achieved.
Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images
NASA Astrophysics Data System (ADS)
Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi
In this paper, the measurement of 3D motion from 2D perspective projections of knee prostheses is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed under the assumption of orthogonal projection. The second-step estimation was then carried out based upon the perspective projection to accomplish a more accurate estimation. The simulation results demonstrated that the technique achieved sufficient position/orientation estimation accuracy for prosthetic kinematics. We then applied our algorithm to CCD images, thereby examining the influence on estimation accuracy of various artifacts possibly introduced through the imaging process. We found that accuracy in the experiment was influenced mainly by the geometric discrepancies between the prosthesis component and the computer-generated model, and by the spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.
An Automated Technique for Estimating Daily Precipitation over the State of Virginia
NASA Technical Reports Server (NTRS)
Follansbee, W. A.; Chamberlain, L. W., III
1981-01-01
Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24-hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating the control gage readings into the winning method. To present reliable precipitation estimates for every cell in Virginia in near real time on a daily, ongoing basis, the techniques require on the order of 125 to 150 daily gage readings from dependable, highly motivated observers distributed as uniformly as feasible across the state.
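A linear cloud-top-temperature-to-rain-rate relation of the kind described can be sketched as follows; the coefficients and the clipping at zero are illustrative stand-ins, not the calibrated values from the study:

```python
def estimate_daily_precip(cloud_top_temps_k, a=-0.05, b=12.0):
    """Daily precipitation (mm) from 24 hourly IR cloud-top temperatures (K),
    using an assumed linear hourly rate R = a*T + b (colder, higher tops mean
    heavier rain), clipped at zero and summed over the day. The coefficients
    are hypothetical placeholders for the study's fitted relationship."""
    return sum(max(0.0, a * t + b) for t in cloud_top_temps_k)

# 24 hours of 220 K tops at these coefficients -> 1 mm/h -> 24 mm/day.
daily_mm = estimate_daily_precip([220.0] * 24)
```

In the study such satellite estimates are then blended with control raingage readings before the final field is produced.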
Utilization of bone impedance for age estimation in postmortem cases.
Ishikawa, Noboru; Suganami, Hideki; Nishida, Atsushi; Miyamori, Daisuke; Kakiuchi, Yasuhiro; Yamada, Naotake; Wook-Cheol, Kim; Kubo, Toshikazu; Ikegaya, Hiroshi
2015-11-01
In the field of forensic medicine, the number of unidentified cadavers has increased due to natural disasters and international terrorism, and age estimation is very important for identifying the victims. The degree of sagittal suture closure is one such age-estimation method; however, it is not widely accepted as reliable. In this study, we examined whether the impedance values (z-values) measured at the sagittal suture of the skull are related to age in men and women, and discuss the possibility of using bone impedance for age estimation. Bone impedance increased with age and decreased after the age of 64.5 years. We then compared age estimation by the conventional visual method with the proposed bone impedance measurement technique. The results suggest that the bone impedance measuring technique may be of value to forensic science as a method of age estimation.
Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements
NASA Technical Reports Server (NTRS)
Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.
2012-01-01
An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius, and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of the aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (the so-called 3β + 2α configuration), the uncertainties in the retrieval of particle volume and surface area are below 45% when the random uncertainties of the input data are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by the multiwavelength lidar at NASA/GSFC, and the results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, temporal-height distributions of particle parameters.
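The idea of linear estimation, the bulk property expressed as a fixed linear combination of the measured optical coefficients, can be sketched with synthetic data: determine the combination coefficients once, then each retrieval is a single dot product. In the paper the coefficients come from a kernel expansion of the size distribution; here a least-squares fit over fabricated training aerosols stands in for that step:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "training" aerosols: five optical data per case (standing in for
# three backscatter and two extinction coefficients) and the bulk property
# (e.g. volume), here generated from an invented exact linear map.
n_train = 200
optics = rng.uniform(0.5, 2.0, size=(n_train, 5))
true_coeffs = np.array([0.8, -0.2, 0.5, 1.1, 0.3])   # invented for the demo
volume = optics @ true_coeffs

# Determine the linear-estimate coefficients once, by least squares ...
coeffs, *_ = np.linalg.lstsq(optics, volume, rcond=None)

# ... after which retrieving a new measurement is a single dot product.
new_optics = rng.uniform(0.5, 2.0, size=5)
vol_est = float(new_optics @ coeffs)
```

The speed advantage over full regularized inversion comes entirely from this structure: the expensive work happens once, and every subsequent lidar profile costs one inner product per property.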
Telis, Pamela A.
1992-01-01
Mississippi State water laws require that the 7-day, 10-year low-flow characteristic (7Q10) of streams be used as a criterion for issuing waste-discharge permits to dischargers to streams and for limiting withdrawals of water from streams. This report presents techniques for estimating the 7Q10 for ungaged sites on streams in Mississippi based on the availability of base-flow discharge measurements at the site, the location of nearby gaged sites on the same stream, and the drainage area of the ungaged site. These techniques may be used to estimate the 7Q10 at sites on natural, unregulated or partially regulated, non-tidal streams. Low-flow characteristics for streams in the Mississippi River alluvial plain were not estimated because the annual low-flow data exhibit decreasing trends with time. Also presented are estimates of the 7Q10 for 493 gaged sites on Mississippi streams. Techniques for estimating the 7Q10 have been developed for ungaged sites with base-flow discharge measurements, for ungaged sites on gaged streams, and for ungaged sites on ungaged streams. For an ungaged site with one or more base-flow discharge measurements, base-flow discharge data at the ungaged site are related to concurrent discharge data at a nearby gaged site. For ungaged sites on gaged streams, several methods of transferring the 7Q10 from a gaged site to an ungaged site were developed; the resulting 7Q10 values are based on drainage-area prorations for the sites. For ungaged sites on ungaged streams, the 7Q10 is estimated from a map, developed for this study, that shows the unit 7Q10 (7Q10 per square mile of drainage area) for ungaged basins in the State. The mapped values were estimated from the unit 7Q10 determined for nearby gaged basins, adjusted on the basis of the geology and topography of the ungaged basins.
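The drainage-area proration used for ungaged sites on gaged streams is, in its simplest form, a ratio transfer. A one-line sketch (the exponent is commonly taken near 1.0 but is report-specific, so treat it as an assumption here):

```python
def prorate_7q10(q_gaged_cfs, area_gaged_sqmi, area_ungaged_sqmi, exponent=1.0):
    """Transfer a low-flow statistic from a gaged to an ungaged site on the
    same stream by drainage-area ratio: Q_u = Q_g * (A_u / A_g)**exponent."""
    return q_gaged_cfs * (area_ungaged_sqmi / area_gaged_sqmi) ** exponent

# Halving the drainage area halves the prorated 7Q10 when exponent = 1.
q_ungaged = prorate_7q10(10.0, 100.0, 50.0)   # -> 5.0 cfs
```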
Snow, Richard A.; Porta, Michael J.; Long, James M.
2018-01-01
The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.
Barreto, Rafael E; Narváez, Javier; Sepúlveda, Natalia A; Velásquez, Fabián C; Díaz, Sandra C; López, Myriam Consuelo; Reyes, Patricia; Moncada, Ligia I
2017-09-01
Public health programs for the control of soil-transmitted helminthiases require valid diagnostic tests for surveillance and parasitic control evaluation. However, there is currently no agreement about what test should be used as a gold standard for the diagnosis of hookworm infection. Still, in the presence of concurrent data for multiple tests it is possible to use statistical models to estimate measures of test performance and prevalence. The aim of this study was to estimate the diagnostic accuracy of five parallel tests (direct microscopic examination, Kato-Katz, Harada-Mori, modified Ritchie-Frick, and culture in agar plate) to detect hookworm infections in a sample of school-aged children from a rural area in Colombia. We used both a frequentist approach and Bayesian latent class models to estimate the sensitivity and specificity of five tests for hookworm detection, and to estimate the prevalence of hookworm infection in the absence of a gold standard. The Kato-Katz and agar plate methods had an overall agreement of 95% and a kappa coefficient of 0.76. Different models estimated a sensitivity between 76% and 92% for the agar plate technique, and 52% to 87% for the Kato-Katz technique. The other tests had lower sensitivity. All tests had specificity between 95% and 98%. The prevalence estimated by the Kato-Katz and agar plate methods for different subpopulations varied between 10% and 14%, and was consistent with the prevalence estimated from the combination of all tests. The Harada-Mori, Ritchie-Frick and direct examination techniques resulted in lower and disparate prevalence estimates. Bayesian approaches assuming imperfect specificity resulted in lower prevalence estimates than the frequentist approach. Copyright © 2017 Elsevier B.V. All rights reserved.
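The study's latent-class estimators are beyond a short sketch, but the classical Rogan-Gladen correction (not the method used in the paper) illustrates the core idea that imperfect sensitivity and specificity bias apparent prevalence; the numbers below are hypothetical:

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Classical Rogan-Gladen correction: estimate true prevalence from
    the apparent (test-positive) prevalence of an imperfect test."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("test must be better than chance")
    adjusted = (apparent_prev + specificity - 1.0) / denom
    return min(max(adjusted, 0.0), 1.0)  # clamp to [0, 1]

# Hypothetical: 12% test positive with a Kato-Katz-like test of
# sensitivity 0.70 and specificity 0.97.
print(round(rogan_gladen(0.12, 0.70, 0.97), 3))  # 0.134
```

With sensitivity below 1, the corrected prevalence exceeds the raw positive rate, matching the paper's observation that less sensitive tests yield lower apparent prevalence.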
Study of synthesis techniques for insensitive aircraft control systems
NASA Technical Reports Server (NTRS)
Harvey, C. A.; Pope, R. E.
1977-01-01
Insensitive flight control system design criteria were defined in terms of maximizing performance (handling qualities, RMS gust response, transient response, stability margins) over a defined parameter range. Wing load alleviation for the C-5A was chosen as a design problem. The C-5A model was a 79-state, two-control structure with uncertainties assumed to exist in dynamic pressure, structural damping and frequency, and the stability derivative, M sub w. Five new techniques (mismatch estimation, uncertainty weighting, finite dimensional inverse, maximum difficulty, dual Lyapunov) were developed. Six existing techniques (additive noise, minimax, multiplant, sensitivity vector augmentation, state dependent noise, residualization) and the mismatch estimation and uncertainty weighting techniques were synthesized and evaluated on the design example. Evaluation and comparison of these eight techniques indicated that the minimax and the uncertainty weighting techniques were superior to the other six, and of these two, uncertainty weighting has lower computational requirements. Techniques based on the three remaining new concepts appear promising and are recommended for further research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES, capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. The method is highly appropriate for transportation networks with high variation in speed, such as those with signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
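The dynamic-time-warping distance underlying HC-DTW can be sketched as follows; the second-by-second speed profiles are hypothetical, and a full implementation would feed the resulting distance matrix to a hierarchical clustering routine (e.g., SciPy's linkage):

```python
import itertools

def dtw(a, b):
    """Dynamic-time-warping distance between two speed profiles."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Hypothetical second-by-second speed profiles (m/s) for three vehicles:
profiles = [
    [0, 5, 10, 12, 12, 10],   # accelerates, then cruises
    [0, 4, 9, 12, 12, 11],    # similar trajectory
    [12, 12, 0, 0, 0, 12],    # stops at a red signal
]
dist = {(i, j): dtw(profiles[i], profiles[j])
        for i, j in itertools.combinations(range(3), 2)}
# Hierarchical clustering on this matrix would merge the two similar
# trajectories first.
closest = min(dist, key=dist.get)
print(closest)  # (0, 1)
```

Unlike average speed, DTW keeps the two cruising vehicles together and separates the stop-and-go profile even though all three may share a similar mean speed.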
Uncertainty estimates of altimetric Global Mean Sea Level timeseries
NASA Astrophysics Data System (ADS)
Scharffenberg, Martin; Hemming, Michael; Stammer, Detlef
2016-04-01
An attempt is presented to provide uncertainty measures for global mean sea level (GMSL) time series. For this purpose, sea surface height (SSH) fields simulated by the high-resolution STORM/NCEP model for the period 1993-2010 were subsampled along altimeter tracks and processed similarly to the techniques used by five working groups to estimate GMSL. Results suggest that the spatial and temporal resolution have a substantial impact on GMSL estimates. Major impacts can especially result from the interpolation technique or the treatment of SSH outliers, and can easily lead to artificial temporal variability in the resulting time series.
NASA Technical Reports Server (NTRS)
Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh
1992-01-01
A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
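The final geometric step of this cloud-shadow method can be sketched with flat-terrain trigonometry; the actual methodology matches generalized Hough transform reference points and accounts for viewing geometry, and the numbers below are hypothetical:

```python
import math

def cloud_base_height(shadow_separation_m, solar_elevation_deg):
    """Cloud base height from the horizontal cloud-to-shadow separation
    and the solar elevation angle (simplified flat-terrain geometry)."""
    return shadow_separation_m * math.tan(math.radians(solar_elevation_deg))

# Hypothetical: matched edges 2000 m apart with the sun 30 degrees high.
print(round(cloud_base_height(2000.0, 30.0)))  # 1155
```

At a 100 m height accuracy, a 30-degree solar elevation tolerates roughly 170 m of error in the measured separation, which is why precise edge matching matters.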
Prognoses of diameter and height of trees of eucalyptus using artificial intelligence.
Vieira, Giovanni Correia; de Mendonça, Adriano Ribeiro; da Silva, Gilson Fernandes; Zanetti, Sidney Sára; da Silva, Mayra Marques; Dos Santos, Alexandre Rosa
2018-04-01
Models of individual trees are composed of sub-models that generally estimate competition, mortality, and growth in height and diameter of each tree. They are usually adopted when more detailed information is wanted to estimate forest multiproducts. In these models, estimates of growth in diameter at 1.30 m above the ground (DBH) and total height (H) are obtained by regression analysis. Recently, artificial intelligence techniques (AIT) have been used with satisfactory performance in forest measurement. Therefore, the objective of this study was to evaluate the performance of two AIT, artificial neural networks and the adaptive neuro-fuzzy inference system, to estimate the growth in DBH and H of eucalyptus trees. We used data from continuous forest inventories of eucalyptus, with annual measurements of DBH, H, and the dominant height of trees of 398 plots, plus two qualitative variables: genetic material and site index. It was observed that the two AIT estimated growth in DBH and H accurately. Therefore, the two techniques discussed can be used for the prognosis of DBH and H in even-aged eucalyptus stands. The techniques used could also be adapted to other areas and forest species. Copyright © 2017 Elsevier B.V. All rights reserved.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
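The NDMMF equations themselves are not reproduced here, but the report's core machinery, maximum-likelihood estimation by repeated Newton-Raphson iterations on the score equation, can be illustrated with a small problem that likewise has no closed-form MLE: the location parameter of a unit-scale Cauchy sample. All data values are hypothetical:

```python
def cauchy_mle_location(data, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson iterations on the score equation for the location
    parameter of a unit-scale Cauchy sample (no closed-form MLE exists)."""
    x = x0
    for _ in range(max_iter):
        u = [d - x for d in data]
        score = sum(2 * ui / (1 + ui * ui) for ui in u)           # dlogL/dx
        hess = sum(2 * (ui * ui - 1) / (1 + ui * ui) ** 2 for ui in u)
        step = score / hess                                       # Newton step
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Symmetric hypothetical sample: the MLE is 0 by symmetry.
print(round(abs(cauchy_mle_location([-1.0, 0.0, 1.0], x0=0.5)), 6))  # 0.0
```

As in the report, each iteration solves a linearized version of the score equation; convergence is quadratic near the solution, which is what makes custom Newton-Raphson code faster than general-purpose optimizers.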
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter corresponds to the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
Price responsiveness of demand for cigarettes: does rationality matter?
Laporte, Audrey
2006-01-01
Meta-analysis is applied to aggregate-level studies that model the demand for cigarettes using static, myopic, or rational addiction frameworks in an attempt to synthesize key findings in the literature and to identify determinants of the variation in reported price elasticity estimates across studies. The results suggest that the rational addiction framework produces statistically similar estimates to the static framework but that studies that use the myopic framework tend to report more elastic price effects. Studies that applied panel data techniques or controlled for cross-border smuggling reported more elastic price elasticity estimates, whereas the use of instrumental variable techniques and time trends or time dummy variables produced less elastic estimates. The finding that myopic models produce different estimates than either of the other two model frameworks underscores that careful attention must be given to time series properties of the data.
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowyer, Theodore W.; Gesh, Christopher J.; Haas, Daniel A.
This report details efforts to develop a technique which is able to detect and quantify the mass of 240Pu in waste storage tanks and other enclosed spaces. If the isotopic ratios of the plutonium contained in the enclosed space is also known, then this technique is capable of estimating the total mass of the plutonium without physical sample retrieval and radiochemical analysis of hazardous material. Results utilizing this technique are reported for a Hanford Site waste tank (TX-118) and a well-characterized plutonium sample in a laboratory environment.
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than that found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Brown, M. G. L.; He, T.; Liang, S.
2016-12-01
Satellite-derived estimates of incident photosynthetically active radiation (PAR) can be used to monitor global change, are required by most terrestrial ecosystem models, and can be used to estimate primary production according to the theory of light use efficiency. Compared with parametric approaches, non-parametric techniques that include an artificial neural network (ANN), support vector machine regression (SVM), an artificial bee colony (ABC), and a look-up table (LUT) do not require many ancillary data as inputs for the estimation of PAR from satellite data. In this study, a selection of machine learning methods to estimate PAR from MODIS top of atmosphere (TOA) radiances are compared to a LUT approach to determine which techniques might best handle the nonlinear relationship between TOA radiance and incident PAR. Evaluation of these methods (ANN, SVM, and LUT) is performed with ground measurements at seven SURFRAD sites. Due to the design of the ANN, it can handle the nonlinear relationship between TOA radiance and PAR better than linearly interpolating between the values in the LUT; however, training the ANN has to be carried out on an angular-bin basis, which results in a LUT of ANNs. The SVM model may be better for incorporating multiple viewing angles than the ANN; however, both techniques require a large amount of training data, which may introduce a regional bias based on where the most training and validation data are available. Based on the literature, the ABC is a promising alternative to an ANN, SVM regression and a LUT, but further development for this application is required before concrete conclusions can be drawn. For now, the LUT method outperforms the machine-learning techniques, but future work should be directed at developing and testing the ABC method. 
A simple, robust method to estimate direct and diffuse incident PAR, with minimal inputs and a priori knowledge, would be very useful for monitoring global change of primary production, particularly of pastures and rangeland, which have implications for livestock and food security. Future work will delve deeper into the utility of satellite-derived PAR estimation for monitoring primary production in pasture and rangelands.
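A one-dimensional slice of what a radiance-to-PAR look-up table with linear interpolation might look like can be sketched as follows; the node values are hypothetical, and the real LUT is indexed by angular bins and atmospheric state as well:

```python
import bisect

def lut_par(radiance, lut_radiance, lut_par_values):
    """Piecewise-linear look-up-table estimate of incident PAR from a
    TOA radiance, for a single fixed sun-view geometry (one angular bin)."""
    if not (lut_radiance[0] <= radiance <= lut_radiance[-1]):
        raise ValueError("radiance outside LUT range")
    j = bisect.bisect_left(lut_radiance, radiance)
    if lut_radiance[j] == radiance:
        return lut_par_values[j]
    i = j - 1
    frac = (radiance - lut_radiance[i]) / (lut_radiance[j] - lut_radiance[i])
    return lut_par_values[i] + frac * (lut_par_values[j] - lut_par_values[i])

# Hypothetical nodes: brighter TOA radiance (thicker cloud) -> less PAR.
rad = [50.0, 100.0, 150.0, 200.0]
par = [450.0, 350.0, 220.0, 120.0]
print(lut_par(75.0, rad, par))  # 400.0
```

The linear interpolation between nodes is exactly the step the abstract says an ANN can improve on when the radiance-to-PAR relationship is strongly nonlinear.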
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminary estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold; namely, its threshold performance is similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
Antibodies against toluene diisocyanate protein conjugates. Three methods of measurement.
Patterson, R; Harris, K E; Zeiss, C R
1983-12-01
With the use of canine antisera against toluene diisocyanate (TDI)-dog serum albumin (DSA), techniques for measuring antibody against TDI-DSA were evaluated. The use of an ammonium sulfate precipitation assay showed suggestive evidence of antibody binding but high levels of TDI-DSA precipitation in the absence of antibody limit any usefulness of this technique. Double-antibody co-precipitation techniques will measure total antibody or Ig class antibody against 125I-TDI-DSA. These techniques are quantitative. The polystyrene tube radioimmunoassay is a highly sensitive method of detecting and quantitatively estimating IgG antibody. The enzyme linked immunosorbent assay is a rapidly adaptable method for the quantitative estimation of IgG, IgA, and IgM against TDI-homologous proteins. All these techniques were compared and results are demonstrated by using the same serum sample for analysis.
Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara
2016-08-01
In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on a robust machine learning method that has found widespread use, the Support Vector Machine (SVM), is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
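The study's SVM operates on TEC-derived features; as a minimal stand-in for a full kernel SVM library, a linear SVM trained with Pegasos-style sub-gradient steps on hypothetical quiet/disturbed feature pairs can be sketched as:

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM with labels
    in {-1, +1}; a simplified stand-in for the kernel SVM in the study."""
    rng = random.Random(seed)
    w = [0.0] * len(xs[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):  # shuffled pass
            t += 1
            eta = 1.0 / (lam * t)
            margin = ys[i] * (sum(wj * xj for wj, xj in zip(w, xs[i])) + b)
            if margin < 1:  # hinge-loss violation: move toward the point
                w = [(1 - eta * lam) * wj + eta * ys[i] * xj
                     for wj, xj in zip(w, xs[i])]
                b += eta * ys[i]
            else:           # only shrink (regularization)
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical features: (normalized TEC deviation, geomagnetic index).
quiet = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.3)]        # label -1
disturbed = [(0.9, 0.8), (1.0, 0.7), (0.8, 0.9)]    # label +1
xs = quiet + disturbed
ys = [-1] * 3 + [1] * 3
w, b = train_linear_svm(xs, ys)
print([predict(w, b, x) for x in xs])
```

A production classifier would use a kernel (e.g., RBF) to capture nonlinear TEC behavior; the hinge-loss training loop above is the linear core of the same idea.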
Regression sampling: some results for resource managers and researchers
William G. O' Regan; Robert W. Boyd
1974-01-01
Regression sampling is widely used in natural resources management and research to estimate quantities of resources per unit area. This note brings together results found in the statistical literature in the application of this sampling technique. Conditional and unconditional estimators are listed and for each estimator, exact variances and unbiased estimators for the...
Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico
NASA Technical Reports Server (NTRS)
Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge
2005-01-01
The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
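One simple calibration correction of the kind compared above, assuming the error matrix gives P(classified class | true class), can be sketched as a 2x2 matrix inversion; the proportions and accuracies are hypothetical, and the paper's two estimators differ in how the error matrix is conditioned:

```python
def inverse_calibrate(p_obs, confusion):
    """Correct remotely sensed class proportions using an error matrix.
    confusion[i][j] = P(classified as j | true class i); solves
    p_obs = p_true @ confusion for p_true in the 2x2 case."""
    (a, b), (c, d) = confusion
    det = a * d - b * c
    if det == 0:
        raise ValueError("confusion matrix is singular")
    x, y = p_obs
    return ((x * d - y * c) / det, (y * a - x * b) / det)

# Hypothetical: 60% of pixels mapped as forest, 40% nonforest, with
# 90% forest and 80% nonforest classification accuracy.
conf_mat = [[0.9, 0.1],   # true forest    -> classified forest/nonforest
            [0.2, 0.8]]   # true nonforest -> classified forest/nonforest
p_true = inverse_calibrate((0.6, 0.4), conf_mat)
print(tuple(round(p, 3) for p in p_true))  # (0.571, 0.429)
```

Here the mapped 60% forest shrinks to about 57% after calibration, because nonforest misclassified as forest inflated the raw areal estimate.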
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and to eliminate the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include aerodynamic coefficient, inertia, center-of-gravity, atmospheric wind, and other deviations from referenced values. Propulsion parameter state elements have been included not merely as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Accurately estimating PSF with straight lines detected by Hough transform
NASA Astrophysics Data System (ADS)
Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong
2018-04-01
This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in a LR image, which leads to a poor estimate of the PSF of the lens that took the LR image. For precise PSF estimation, this paper proposes first estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
Dombrowski, Kirk; Khan, Bilal; Wendel, Travis; McLean, Katherine; Misshula, Evan; Curtis, Ric
2012-12-01
As part of a recent study of the dynamics of the retail market for methamphetamine use in New York City, we used network sampling methods to estimate the size of the total networked population. This process involved sampling from respondents' list of co-use contacts, which in turn became the basis for capture-recapture estimation. Recapture sampling was based on links to other respondents derived from demographic and "telefunken" matching procedures-the latter being an anonymized version of telephone number matching. This paper describes the matching process used to discover the links between the solicited contacts and project respondents, the capture-recapture calculation, the estimation of "false matches", and the development of confidence intervals for the final population estimates. A final population of 12,229 was estimated, with a range of 8235 - 23,750. The techniques described here have the special virtue of deriving an estimate for a hidden population while retaining respondent anonymity and the anonymity of network alters, but likely require larger sample size than the 132 persons interviewed to attain acceptable confidence levels for the estimate.
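The capture-recapture calculation at the heart of this estimate can be sketched with Chapman's bias-corrected Lincoln-Petersen estimator; the sample counts below are hypothetical, not the study's actual matching counts:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimate of a closed
    population: n1 = first (capture) sample, n2 = second (recapture)
    sample, m2 = individuals appearing in both."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical: 132 respondents as the "capture" sample, 500 solicited
# contacts as the "recapture" sample, 5 matched back to respondents.
print(round(lincoln_petersen(132, 500, 5)))
```

The rarity of matches (small m2) is what drives both the large population estimate and the wide confidence interval reported in the abstract.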
Building a Competitive Edge with Additive Manufacturing
2013-02-14
...construct ceramic molds for complex metal parts using a 3D printing technique. They estimate the new technique could eliminate all of the ... processes. They include 3D printing and Additive Beam techniques. Most Additive Manufacturing techniques are specific to certain classes of materials ... Example Additive Manufacturing Techniques: 3D Printing (e.g., Stereolithography, SLA) and Additive Beam (e.g., Direct Metal Laser Sintering, DMLS) ...
Consistency of nuclear thermometric measurements at moderate excitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, T. K.; Bhattacharya, C.; Kundu, S.
2008-08-15
A comparison of various thermometric techniques used for the estimation of nuclear temperature has been made from the decay of hot composite {sup 32}S* produced in the reaction {sup 20}Ne (145 MeV) + {sup 12}C. It is shown that the temperatures estimated by different techniques, known to vary significantly in the Fermi energy domain, are consistent with each other within experimental limits for the system studied here.
An Information-Based Machine Learning Approach to Elasticity Imaging
Hoerig, Cameron; Ghaboussi, Jamshid; Insana, Michael F.
2016-01-01
An information-based technique is described for applications in mechanical-property imaging of soft biological media under quasi-static loads. We adapted the Autoprogressive method that was originally developed for civil engineering applications for this purpose. The Autoprogressive method is a computational technique that combines knowledge of object shape and a sparse distribution of force and displacement measurements with finite-element analyses and artificial neural networks to estimate a complete set of stress and strain vectors. Elasticity imaging parameters are then computed from estimated stresses and strains. We introduce the technique using ultrasonic pulse-echo measurements in simple gelatin imaging phantoms having linear-elastic properties so that conventional finite-element modeling can be used to validate results. The Autoprogressive algorithm does not require any assumptions about the material properties and can, in principle, be used to image media with arbitrary properties. We show that by selecting a few well-chosen force-displacement measurements that are appropriately applied during training and establish convergence, we can estimate all nontrivial stress and strain vectors throughout an object and accurately estimate an elastic modulus at high spatial resolution. This new method of modeling the mechanical properties of tissue-like materials introduces a unique method of solving the inverse problem and is the first technique for imaging stress without assuming the underlying constitutive model. PMID:27858175
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2011-12-01
Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.
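The paper's semi-analytical model couples LEFM with a finite-element model. As a generic illustration of the LEFM ingredient only, the sketch below integrates the Paris crack-growth law to estimate remaining fatigue life; all material constants and geometry factors are hypothetical placeholders, not values from the study:

```python
import math

def paris_life(a0, ac, C, m, d_sigma, Y=1.0, steps=20000):
    """Cycles to grow a crack from a0 to ac (meters) by numerically
    integrating the Paris law da/dN = C * dK**m, with the stress-intensity
    range dK = Y * d_sigma * sqrt(pi * a) (MPa*sqrt(m) units assumed)."""
    da = (ac - a0) / steps
    a, N = a0, 0.0
    for _ in range(steps):
        dK = Y * d_sigma * math.sqrt(math.pi * a)
        N += da / (C * dK**m)  # cycles spent growing the crack by da
        a += da
    return N
```

For example, with placeholder aluminum-like constants (C = 1e-11, m = 3) and a 100 MPa stress range, growing a crack from 1 mm to 10 mm takes on the order of 10^5-10^6 cycles; shortening the starting crack-to-failure interval reduces the estimated remaining life, as expected.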
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10(-15) at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches and then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10(16). The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10(-16).
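The batch-averaging idea can be sketched as follows: fit a fractional frequency (phase slope) to each batch separately, then form a weighted mean, so the inter-batch discontinuities never enter the calculation. This is a toy version on synthetic phase data, not actual GPSCPTT processing:

```python
import numpy as np

def batch_frequency(t, x, batch_edges):
    """Mean fractional frequency from per-batch slopes of phase data x(t).
    batch_edges is a list of (t_start, t_end) pairs; any constant offset
    (discontinuity) between batches drops out of each slope fit."""
    freqs, weights = [], []
    for t0, t1 in batch_edges:
        sel = (t >= t0) & (t < t1)
        slope = np.polyfit(t[sel], x[sel], 1)[0]  # least-squares phase slope
        freqs.append(slope)
        weights.append(sel.sum())                 # weight by batch length
    return np.average(freqs, weights=weights)
```

Because each slope is estimated within a single batch, a step between batches shifts only the intercepts, leaving the averaged frequency untouched.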
Optimal Use of TDOA Geo-Location Techniques Within the Mountainous Terrain of Turkey
2012-09-01
Cross-Correlation TDOA Estimation Technique ... Standard Deviation ... The Effect of Noise on Accuracy ... The Effect of Noise to ... finding techniques. In contrast, people have been using active location finding techniques, such as radar, for decades. When active location finding
Linear regression techniques for use in the EC tracer method of secondary organic aerosol estimation
NASA Astrophysics Data System (ADS)
Saylor, Rick D.; Edgerton, Eric S.; Hartsell, Benjamin E.
A variety of linear regression techniques and simple slope estimators are evaluated for use in the elemental carbon (EC) tracer method of secondary organic carbon (OC) estimation. Linear regression techniques based on ordinary least squares are not suitable for situations where measurement uncertainties exist in both regressed variables. In the past, regression based on the method of Deming [1943. Statistical Adjustment of Data. Wiley, London] has been the preferred choice for EC tracer method parameter estimation. In agreement with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], we find that in the limited case where primary non-combustion OC (OC non-comb) is assumed to be zero, the ratio of averages (ROA) approach provides a stable and reliable estimate of the primary OC-EC ratio, (OC/EC) pri. In contrast with Chu [2005. Stable estimate of primary OC/EC ratios in the EC tracer method. Atmospheric Environment 39, 1383-1392], however, we find that the optimal use of Deming regression (and the more general York et al. [2004. Unified equations for the slope, intercept, and standard errors of the best straight line. American Journal of Physics 72, 367-375] regression) provides excellent results as well. For the more typical case where OC non-comb is allowed to obtain a non-zero value, we find that regression based on the method of York is the preferred choice for EC tracer method parameter estimation. In the York regression technique, detailed information on uncertainties in the measurement of OC and EC is used to improve the linear best fit to the given data. If only limited information is available on the relative uncertainties of OC and EC, then Deming regression should be used. On the other hand, use of ROA in the estimation of secondary OC, and thus the assumption of a zero OC non-comb value, generally leads to an overestimation of the contribution of secondary OC to total measured OC.
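For the limited case the authors describe, where OC non-comb is assumed zero, a minimal sketch of the EC tracer method with the ratio-of-averages (ROA) slope might look like this. The arrays are synthetic; a real application would estimate (OC/EC) pri from periods dominated by primary aerosol rather than from all data:

```python
import numpy as np

def ec_tracer_secondary_oc(oc, ec):
    """EC tracer method in the OC_non-comb = 0 case: estimate (OC/EC)_pri
    with the ratio of averages (ROA), then take
    OC_sec = OC - (OC/EC)_pri * EC, clipped at zero."""
    oc = np.asarray(oc, float)
    ec = np.asarray(ec, float)
    ratio_pri = oc.mean() / ec.mean()             # ROA slope estimator
    oc_sec = np.clip(oc - ratio_pri * ec, 0.0, None)
    return ratio_pri, oc_sec
```

The York and Deming fits discussed in the abstract would replace the ROA line with an errors-in-variables slope that uses the measurement uncertainties of both OC and EC.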
NASA Astrophysics Data System (ADS)
Courchesne, Samuel
Knowledge of the dynamic characteristics of a fixed-wing UAV is necessary to design flight control laws and to build a high-quality flight simulator. The basic elements of a flight-mechanics model include the mass and inertia properties and the major aerodynamic terms; obtaining them is a complex process involving various numerical analysis techniques and experimental procedures. This thesis focuses on estimation techniques applied to the problem of identifying stability and control derivatives from flight test data provided by an experimental UAV. To achieve this objective, a modern identification methodology (Quad-M) is used to coordinate tasks from multidisciplinary fields such as parameter estimation, modeling, instrumentation, definition of flight maneuvers, and validation. The system under study is a nonlinear six-degree-of-freedom model with a linear aerodynamic model. Time-domain techniques are used for identification of the drone. The first technique, the equation error method, is used to determine the structure of the aerodynamic model. Thereafter, the output error method and the filter error method are used to estimate the values of the aerodynamic coefficients. Matlab scripts for parameter estimation obtained from the American Institute of Aeronautics and Astronautics (AIAA) are used and modified as necessary to achieve the desired results. A considerable part of this research is devoted to the design of experiments, including the onboard data acquisition system and the definition of flight maneuvers. The flight tests were conducted under stable flight conditions and with low atmospheric disturbance. Nevertheless, the identification results showed that the filter error method is the most effective for estimating the parameters of the drone, owing to the presence of both process and measurement noise. The aerodynamic coefficients are validated using a numerical analysis based on the vortex method.
In addition, a simulation model incorporating the estimated parameters is used to compare simulated and measured state variables. Finally, good agreement between the results is demonstrated despite the limited amount of flight data. Keywords: drone, identification, estimation, nonlinear, flight test, system, aerodynamic coefficient.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will carry this error. With such techniques, it is highly likely that the estimated pdf exhibits artificial characteristics that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC distribution. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values.
The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
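A Gaussian-kernel KDE of the kind described can be sketched in a few lines. Silverman's rule-of-thumb bandwidth is an assumption here; the study does not state its bandwidth choice:

```python
import numpy as np

def kde_pdf(samples, grid, bandwidth=None):
    """Gaussian-kernel density estimate of `samples` evaluated on `grid`.
    Defaults to Silverman's rule-of-thumb bandwidth; no parametric form
    is imposed on the underlying distribution."""
    samples = np.asarray(samples, float)
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    # one Gaussian bump per sample, summed and normalized to unit area
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))
```

Moments such as variance and kurtosis can then be computed either directly from the samples or by numerical integration against the estimated pdf.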
The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...
An evaluation of the precision of fin ray, otolith, and scale age determinations for brook trout
Stolarski, J.T.; Hartman, K.J.
2008-01-01
The ages of brook trout Salvelinus fontinalis are typically estimated using scales despite a lack of research documenting the effectiveness of this technique. The use of scales is often preferred because it is nonlethal and is believed to require less effort than alternative methods. To evaluate the relative effectiveness of different age estimation methodologies for brook trout, we measured the precision and processing times of scale, sagittal otolith, and pectoral fin ray age estimation techniques. Three independent readers, age bias plots, coefficients of variation (CV = 100 x SD/mean), and percent agreement (PA) were used to measure within-reader, among-structure bias and within-structure, among-reader precision. Bias was generally minimal; however, the age estimates derived from scales tended to be lower than those derived from otoliths within older (age > 2) cohorts. Otolith, fin ray, and scale age estimates were within 1 year of each other for 95% of the comparisons. The measures of precision for scales (CV = 6.59; PA = 82.30) and otoliths (CV = 7.45; PA = 81.48) suggest higher agreement between these structures than with fin rays (CV = 11.30; PA = 65.84). The mean per-sample processing times were lower for scale (13.88 min) and otolith techniques (12.23 min) than for fin ray techniques (22.68 min). The comparable processing times of scales and otoliths contradict popular belief and are probably a result of the high proportion of regenerated scales within samples and the ability to infer age from whole (as opposed to sectioned) otoliths. This research suggests that while scales produce age estimates rivaling those of otoliths for younger (age < 3) cohorts, they may be biased within older cohorts and therefore should be used with caution. © Copyright by the American Fisheries Society 2008.
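The precision statistics used here (CV = 100 x SD/mean per fish, and percent agreement among readers) can be reproduced with a small sketch. The matrix below is hypothetical, with rows as fish and columns as readers:

```python
import numpy as np

def reader_precision(ages):
    """ages: (n_fish, n_readers) array of age estimates.
    Returns the mean coefficient of variation (CV = 100 * SD / mean,
    computed per fish across readers) and percent agreement (PA: the
    percentage of fish on which every reader assigned the identical age)."""
    ages = np.asarray(ages, float)
    cv = 100.0 * ages.std(axis=1, ddof=1) / ages.mean(axis=1)
    pa = 100.0 * np.mean(np.all(ages == ages[:, :1], axis=1))
    return cv.mean(), pa
```

Note that published studies sometimes define PA as agreement within 1 year rather than exact agreement; the strict version is used here for simplicity.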
Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.
Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I
2018-06-26
The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the MLE is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
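A minimal illustration of the MLE approach for left-censored residues follows, assuming lognormal residues (an assumption; the paper does not bind us to a distribution here). Detects contribute a density term and censored observations contribute the probability of falling below the detection limit. A coarse grid search stands in for a proper numerical optimizer:

```python
import numpy as np
from math import erf, log, sqrt

def censored_lognormal_mle(detects, dl_censored):
    """MLE of (mu, sigma) for log-transformed residues when observations
    below the detection limit are left-censored at that limit.
    detects: measured values; dl_censored: detection limits of censored obs."""
    y = np.log(np.asarray(detects, float))
    yc = [log(v) for v in dl_censored]              # log detection limits
    def ncdf(z):                                    # standard normal CDF
        return 0.5 * (1 + erf(z / sqrt(2)))
    best = (-np.inf, None, None)
    for mu in np.linspace(y.mean() - 2, y.mean() + 1, 61):
        for s in np.linspace(0.05, 2.0, 40):
            ll = float((-0.5 * ((y - mu) / s) ** 2 - np.log(s)).sum())
            ll += sum(log(max(ncdf((v - mu) / s), 1e-300)) for v in yc)
            if ll > best[0]:
                best = (ll, mu, s)
    return best[1], best[2]
```

Unlike simple substitution (e.g., replacing non-detects with DL/2), this uses the censored observations' likelihood contribution directly, which is why such estimators behave better with heavy censoring.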
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis as the standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than four Gy. A multi-stage model has been presented. With the aid of this model and the use of the Monte-Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of every-other-month (bimonthly) routine sampling and storm event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yields. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating the dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times and total nitrogen and dissolved nitrite plus nitrate were greater than 10 to 100 times the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with use of regression models to estimate nutrient loads.
Annual loads of phosphorus and nitrogen estimated using regression techniques could provide similar results to estimates using integration techniques, with much less investment.
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques can be used to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that the SIMEX and bootstrap approaches provide consistent estimates of the bias and variance, respectively, of generalized fractional anisotropy. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
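A generic SIMEX sketch: re-evaluate a noise-sensitive metric on copies of the data with extra noise added at several inflation levels lambda, fit a trend in lambda, and extrapolate back to lambda = -1, the hypothetical noise-free case. The quadratic extrapolant and noise levels below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def simex_estimate(data, sigma, metric, lambdas=(0.5, 1.0, 1.5, 2.0),
                   n_sim=200, seed=0):
    """SIMEX: evaluate `metric` on copies of `data` with added noise of
    variance lambda*sigma**2, fit a quadratic in lambda, and extrapolate
    to lambda = -1 (the hypothetical noise-free case)."""
    rng = np.random.default_rng(seed)
    lams, means = [0.0], [metric(data)]
    for lam in lambdas:
        sims = [metric(data + rng.normal(0.0, np.sqrt(lam) * sigma, data.shape))
                for _ in range(n_sim)]
        lams.append(lam)
        means.append(float(np.mean(sims)))
    coef = np.polyfit(lams, means, 2)
    return float(np.polyval(coef, -1.0))
```

The bias estimate described in the abstract is then the observed metric minus this extrapolated value.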
An evaluation of population index and estimation techniques for tadpoles in desert pools
Jung, Robin E.; Dayton, Gage H.; Williamson, Stephen J.; Sauer, John R.; Droege, Sam
2002-01-01
Using visual (VI) and dip net indices (DI) and double-observer (DOE), removal (RE), and neutral red dye capture-recapture (CRE) estimates, we counted, estimated, and censused Couch's spadefoot (Scaphiopus couchii) and canyon treefrog (Hyla arenicolor) tadpole populations in Big Bend National Park, Texas. Initial dye experiments helped us determine appropriate dye concentrations and exposure times to use in mesocosm and field trials. The mesocosm study revealed higher tadpole detection rates, more accurate population estimates, and lower coefficients of variation among pools compared to those from the field study. In both mesocosm and field studies, CRE was the best method for estimating tadpole populations, followed by DOE and RE. In the field, RE, DI, and VI often underestimated populations in pools with higher tadpole numbers. DI improved with increased sampling. Larger pools supported larger tadpole populations, and tadpole detection rates in general decreased with increasing pool volume and surface area. Hence, pool size influenced bias in tadpole sampling. Across all techniques, tadpole detection rates differed among pools, indicating that sampling bias was inherent and techniques did not consistently sample the same proportion of tadpoles in each pool. Estimating bias (i.e., calculating detection rates) therefore was essential in assessing tadpole abundance. Unlike VI and DOE, DI, RE, and CRE could be used in turbid waters in which tadpoles are not visible. The tadpole population estimates we used accommodated differences in detection probabilities in simple desert pool environments but may not work in more complex habitats.
CMB EB and TB cross-spectrum estimation via pseudospectrum techniques
NASA Astrophysics Data System (ADS)
Grain, J.; Tristram, M.; Stompor, R.
2012-10-01
We discuss methods for estimating EB and TB spectra of the cosmic microwave background anisotropy maps covering limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach, combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles, leads to the smallest error bars for both EB (or TB, respectively) spectra as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ~30%. Moreover, for the experimental configurations considered here, the statistical uncertainties (due to sampling variance and instrumental noise) of the pseudo-spectrum estimates are at most a factor of ~1.4 for TT, EE, and TE spectra, and a factor of ~2 for BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Husak, G.J.; Marshall, M. T.; Michaelsen, J.; Pedreros, Diego; Funk, Christopher C.; Galu, G.
2008-01-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country-level. Bias-corrected, texture based classifications show little deviation from actual crop inventories, when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images, with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated, in order to automate classification of medium resolution imagery for routine CA estimation in the future.
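The logistic-regression step, deriving a per-pixel crop probability and aggregating to a regional cropped-area total, can be sketched with plain gradient descent. The two generic features and the per-pixel area below are placeholders for the Landsat, land-cover, and SRTM-derived predictors used in the paper:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Logistic regression by batch gradient descent:
    P(crop) = sigmoid(X @ w + b), y holds 0/1 crop labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                          # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def estimate_cropped_area(X, w, b, pixel_area_ha):
    """Aggregate per-pixel crop probabilities into a regional CA estimate."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(p.sum() * pixel_area_ha)
```

Summing probabilities rather than thresholded labels is one common way to aggregate; the bias correction described above would then be applied against the unbiased high-resolution estimates.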
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation method with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results of the normalised mean square error showed the superiority of reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
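The truncated-SVD reduced-rank idea can be sketched as follows on a generic least-squares system, not the TD-SCDMA joint channel model itself: invert only the dominant singular values of the system matrix and zero out the weak subspace, which is where noise amplification occurs:

```python
import numpy as np

def reduced_rank_ls(A, y, rank):
    """Reduced-rank least-squares estimate via truncated SVD: invert only
    the `rank` largest singular values of A, discarding the weak subspace
    that would otherwise amplify noise in the LS solution."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]          # singular values are sorted descending
    return Vt.T @ (s_inv * (U.T @ y))
```

With `rank` equal to the full rank this reduces to the ordinary LS (Steiner-like) solution; choosing `rank` near the effective rank of H trades a small modeling error for a large reduction in noise-driven variance.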
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.
2006-01-01
The signal processing aspect of a 2-m wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar) and its signal processing program estimates and displays various wind parameters in real-time as data acquisition occurs. The goal is to improve the quality of the current estimates such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose and its performance is analyzed using the wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimations by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for such deterioration by adaptively utilizing the statistics of Doppler shift estimates in a strong-SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is demonstrated by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.
NASA Astrophysics Data System (ADS)
Husak, G. J.; Marshall, M. T.; Michaelsen, J.; Pedreros, D.; Funk, C.; Galu, G.
2008-07-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country-level. Bias-corrected, texture based classifications show little deviation from actual crop inventories, when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images, with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated, in order to automate classification of medium resolution imagery for routine CA estimation in the future.
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex (random forest technique) modeling technique in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted the high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). Actigraph™ cut-points for light, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
Convective rainfall estimation from digital GOES-1 infrared data
NASA Technical Reports Server (NTRS)
Sickler, G. L.; Thompson, A. H.
1979-01-01
An investigation was conducted to determine the feasibility of developing and objective technique for estimating convective rainfall from digital GOES-1 infrared data. The study area was a 240 km by 240 km box centered on College Station, Texas (Texas A and M University). The Scofield and Oliver (1977) rainfall estimation scheme was adapted and used with the digital geostationary satellite data. The concept of enhancement curves with respect to rainfall approximation is discussed. Raingage rainfall analyses and satellite-derived rainfall estimation analyses were compared. The correlation for the station data pairs (observed versus estimated rainfall amounts) for the convective portion of the storm was 0.92. It was demonstrated that a fairly accurate objective rainfall technique using digital geostationary infrared satellite data is feasible. The rawinsonde and some synoptic data that were used in this investigation came from NASA's Atmospheric Variability Experiment, AVE 7.
An integrated study of earth resources in the state of California using remote sensing techniques
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1975-01-01
The author has identified the following significant results. A weighted stratified double sample design using hardcopy LANDSAT-1 and ground data was utilized in developmental studies for snow water content estimation. Study results gave a correlation coefficient of 0.80 between LANDSAT sample units estimates of snow water content and ground subsamples. A basin snow water content estimate allowable error was given as 1.00 percent at the 99 percent confidence level with the same budget level utilized in conventional snow surveys. Several evapotranspiration estimation models were selected for efficient application at each level of data to be sampled. An area estimation procedure for impervious surface types of differing impermeability adjacent to stream channels was developed. This technique employs a double sample of 1:125,000 color infrared hightflight transparency data with ground or large scale photography.
Validating precision estimates in horizontal wind measurements from a Doppler lidar
Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...
2017-03-30
Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean, errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating themore » precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.« less
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)(sub x), and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as point wise limits of vanishing viscosity solutions. As a byproduct we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
Heart rate estimation from FBG sensors using cepstrum analysis and sensor fusion.
Zhu, Yongwei; Fook, Victor Foo Siang; Jianzhong, Emily Hao; Maniyeri, Jayachandran; Guan, Cuntai; Zhang, Haihong; Jiliang, Eugene Phua; Biswas, Jit
2014-01-01
This paper presents a method of estimating heart rate from arrays of fiber Bragg grating (FBG) sensors embedded in a mat. A cepstral domain signal analysis technique is proposed to characterize Ballistocardiogram (BCG) signals. With this technique, the average heart beat intervals can be estimated by detecting the dominant peaks in the cepstrum, and the signals of multiple sensors can be fused together to obtain higher signal to noise ratio than each individual sensor. Experiments were conducted with 10 human subjects lying on 2 different postures on a bed. The estimated heart rate from BCG was compared with heart rate ground truth from ECG, and the mean error of estimation obtained is below 1 beat per minute (BPM). The results show that the proposed fusion method can achieve promising heart rate measurement accuracy and robustness against various sensor contact conditions.
Male-Female Wage Differentials in the United States.
ERIC Educational Resources Information Center
Kiker, B. F.; Crouch, Henry L.
The primary objective of this paper is to describe a method of estimating female-male wage ratios. The estimating technique presented is two stage least squares (2SLS), in which equations are estimated for both men and women. After specifying and estimating the wage equations, the male-female wage differential is calculated that would remain if…
Practical Methods for Estimating Software Systems Fault Content and Location
NASA Technical Reports Server (NTRS)
Nikora, A.; Schneidewind, N.; Munson, J.
1999-01-01
Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.
Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Morway, E. D.; Healy, R. W.
2016-12-01
Temperature measurements made at multiple depths beneath the sediment-water interface has proven useful for estimating seepage rates from surface-water channels and corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content within the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimate the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow representing four typical sedimentary environments. The recent improvements in heat as a tracer measurement techniques (i.e. multi-depth temperature probe) along with use of modern calibration techniques (i.e., pilot points) provides opportunities for improved calibration of flow models, and, subsequently, improved model predictions.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique, that divides a frame into equilateral triangle blocks using the quad-tree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1 and 2-1 merge/combine of sibling blocks having the same motion vector . In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content, these areas are estimated during the first stage. finally the motion compensation is achieved by using a novel algorithm that is carried by both the encoder and the decoder to determine the optimal triangulation of the resultant partitions followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance that the conventional ones in terms of the peak signal-to-noise ration (PSNR) and the compression ratio (CR).
Unsteady force estimation using a Lagrangian drift-volume approach
NASA Astrophysics Data System (ADS)
McPhaden, Cameron J.; Rival, David E.
2018-04-01
A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.
Use of doubly labeled water technique in soldiers training for jungle warfare
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbes-Ewan, C.H.; Morrissey, B.L.; Gregg, G.C.
1989-07-01
The doubly labeled water method was used to estimate the energy expended by four members of an Australian Army platoon (34 soldiers) engaged in training for jungle warfare. Each subject received an oral isotope dose sufficient to raise isotope levels by 200-250 ({sup 18}O) and 100-120 ppm ({sup 2}H). The experimental period was 7 days. Concurrently, a factorial estimate of the energy expenditure of the platoon was conducted. Also, a food intake-energy balance study was conducted for the platoon. Mean daily energy expenditure by the doubly labeled water method was 4,750 kcal (range 4,152-5,394 kcal). The factorial estimate of meanmore » daily energy expenditure was 4,535 kcal. Because of inherent inaccuracies in the food intake-energy balance technique, we were able to conclude only that energy expenditure, as measured by this method, was greater than the estimated mean daily intake of 4,040 kcal. The doubly labeled water technique was well tolerated, is noninvasive, and appears to be suitable in a wide range of field applications.« less
A Novel Rules Based Approach for Estimating Software Birthmark
Binti Alias, Norma; Anwar, Sajid
2015-01-01
Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363
Contractor Accounting, Reporting and Estimating (CARE).
Contractor Accounting Reporting and Estimating (CARE) provides check lists that may be used as guides in evaluating the accounting system, financial reporting , and cost estimating capabilities of the contractor. Experience gained from the Management Review Technique was used as a basis for the check lists. (Author)
Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates
Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...
Isotopic Techniques for Assessment of Groundwater Discharge to the Coastal Ocean
2003-09-30
estimates of the pore water Rn activity. The red line (based on an average groundwater concentration of 170 dpm/L) is considered our best estimate and...Isotopic Techniques For Assessment of Groundwater Discharge to the Coastal Ocean William C. Burnett Department of Oceanography Florida State...evaluating the influence of submarine groundwater discharge (SGD) into the ocean. Our long-term goal is to develop geochemical tools (e.g., radon and
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-andforward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
Estimation of accuracy of earth-rotation parameters in different frequency bands
NASA Astrophysics Data System (ADS)
Vondrak, J.
1986-11-01
The accuracies of earth-rotation parameters as determined by five different observational techniques now available (i.e., optical astrometry /OA/, Doppler tracking of satellites /DTS/, satellite laser ranging /SLR/, very long-base interferometry /VLBI/ and lunar laser ranging /LLR/) are estimated. The differences between the individual techniques in all possible combinations, separated by appropriate filters into three frequency bands, were used to estimate the accuracies of the techniques for periods from 0 to 200 days, from 200 to 1000 days and longer than 1000 days. It is shown that for polar motion the most accurate results are obtained with VLBI anad SLR, especially in the short-period region; OA and DTS are less accurate, but with longer periods the differences in accuracy are less pronounced. The accuracies of UTI-UTC as determined by OA, VLBI and LLR are practically equivalent, the differences being less than 40 percent.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to derivation of new algorithms for the prediction of future sunspot activity. The ADAPT derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithm for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
Assessment of the dorsal fin spine for chimaeroid (Holocephali: Chimaeriformes) age estimation.
Barnett, L A K; Ebert, D A; Cailliet, G M
2009-10-01
Previous attempts to age chimaeroids have not rigorously tested assumptions of dorsal fin spine growth dynamics. Here, novel imaging and data-analysis techniques revealed that the dorsal fin spine of the spotted ratfish Hydrolagus colliei is an unreliable structure for age estimation. Variation among individuals in the relationship between spine width and distance from the spine tip indicated that the technique of transverse sectioning may impart imprecision and bias to age estimates. The number of growth-band pairs observed by light microscopy in the inner dentine layer was not a good predictor of body size. Mineral density gradients, indicative of growth zones, were absent in the dorsal fin spine of H. colliei, decreasing the likelihood that the bands observed by light microscopy represent a record of growth with consistent periodicity. These results indicate that the hypothesis of aseasonal growth remains plausible and it should not be assumed that chimaeroid age is quantifiable by standard techniques.
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work is described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Abstract. Recently, a class of no-gold-standard (NGS) techniques have been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from F-Fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
Marquez, M-E; Deglesne, P-A; Suarez, G; Romano, E
2011-04-01
The IgV(H) mutational status of B-cell chronic lymphocytic leukemia (B-CLL) is of prognostic value. Expression of ZAP-70 in B-CLL is a surrogate marker for IgV(H) unmutated (UM). As determination of IgV(H) mutational status involves a methodology currently unavailable for most clinical laboratories, it is important to have available a reliable technique for ZAP-70 estimation in B-CLL. Flow cytometry (FC) is a convenient technique for this purpose. However, there is still no adequate way for data analysis, which would prevent the assignment of false positive or negative expression. We have modified the currently most accepted technique, which uses the ratio of the mean fluorescent index (MFI) of B-CLL to T cells. The MFI for parallel antibody isotype staining is subtracted from the ZAP-70 MFI of both B-CLL and T cells. We validated this technique comparing the results obtained for ZAP-70 expression by FC with those obtained with quantitative PCR for the same patients. We applied the technique in a series of 53 patients. With this modification, a better correlation between ZAP-70 expression and IgV(H) UM was obtained. Thus, the MFI ratio B-CLL/T cell corrected by isotype is a reliable analysis technique to estimate ZAP-70 expression in B-CLL. © 2010 Blackwell Publishing Ltd.
Wang, Li-Pen; Ochoa-Rodríguez, Susana; Simões, Nuno Eduardo; Onof, Christian; Maksimović, Cedo
2013-01-01
The applicability of the operational radar and raingauge networks for urban hydrology is insufficient. Radar rainfall estimates provide a good description of the spatiotemporal variability of rainfall; however, their accuracy is in general insufficient. It is therefore necessary to adjust radar measurements using raingauge data, which provide accurate point rainfall information. Several gauge-based radar rainfall adjustment techniques have been developed and mainly applied at coarser spatial and temporal scales; however, their suitability for small-scale urban hydrology is seldom explored. In this paper a review of gauge-based adjustment techniques is first provided. After that, two techniques, respectively based upon the ideas of mean bias reduction and error variance minimisation, were selected and tested using as case study an urban catchment (∼8.65 km(2)) in North-East London. The radar rainfall estimates of four historical events (2010-2012) were adjusted using in situ raingauge estimates and the adjusted rainfall fields were applied to the hydraulic model of the study area. The results show that both techniques can effectively reduce mean bias; however, the technique based upon error variance minimisation can in general better reproduce the spatial and temporal variability of rainfall, which proved to have a significant impact on the subsequent hydraulic outputs. This suggests that error variance minimisation based methods may be more appropriate for urban-scale hydrological applications.
A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories
NASA Technical Reports Server (NTRS)
Bernard, William P.
2018-01-01
Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing, smoothing, estimation, and prediction, sensor fusion, sensor fusion architectures, data association, Bayesian and non Bayesian filtering, the family of Kalman filters, models of the dynamics of the phases of a rocket's flight, and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.
Estimating time-dependent ROC curves using data under prevalent sampling.
Li, Shanshan
2017-04-15
Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd.
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. The problem of state and parameter estimation in the resulting stochastic ordinary differential system is then represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. The upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example of a practical problem in vibrations, parameter, delay upper bound, and state estimation from noise-corrupted measurements is investigated for a distributed force model widely used for modeling machine tool vibrations in the turning operation.
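The state-augmentation idea can be sketched for a simple, non-delayed scalar system: the unknown parameter is appended to the state vector (with constant dynamics) and estimated jointly by an extended Kalman filter. This is a discrete-time toy sketch, not the paper's continuous-time Kalman-Bucy formulation; all names and values are illustrative.

```python
import numpy as np

def ekf_joint_estimate(ys, q=1e-3, r=4e-4):
    """Jointly estimate the state x and unknown parameter a of
    x[k+1] = a*x[k] + w[k], y[k] = x[k] + v[k], by augmenting the
    state vector z = [x, a] and running an extended Kalman filter."""
    z = np.array([ys[0], 0.5])              # initial guess [x, a]
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])              # we observe x only
    for y in ys[1:]:
        x, a = z
        z = np.array([a * x, a])            # propagate augmented state
        F = np.array([[a, x], [0.0, 1.0]])  # Jacobian of the dynamics
        P = F @ P @ F.T + q * np.eye(2)
        S = (H @ P @ H.T)[0, 0] + r
        K = P @ H.T / S                     # Kalman gain
        z = z + (K * (y - z[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return z

rng = np.random.default_rng(0)
a_true, xs = 0.9, [1.0]
for _ in range(300):
    xs.append(a_true * xs[-1] + 0.05 * rng.standard_normal())
ys = np.array(xs) + 0.02 * rng.standard_normal(len(xs))
x_hat, a_hat = ekf_joint_estimate(ys)       # a_hat should approach 0.9
```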
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at lower complexity and represents a good solution for channel estimation.
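A sketch of the inversion-free least-squares idea using QR factorization, one common way to reduce the LS problem to a triangular system; the paper's particular factorization may differ, and the pilot matrix and channel here are illustrative.

```python
import numpy as np

def ls_channel_estimate_qr(X, y):
    """Least-squares channel estimate for X h = y via QR factorization:
    X = QR, then solve the triangular system R h = Q^H y, avoiding
    explicit computation of (X^H X)^{-1}."""
    Q, R = np.linalg.qr(X)
    return np.linalg.solve(R, Q.conj().T @ y)   # R is upper triangular

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))  # pilots
h_true = np.array([1 + 1j, 0.5, -0.2j, 0.1])                        # channel
y = X @ h_true                                                      # noiseless rx
h_hat = ls_channel_estimate_qr(X, y)
```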
Rain Volume Estimation over Areas Using Satellite and Radar Data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Miller, J. R., Jr.; Johnson, L. R.; Vonderhaar, T. H.; Laybe, P.
1984-01-01
The application of satellite data to a recently developed radar technique used to estimate convective rain volumes over areas in a dry environment (the northern Great Plains) is discussed. The area-time integral (ATI) technique provides a means of estimating total rain volumes over fixed and floating target areas of the order of 1,000 to 100,000 km² for clusters lasting 40 min. The basis of the method is the existence of a strong correlation between the area coverage integrated over the lifetime of the storm (the ATI) and the rain volume. One key element in this technique is that it does not require consideration of the structure of the radar intensities inside the area coverage to generate rain volumes, but only considers the rain event per se. This fact might reduce or eliminate some sources of error in applying the technique to satellite data. The second key element is that the ATI, once determined, can be converted to total rain volume by using a constant factor (average rain rate) for a given locale.
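The ATI conversion itself is a one-line computation; a hedged sketch with illustrative numbers:

```python
import numpy as np

def rain_volume_ati(areas_km2, dt_hours, mean_rain_rate_mm_per_h):
    """Area-time integral (ATI) rain volume: integrate echo-area coverage
    over the storm lifetime, then scale by a fixed average rain rate
    (the locale-dependent constant of the ATI technique)."""
    ati = np.sum(np.asarray(areas_km2) * dt_hours)   # km^2 * h
    # mm/h * km^2 * h = mm * km^2 = 1e-3 m * 1e6 m^2 = 1e3 m^3
    return mean_rain_rate_mm_per_h * ati * 1e3       # m^3

# echo areas observed every half hour, illustrative rain rate of 4 mm/h
vol = rain_volume_ati([100.0, 250.0, 150.0], dt_hours=0.5,
                      mean_rain_rate_mm_per_h=4.0)
```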
Is the difference between chemical and numerical estimates of baseflow meaningful?
NASA Astrophysics Data System (ADS)
Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald
2014-05-01
Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. 
The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occurs on the rising limbs of hydrographs followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
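A minimal sketch of one widely used recursive digital filter for baseflow separation (the one-parameter Lyne-Hollick form); the study's exact filter variant and parameter value may differ, and the hydrograph here is illustrative.

```python
def baseflow_filter(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick form):
    quickflow f[k] = alpha*f[k-1] + (1+alpha)/2 * (q[k]-q[k-1]);
    baseflow is streamflow minus quickflow, kept within [0, q[k]].
    The first sample is initialized as all baseflow."""
    f = 0.0
    base = [q[0]]
    for k in range(1, len(q)):
        f = max(alpha * f + 0.5 * (1 + alpha) * (q[k] - q[k - 1]), 0.0)
        base.append(min(max(q[k] - f, 0.0), q[k]))
    return base

# illustrative daily discharge with one storm event
flow = [5, 5, 30, 80, 40, 20, 10, 7, 6, 5]
base = baseflow_filter(flow)
```

In practice the filter is typically run in several forward and backward passes; a single forward pass is shown for clarity.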
Code of Federal Regulations, 2013 CFR
2013-07-01
...” include still photographs, video tapes, and motion pictures. (2) Separation of irrelevant portions... considered in the analysis, the techniques of data collection, the techniques of estimation and testing, and...
Application of split window technique to TIMS data
NASA Technical Reports Server (NTRS)
Matsunaga, Tsuneo; Rokugawa, Shuichi; Ishii, Yoshinori
1992-01-01
Absorptions by the atmosphere in the thermal infrared region are mainly due to water vapor, carbon dioxide, and ozone. As the content of water vapor in the atmosphere changes greatly according to weather conditions, it is important to know its amount between the sensor and the ground (e.g., from radiosonde profiles) for atmospheric corrections of Thermal Infrared Multispectral Scanner (TIMS) data. On the other hand, various atmospheric correction techniques have already been developed for sea surface temperature estimation from satellites. Among such techniques, the split window technique, now widely used for AVHRR (Advanced Very High Resolution Radiometer), uses no radiosonde or other supplementary data, only the difference between observed brightness temperatures in two channels, to estimate atmospheric effects. Application of the split window technique to TIMS data is discussed because the availability of atmospheric profile data when ASTER operates is not assured. After these theoretical discussions, the technique is experimentally applied to TIMS data at three ground targets and the results are compared with atmospherically corrected data using LOWTRAN 7 with radiosonde data.
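The split window correction has the generic two-channel form T_s = T_11 + a(T_11 - T_12) + b; a sketch with illustrative, untuned coefficients (real coefficients are fitted per sensor and atmosphere):

```python
def split_window_surface_temp(t11, t12, a=2.5, b=0.5):
    """Generic split-window correction: surface temperature estimated
    from brightness temperatures (K) in two thermal channels,
    T_s = T11 + a*(T11 - T12) + b. Coefficients a, b here are
    illustrative placeholders, not tuned values."""
    return t11 + a * (t11 - t12) + b

# a moist atmosphere depresses the 12-um channel more than the 11-um one
t_surf = split_window_surface_temp(t11=290.0, t12=288.4)
```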
Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S
2016-03-08
Areas with high frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating the dominant frequency (DF) gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional DF (3D DF) maps during electrophysiology (EP) procedures, but the literature is limited on alternative spectral estimation techniques that can have a better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23 %). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz).
We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences to the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
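A sketch of AR-based dominant-frequency estimation using the Yule-Walker (autocorrelation) method; the sampling rate, model order and test signal here are illustrative, not the clinical settings.

```python
import numpy as np

def ar_dominant_frequency(x, fs, order=8, nfreq=512):
    """Fit an AR model by solving the Yule-Walker equations, then take
    the dominant frequency as the peak of the AR power spectrum
    1 / |1 - sum_k a_k e^{-j w k}|^2 evaluated on a fine grid."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                 # AR coefficients
    freqs = np.linspace(0, fs / 2, nfreq)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    psd = 1.0 / np.abs(denom) ** 2                # shape of the AR spectrum
    return freqs[np.argmax(psd)]

# illustrative "electrogram": 6 Hz oscillation in noise at fs = 150 Hz
fs = 150.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 6.0 * t) \
    + 0.1 * np.random.default_rng(2).standard_normal(len(t))
df = ar_dominant_frequency(sig, fs)
```

The frequency grid can be made arbitrarily fine, which is the resolution advantage over bin-limited FFT spectra noted above.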
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis.
Brassey, Charlotte A; O'Mahoney, Thomas G; Chamberlain, Andrew T; Sellers, William I
2018-02-01
Fossil body mass estimation is a well established practice within the field of physical anthropology. Previous studies have relied upon traditional allometric approaches, in which the relationship between one/several skeletal dimensions and body mass in a range of modern taxa is used in a predictive capacity. The lack of relatively complete skeletons has thus far limited the potential application of alternative mass estimation techniques, such as volumetric reconstruction, to fossil hominins. Yet across vertebrate paleontology more broadly, novel volumetric approaches are resulting in predicted values for fossil body mass very different to those estimated by traditional allometry. Here we present a new digital reconstruction of Australopithecus afarensis (A.L. 288-1; 'Lucy') and a convex hull-based volumetric estimate of body mass. The technique relies upon identifying a predictable relationship between the 'shrink-wrapped' volume of the skeleton and known body mass in a range of modern taxa, and subsequent application to an articulated model of the fossil taxon of interest. Our calibration dataset comprises whole body computed tomography (CT) scans of 15 species of modern primate. The resulting predictive model is characterized by a high correlation coefficient (r² = 0.988) and a percentage standard error of 20%, and performs well when applied to modern individuals of known body mass. Application of the convex hull technique to A. afarensis results in a relatively low body mass estimate of 20.4 kg (95% prediction interval 13.5-30.9 kg). A sensitivity analysis on the articulation of the chest region highlights the sensitivity of our approach to the reconstruction of the trunk, and the incomplete nature of the preserved ribcage may explain the low values for predicted body mass here.
We suggest that the heaviest of previous estimates would require the thorax to be expanded to an unlikely extent, yet this can only be properly tested when more complete fossils are available. Copyright © 2017 Elsevier Ltd. All rights reserved.
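The calibration step of the volumetric technique is a power-law regression of body mass on convex-hull volume, fitted in log-log space. A sketch on synthetic calibration data (not the paper's 15-primate CT dataset), constructed so the prediction lands at the quoted 20.4 kg purely for illustration:

```python
import numpy as np

def fit_hull_mass_model(volumes_m3, masses_kg):
    """Fit the power-law calibration mass = c * volume^b as a linear
    regression in log-log space; returns (c, b)."""
    b, log_c = np.polyfit(np.log(volumes_m3), np.log(masses_kg), 1)
    return np.exp(log_c), b

def predict_mass(volume_m3, c, b):
    return c * volume_m3 ** b

# synthetic calibration set lying exactly on mass = 1000 * V (b = 1)
vols = np.array([0.005, 0.01, 0.02, 0.04])
masses = 1000.0 * vols
c, b = fit_hull_mass_model(vols, masses)
m_fossil = predict_mass(0.0204, c, b)   # hull volume of the fossil model
```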
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
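The regional regression equations have the simple power-law form Q_T = aA^b, with total drainage area A as the sole explanatory variable in most regions; a sketch with placeholder coefficients (not the report's fitted values):

```python
def peak_flow_estimate(area, a, b):
    """Regional regression of the form Q_T = a * A^b, where A is total
    drainage area. The coefficients a and b are illustrative
    placeholders; real values differ by region and recurrence interval."""
    return a * area ** b

q100_small = peak_flow_estimate(50.0, a=900.0, b=0.75)    # 50 mi^2 basin
q100_large = peak_flow_estimate(100.0, a=900.0, b=0.75)   # 100 mi^2 basin
```

With 0 < b < 1, peak flow grows with drainage area but less than proportionally, which is the usual behavior of such equations.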
NASA Technical Reports Server (NTRS)
Tolson, R. H.
1981-01-01
A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. First and second order statistics are derived for each term in a spherical harmonic expansion representing the ozone field, and the statistics are used to estimate systematic and random errors in the estimates of total ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate which depends strongly on the spatial sampling and is relatively insensitive to data noise.
A Technique for Measuring Rotorcraft Dynamic Stability in the 40- by 80-Foot Wind Tunnel
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Bohn, J. G.
1977-01-01
An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.
A NOVEL TECHNIQUE APPLYING SPECTRAL ESTIMATION TO JOHNSON NOISE THERMOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, N Dianne Bull; Britton Jr, Charles L; Roberts, Michael
Johnson noise thermometry (JNT) is one of many important measurements used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is very dependent on the electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate, drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed in this document. Spectral estimation is a key component in the signal processing algorithm utilized for EMI removal and temperature calculation. Applying these techniques requires the simple addition of the electronics and signal processing to existing resistive thermometers.
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques: we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
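A sketch of the exponential-regression half of the procedure: fit y = a·e^(bt) to a degradation indicator and extrapolate to a failure threshold. The RVM basis-selection step is omitted, and the threshold and trend values are illustrative.

```python
import numpy as np

def rul_exponential(times, amplitudes, threshold):
    """Fit an exponential degradation model y = a*exp(b*t) by linear
    regression on log(y), then extrapolate to the failure threshold:
    RUL = t_fail - t_now, with t_fail = (ln(threshold) - ln(a)) / b."""
    b, log_a = np.polyfit(times, np.log(amplitudes), 1)
    t_fail = (np.log(threshold) - log_a) / b
    return t_fail - times[-1]

# synthetic vibration-amplitude trend growing exponentially in time
t = np.arange(0.0, 10.0)
y = 0.5 * np.exp(0.2 * t)
rul = rul_exponential(t, y, threshold=4.0)
```

In an on-line setting the fit is repeated as each new measurement arrives, continuously updating the RUL estimate.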
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements or any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation
NASA Astrophysics Data System (ADS)
Zhai, Wen-Chao; Li, Zan; Si, Jiang-Bo; Bai, Jun
2015-11-01
A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot sequence, or training sequence, to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset using the sinusoid. Theoretical analysis and simulation results indicate that our estimator is noteworthy in the following aspects. First, the estimator can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, and most importantly, our estimator has reduced complexity compared to existing dichotomous search methods: it eliminates the need for fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011), and the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B08038).
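A sketch of the general dichotomous-search idea: a coarse FFT peak is refined by repeatedly halving the search step around the current estimate. This is a generic illustration, not the paper's reduced-complexity estimator (which notably avoids the FFT); signal parameters are illustrative.

```python
import numpy as np

def periodogram_power(x, f, fs):
    """Periodogram power of signal x evaluated at a single frequency f."""
    n = np.arange(len(x))
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f * n / fs))) ** 2

def dichotomous_freq_estimate(x, fs, iters=30):
    """Coarse FFT-bin peak followed by a dichotomous search: at each
    step the search interval is halved and the best of {f-step, f,
    f+step} is kept, refining f well below one FFT bin."""
    N = len(x)
    k = np.argmax(np.abs(np.fft.rfft(x)))
    f, step = k * fs / N, fs / N
    for _ in range(iters):
        step /= 2
        f = max((f - step, f, f + step),
                key=lambda c: periodogram_power(x, c, fs))
    return f

fs, f0 = 1000.0, 123.4               # true offset lies between FFT bins
n = np.arange(256)
x = np.cos(2 * np.pi * f0 * n / fs)
f_hat = dichotomous_freq_estimate(x, fs)
```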
Power system frequency estimation based on an orthogonal decomposition method
NASA Astrophysics Data System (ADS)
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
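The sliding DFT recurrence at the heart of such schemes updates one frequency bin per sample, X ← (X + x[n] − x[n−N])·e^(j2πk/N); a sketch verifying it against a direct DFT of the most recent window (bin index, window length and signal are illustrative):

```python
import numpy as np

def sliding_dft_bin(x, N, k):
    """Sliding DFT of bin k over a length-N window: per-sample update
    X <- (X + x[n] - x[n-N]) * W with W = exp(j*2*pi*k/N). Returns the
    bin value after each sample (samples before n=0 treated as zero)."""
    W = np.exp(2j * np.pi * k / N)
    X, out = 0.0, []
    for n in range(len(x)):
        x_old = x[n - N] if n >= N else 0.0
        X = (X + x[n] - x_old) * W
        out.append(X)
    return np.array(out)

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
N, k = 16, 3
X_sliding = sliding_dft_bin(x, N, k)[-1]
# direct DFT of the last N samples for comparison
X_direct = np.sum(x[-N:] * np.exp(-2j * np.pi * k * np.arange(N) / N))
```

Each update costs O(1) per tracked bin, which is why sliding transforms suit continuous power-system monitoring.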
Modified ADALINE algorithm for harmonic estimation and selective harmonic elimination in inverters
NASA Astrophysics Data System (ADS)
Vasumathi, B.; Moorthi, S.
2011-11-01
In digital signal processing, algorithms for the estimation of harmonic components are very well developed. In power electronic applications, objectives such as fast system response are of primary importance. An effective method for the estimation of instantaneous harmonic components, along with a conventional harmonic elimination technique, is presented in this article. The primary function is to eliminate undesirable higher harmonic components from the selected signal (current or voltage), and it requires only knowledge of the frequency of the component to be eliminated. A signal processing technique using a modified ADALINE algorithm has been proposed for harmonic estimation. The proposed method remains effective, converging to a minimum error and yielding a finer estimate. A conventional control based on pulse width modulation for selective harmonic elimination is used to eliminate harmonic components after their estimation. This method can be applied to a wide range of equipment. The validity of the proposed method to estimate and eliminate voltage harmonics is proved with a dc/ac inverter as a simulation example. Then, the results are compared with the existing ADALINE algorithm to illustrate its effectiveness.
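A sketch of the basic (unmodified) ADALINE harmonic estimator that the article builds on: sin/cos inputs at the known harmonic frequencies, Widrow-Hoff (LMS) weight updates, and amplitudes read off the converged weights. All signal values and learning parameters are illustrative.

```python
import numpy as np

def adaline_harmonics(signal, t, freqs, lr=0.05, epochs=5):
    """LMS-trained ADALINE: inputs are sin/cos pairs at the known
    harmonic frequencies, so each harmonic's learned (sin, cos) weight
    pair gives its in-phase and quadrature amplitude."""
    X = np.column_stack([f(2 * np.pi * fr * t)
                         for fr in freqs for f in (np.sin, np.cos)])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, signal):
            w += lr * (yi - xi @ w) * xi      # Widrow-Hoff update
    return np.hypot(w[0::2], w[1::2])         # amplitude per harmonic

# fundamental (50 Hz, amplitude 2.0) plus 5th harmonic (250 Hz, 0.4)
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
y = 2.0 * np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 250 * t)
amps = adaline_harmonics(y, t, freqs=[50.0, 250.0])
```

Once the 250 Hz amplitude and phase are known, the selective-elimination stage can synthesize a cancelling PWM reference for that harmonic alone.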
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
A forester's look at the application of image manipulation techniques to multitemporal Landsat data
NASA Technical Reports Server (NTRS)
Williams, D. L.; Stauffer, M. L.; Leung, K. C.
1979-01-01
Registered, multitemporal Landsat data of a study area in central Pennsylvania were analyzed to detect and assess changes in the forest canopy resulting from insect defoliation. Images taken July 19, 1976, and June 27, 1977, were chosen specifically to represent forest canopy conditions before and after defoliation, respectively. Several image manipulation and data transformation techniques, developed primarily for estimating agricultural and rangeland standing green biomass, were applied to these data. The applicability of each technique for estimating the severity of forest canopy defoliation was then evaluated. All techniques tested had highly correlated results. In all cases, heavy defoliation was discriminated from healthy forest. Areas of moderate defoliation were confused with healthy forest on northwest (NW) aspects, but were distinct from healthy forest conditions on southeast (SE)-facing slopes.
Methods for trend analysis: Examples with problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1989-01-01
Statistics play an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of this working handbook, using data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard and some different techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
Real-time shear velocity imaging using sonoelastographic techniques.
Hoyt, Kenneth; Parker, Kevin J; Rubens, Deborah J
2007-07-01
In this paper, a novel sonoelastographic technique for estimating local shear velocities from propagating shear wave interference patterns (termed crawling waves) is introduced. A relationship between the local crawling wave spatial phase derivatives and the local shear wave velocity is derived, with phase derivatives estimated using an autocorrelation technique. Results from homogeneous phantoms demonstrate the ability of sonoelastographic shear velocity imaging to quantify the true underlying shear velocity distributions, as verified using time-of-flight measurements. Heterogeneous phantom results reveal the capacity for lesion detection and shear velocity quantification, as validated by mechanical measurements on phantom samples. Experimental results obtained from a prostate specimen illustrate the feasibility of shear velocity imaging in tissue. More importantly, high-contrast visualization of focal carcinomas demonstrates the clinical potential of this novel sonoelastographic imaging technique.
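A sketch of the phase-derivative idea under the common crawling-wave idealization that the interference pattern's spatial frequency is twice the shear wavenumber, so v_s = 2ω/(dφ/dx). The paper derives its own relation and uses autocorrelation-based phase estimates; names and values here are illustrative.

```python
import numpy as np

def shear_velocity_from_phase(phase, dx, omega):
    """Local shear velocity from the spatial phase gradient of the
    crawling-wave interference pattern, assuming the pattern's spatial
    frequency is twice the shear wavenumber: v_s = 2*omega/(d phi/dx)."""
    dphi_dx = np.gradient(np.unwrap(phase), dx)
    return 2 * omega / dphi_dx

omega = 2 * np.pi * 200.0            # 200 Hz vibration sources
v_true = 3.0                         # m/s, homogeneous medium
x = np.linspace(0, 0.05, 200)        # 5 cm lateral field of view
phase = (2 * omega / v_true) * x     # ideal crawling-wave phase profile
v_est = shear_velocity_from_phase(phase, x[1] - x[0], omega)
```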
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to unbounded amplification of the high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates are proposed of Hölder and logarithmic type, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
A technology roadmap of smart biosensors from conventional glucose monitoring systems.
Shende, Pravin; Sahu, Pratiksha; Gaud, Ram
2017-06-01
The objective of this review article is to focus on the technology roadmap of smart biosensors from a conventional glucose monitoring system. The estimation of glucose with commercially available devices involves analysis of blood samples that are obtained by pricking the finger or extracting blood from the forearm. Since pain and discomfort are associated with invasive methods, non-invasive measurement techniques have been investigated. The non-invasive methods show advantages such as non-exposure to sharp objects like needles and syringes, leading to increased testing frequency, improved control of glucose concentration, and absence of pain and biohazard materials. This review is aimed at describing recent invasive techniques and major non-invasive techniques, viz. biosensors, optical techniques and sensor-embedded contact lenses, for glucose estimation.
Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar
NASA Astrophysics Data System (ADS)
Lottman, Brian Todd
1998-09-01
This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid-state coherent Doppler lidar on a fixed ground-based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate of the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of the traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer-simulated data. Wind field statistics are produced from actual data for a cloud deck and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates of the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates of simple wind field statistics between cloud layers.
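The baseline spectral approach — estimating mean velocity from the first moment of the Doppler power spectrum — can be sketched as follows; this is a generic periodogram first-moment estimator, and the sampling rate, wavelength, and noise level are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def mean_velocity(signal, fs, wavelength):
    """Mean radial velocity in a range gate from the first moment of
    the Doppler power spectrum (a standard periodogram estimator;
    parameter names and numbers are ours, not the thesis's)."""
    spec = np.abs(np.fft.fft(signal)) ** 2
    freqs = np.fft.fftfreq(len(signal), d=1.0 / fs)
    f_mean = np.sum(freqs * spec) / np.sum(spec)   # spectral centroid
    return wavelength * f_mean / 2.0               # v = lambda * f_D / 2

# simulated gate: tone at the Doppler shift plus weak complex noise
rng = np.random.default_rng(0)
fs, wavelength, v_true = 50e6, 2.0e-6, 5.0         # 2-um lidar, 5 m/s
f_d = 2 * v_true / wavelength                      # 5 MHz Doppler shift
n = 250                                            # f_d falls on an exact bin
t = np.arange(n) / fs
sig = np.exp(2j * np.pi * f_d * t) + 0.05 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))
v_est = mean_velocity(sig, fs, wavelength)
print(round(v_est, 2))
```

Estimators of this kind are the "traditional" class the thesis refers to: they work well when backscatter and velocity vary slowly across the sensing volume, and degrade in the strongly inhomogeneous cloud conditions that motivate the new estimators.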
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g., cameras, traps) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
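The core bias can be shown with a toy simulation: animals whose movement ranges overlap a detector grid inflate a naive count-per-area estimate, and a classic boundary-strip correction (expanding the sampled area by the movement radius) removes most of the bias. This is an illustrative sketch, not one of the paper's estimators, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# animal activity centers in a large region; a square camera grid of
# side `grid` sits in the middle; r_move is the movement radius (assumed)
density_true = 0.5                       # animals per unit area
region, grid, r_move = 40.0, 10.0, 1.5
n_animals = rng.poisson(density_true * region ** 2)
centers = rng.uniform(0, region, size=(n_animals, 2))

lo, hi = (region - grid) / 2, (region + grid) / 2
# an animal is countable if its center lies within r_move of the grid
in_x = (centers[:, 0] > lo - r_move) & (centers[:, 0] < hi + r_move)
in_y = (centers[:, 1] > lo - r_move) & (centers[:, 1] < hi + r_move)
n_detected = int(np.sum(in_x & in_y))

naive = n_detected / grid ** 2                     # ignores movement: biased high
corrected = n_detected / (grid + 2 * r_move) ** 2  # boundary-strip correction
print(round(naive, 2), round(corrected, 2))
```

The larger the movement radius relative to the grid, the worse the naive estimate — which is the paper's point that detector spacing and area covered must reflect the focal species' movement scale.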
The scale invariant generator technique for quantifying anisotropic scale invariance
NASA Astrophysics Data System (ADS)
Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.
1999-11-01
Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.
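As a baseline for such analyses, the isotropic scaling exponent β of a field's power spectrum, E(k) ~ k^(-β), can be estimated by ring-averaging the 2-D spectrum and fitting in log-log coordinates. This sketch is only the isotropic diagnostic — SIG itself additionally estimates the generator of the anisotropy — and the field synthesis and fit range are our assumptions:

```python
import numpy as np

def spectral_slope(field):
    """Isotropic power-spectrum slope beta, per-mode power ~ k**(-beta),
    of a 2-D field via ring-averaged power and a log-log fit."""
    n = field.shape[0]
    p = np.abs(np.fft.fft2(field)) ** 2
    kx = np.fft.fftfreq(n) * n
    kk = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
    k_int = np.rint(kk).astype(int)
    kmax = n // 2
    sums = np.bincount(k_int.ravel(), weights=p.ravel())
    counts = np.bincount(k_int.ravel())
    k = np.arange(1, kmax)
    e = sums[1:kmax] / counts[1:kmax]      # ring-averaged power
    mask = k >= 4                          # skip ring-discretization bias
    return -np.polyfit(np.log(k[mask]), np.log(e[mask]), 1)[0]

# synthesize an isotropic field with a known per-mode slope
rng = np.random.default_rng(2)
n, beta_true = 256, 3.0
kx = np.fft.fftfreq(n) * n
kk = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
kk[0, 0] = 1.0                             # avoid division by zero at k = 0
amp = kk ** (-beta_true / 2)               # per-mode power ~ k**(-beta)
phases = np.exp(2j * np.pi * rng.uniform(size=(n, n)))
field = np.real(np.fft.ifft2(amp * phases))
beta_est = spectral_slope(field)
print(round(beta_est, 1))
```

For an anisotropic field the ring average mixes directions, which is precisely the limitation GSI and the SIG technique are designed to overcome.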
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy
2013-01-01
Background: Errors in address geocodes may affect estimates of the effects of air pollution on health. Objective: We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. Methods: We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant’s address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Results: Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: –0.56, –6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: –0.14, –3.24). 
The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted, –2.81 (95% CI: –0.26, –5.35) using NavTEQ, or –2.08 (95% CI: –4.63, 0.47; p = 0.11) using Google Maps]. Conclusions: Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model. Citation: Jacquemin B, Lepeule J, Boudier A, Arnould C, Benmerad M, Chappaz C, Ferran J, Kauffmann F, Morelli X, Pin I, Pison C, Rios I, Temam S, Künzli N, Slama R, Siroux V. 2013. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function. Environ Health Perspect 121:1054–1060; http://dx.doi.org/10.1289/ehp.1206016 PMID:23823697
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.
1995-01-01
The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
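The inverse-error-variance weighting step can be sketched in a few lines; the rain amounts and error variances below are made up for illustration, not values from the paper:

```python
import numpy as np

def combine(estimates, error_vars):
    """Combine precipitation estimates for one grid cell by weighting
    each field with its inverse error variance (the satellite-gauge
    combination step; numbers below are illustrative only)."""
    w = 1.0 / np.asarray(error_vars)
    return np.sum(w * np.asarray(estimates), axis=0) / np.sum(w, axis=0)

# one grid cell: multisatellite estimate vs. gauge analysis (mm/month)
multisat, gauge = 120.0, 100.0
var_multisat, var_gauge = 400.0, 100.0     # gauge assumed 4x more precise
print(combine([multisat, gauge], [var_multisat, var_gauge]))  # → 104.0
```

The combined value sits closer to the lower-variance field, which is why, as the abstract notes, imperfect relative-error estimates are the method's primary limitation: misjudged variances shift the weights directly.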
Comparisons of Monthly Oceanic Rainfall Derived from TMI and SSM/I
NASA Technical Reports Server (NTRS)
Chang, A. T. C.; Chiu, L. S.; Meng, J.; Wilheit, T. T.; Kummerow, C. D.
1999-01-01
A technique for estimating monthly oceanic rainfall rate using multi-channel microwave measurements has been developed. The algorithm has three prominent features. First, knowledge of the form of the rainfall intensity probability density function is used to augment the measurements. Second, a linear combination of the 19.35 and 22.235 GHz channels is used to de-emphasize the effect of water vapor. Third, an objective technique has been developed to estimate the rain layer thickness from the 19.35 and 22.235 GHz brightness temperature histograms. This technique has been applied to SSM/I data since 1987 to infer monthly rainfall for the Global Precipitation Climatology Project (GPCP). A modified version of this algorithm is now being applied to the TRMM Microwave Imager (TMI) data. TMI data, with better spatial resolution and 24-hour sampling (vs. sun-synchronous sampling, which is limited to two narrow intervals of local solar time for DMSP satellites), prompt us to study the similarities and differences between these two rainfall estimates. Six months of rainfall data (January to June 1998) are used in this study. Means and standard deviations are calculated. Paired Student's t-tests are administered to evaluate the differences between rainfall estimates from SSM/I and TMI data. Their differences are discussed in the context of global satellite rainfall estimation.
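The paired t-test comparison can be sketched as follows; the rain-rate values are synthetic stand-ins (the real comparison would pair collocated monthly grid-box estimates from the two instruments):

```python
import numpy as np

# paired monthly rain-rate estimates (mm/day) for the same ocean grid
# boxes from the two instruments -- values are synthetic, for illustration
rng = np.random.default_rng(3)
ssmi = rng.gamma(shape=2.0, scale=1.5, size=200)       # SSM/I estimates
tmi = ssmi * 1.05 + rng.normal(0.0, 0.2, size=200)     # TMI with a 5% wet bias

# paired Student's t-test on the per-box differences
d = tmi - ssmi
t_stat = np.mean(d) / (np.std(d, ddof=1) / np.sqrt(len(d)))
print(f"mean difference = {np.mean(d):.2f} mm/day, t = {t_stat:.1f}")
# |t| > 2.6 at 199 degrees of freedom implies p < 0.01
# (scipy.stats.ttest_rel would return the exact p-value)
```

Pairing by grid box removes the large box-to-box rainfall variability from the test, so even a small systematic instrument difference becomes detectable.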
Control algorithms for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Ward, Donald T.; Shipley, Buford W., Jr.
1991-01-01
The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of the patches, however, change the geometry and material properties of the structure and involve unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
A three-microphone acoustic reflection technique using transmitted acoustic waves in the airway.
Fujimoto, Yuki; Huang, Jyongsu; Fukunaga, Toshiharu; Kato, Ryo; Higashino, Mari; Shinomiya, Shohei; Kitadate, Shoko; Takahara, Yutaka; Yamaya, Atsuyo; Saito, Masatoshi; Kobayashi, Makoto; Kojima, Koji; Oikawa, Taku; Nakagawa, Ken; Tsuchihara, Katsuma; Iguchi, Masaharu; Takahashi, Masakatsu; Mizuno, Shiro; Osanai, Kazuhiro; Toga, Hirohisa
2013-10-15
The acoustic reflection technique noninvasively measures airway cross-sectional area as a function of distance and uses a wave tube with a constant cross-sectional area to separate incident and reflected waves introduced into the mouth or nostril. The accuracy of the estimated cross-sectional areas degrades with distance due to the nature of marching algorithms, i.e., errors in the estimated areas at closer distances accumulate into those at farther distances. Here we present a new acoustic reflection technique based on measuring transmitted acoustic waves in the airway with three microphones and without employing a wave tube. Using miniaturized microphones mounted on a catheter, we estimated reflection coefficients among the microphones and separated incident and reflected waves. A model study showed that the estimated cross-sectional area vs. distance function was coincident with that of the conventional two-microphone method and did not change with altered cross-sectional areas at the microphone position, although the estimated cross-sectional areas are relative to that at the microphone position. The pharyngeal cross-sectional areas, including the retropalatal and retroglossal regions, and the closing site during sleep were visualized in patients with obstructive sleep apnea. The method can be applied to larger or smaller bronchi to evaluate the airspace and function in these localized airways.
Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I
2012-12-21
A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two-week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the biophysics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
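The inverse-calorimetry idea can be sketched with a single-node energy balance — recover the heating rate Q(t) from the measured temperature by inverting mc·dT/dt = Q − hA·(T − T_amb). This lumped balance and the mc, hA values are our simplified stand-in for the paper's fuller conservation-of-energy model:

```python
import numpy as np

def inverse_calorimetry(T, T_amb, dt, mc, hA):
    """Recover the metabolic heating rate Q(t) [W] from cone temperature
    T(t) using a lumped energy balance  mc*dT/dt = Q - hA*(T - T_amb).
    (Single-node simplification; mc and hA values are assumptions.)"""
    dTdt = np.gradient(T, dt)
    return mc * dTdt + hA * (T - T_amb)

# synthetic thermogenic event: forward-simulate with a known Q, then invert
mc, hA, dt = 150.0, 0.4, 60.0            # J/K, W/K, 60 s samples (assumed)
t = np.arange(0.0, 24 * 3600, dt)
Q_true = 1.2 * np.exp(-((t - 12 * 3600) ** 2) / (2 * (2 * 3600) ** 2))  # midday pulse, W
T_amb = 25.0
T = np.empty(len(t))
T[0] = T_amb
for i in range(1, len(t)):               # explicit Euler forward model
    T[i] = T[i - 1] + dt / mc * (Q_true[i - 1] - hA * (T[i - 1] - T_amb))
Q_est = inverse_calorimetry(T, T_amb, dt, mc, hA)
print(round(float(np.max(Q_est)), 2), round(float(np.max(Q_true)), 2))
```

Because only temperatures enter the inversion, this mirrors the technique's practical appeal: no oxygen-consumption measurement is needed, only a temperature logger and the fitted model parameters.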
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background The currently used fetal monitoring instrumentation based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of the clinically important variability of the FHR signal. The aim of our work was to develop a novel efficient technique for processing the ultrasound signal which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists of three steps: dynamic adjustment of the autocorrelation window, adaptive autocorrelation peak detection, and determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing the variability of FHR were compared to reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables the calculation of reliable parameters describing the variability of FHR. Comparing these results with the other FHR estimation method, we showed that our approach rejected a much lower number of measured cardiac cycles as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of heart interval measurement, enabling reliable quantitative assessment of FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
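The core autocorrelation step — locating the dominant periodicity of the Doppler envelope inside a physiological lag range — can be sketched as follows; the paper's method adds window adjustment and beat-to-beat tracking on top of this, and the rate limits and signal model here are assumptions:

```python
import numpy as np

def beat_interval(envelope, fs, min_bpm=90, max_bpm=240):
    """Estimate cardiac cycle duration from a Doppler envelope by
    locating the autocorrelation peak inside the physiological lag
    range (basic step only; limits are assumptions, not the paper's)."""
    x = envelope - np.mean(envelope)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(fs * 60.0 / max_bpm)           # shortest plausible period
    hi = int(fs * 60.0 / min_bpm)           # longest plausible period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs                          # seconds per beat

# synthetic envelope: 140 bpm periodicity plus noise
fs, bpm = 1000.0, 140.0
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(4)
env = np.abs(np.sin(np.pi * t * bpm / 60.0)) + 0.2 * rng.standard_normal(len(t))
interval = beat_interval(env, fs)
print(round(interval, 3))                   # ~0.43 s, i.e. 140 bpm
```

Restricting the search to a plausible lag range is what keeps the estimator from locking onto harmonics or noise, and the adaptive peak detection in the paper refines exactly this step on a beat-by-beat basis.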
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains. A new behavior was observed with the b-scan analysis technique, in which the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. Compared with previous results, the new technique was found to be much more reliable and to offer higher contrast than previously possible with impulse excitation.
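The Hilbert-transform step — extracting the envelope of each b-scan trace so the surface-wave arrival can be timed — can be sketched as follows. The defocus geometry is simplified to a plain two-trace time-of-flight measurement, and the sampling rate, burst shape, and path length are illustrative assumptions, not the paper's values:

```python
import numpy as np

def envelope(x):
    """Envelope via the analytic signal: FFT-based Hilbert transform
    (the same construction scipy.signal.hilbert uses)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def arrival_time(x, fs):
    return np.argmax(envelope(x)) / fs     # envelope-peak arrival, s

# two traces whose surface-wave arrival is shifted by a known delay
fs, f0 = 500e6, 50e6                       # 500 MS/s sampling, 50 MHz burst
t = np.arange(0, 2e-6, 1 / fs)
def burst(t0):
    return np.exp(-(((t - t0) / 50e-9) ** 2)) * np.cos(2 * np.pi * f0 * (t - t0))
d = 300e-6                                 # 300 um extra surface path (assumed)
v_true = 3000.0                            # m/s, typical RSW speed in glass
trace1 = burst(0.5e-6)
trace2 = burst(0.5e-6 + d / v_true)
v_est = d / (arrival_time(trace2, fs) - arrival_time(trace1, fs))
print(round(v_est))                        # → 3000
```

Timing the envelope peak rather than a zero crossing makes the arrival pick insensitive to the carrier phase, which is part of why envelope-based b-scan processing is more robust than the two-point defocus comparison it replaces.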