Sample records for system general estimates

  1. Traffic safety facts 1997 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1998-11-01

    In this annual report, Traffic Safety Facts 1997: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  2. Traffic safety facts 2007 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2007-01-01

    In this annual report, Traffic Safety Facts 2007: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descript...

  3. Traffic safety facts 2008 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2008-01-01

    In this annual report, Traffic Safety Facts 2008: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  4. Traffic safety facts 2009 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2009-01-01

    In this annual report, Traffic Safety Facts 2009: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...

  5. Fatality Analysis Reporting System, General Estimates System: 2001 Data Summary.

    ERIC Educational Resources Information Center

    2003

    The Fatality Analysis Reporting System (FARS), which became operational in 1975, contains data on a census of fatal traffic crashes within the 50 states, the District of Columbia, and Puerto Rico. The General Estimates System (GES), which began in 1988, provides data from a nationally representative probability sample selected from all…

  6. General theory of remote gaze estimation using the pupil center and corneal reflections.

    PubMed

    Guestrin, Elias Daniel; Eizenman, Moshe

    2006-06-01

    This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.

  7. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program was specifically designed for the evaluation of fault-tolerant avionics systems; however, CARE III is general enough for use in evaluating other systems as well.

  8. Optimal reference polarization states for the calibration of general Stokes polarimeters in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui

    2018-07-01

    During the calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and the pseudo-inversion estimation method, the measured intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. An analytical expression for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
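
    A minimal sketch of the pseudo-inversion calibration step the abstract describes: the system matrix is recovered from intensity measurements of known reference polarization states. The optimal RPS selection of the paper is not reproduced; all matrices, noise levels, and names below are assumed for illustration.

```python
# Hedged sketch (not the paper's method): estimate a 4x4 Stokes-polarimeter system
# matrix W from intensities measured for known reference polarization states (RPSs).
import numpy as np

rng = np.random.default_rng(0)

K = 12                                              # number of reference states (K >= 4)
S_ref = np.vstack([np.ones(K),                      # S0 component
                   rng.uniform(-1, 1, size=(3, K))])  # S1..S3 (illustrative values)

W_true = rng.normal(size=(4, 4))                    # unknown system matrix (simulated)
I_meas = W_true @ S_ref                             # ideal channel intensities
I_meas += 0.01 * rng.normal(size=I_meas.shape)      # additive Gaussian measurement noise

# Pseudo-inverse estimate of the system matrix from the RPSs.
W_hat = I_meas @ np.linalg.pinv(S_ref)

print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))
```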

  9. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Wie, N.H.

    An overview of the UCC-ND system for computer-aided cost estimating is provided. The program is generally utilized in the preparation of construction cost estimates for projects costing $25,000,000 or more. The advantages of the system to the manager and the estimator are discussed, and examples of the product are provided. 19 figures, 1 table.

  11. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  12. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  13. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  14. Control of Distributed Parameter Systems

    DTIC Science & Technology

    1990-08-01

    ... variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a ... A unified approximation framework for parameter estimation in general linear PDE models has been completed ... unified approximation framework for parameter estimation in general linear PDE models. This framework has provided the theoretical basis for a number of ...

  15. Spacecraft drag-free technology development: On-board estimation and control synthesis

    NASA Technical Reports Server (NTRS)

    Key, R. W.; Mettler, E.; Milman, M. H.; Schaechter, D. B.

    1982-01-01

    Estimation and control methods for a Drag-Free spacecraft are discussed. The functional and analytical synthesis of on-board estimators and controllers for an integrated attitude and translation control system is represented. The framework for detail definition and design of the baseline drag-free system is created. The techniques for solution of self-gravity and electrostatic charging problems are applicable generally, as is the control system development.

  16. Community BMI Surveillance Using an Existing Immunization Registry in San Diego, California.

    PubMed

    Ratigan, Amanda R; Lindsay, Suzanne; Lemus, Hector; Chambers, Christina D; Anderson, Cheryl A M; Cronan, Terry A; Browner, Deirdre K; Wooten, Wilma J

    2017-06-01

    This study examines the demographic representativeness of the County of San Diego Body Mass Index (BMI) Surveillance System to determine if the BMI estimates being obtained from this convenience sample of individuals who visited their healthcare provider for outpatient services can be generalized to the general population of San Diego. Height and weight were transmitted from electronic health records systems to the San Diego Immunization Registry (SDIR). Age, gender, and race/ethnicity of this sample are compared to general population estimates by sub-regional area (SRA) (n = 41) to account for regional demographic differences. A < 10% difference (calculated as the ratio of the differences between the frequencies of a sub-group in this sample and general population estimates obtained from the U.S. Census Bureau) was used to determine representativeness. In 2011, the sample consisted of 352,924 residents aged 2-100 years. The younger age groups (2-11, 12-17 years) and the oldest age group (≥65 years) were representative in 90, 75, and 85% of SRAs, respectively. Furthermore, at least one of the five racial/ethnic groups was represented in 71% of SRAs. This BMI Surveillance System was found to demographically represent some SRAs well, suggesting that this registry-based surveillance system may be useful in estimating and monitoring neighborhood-level BMI data.
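
    A small arithmetic sketch of the representativeness check described above, under the assumption that the "< 10% difference" criterion is the relative difference between a subgroup's share in the registry sample and in the census estimate. The proportions below are made up for illustration.

```python
# Hedged sketch of a <10% relative-difference representativeness criterion.
def is_representative(p_sample: float, p_census: float, threshold: float = 0.10) -> bool:
    """Return True if the relative difference between sample and census shares is below the threshold."""
    return abs(p_sample - p_census) / p_census < threshold

print(is_representative(p_sample=0.18, p_census=0.20))  # 10% relative difference -> False
print(is_representative(p_sample=0.19, p_census=0.20))  # 5% relative difference  -> True
```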

  17. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems consisting of M statistically independent distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.

  18. Developing a Fundamental Model for an Integrated GPS/INS State Estimation System with Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Canfield, Stephen

    1999-01-01

    This work will demonstrate the integration of sensor and system dynamic data and their appropriate models using an optimal filter to create a robust, adaptable, easily reconfigurable state (motion) estimation system. This state estimation system will clearly show the application of fundamental modeling and filtering techniques. These techniques are presented at a general, first principles level, that can easily be adapted to specific applications. An example of such an application is demonstrated through the development of an integrated GPS/INS navigation system. This system acquires both global position data and inertial body data, to provide optimal estimates of current position and attitude states. The optimal states are estimated using a Kalman filter. The state estimation system will include appropriate error models for the measurement hardware. The results of this work will lead to the development of a "black-box" state estimation system that supplies current motion information (position and attitude states) that can be used to carry out guidance and control strategies. This black-box state estimation system is developed independent of the vehicle dynamics and therefore is directly applicable to a variety of vehicles. Issues in system modeling and application of Kalman filtering techniques are investigated and presented. These issues include linearized models of equations of state, models of the measurement sensors, and appropriate application and parameter setting (tuning) of the Kalman filter. The general model and subsequent algorithm is developed in Matlab for numerical testing. The results of this system are demonstrated through application to data from the X-33 Michael's 9A8 mission and are presented in plots and simple animations.
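
    A minimal illustration of the Kalman-filter state estimation idea sketched above, reduced to a one-dimensional constant-velocity model fused with GPS-like position measurements. This is not the report's GPS/INS implementation; the model, noise levels, and data are assumed for illustration.

```python
# Hedged sketch: 1-D Kalman filter fusing a constant-velocity model with noisy position fixes.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 0.01 * np.eye(2)                      # process-noise covariance (tuning parameter)
R = np.array([[4.0]])                     # measurement-noise covariance (GPS-like)

x = np.array([0.0, 0.0])                  # state estimate
P = np.eye(2)                             # estimate covariance

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 1.0
for k in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(scale=2.0)  # noisy position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print("final estimate [pos, vel]:", x)
```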

  19. Development of a technique for estimating noise covariances using multiple observers

    NASA Technical Reports Server (NTRS)

    Bundick, W. Thomas

    1988-01-01

    Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.

  20. Potential effects of climate change on ground water in Lansing, Michigan

    USGS Publications Warehouse

    Croley, T.E.; Luukkonen, C.L.

    2003-01-01

    Computer simulations involving general circulation models, a hydrologic modeling system, and a ground water flow model indicate potential impacts of selected climate change projections on ground water levels in the Lansing, Michigan, area. General circulation models developed by the Canadian Climate Centre and the Hadley Centre generated meteorology estimates for 1961 through 1990 (as a reference condition) and for the 20 years centered on 2030 (as a changed climate condition). Using these meteorology estimates, the Great Lakes Environmental Research Laboratory's hydrologic modeling system produced corresponding period streamflow simulations. Ground water recharge was estimated from the streamflow simulations and from variables derived from the general circulation models. The U.S. Geological Survey developed a numerical ground water flow model of the Saginaw and glacial aquifers in the Tri-County region surrounding Lansing, Michigan. Model simulations, using the ground water recharge estimates, indicate changes in ground water levels. Within the Lansing area, simulated ground water levels in the Saginaw aquifer declined under the Canadian predictions and increased under the Hadley.

  1. Administrative Costs Associated With Physician Billing and Insurance-Related Activities at an Academic Health Care System.

    PubMed

    Tseng, Phillip; Kaplan, Robert S; Richman, Barak D; Shah, Mahek A; Schulman, Kevin A

    2018-02-20

    Administrative costs in the US health care system are an important component of total health care spending, and a substantial proportion of these costs are attributable to billing and insurance-related activities. To examine and estimate the administrative costs associated with physician billing activities in a large academic health care system with a certified electronic health record system. This study used time-driven activity-based costing. Interviews were conducted with 27 health system administrators and 34 physicians in 2016 and 2017 to construct a process map charting the path of an insurance claim through the revenue cycle management process. These data were used to calculate the cost for each major billing and insurance-related activity and were aggregated to estimate the health system's total cost of processing an insurance claim. Estimated time required to perform billing and insurance-related activities, based on interviews with management personnel and physicians. Estimated billing and insurance-related costs for 5 types of patient encounters: primary care visits, discharged emergency department visits, general medicine inpatient stays, ambulatory surgical procedures, and inpatient surgical procedures. Estimated processing time and total costs for billing and insurance-related activities were 13 minutes and $20.49 for a primary care visit, 32 minutes and $61.54 for a discharged emergency department visit, 73 minutes and $124.26 for a general inpatient stay, 75 minutes and $170.40 for an ambulatory surgical procedure, and 100 minutes and $215.10 for an inpatient surgical procedure. Of these totals, time and costs for activities carried out by physicians were estimated at a median of 3 minutes or $6.36 for a primary care visit, 3 minutes or $10.97 for an emergency department visit, 5 minutes or $13.29 for a general inpatient stay, 15 minutes or $51.20 for an ambulatory surgical procedure, and 15 minutes or $51.20 for an inpatient surgical procedure. Of professional revenue, professional billing costs were estimated to represent 14.5% for primary care visits, 25.2% for emergency department visits, 8.0% for general medicine inpatient stays, 13.4% for ambulatory surgical procedures, and 3.1% for inpatient surgical procedures. In a time-driven activity-based costing study in a large academic health care system with a certified electronic health record system, the estimated costs of billing and insurance-related activities ranged from $20 for a primary care visit to $215 for an inpatient surgical procedure. Knowledge of how specific billing and insurance-related activities contribute to administrative costs may help inform policy solutions to reduce these expenses.

  2. General multiyear aggregation technology: Methodology and software documentation. [estimating seasonal crop acreage proportions

    NASA Technical Reports Server (NTRS)

    Baker, T. C. (Principal Investigator)

    1982-01-01

    A general methodology is presented for estimating a stratum's at-harvest crop acreage proportion for a given crop year (target year) from the crop's estimated acreage proportion for sample segments from within the stratum. Sample segments from crop years other than the target year are (usually) required for use in conjunction with those from the target year. In addition, the stratum's (identifiable) crop acreage proportion may be estimated for times other than at-harvest in some situations. A by-product of the procedure is a methodology for estimating the change in the stratum's at-harvest crop acreage proportion from crop year to crop year. An implementation of the proposed procedure as a statistical analysis system routine using the system's matrix language module, PROC MATRIX, is described and documented. Three examples illustrating use of the methodology and algorithm are provided.

  3. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  4. Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Marcus, S. I.

    1975-01-01

    The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.

  5. Identification of open quantum systems from observable time traces

    DOE PAGES

    Zhang, Jun; Sarovar, Mohan

    2015-05-27

    Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In our paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. Furthermore, the method is based on system realization theory and is a generalization of our previous work on identification of Hamiltonian parameters.

  6. Lockheed L-1011 avionic flight control redundant systems

    NASA Technical Reports Server (NTRS)

    Throndsen, E. O.

    1976-01-01

    The Lockheed L-1011 automatic flight control systems - yaw stability augmentation and automatic landing - are described in terms of their redundancies. The reliability objectives for these systems are discussed and related to in-service experience. In general, the availability of the stability augmentation system is higher than the original design requirement, but is commensurate with early estimates. The in-service experience with automatic landing is not sufficient to provide verification of Category 3 automatic landing system estimated availability.

  7. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  8. Evaluation of modulation transfer function of optical lens system by support vector regression methodologies - A comparative study

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) are applied as kernel functions of Support Vector Regression (SVR) to estimate and predict the MTF value of the actual optical system according to experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared to the SVR_poly soft computing methodology.
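
    A hedged sketch of the comparison described above: fitting Support Vector Regression with polynomial and RBF kernels to (spatial frequency, MTF) pairs. The data here are synthetic; the paper uses measured optical-system MTF values, and the hyperparameters below are illustrative, not tuned.

```python
# Hedged sketch: SVR with polynomial vs RBF kernels on a synthetic MTF curve.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
freq = np.linspace(0, 1, 80).reshape(-1, 1)                   # normalized spatial frequency
mtf = np.exp(-3 * freq.ravel()) + 0.02 * rng.normal(size=80)  # synthetic MTF values

svr_poly = SVR(kernel="poly", degree=3, C=10.0).fit(freq, mtf)
svr_rbf = SVR(kernel="rbf", gamma=5.0, C=10.0).fit(freq, mtf)

for name, model in [("poly", svr_poly), ("rbf", svr_rbf)]:
    pred = model.predict(freq)
    rmse = np.sqrt(np.mean((pred - mtf) ** 2))
    print(f"SVR_{name} training RMSE: {rmse:.4f}")
```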

  9. Enhancing Retrieval with Hyperlinks: A General Model Based on Propositional Argumentation Systems.

    ERIC Educational Resources Information Center

    Picard, Justin; Savoy, Jacques

    2003-01-01

    Discusses the use of hyperlinks for improving information retrieval on the World Wide Web and proposes a general model for using hyperlinks based on Probabilistic Argumentation Systems. Topics include propositional logic, knowledge, and uncertainty; assumptions; using hyperlinks to modify document score and rank; and estimating the popularity of a…

  10. Estimation and Control of Distributed Models for Certain Elastic Systems Arising in Large Space Structures.

    DTIC Science & Technology

    1987-09-30

    The goal of this research was to study ... estimation and control of elastic systems composed of beams and plates. Specifically, the research considered the problem of locating the optimal placement ... estimation and control of elastic systems composed of beams and plates. This general goal has served as a guide for our research over the last several ...

  11. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is described. It is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program is described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  12. Utility Estimates of Disease-Specific Health States in Prostate Cancer from Three Different Perspectives.

    PubMed

    Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L

    2017-06-01

    To develop a statistical model generating utility estimates for prostate cancer specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimate values were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states with the five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. Appropriateness of model (linear regression, mixed effects, or generalized estimating equation) to generate prostate cancer utility estimates was determined by paired t-tests to compare observed and predicted values. Mixed-corrected standard SG utility estimates to account for loss aversion were calculated based on prospect theory. 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states and there is a difference in valuations made by patients and the general population.
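
    A hedged sketch of a mixed-effects model for health-state utilities like the one described above, with a random intercept per participant. The column names, attribute coding, and data are hypothetical; the study's actual classification system is not reproduced.

```python
# Hedged sketch: random-intercept mixed model for standard-gamble utilities (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_participants, n_states = 30, 6
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_states),
    "sexual_function": rng.integers(0, 2, n_participants * n_states),  # 1 = impaired
    "pain": rng.integers(0, 2, n_participants * n_states),             # 1 = present
})
participant_effect = 0.05 * rng.normal(size=n_participants)            # between-person variation
df["utility"] = (0.8 - 0.15 * df["sexual_function"] - 0.10 * df["pain"]
                 + participant_effect[df["participant"]]
                 + rng.normal(scale=0.05, size=len(df)))

model = smf.mixedlm("utility ~ sexual_function + pain", df, groups=df["participant"])
result = model.fit()
print(result.params)  # fixed-effect estimates: utility decrements for each attribute level
```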

  13. Optimizing Spectral Wave Estimates with Adjoint-Based Sensitivity Maps

    DTIC Science & Technology

    2014-02-18

    ... J, Orzech MD, Ngodock HE (2013) Validation of a wave data assimilation system based on SWAN. Geophys Res Abst, (15), EGU2013-5951-1, EGU General ... surface wave spectra. Sensitivity maps are generally constructed for a selected system indicator (e.g., vorticity) by computing the differential of ... spectral action balance Eq. 2, generally initialized at the offshore boundary with spectral wave and other outputs from regional models such as ...

  14. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (Inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
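
    A hedged sketch of one basic redundancy equation of the kind kept in such an equation repository: the reliability of a k-out-of-n system of identical, independent components. This is a generic textbook formula, not the program's stored models; the numbers are illustrative.

```python
# Hedged sketch: reliability of a k-out-of-n redundant system with component reliability r.
from math import comb

def k_of_n_reliability(k: int, n: int, r: float) -> float:
    """Probability that at least k of n independent, identical components survive."""
    return sum(comb(n, m) * r**m * (1 - r) ** (n - m) for m in range(k, n + 1))

# Example: a triple-modular-redundant unit (2 of 3 must work) with r = 0.95.
print(k_of_n_reliability(2, 3, 0.95))   # ~0.99275
```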

  15. Implementation Costs for Educational Technology Systems. Issue Trak: A CEFPI Brief on Educational Facility Issues.

    ERIC Educational Resources Information Center

    Meeks, Glenn E.; Fisher, Ricki; Loveless, Warren

    Personnel involved in planning or developing schools lack the costing tools that will enable them to determine educational technology costs. This report presents an overview of the technology costing process and the general costs used in estimating educational technology systems on a macro-budget basis, along with simple cost estimates for…

  16. A General Simulator Using State Estimation for a Space Tug Navigation System. [computerized simulation, orbital position estimation and flight mechanics

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1975-01-01

    A general simulation program (GSP) is presented involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.

  17. Theoretical foundations for traditional and generalized sensitivity functions for nonlinear delay differential equations.

    PubMed

    Banks, H Thomas; Robbins, Danielle; Sutton, Karyn L

    2013-01-01

    In this paper we present new results for differentiability of delay systems with respect to initial conditions and delays. After motivating our results with a wide range of delay examples arising in biology applications, we further note the need for sensitivity functions (both traditional and generalized sensitivity functions), especially in control and estimation problems. We summarize general existence and uniqueness results before turning to our main results on differentiation with respect to delays, etc. Finally we discuss use of our results in the context of estimation problems.

  18. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  19. Care 3 phase 2 report, maintenance manual

    NASA Technical Reports Server (NTRS)

    Bryant, L. A.; Stiffler, J. J.

    1982-01-01

    CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.

  20. Sequential state estimation of nonlinear/non-Gaussian systems with stochastic input for turbine degradation estimation

    NASA Astrophysics Data System (ADS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying

    2016-05-01

    Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is then validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three years of operating data from an industrial gas turbine engine (GTE) are used to verify the effectiveness of the developed framework. A comprehensive thermodynamic model for the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
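
    A hedged sketch of the bootstrap particle filter scheme the abstract builds on, applied to a univariate nonstationary growth benchmark rather than the paper's bivariate model or the gas-turbine framework with stochastic inputs. All functions and noise levels are assumed for illustration.

```python
# Hedged sketch: bootstrap particle filter on a nonlinear/non-Gaussian state-space model.
import numpy as np

rng = np.random.default_rng(4)
N = 1000                      # number of particles
T = 50                        # time steps

def f(x, k):                  # nonlinear state transition (illustrative benchmark form)
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

def h(x):                     # nonlinear measurement function
    return x**2 / 20.0

# Simulate a "true" trajectory and its measurements.
x_true, ys = 0.1, []
for k in range(T):
    x_true = f(x_true, k) + rng.normal(scale=np.sqrt(10.0))
    ys.append(h(x_true) + rng.normal(scale=1.0))

# Particle filter: propagate, weight by measurement likelihood, resample.
particles = rng.normal(scale=2.0, size=N)
estimates = []
for k, y in enumerate(ys):
    particles = f(particles, k) + rng.normal(scale=np.sqrt(10.0), size=N)
    weights = np.exp(-0.5 * (y - h(particles)) ** 2) + 1e-12   # Gaussian likelihood, sigma = 1
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))              # posterior-mean state estimate
    particles = rng.choice(particles, size=N, p=weights)       # multinomial resampling

print("last state estimate:", estimates[-1])
```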

  1. Implementation of Kalman filter algorithm on models reduced using singular perturbation approximation method and its application to measurement of water level

    NASA Astrophysics Data System (ADS)

    Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky

    2018-03-01

    Physical systems often have a large order, so their mathematical models have many state variables, which increases computation time. In addition, not all state variables are generally known, so estimation is needed to determine quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system to measure the water level. Model reduction approximates a system with a lower-order model whose dynamic behaviour is similar to that of the original system, without significant errors. The Singular Perturbation Approximation method is one of the model reduction methods in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, where estimates are computed by predicting the state variables based on the system dynamics and measurement data. Kalman filters are used to estimate state variables in the original system and the reduced system. We then compare the state estimation results and computation times between the original and reduced systems.

  2. Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.

    PubMed

    Deboeck, Pascal R

    2010-08-06

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
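
    A hedged sketch of derivative estimation by local polynomial fitting, in the spirit of the LLA-style approaches discussed above rather than the article's exact estimator: a quadratic is fit over a sliding window and its linear coefficient gives the first derivative at the window center. The data are synthetic.

```python
# Hedged sketch: estimate dx/dt from a noisy time series via centered local polynomial fits.
import numpy as np

def local_poly_derivative(t, x, half_window=3, degree=2):
    """Estimate dx/dt at interior points via least-squares polynomial fits."""
    t, x = np.asarray(t, dtype=float), np.asarray(x, dtype=float)
    deriv = np.full_like(x, np.nan)
    for i in range(half_window, len(x) - half_window):
        sl = slice(i - half_window, i + half_window + 1)
        coeffs = np.polyfit(t[sl] - t[i], x[sl], degree)  # centered local fit
        deriv[i] = coeffs[-2]                             # linear coefficient = derivative at t[i]
    return deriv

# Example: the derivative of noisy sin(t) should track cos(t).
t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t) + 0.01 * np.random.default_rng(5).normal(size=t.size)
dx = local_poly_derivative(t, x)
print(np.nanmax(np.abs(dx[3:-3] - np.cos(t[3:-3]))))      # small if the estimate is good
```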

  3. Identifying Behaviors and Situations Associated With Increased Crash Risk for Older Drivers

    DOT National Transportation Integrated Search

    2009-06-01

    This report reviews published literature and analyzes the most recent Fatality Analysis Reporting System (FARS) and National Automotive Sampling System (NASS)/General Estimates System (GES) data to identify specific driving behaviors (performance...

  4. Highway infrastructure : FHWA's model for estimating highway needs is generally reasonable, despite limitations

    DOT National Transportation Integrated Search

    2000-06-01

    The Highway Economic Requirements System (HERS) computer model estimates investment requirements for the nation's highways by adding together the costs of highway improvements that the model's benefit-cost analyses indicate are warranted. In making i...

  5. Estimating optical imaging system performance for space applications

    NASA Technical Reports Server (NTRS)

    Sinclair, K. F.

    1972-01-01

    The critical system elements of an optical imaging system are identified and a method for an initial assessment of system performance is presented. A generalized imaging system is defined. A system analysis is considered, followed by a component analysis. An example of the method is given using a film imaging system.

  6. Comments on new classification, treatment algorithm and prognosis-estimating systems for sigmoid volvulus and ileosigmoid knotting: necessity and utility.

    PubMed

    Aksungur, N; Korkut, E

    2018-05-24

    We read the Atamanalp classification, treatment algorithm and prognosis-estimating systems for sigmoid volvulus (SV) and ileosigmoid knotting (ISK) in Colorectal Disease [1,2]. Our comments relate to the necessity and utility of these new classification systems. Classification or staging systems are generally used in malignant or premalignant pathologies such as colorectal cancers [3] or polyps [4]. This article is protected by copyright. All rights reserved.

  7. An information system design for watershed-wide modeling of water loss to the atmosphere using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    Results are presented of a study intended to develop a general location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage two-phase sample of three information resolution levels. Based on the discussed design, documentation, and feasibility analysis to yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement this remote sensing-aided system.

  8. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
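
    A hedged sketch of the centralized weighted least-squares solution that a distributed scheme like the one above aims to reproduce through neighborhood communication. The network and iterative aspects of the paper are not modeled here; the data and weights are synthetic.

```python
# Hedged sketch: centralized WLS estimate, theta_hat = (A^T W A)^{-1} A^T W y.
import numpy as np

rng = np.random.default_rng(6)
n_meas, n_params = 40, 3
A = rng.normal(size=(n_meas, n_params))          # stacked linear measurement matrices
theta_true = np.array([1.0, -2.0, 0.5])
noise_std = rng.uniform(0.1, 1.0, size=n_meas)   # heteroscedastic noise per measurement
y = A @ theta_true + noise_std * rng.normal(size=n_meas)

W = np.diag(1.0 / noise_std**2)                  # weights = inverse noise variances
theta_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print("estimate:", theta_hat)
```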

  9. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  10. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for the Generalized Renewal Processes in general and for the Weibull-based Generalized Renewal Processes in particular. Disregarding from literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
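
    A hedged sketch of simulating failure times from a Weibull-based Generalized Renewal Process with a Kijima Type I virtual age, v_i = v_{i-1} + q * x_i with q in [0, 1]. The paper's mixed Kijima I/II model and its estimation procedure are not reproduced; all parameter values are illustrative.

```python
# Hedged sketch: draw successive times-between-failures under a Kijima Type I Weibull GRP.
import numpy as np

def simulate_kijima1(beta, eta, q, n_failures, rng):
    """Simulate inter-failure gaps; q = 0 is perfect repair, q = 1 is minimal repair."""
    v = 0.0                       # virtual age after the latest repair
    gaps = []
    for _ in range(n_failures):
        u = rng.uniform()
        # Invert the conditional Weibull CDF given survival to virtual age v:
        # v + x = eta * ((v/eta)**beta - ln(1 - u))**(1/beta)
        x = eta * ((v / eta) ** beta - np.log(1.0 - u)) ** (1.0 / beta) - v
        gaps.append(x)
        v = v + q * x             # imperfect repair rejuvenates only part of the last increment
    return np.array(gaps)

rng = np.random.default_rng(7)
gaps = simulate_kijima1(beta=2.0, eta=100.0, q=0.3, n_failures=10, rng=rng)
print(np.round(gaps, 1))          # gaps tend to shrink for a deteriorating system (beta > 1)
```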

  11. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  12. Crash data and rates for age-sex groups of drivers, 1996

    DOT National Transportation Integrated Search

    1998-01-01

    The results of this research note are based on 1996 data for fatal crashes, driver licenses, and estimates of total crashes based upon data obtained from the nationally representative sample of crashes gathered in the General Estimates System (GES). T...

  13. Application of ANFIS to Phase Estimation for Multiple Phase Shift Keying

    NASA Technical Reports Server (NTRS)

    Drake, Jeffrey T.; Prasad, Nadipuram R.

    2000-01-01

    The paper discusses a novel use of Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for estimating phase in Multiple Phase Shift Keying (M-PSK) modulation. A brief overview of communications phase estimation is provided. The modeling of both general open-loop and closed-loop phase estimation schemes for M-PSK symbols with unknown structure is discussed. Preliminary performance results from simulation of the above schemes are presented.

  14. Characteristic Energy Scales of Quantum Systems.

    ERIC Educational Resources Information Center

    Morgan, Michael J.; Jakovidis, Greg

    1994-01-01

    Provides a particle-in-a-box model to help students understand and estimate the magnitude of the characteristic energy scales of a number of quantum systems. Also discusses the mathematics involved with general computations. (MVL)

  15. Loop transfer recovery for general nonminimum phase discrete time systems. I - Analysis

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Sannuti, Peddapullaiah; Shamash, Yacov

    1992-01-01

    A complete analysis of loop transfer recovery (LTR) for general nonstrictly proper, not necessarily minimum phase discrete time systems is presented. Three different observer-based controllers, namely, `prediction estimator' and full or reduced-order type `current estimator' based controllers, are used. The analysis corresponding to all these three controllers is unified into a single mathematical framework. The LTR analysis given here focuses on three fundamental issues: (1) the recoverability of a target loop when it is arbitrarily given, (2) the recoverability of a target loop while taking into account its specific characteristics, and (3) the establishment of necessary and sufficient conditions on the given system so that it has at least one recoverable target loop transfer function or sensitivity function. Various differences that arise in LTR analysis of continuous and discrete systems are pointed out.

  16. LS Channel Estimation and Signal Separation for UHF RFID Tag Collision Recovery on the Physical Layer.

    PubMed

    Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin

    2016-03-26

    In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on a medium access control (MAC) layer. However, some of collided tag signals could be recovered on a physical (PHY) layer and, thus, enhance the identification efficiency of the RFID system. For the recovery on the PHY layer, channel estimation is a critical issue. Good channel estimation will help to recover the collided signals. Existing channel estimates work well for two collided tags. When the number of collided tags is beyond two, however, the existing estimates have more estimation errors. In this paper, we propose a novel channel estimate for the UHF RFID system. It adopts an orthogonal matrix based on the information of preambles which is known for a reader and applies a minimum-mean-square-error (MMSE) criterion to estimate channels. From the estimated channel, we could accurately separate the collided signals and recover them. By means of numerical results, we show that the proposed estimate has lower estimation errors and higher separation efficiency than the existing estimates.
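
    A hedged sketch contrasting least-squares and MMSE channel estimation from known preambles, which is the general idea behind the method described above. The paper's orthogonal-matrix RFID preamble design and collision model are not reproduced; the preambles, channels, and noise below are synthetic, and a unit-power i.i.d. channel prior is assumed for the MMSE estimator.

```python
# Hedged sketch: LS vs MMSE channel estimation for superimposed known preambles.
import numpy as np

rng = np.random.default_rng(8)
n_tags, preamble_len = 3, 16
P = rng.choice([-1.0, 1.0], size=(preamble_len, n_tags))   # known preambles (one column per tag)
h_true = (rng.normal(size=n_tags) + 1j * rng.normal(size=n_tags)) / np.sqrt(2)
noise_var = 0.1
y = P @ h_true + np.sqrt(noise_var / 2) * (rng.normal(size=preamble_len)
                                           + 1j * rng.normal(size=preamble_len))

# LS estimate: pseudo-inverse of the preamble matrix.
h_ls = np.linalg.pinv(P) @ y

# MMSE estimate under a unit-power channel prior: (P^T P + sigma^2 I)^-1 P^T y.
h_mmse = np.linalg.solve(P.T @ P + noise_var * np.eye(n_tags), P.T @ y)

for name, h_hat in [("LS", h_ls), ("MMSE", h_mmse)]:
    print(name, "error:", np.linalg.norm(h_hat - h_true))
```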

  17. Polarization-based index of refraction and reflection angle estimation for remote sensing applications.

    PubMed

    Thilak, Vimal; Voelz, David G; Creusere, Charles D

    2007-10-20

    A passive-polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. Such systems can be useful in many remote sensing applications including target detection, object segmentation, and material classification. We present a method to jointly estimate the complex index of refraction and the reflection angle (reflected zenith angle) of a target from multiple measurements collected by a passive polarimeter. An expression for the degree of polarization is derived from the microfacet polarimetric bidirectional reflectance model for the case of scattering in the plane of incidence. Using this expression, we develop a nonlinear least-squares estimation algorithm for extracting an apparent index of refraction and the reflection angle from a set of polarization measurements collected from multiple source positions. Computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle.

  18. Polarization-based index of refraction and reflection angle estimation for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Voelz, David G.; Creusere, Charles D.

    2007-10-01

    A passive-polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. Such systems can be useful in many remote sensing applications including target detection, object segmentation, and material classification. We present a method to jointly estimate the complex index of refraction and the reflection angle (reflected zenith angle) of a target from multiple measurements collected by a passive polarimeter. An expression for the degree of polarization is derived from the microfacet polarimetric bidirectional reflectance model for the case of scattering in the plane of incidence. Using this expression, we develop a nonlinear least-squares estimation algorithm for extracting an apparent index of refraction and the reflection angle from a set of polarization measurements collected from multiple source positions. Computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle.

  19. Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-12-01

    In the present paper, an artificial intelligence-based approach to the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence when correlated properties are estimated from a rebuilt impulse response. The main objective of the present work was therefore to simultaneously estimate both the thermal diffusivity and conductivity of homogeneous materials from front-face or rear-face photothermal responses to pseudo-random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. The developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be estimated simultaneously with very good accuracy (the generalization error ranges between 4.6% and 6.2%).

  20. 48 CFR 1352.215-76 - Cost or pricing data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: The offeror shall list the categories of professional or technical personnel required to perform the....—should be discussed. (3) Overhead Costs. Generally, the offeror's accounting system and estimating... in accordance with generally accepted accounting principles, will be accepted. Proposed overhead...

  1. 48 CFR 1352.215-76 - Cost or pricing data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: The offeror shall list the categories of professional or technical personnel required to perform the....—should be discussed. (3) Overhead Costs. Generally, the offeror's accounting system and estimating... in accordance with generally accepted accounting principles, will be accepted. Proposed overhead...

  2. 48 CFR 1352.215-76 - Cost or pricing data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: The offeror shall list the categories of professional or technical personnel required to perform the....—should be discussed. (3) Overhead Costs. Generally, the offeror's accounting system and estimating... in accordance with generally accepted accounting principles, will be accepted. Proposed overhead...

  3. 48 CFR 1352.215-76 - Cost or pricing data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: The offeror shall list the categories of professional or technical personnel required to perform the....—should be discussed. (3) Overhead Costs. Generally, the offeror's accounting system and estimating... in accordance with generally accepted accounting principles, will be accepted. Proposed overhead...

  4. 48 CFR 1352.215-76 - Cost or pricing data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: The offeror shall list the categories of professional or technical personnel required to perform the....—should be discussed. (3) Overhead Costs. Generally, the offeror's accounting system and estimating... in accordance with generally accepted accounting principles, will be accepted. Proposed overhead...

  5. Robust estimation of simulated urinary volume from camera images under bathroom illumination.

    PubMed

    Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji

    2016-08-01

    Conventional uroflowmetry involves a risk of nosocomial infection as well as the time and effort of recording. Medical institutions therefore need to measure voided volume simply and hygienically. An earlier study proposed a multiple-cylindrical model that can estimate the fluid flow rate from images captured with a camera. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple-cylindrical model. However, illumination changes in the bathroom introduce large amounts of noise when extracting the liquid region, so the estimation error becomes very large. In other words, the earlier study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax these specifications to achieve flow rate estimation with a general-purpose camera. In order to determine an appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
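
    A minimal RANSAC sketch for the curve-approximation step is given below. The curve family (a quadratic), the inlier threshold, and the synthetic edge points are assumptions chosen only to illustrate fitting through illumination-induced outliers.

```python
# Minimal RANSAC sketch: fit a quadratic to noisy liquid-edge points in the
# presence of outliers; curve family, threshold and data are assumptions.
import numpy as np

def ransac_poly_fit(x, y, degree=2, n_iter=200, threshold=2.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.array([], dtype=int)
    for _ in range(n_iter):
        sample = rng.choice(x.size, size=degree + 1, replace=False)
        coeffs = np.polyfit(x[sample], y[sample], degree)
        residuals = np.abs(np.polyval(coeffs, x) - y)
        inliers = np.flatnonzero(residuals < threshold)
        if inliers.size > best_inliers.size:
            best_inliers = inliers
    return np.polyfit(x[best_inliers], y[best_inliers], degree)   # refit on inliers

x = np.linspace(0, 50, 200)
y = 0.02 * x ** 2 - 0.5 * x + 10 + np.random.default_rng(1).normal(0, 1.0, x.size)
y[::20] += 30                                         # simulated illumination outliers
print(ransac_poly_fit(x, y))                          # approximately [0.02, -0.5, 10]
```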

  6. Extended Kalman Filter for Estimation of Parameters in Nonlinear State-Space Models of Biochemical Networks

    PubMed Central

    Sun, Xiaodian; Jin, Li; Xiong, Momiao

    2008-01-01

    It is system dynamics that determines the function of cells, tissues, and organisms. Developing mathematical models and estimating their parameters are essential for studying the dynamic behavior of biological systems, including metabolic networks, genetic regulatory networks, and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are only partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply it to a simulation dataset and two real datasets: the JAK-STAT and Ras/Raf/MEK/ERK signal transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
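
    A toy sketch of the state-augmentation idea is given below: a single species with known production and an unknown decay rate k, where k is appended to the state vector and estimated jointly with the concentration by an EKF. The model, noise levels, and tuning values are illustrative assumptions, not the paper's settings.

```python
# Toy sketch of joint state/parameter estimation with an EKF: a single species
# with production u and decay at unknown rate k (x_dot = u - k*x), where k is
# appended to the state vector. Model and tuning values are illustrative only.
import numpy as np

dt, u, r = 0.1, 0.5, 0.05 ** 2
x = np.array([0.2, 0.3])             # initial guess: [concentration, k]
P = np.diag([0.1, 0.5])
Q = np.diag([1e-6, 1e-4])            # small process noise keeps k adaptable
H = np.array([[1.0, 0.0]])           # only the concentration is observed

def f(x):                            # Euler-discretized dynamics
    return np.array([x[0] + dt * (u - x[1] * x[0]), x[1]])

def F(x):                            # Jacobian of f
    return np.array([[1.0 - dt * x[1], -dt * x[0]],
                     [0.0, 1.0]])

rng = np.random.default_rng(0)
x_true, k_true = 0.2, 0.8
for _ in range(400):
    x_true += dt * (u - k_true * x_true)
    z = x_true + rng.normal(0.0, 0.05)           # noisy concentration measurement
    x, P = f(x), F(x) @ P @ F(x).T + Q           # predict (old x used on the right)
    S = H @ P @ H.T + r                          # innovation covariance
    K = P @ H.T / S                              # Kalman gain
    x = x + (K * (z - x[0])).ravel()             # update state and parameter
    P = (np.eye(2) - K @ H) @ P

print(x)                                         # estimated [concentration, k]
```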

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.

    Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
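
    A one-dimensional illustration of a Nadaraya-Watson-type estimator built from a truncated orthonormal system is sketched below. The cosine basis, the truncation level, and the synthetic data are assumptions made for the sketch, not the construction analyzed in the report.

```python
# Nadaraya-Watson estimator with a projection kernel built from a truncated
# orthonormal (cosine) system on [0, 1]; basis and truncation level are assumed.
import numpy as np

def cosine_basis(u, m):
    """phi_0 = 1, phi_k = sqrt(2) cos(k*pi*u): orthonormal system on [0, 1]."""
    k = np.arange(m)[:, None]
    return np.where(k == 0, 1.0, np.sqrt(2) * np.cos(k * np.pi * u))

def nw_orthonormal(x_eval, x_train, y_train, m=12):
    # Projection kernel K(x, u) = sum_k phi_k(x) phi_k(u)
    K = cosine_basis(x_eval, m).T @ cosine_basis(x_train, m)   # (n_eval, n_train)
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)
grid = np.linspace(0.05, 0.95, 10)
print(nw_orthonormal(grid, x, y))                    # estimated regression values on the grid
```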

  8. Influence of the optimization methods on neural state estimation quality of the drive system with elasticity.

    PubMed

    Orlowska-Kowalska, Teresa; Kaminski, Marcin

    2014-01-01

    The paper deals with the implementation of optimized neural networks (NNs) for state variable estimation of a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of the closed-loop system. The precision of state variable estimation depends on the generalization properties of the NNs. A short review of NN optimization methods is presented. Two techniques typical of regularization and pruning methods are described and tested in detail: Bayesian regularization and the Optimal Brain Damage method. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.

  9. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. Numerical experiments are included to illustrate the performance of the proposed methods. PMID:25641976

  10. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where the system matrix H has low coherence; in our application, however, H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
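
    The iterative-thresholding framework mentioned above can be sketched generically as follows. The paper's hybrid rule is not reproduced; only standard hard and soft thresholding and a plain iterative soft-thresholding (lasso-type) loop on made-up data are shown.

```python
# Hard and soft thresholding, plus a plain iterative soft-thresholding (ISTA)
# loop; the hybrid rule of the paper is not reproduced here.
import numpy as np

def hard_threshold(x, t):
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_soft_thresholding(H, y, t, n_iter=300):
    """ISTA for y ~ H @ x with a sparse x (lasso-type estimate)."""
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2            # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = soft_threshold(x + step * H.T @ (y - H @ x), step * t)
    return x

rng = np.random.default_rng(0)
H = rng.normal(size=(64, 128))                        # stand-in system matrix
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [3.0, -2.0, 4.0]
y = H @ x_true + rng.normal(0, 0.1, 64)
print(np.flatnonzero(iterative_soft_thresholding(H, y, t=3.0) != 0))   # support of the estimate
```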

  11. Parametric study of transport aircraft systems cost and weight

    NASA Technical Reports Server (NTRS)

    Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.

    1977-01-01

    The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs in several cases) utilize weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs that were developed provide a very useful tool for making preliminary estimates of the production cost of an aircraft. Likewise, the WERs provide a very useful tool for making preliminary estimates of the weight of aircraft based on conceptual design information.

  12. Estimating The Rate of Technology Adoption for Cockpit Weather Information Systems

    NASA Technical Reports Server (NTRS)

    Kauffmann, Paul; Stough, H. P.

    2000-01-01

    In February 1997, President Clinton announced a national goal to reduce the weather related fatal accident rate for aviation by 80% in ten years. To support that goal, NASA established an Aviation Weather Information Distribution and Presentation Project to develop technologies that will provide timely and intuitive information to pilots, dispatchers, and air traffic controllers. This information should enable the detection and avoidance of atmospheric hazards and support an improvement in the fatal accident rate related to weather. A critical issue in the success of NASA's weather information program is the rate at which the market place will adopt this new weather information technology. This paper examines that question by developing estimated adoption curves for weather information systems in five critical aviation segments: commercial, commuter, business, general aviation, and rotorcraft. The paper begins with development of general product descriptions. Using this data, key adopters are surveyed and estimates of adoption rates are obtained. These estimates are regressed to develop adoption curves and equations for weather related information systems. The paper demonstrates the use of adoption rate curves in product development and research planning to improve managerial decision processes and resource allocation.

  13. Analysis of lane change crashes

    DOT National Transportation Integrated Search

    2003-03-01

    This report defines the problem of lane change crashes in the United States (U.S.) based on data from the 1999 National Automotive Sampling System/General Estimates System (GES) crash database of the National Highway Traffic Safety Administration. Th...

  14. PREDICTION OF RELIABILITY IN BIOGRAPHICAL QUESTIONNAIRES.

    ERIC Educational Resources Information Center

    STARRY, ALLAN R.

    THE OBJECTIVES OF THIS STUDY WERE (1) TO DEVELOP A GENERAL CLASSIFICATION SYSTEM FOR LIFE HISTORY ITEMS, (2) TO DETERMINE TEST-RETEST RELIABILITY ESTIMATES, AND (3) TO ESTIMATE RESISTANCE TO EXAMINEE FAKING, FOR REPRESENTATIVE BIOGRAPHICAL QUESTIONNAIRES. TWO 100-ITEM QUESTIONNAIRES WERE CONSTRUCTED THROUGH RANDOM ASSIGNMENT BY CONTENT AREA OF 200…

  15. Empirical Estimates of 0Day Vulnerabilities in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles A. McQueen; Wayne F. Boyer; Sean M. McBride

    2009-01-01

    We define a 0Day vulnerability to be any vulnerability, in deployed software, which has been discovered by at least one person but has not yet been publicly announced or patched. These 0Day vulnerabilities are of particular interest when assessing the risk to well managed control systems which have already effectively mitigated the publicly known vulnerabilities. In these well managed systems the risk contribution from 0Days will have proportionally increased. To aid understanding of how great a risk 0Days may pose to control systems, an estimate of how many are in existence is needed. Consequently, using the 0Day definition given above, we developed and applied a method for estimating how many 0Day vulnerabilities are in existence on any given day. The estimate is made by: empirically characterizing the distribution of the lifespans, measured in days, of 0Day vulnerabilities; determining the number of vulnerabilities publicly announced each day; and applying a novel method for estimating the number of 0Day vulnerabilities in existence on any given day using the number of vulnerabilities publicly announced each day and the previously derived distribution of 0Day lifespans. The method was first applied to a general set of software applications by analyzing the 0Day lifespans of 491 software vulnerabilities and using the daily rate of vulnerability announcements in the National Vulnerability Database. This led to a conservative estimate that in the worst year there were, on average, 2500 0Day software-related vulnerabilities in existence on any given day. Using a smaller but intriguing set of 15 0Day software vulnerability lifespans representing the actual time from discovery to public disclosure, we then made a more aggressive estimate: that in the worst year there were, on average, 4500 0Day software vulnerabilities in existence on any given day. We then identified the subset of software applications likely to be used in some control systems, analyzed the associated subset of vulnerabilities, and characterized their lifespans. Using the previously developed method of analysis, we very conservatively estimated 250 control system related 0Day vulnerabilities in existence on any given day. While reasonable, this first-order estimate for control systems is probably far more conservative than those made for general software systems, since the estimate did not include vulnerabilities unique to control system specific components. These control system specific vulnerabilities could not be included in the estimate for a variety of reasons, the most problematic being that public announcement of unique control system vulnerabilities is very sparse. Consequently, with the intent to improve the above 0Day estimate for control systems, we first identified the additional vulnerability estimation constraints unique to control systems and then investigated new mechanisms which may be useful for estimating the number of unique 0Day software vulnerabilities found in control system components. We identified a number of new mechanisms and approaches for estimating and incorporating control system specific vulnerabilities into an improved 0Day estimation method. These new mechanisms and approaches appear promising and will be more rigorously evaluated during the course of the next year.
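
    A back-of-envelope sketch consistent with the description above, though not the report's actual estimator, multiplies the daily announcement rate by the expected 0Day lifespan (a Little's-law style argument). All numbers below are made up for illustration and are not the report's figures.

```python
# If v vulnerabilities are announced per day and each spent L days as a 0Day
# before announcement, then roughly v * E[L] are in existence on any given day.
# Lifespan distribution and announcement rate below are assumed, not empirical.
import numpy as np

rng = np.random.default_rng(0)
lifespans_days = rng.exponential(scale=300.0, size=491)   # assumed lifespan sample
announcements_per_day = 10.0                              # assumed announcement rate

print(round(announcements_per_day * lifespans_days.mean()))   # rough count in existence
```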

  16. Analysis of technology requirements and potential demand for general aviation avionics systems in the 1980's. [technology assessment and technological forecasting of the aircraft industry

    NASA Technical Reports Server (NTRS)

    Cohn, D. M.; Kayser, J. H.; Senko, G. M.; Glenn, D. R.

    1974-01-01

    The trend of increasing reliance on general aviation aircraft as a major source of transportation in the United States is presented (military and commercial aircraft are excluded). Social, political, and economic factors that affect the aircraft industry are considered, and cost estimates are given. Aircraft equipment and navigation systems are discussed.

  17. The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality

    NASA Technical Reports Server (NTRS)

    Conway, Darrel J.; Hughes, Steven P.

    2010-01-01

    The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).

  18. ASSESSMENT OF HIGH-TEMPERATURE GEOTHERMAL RESOURCES IN HYDROTHERMAL CONVECTION SYSTEMS IN THE UNITED STATES.

    USGS Publications Warehouse

    Nathenson, Manuel

    1984-01-01

    The amount of thermal energy in high-temperature geothermal systems (>150 degree C) in the United States has been calculated by estimating the temperature, area, and thickness of each identified system. These data, along with a general model for recoverability of geothermal energy and a calculation that takes account of the conversion of thermal energy to electricity, yield a resource estimate of 23,000 MWe for 30 years. The undiscovered component was estimated based on multipliers of the identified resource as either 72,000 or 127,000 MWe for 30 years depending on the model chosen for the distribution of undiscovered energy as a function of temperature.
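
    The volumetric reasoning described above can be illustrated with a rough, self-contained calculation for a single hypothetical system. The heat capacity, reference temperature, recovery factor, and conversion efficiency below are assumptions for illustration, not the values used in the USGS assessment.

```python
# Illustrative "heat in place" calculation for one hypothetical system; all
# factors are assumed and do not reproduce the assessment's methodology.
area_km2, thickness_km = 10.0, 2.0
reservoir_T, reference_T = 200.0, 15.0          # degrees C
vol_heat_capacity = 2.7e6                       # J/(m^3 K), rock plus water, assumed

volume_m3 = area_km2 * 1e6 * thickness_km * 1e3
thermal_energy_J = vol_heat_capacity * volume_m3 * (reservoir_T - reference_T)

recovery_factor, conversion_eff = 0.25, 0.10    # assumed recoverability and conversion
electrical_energy_J = thermal_energy_J * recovery_factor * conversion_eff

seconds_30yr = 30 * 365.25 * 24 * 3600
print(electrical_energy_J / seconds_30yr / 1e6, "MWe sustained over 30 years")
```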

  19. Reusable Reentry Satellite (RRS) system design study: System cost estimates document

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Reusable Reentry Satellite (RRS) program was initiated to provide life science investigators relatively inexpensive, frequent access to space for extended periods of time with eventual satellite recovery on earth. The RRS will provide an on-orbit laboratory for research on biological and material processes, be launched from a number of expendable launch vehicles, and operate in Low-Altitude Earth Orbit (LEO) as a free-flying unmanned laboratory. SAIC's design will provide independent atmospheric reentry and soft landing in the continental U.S., orbit for a maximum of 60 days, and will sustain three flights per year for 10 years. The Reusable Reentry Vehicle (RRV) will be 3-axis stabilized with artificial gravity up to 1.5g's, be rugged and easily maintainable, and have a modular design to accommodate a satellite bus and separate modular payloads (e.g., rodent module, general biological module, ESA microgravity botany facility, general botany module). The purpose of this System Cost Estimate Document is to provide a Life Cycle Cost Estimate (LCCE) for a NASA RRS Program using SAIC's RRS design. The estimate includes development, procurement, and 10 years of operations and support (O&S) costs for NASA's RRS program. The estimate does not include costs for other agencies which may track or interface with the RRS program (e.g., Air Force tracking agencies or individual RRS experimenters involved with special payload modules (PM's)). The life cycle cost estimate extends over the 10 year operation and support period FY99-2008.

  20. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To do this, in this paper, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points on images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  1. A vehicle health monitoring system for the Space Shuttle Reaction Control System during reentry. M.S. Thesis - Massachusetts Inst. of Technology

    NASA Technical Reports Server (NTRS)

    Rosello, Anthony David

    1995-01-01

    A general two tier framework for vehicle health monitoring of Guidance Navigation and Control (GN&C) system actuators, effectors, and propulsion devices is presented. In this context, a top level monitor that estimates jet thrust is designed for the Space Shuttle Reaction Control System (RCS) during the reentry phase of flight. Issues of importance for the use of estimation technologies in vehicle health monitoring are investigated and quantified for the Shuttle RCS demonstration application. These issues include rate of convergence, robustness to unmodeled dynamics, sensor quality, sensor data rates, and information recording objectives. Closed loop simulations indicate that a Kalman filter design is sensitive to modeling error and robust estimators may reduce this sensitivity. Jet plume interaction with the aerodynamic flowfield is shown to be a significant effect adversely impacting the ability to accurately estimate thrust.

  2. Implicit Particle Filter for Power System State Estimation with Large Scale Renewable Power Integration.

    NASA Astrophysics Data System (ADS)

    Uzunoglu, B.; Hussaini, Y.

    2017-12-01

    The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to high-probability regions through an implicit step. It optimizes a nonlinear cost function which can be inherited from legacy assimilation routines. Dynamic state estimation for near-real-time applications in power systems is becoming increasingly important with the integration of variable wind and solar power generation. New advanced state estimation tools intended to replace the older generation of state estimators should, in addition to providing a general framework for the system's complexities, be able to address legacy software and integrate it within a mathematical framework, allowing the power industry's need for cautious, evolutionary change rather than a completely revolutionary approach, while addressing nonlinearity and non-normal behaviour. This work implements the implicit particle filter as a tool for estimating the states of a power system and presents the first study applying the implicit particle filter to power system state estimation. The implicit particle filter is introduced into power systems and simulations are presented for a three-node benchmark power system. The performance of the filter on this problem is analyzed and the results are presented.
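
    A plain bootstrap particle filter on a scalar toy model is sketched below to illustrate the sequential Monte Carlo machinery. The implicit step that guides particles via an optimization of the cost function is not implemented, and the toy dynamics and noise levels are assumptions.

```python
# Bootstrap particle filter on a scalar toy model; the "implicit" optimization
# step of the implicit particle filter is NOT implemented here.
import numpy as np

rng = np.random.default_rng(0)
n_particles, q_std, r_std = 500, 0.1, 0.2

def dynamics(x):
    return 0.9 * x + 0.5 * np.sin(x)              # toy nonlinear state transition

particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

x_true = 0.5
for _ in range(50):
    x_true = dynamics(x_true) + rng.normal(0, q_std)
    z = x_true + rng.normal(0, r_std)             # measurement

    particles = dynamics(particles) + rng.normal(0, q_std, n_particles)
    weights *= np.exp(-0.5 * ((z - particles) / r_std) ** 2)
    weights /= weights.sum()

    # Systematic resampling when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[np.minimum(idx, n_particles - 1)]
        weights.fill(1.0 / n_particles)

print(np.sum(weights * particles), x_true)        # filtered estimate vs. truth
```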

  3. Application of super-twisting observers to the estimation of state and unknown inputs in an anaerobic digestion system.

    PubMed

    Sbarciog, M; Moreno, J A; Vande Wouwer, A

    2014-01-01

    This paper presents the estimation of the unknown states and inputs of an anaerobic digestion system characterized by a two-step reaction model. The estimation is based on the measurement of the two substrate concentrations and of the outflow rate of biogas and relies on the use of an observer, consisting of three parts. The first is a generalized super-twisting observer, which estimates a linear combination of the two input concentrations. The second is an asymptotic observer, which provides one of the two biomass concentrations, whereas the third is a super-twisting observer for one of the input concentrations and the second biomass concentration.

  4. Fuzzy neural network for flow estimation in sewer systems during wet weather.

    PubMed

    Shen, Jun; Shen, Wei; Chang, Jian; Gong, Ning

    2006-02-01

    Estimation of the water flow from rainfall intensity during storm events is important in hydrology, sewer system control, and environmental protection. The runoff-producing behavior of a sewer system changes from one storm event to another because rainfall loss depends not only on rainfall intensities, but also on the state of the soil and vegetation, the general condition of the climate, and so on. As such, it would be difficult to obtain a precise flowrate estimation without sufficient a priori knowledge of these factors. To establish a model for flow estimation, one can also use statistical methods, such as the neural network STORMNET, software developed at Lyonnaise des Eaux, France, analyzing the relation between rainfall intensity and flowrate data of the known storm events registered in the past for a given sewer system. In this study, the authors propose a fuzzy neural network to estimate the flowrate from rainfall intensity. The fuzzy neural network combines four STORMNETs and fuzzy deduction to better estimate the flowrates. This study's system for flow estimation can be calibrated automatically by using known storm events; no data regarding the physical characteristics of the drainage basins are required. Compared with the neural network STORMNET, this method reduces the mean square error of the flow estimates by approximately 20%. Experimental results are reported herein.

  5. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantifying all the required parameters. The physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  6. Proceedings of the Workshop on Applications of Distributed System Theory to the Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1983-01-01

    Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.

  7. Network origin-destination demand estimation using limited link traffic counts : strategic deployment of vehicle detectors through an integrated corridor management framework.

    DOT National Transportation Integrated Search

    2009-10-15

    In typical road traffic corridors, freeway systems are generally well-equipped with traffic surveillance systems such as vehicle detector (VD) and/or closed circuit television (CCTV) systems in order to gather timely traffic information for traffic c...

  8. INTEGRATION OF THE BIOGENIC EMISSIONS INVENTORY SYSTEM (BEIS3) INTO THE COMMUNITY MULTISCALE AIR QUALITY MODELING SYSTEM

    EPA Science Inventory

    The importance of biogenic emissions for regional air quality modeling is generally recognized [Guenther et al., 2000]. Since the 1980s, biogenic emission estimates have been derived from algorithms such as the Biogenic Emissions Inventory System (BEIS) [Pierce et al., 1998]....

  9. Aerodynamic design guidelines and computer program for estimation of subsonic wind tunnel performance

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.; Mort, K. W.; Jope, J.

    1976-01-01

    General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.

  10. Inventorying recreation use

    Treesearch

    George A. James

    1971-01-01

    Part I is a general discussion about the estimation of recreation use, with descriptions of selected sampling techniques for estimating recreation use on a wide variety of different sites and areas. Part II is a brief discussion of an operational computer oriented information system designed and developed by the USDA Forest Service to fully utilize the inventories of...

  11. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  12. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  13. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  14. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  15. Interconnection Guidelines

    EPA Pesticide Factsheets

    The Interconnection Guidelines provide general guidance on the steps involved with connecting biogas recovery systems to the utility electrical power grid. Interconnection best practices including time and cost estimates are discussed.

  16. HRV based health&sport markers using video from the face.

    PubMed

    Capdevila, Lluis; Moreno, Jordi; Movellan, Javier; Parrado, Eva; Ramos-Castro, Juan

    2012-01-01

    Heart Rate Variability (HRV) is an indicator of health status in the general population and of adaptation to stress in athletes. In this paper we compare the performance of two systems for measuring HRV: (1) a commercial system based on recording the physiological cardiac signal, and (2) a computer vision system that uses standard video images of the face to estimate RR intervals from changes in facial skin color. We show that the computer vision system performs surprisingly well. It estimates individual RR intervals in a non-invasive manner and with error levels comparable to those achieved by the physiologically based system.
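
    The last stage of such a pipeline can be sketched as follows: peak detection on a (here synthetic) mean skin-color trace to obtain RR intervals, followed by two standard HRV markers. The frame rate, the synthetic signal, and the choice of markers are assumptions; the paper's video processing is not reproduced.

```python
# From a mean skin-color trace to RR intervals and HRV markers; the trace below
# is synthetic and the video-processing front end is not reproduced.
import numpy as np
from scipy.signal import find_peaks

fs = 30.0                                            # camera frame rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

peaks, _ = find_peaks(signal, distance=fs / 2)       # at most one beat per 0.5 s
rr_ms = np.diff(peaks) / fs * 1000.0                 # RR intervals in milliseconds

sdnn = np.std(rr_ms)                                 # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))        # short-term variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```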

  17. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  18. Stability and delay sensitivity of neutral fractional-delay systems.

    PubMed

    Xu, Qi; Shi, Min; Wang, Zaihua

    2016-08-01

    This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is calculating the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise forms, parameter-dependent and parameter-independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using a rough integral estimate. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given to demonstrate the proposed method as well as the delay sensitivity.

  19. Estimation of some transducer parameters in a broadband piezoelectric transmitter by using an artificial intelligence technique.

    PubMed

    Ruíz, A; Ramos, A; San Emeterio, J L

    2004-04-01

    An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool for discovering internal construction data. This information is necessary in the modelling and simulation of the acoustic and electrical behaviour of ultrasonic systems containing commercial transducers. There is no general solution to this generic problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, this general problem is briefly analysed for broadband conditions. The viability of applying, in this field, an artificial intelligence technique supported by modelling of the transducer's internal components is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters related to two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. The estimation results are experimentally ratified.
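
    A generic real-coded genetic algorithm for this kind of parameter fitting is sketched below. `simulate_response` is a hypothetical stand-in for the transducer model, and the GA operators are standard textbook choices rather than the paper's exact procedure.

```python
# Generic real-coded GA: candidate parameter vectors are scored by the mismatch
# between a simulated and a "measured" response; the model is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def simulate_response(params, t):
    """Stand-in transducer model: damped sinusoid (amplitude, damping, frequency)."""
    a, d, f = params
    return a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)

t = np.linspace(0, 1e-5, 200)
measured = simulate_response([1.0, 3e5, 2e6], t)      # synthetic "measurement"

bounds = np.array([[0.1, 2.0], [1e5, 1e6], [1e6, 5e6]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))

def fitness(p):
    return -np.mean((simulate_response(p, t) - measured) ** 2)

for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-30:]]           # keep the better half
    idx = rng.integers(0, 30, size=(60, 2))
    alpha = rng.random((60, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]   # crossover
    children += rng.normal(0, 0.01, children.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
    children[-1] = parents[-1]                        # elitism: carry over the best candidate
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

print(pop[np.argmax([fitness(p) for p in pop])])      # best parameter vector found
```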

  20. Stochastic stability of sigma-point Unscented Predictive Filter.

    PubMed

    Cao, Lu; Tang, Yu; Chen, Xiaoqian; Zhao, Yong

    2015-07-01

    In this paper, the Unscented Predictive Filter (UPF) is derived from the unscented transformation for nonlinear estimation, breaking out of the confines of conventional sigma-point filters, which merely employ the Kalman filter as the subject under investigation. To facilitate the new method, the algorithm flow of the UPF is given first. Theoretical analyses then demonstrate that the UPF estimates the model error and the system state more accurately than the conventional predictive filter. Moreover, the authors analyze the stochastic boundedness and the error behavior of the UPF for general nonlinear systems in a stochastic framework. In particular, the theoretical results show that the estimation error remains bounded and the covariance stays stable if the system's initial estimation error, the disturbing noise terms, and the model error are small enough, which is the core of the UPF theory. All of the results have been demonstrated by numerical simulations for a nonlinear example system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Conservation laws with coinciding smooth solutions but different conserved variables

    NASA Astrophysics Data System (ADS)

    Colombo, Rinaldo M.; Guerra, Graziano

    2018-04-01

    Consider two hyperbolic systems of conservation laws in one space dimension with the same eigenvalues and (right) eigenvectors. We prove that solutions to Cauchy problems with the same initial data differ at third order in the total variation of the initial datum. As a first application, relying on the classical Glimm-Lax result (Glimm and Lax in Decay of solutions of systems of nonlinear hyperbolic conservation laws. Memoirs of the American Mathematical Society, No. 101. American Mathematical Society, Providence, 1970), we obtain estimates improving those in Saint-Raymond (Arch Ration Mech Anal 155(3):171-199, 2000) on the distance between solutions to the isentropic and non-isentropic inviscid compressible Euler equations, under general equations of state. Further applications are to the general scalar case, where rather precise estimates are obtained, to an approximation by Di Perna of the p-system and to a traffic model.
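
    Schematically, the main estimate discussed above can be read as follows; this restatement is only illustrative, and the precise constants and norms are those of the paper:

```latex
\[
  \bigl\| u_1(t,\cdot) - u_2(t,\cdot) \bigr\|_{L^1}
  \;\le\; C(t)\,\bigl(\mathrm{TV}(\bar u)\bigr)^{3},
\]
% u_1, u_2: solutions of the two systems (same eigenvalues and eigenvectors)
% issued from the same initial datum \bar u; C(t) a time-dependent constant.
```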

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Liu, Z.; Zhang, S.

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  3. Increasing Investment in Higher Education: The Role of a Graduate Tax.

    ERIC Educational Resources Information Center

    Lincoln, Ian; Walker, Arthur

    1993-01-01

    Proposes a remodeled funding system for UK higher education that incorporates a graduate tax system and reduces dependency on general taxation. Estimates this system's effects on available resources, economic attractiveness to students, and implications for public expenditure. Proposal offers a non-means-tested grant for students, combined with an…

  4. A generalized system of models forecasting Central States tree growth.

    Treesearch

    Stephen R. Shifley

    1987-01-01

    Describes the development and testing of a system of individual tree-based growth projection models applicable to species in Indiana, Missouri, and Ohio. Annual tree basal area growth is estimated as a function of tree size, crown ratio, stand density, and site index. Models are compatible with the STEMS and TWIGS Projection System.

  5. Estimation of Promotion, Repetition and Dropout Rates for Learners in South African Schools

    ERIC Educational Resources Information Center

    Uys, Daniël Wilhelm; Alant, Edward John Thomas

    2015-01-01

    A new procedure for estimating promotion, repetition and dropout rates for learners in South African schools is proposed. The procedure uses three different data sources: data from the South African General Household survey, data from the Education Management Information Systems, and data from yearly reports published by the Department of Basic…

  6. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, Francis J.

    1989-01-01

    A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data associated with the complete solution, adjusting the weights so that the subset solutions agree with their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  7. National automotive sampling system (NASS) general estimates system (GES) : analytical user's manual, 1988-1999

    DOT National Transportation Integrated Search

    2002-01-01

    One of the primary objectives of the National Highway Traffic Safety Administration (NHTSA) is to reduce : the staggering human toll and property damage that motor vehicle traffic crashes impose on our society. : Crashes each year result in thousands...

  8. National automotive sampling system (NASS) general estimates system (GES) : analytical user's manual, 1988-2000

    DOT National Transportation Integrated Search

    2001-07-01

    One of the primary objectives of the National Highway Traffic Safety Administration (NHTSA) is to reduce the staggering human toll and property damage that motor vehicle traffic crashes impose on our society. Crashes each year result in thousands of ...

  9. National Automotive Sampling System (NASS) General Estimates System (GES) : analytical user's manual, 1988-1997

    DOT National Transportation Integrated Search

    2000-01-01

    One of the primary objectives of the National Highway Traffic Safety Administration (NHTSA) is : to reduce the staggering human toll and property damage that motor vehicle traffic crashes impose : on our society. Crashes each year result in thousands...

  10. Estimation of the limit of detection using information theory measures.

    PubMed

    Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago

    2014-01-31

    Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
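
    The binary-channel analogy above can be made concrete with a short sketch: treat blank/analyte as the channel input and negative/positive detection as its output, and compute the mutual information for different prior probabilities. The error rates and priors below are arbitrary, and the paper's specific LOD definition is not reproduced.

```python
# Mutual information of the blank/analyte "channel": the same error rates give
# different extractable information under different priors.
import numpy as np

def mutual_information(p_analyte, p_false_pos, p_false_neg):
    """Mutual information (bits) between the source state and the detection decision."""
    p = np.array([
        [(1 - p_analyte) * (1 - p_false_pos), (1 - p_analyte) * p_false_pos],
        [p_analyte * p_false_neg,             p_analyte * (1 - p_false_neg)],
    ])                                                # joint P(state, decision)
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)

print(mutual_information(0.5, 0.05, 0.05))            # balanced prior
print(mutual_information(0.05, 0.05, 0.05))           # mostly blank samples: less information
```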

  11. State estimation improves prospects for ocean research

    NASA Astrophysics Data System (ADS)

    Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.

    Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.

  12. Decentralized Estimation and Control for Preserving the Strong Connectivity of Directed Graphs.

    PubMed

    Sabattini, Lorenzo; Secchi, Cristian; Chopra, Nikhil

    2015-10-01

    In order to accomplish cooperative tasks, decentralized systems are required to communicate among each other. Thus, maintaining the connectivity of the communication graph is a fundamental issue. Connectivity maintenance has been extensively studied in the last few years, but generally considering undirected communication graphs. In this paper, we introduce a decentralized control and estimation strategy to maintain the strong connectivity property of directed communication graphs. In particular, we introduce a hierarchical estimation procedure that implements power iteration in a decentralized manner, exploiting an algorithm for balancing strongly connected directed graphs. The output of the estimation system is then utilized for guaranteeing preservation of the strong connectivity property. The control strategy is validated by means of analytical proofs and simulation results.
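
    The basic building block that the paper distributes across agents, power iteration, is sketched below in its ordinary centralized form. The decentralization and graph-balancing steps of the paper are not reproduced, and the example matrix is an arbitrary strongly connected weighted digraph.

```python
# Centralized power iteration on the weighted adjacency matrix of a strongly
# connected digraph; only the basic building block, not the decentralized scheme.
import numpy as np

def power_iteration(A, n_iter=200):
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v, v @ A @ v                   # dominant eigenvector and eigenvalue estimate

A = np.array([[0.2, 0.8, 0.0, 0.3],
              [0.5, 0.0, 0.9, 0.0],
              [0.0, 0.4, 0.1, 0.7],
              [0.6, 0.0, 0.2, 0.0]])      # illustrative strongly connected digraph
vec, val = power_iteration(A)
print(val, vec)
```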

  13. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.

  14. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  15. HIGH-TEMPERATURE GEOTHERMAL RESOURCES IN HYDROTHERMAL CONVECTION SYSTEMS IN THE UNITED STATES.

    USGS Publications Warehouse

    Nathenson, Manuel

    1983-01-01

    The calculation of high-temperature geothermal resources (greater than 150 °C) in the United States has been done by estimating the temperature, area, and thickness of each identified system. These data, along with a general model for recoverability of geothermal energy and a calculation that takes account of the conversion of thermal energy to electricity, yielded an estimate of 23,000 MWe for 30 years. The undiscovered component was estimated based on multipliers of the identified resource as either 72,000 or 127,000 MWe for 30 years depending on the model chosen for the distribution of undiscovered energy as a function of temperature.
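
    The structure of such a calculation (stored heat from temperature, area, and thickness, reduced by a recovery factor and a heat-to-electricity conversion, then expressed as sustained electrical power over 30 years) can be sketched as follows; every numerical value is illustrative rather than taken from the assessment.

    ```python
    # Illustrative stored-heat-to-electric-power calculation for one hydrothermal system.
    # All input values are assumptions for illustration, not the USGS assessment's figures.
    area_km2 = 10.0              # reservoir area
    thickness_km = 2.0           # reservoir thickness
    temp_c = 200.0               # reservoir temperature
    t_ref_c = 15.0               # reference (rejection) temperature
    vol_heat_capacity = 2.7e15   # J per km^3 per degC (rock plus water, ~2.7 J/cm^3/degC)

    recovery_factor = 0.25       # fraction of stored heat recoverable at the wellhead
    conversion_eff = 0.12        # wellhead heat to electricity

    stored_heat_J = area_km2 * thickness_km * (temp_c - t_ref_c) * vol_heat_capacity
    electric_energy_J = stored_heat_J * recovery_factor * conversion_eff

    seconds_30yr = 30 * 365.25 * 24 * 3600
    mw_e_30yr = electric_energy_J / seconds_30yr / 1e6
    print(f"about {mw_e_30yr:.0f} MWe sustained for 30 years")
    ```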

  16. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  17. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE PAGES

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...

    2016-06-15

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
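
    The robust-fitting idea, down-weighting outliers relative to an ordinary least-squares loss, can be illustrated with a Huber loss used as a simplified stand-in for the MM-estimator described in the paper; the linear model, synthetic data, and loss scale below are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)

    # Synthetic linear fit with a few gross outliers (stand-ins for problematic data points).
    x = np.linspace(0, 10, 60)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)
    y[::15] += 8.0                      # inject outliers

    def residuals(p, x, y):
        return p[0] * x + p[1] - y

    p0 = np.array([1.0, 0.0])
    fit_ls = least_squares(residuals, p0, args=(x, y))                             # least squares
    fit_rb = least_squares(residuals, p0, args=(x, y), loss="huber", f_scale=1.0)  # robust loss

    print("least-squares slope/intercept:", fit_ls.x)
    print("robust (Huber) slope/intercept:", fit_rb.x)
    ```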

  18. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and to design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimation on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  19. Application of the Generalized Nonlinear Complementary Relationship for Estimating Evaporation in North China

    NASA Astrophysics Data System (ADS)

    Yu, M.; Wu, B.

    2017-12-01

    As an important part of coupled eco-hydrological processes, evaporation is the bond for the exchange of energy and heat between the surface and the atmosphere. However, the estimation of evaporation remains a challenge compared with the other main hydrological factors in the water cycle. The complementary relationship proposed by Bouchet (1963) laid the foundation for various approaches to estimating evaporation from land surfaces; the essence of the principle is a relationship between three types of evaporation in the environment. It can be implemented simply with routine meteorological data, without the need for the resistance parameters of vegetation and bare land, which are difficult to observe and complicated to estimate in most surface flux models. On this basis, a generalized nonlinear formulation was proposed by Brutsaert (2015). Daily evaporation can be estimated once the potential evaporation (Epo) and apparent potential evaporation (Epa) are known. The new formulation has a strong physical basis and can be expected to perform better under natural water stress conditions; nevertheless, the model has not been widely validated over different climate types and underlying surface patterns. In this study, we applied the generalized nonlinear complementary relationship in North China. Three flux stations were used to test the universality and accuracy of the model against observed evaporation over different vegetation types: Guantao Site, Miyun Site and Huailai Site. Guantao Site has a double-cropping system with a summer maize and winter wheat rotation; the other two sites are dominated by spring maize. Detailed measurements of meteorological factors at certain heights above the ground surface from automatic weather stations provided the parameters needed for daily evaporation estimation. Using the Bowen ratio, the surface energy measured by the eddy covariance systems at the flux stations was adjusted on a daily scale to satisfy surface energy closure. After calibration, the estimated daily evaporation is in good agreement with the EC-measured flux data, with a mean correlation coefficient in excess of 0.85. The results indicate that the generalized nonlinear complementary relationship can be applied in the plant growing and non-growing seasons in North China.
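
    For readers unfamiliar with the formulation, a minimal sketch is given below. It assumes the polynomial form y = (2 - x)x^2 with x = Epo/Epa and y = E/Epa, which is the form commonly cited for Brutsaert (2015); the input values are illustrative, and operational use requires Epo and Epa computed from routine meteorological data.

    ```python
    def evaporation_nonlinear_cr(e_po, e_pa):
        """Actual evaporation from the generalized nonlinear complementary relationship.

        Assumes the polynomial form y = (2 - x) * x**2 commonly cited for Brutsaert (2015),
        with x = Epo/Epa (potential over apparent potential evaporation) and y = E/Epa.
        Inputs may be in any consistent unit (e.g., mm/day).
        """
        x = min(max(e_po / e_pa, 0.0), 1.0)   # physical constraint: 0 <= Epo <= Epa
        y = (2.0 - x) * x**2
        return y * e_pa

    # Illustrative values (mm/day), not station data.
    e_po, e_pa = 3.5, 5.0
    print(f"estimated daily evaporation: {evaporation_nonlinear_cr(e_po, e_pa):.2f} mm/day")
    ```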

  20. System Identification for Nonlinear Control Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Linse, Dennis J.

    1990-01-01

    An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.

  1. Initial guidelines and estimates for a power system with inertial (flywheel) energy storage

    NASA Technical Reports Server (NTRS)

    Slifer, L. W., Jr.

    1980-01-01

    The starting point is presented for the assessment of a spacecraft power system utilizing inertial (flywheel) energy storage. Both general and specific guidelines are defined for the assessment of a modular flywheel system, operationally similar to but with significantly greater capability than the multimission modular spacecraft (MMS) power system. Goals for the flywheel system are defined in terms of efficiency and mass estimates for the system components. The inertial storage power system uses a 5 kWh flywheel storage component at 50 percent depth of discharge (DOD). It is capable of supporting an average load of 3 kW, including a peak load of 7.5 kW for 10 percent of the duty cycle, in low Earth orbit operation. The specific power goal for the system is 10 W/kg, consisting of a 56 W/kg (end of life) solar array, a 21.7 Wh/kg (at 50 percent DOD) flywheel, and 43 W/kg power processing (conditioning, control, and distribution).

  2. On a Formal Tool for Reasoning About Flight Software Cost Analysis

    NASA Technical Reports Server (NTRS)

    Spagnuolo, John N., Jr.; Stukes, Sherry A.

    2013-01-01

    A report focuses on the development of flight software (FSW) cost estimates for 16 Discovery-class missions at JPL. The techniques and procedures developed enabled streamlining of the FSW analysis process, and provided instantaneous confirmation that the data and processes used for these estimates were consistent across all missions. The research provides direction as to how to build a prototype rule-based system for FSW cost estimation that would provide (1) FSW cost estimates, (2) explanation of how the estimates were arrived at, (3) mapping of costs, (4) mathematical trend charts with explanations of why the trends are what they are, (5) tables with ancillary FSW data of interest to analysts, (6) a facility for expert modification/enhancement of the rules, and (7) a basis for conceptually convenient expansion into more complex, useful, and general rule-based systems.
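
    A toy sketch of the kind of rule-based estimator described, rules that map mission attributes to cost adjustments while recording an explanation trace, is shown below; all rules, factors, and dollar values are hypothetical.

    ```python
    # Hypothetical rule-based flight-software cost estimator: each rule inspects the
    # mission description, multiplies the base cost, and records an explanation.
    BASE_COST_M = 10.0   # illustrative base cost, $M

    RULES = [
        ("new processor architecture", lambda m: m.get("new_processor", False), 1.30),
        ("high software heritage",      lambda m: m.get("heritage", 0.0) > 0.5,  0.80),
        ("instrument count > 3",        lambda m: m.get("instruments", 0) > 3,   1.15),
    ]

    def estimate_fsw_cost(mission):
        cost, trace = BASE_COST_M, []
        for name, condition, factor in RULES:
            if condition(mission):
                cost *= factor
                trace.append(f"{name}: x{factor}")
        return cost, trace

    mission = {"new_processor": True, "heritage": 0.6, "instruments": 2}
    cost, trace = estimate_fsw_cost(mission)
    print(f"estimated FSW cost: ${cost:.1f}M")
    print("explanation:", "; ".join(trace) or "base estimate only")
    ```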

  3. 75 FR 38187 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... the Accelerated Cost Recovery System (Sec. 1.168(i)-1). DATES: Written comments should be received on... . SUPPLEMENTARY INFORMATION: Title: General Asset Accounts under the Accelerated Cost Recovery System. OMB Number... approved collection. Affected Public: Business or other for-profit organizations and Farms. Estimated...

  4. Analysis of light vehicle crashes and pre-crash scenarios based on the 2000 General Estimates System

    DOT National Transportation Integrated Search

    2003-02-01

    This report analyzes the problem of light vehicle crashes in the United States to support the development and assessment of effective crash avoidance systems as part of the U.S. Department of Transportation's Intelligent Vehicle Initiative. The analy...

  5. Nonlinear Quantum Metrology of Many-Body Open Systems

    NASA Astrophysics Data System (ADS)

    Beau, M.; del Campo, A.

    2017-07-01

    We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^{-[k-(p/2)]}, surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^{-(p/2)}, while many-body decoherence enhances the precision to N^{-k} in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.

  6. Assessment of geothermal resources of the United States, 1975

    USGS Publications Warehouse

    White, Donald Edward; Williams, David L.

    1975-01-01

    This assessment of geothermal resources of the United States consists of two major parts: (1) estimates of total heat in the ground to a depth of 10 km and (2) estimates of the part of this total heat that is recoverable with present technology, regardless of price. No attempt has been made to consider most aspects of the legal, environmental, and institutional limitations in exploiting these resources. In general, the average heat content of rocks is considerably higher in the Western United States than in the East. This also helps to explain why the most favorable hydrothermal convection systems and the hot young igneous systems occur in the West. Resources of the most attractive identified convection systems (excluding national parks) with predicted reservoir temperatures above 150 °C have an estimated electrical production potential of about 8,000 megawatt-centuries, or about 26,000 megawatts for 30 years. Assumptions in this conversion are: (1) one-half of the volume of the heat reservoirs is porous and permeable, (2) one-half of the heat of the porous, permeable parts is recoverable in fluids at the wellheads, and (3) the conversion efficiency of heat in wellhead fluids to electricity ranges from about 8 to 20 percent, depending on temperature and kind of fluid (hot water or steam). The estimated overall efficiency of conversion of heat in the ground to electrical energy generally ranges from less than 2 to 5 percent, depending on type of system and reservoir temperature. (See also W77-07477) (Woodard-USGS)

  7. Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.

    2015-09-01

    Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue and data are processed similar to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing the limitation of FFT as a general spectral estimation algorithm to only take into account samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general region position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced image time.

  8. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units.

    PubMed

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-12-11

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter.
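
    The key correction, removing the turn-induced omega x v term from the accelerometer before computing roll and pitch from the gravity direction, can be sketched as follows; the frame convention and numbers are illustrative, and the published algorithm additionally fuses this result with gyro propagation in a filter.

    ```python
    import numpy as np

    def roll_pitch_from_specific_force(f_meas, omega, v_body):
        """Roll/pitch estimate from an accelerometer, corrected with airspeed.

        Assumes a body-frame NED convention in which the measured specific force is
        f = dv/dt + omega x v - g_b; neglecting dv/dt, the gravity-only
        ('static-equivalent') specific force is f - omega x v.
        Inputs are body-frame vectors (m/s^2, rad/s, m/s); values are illustrative.
        """
        f_static = f_meas - np.cross(omega, v_body)   # remove turn-induced acceleration
        roll = np.arctan2(-f_static[1], -f_static[2])
        pitch = np.arctan2(f_static[0], np.hypot(f_static[1], f_static[2]))
        return roll, pitch

    # Level flat turn: forward airspeed 50 m/s, yaw rate 0.1 rad/s.
    v_body = np.array([50.0, 0.0, 0.0])
    omega = np.array([0.0, 0.0, 0.1])
    f_meas = np.array([0.0, 5.0, -9.81])   # accelerometer also senses the centripetal term

    roll, pitch = roll_pitch_from_specific_force(f_meas, omega, v_body)
    print(np.degrees([roll, pitch]))        # near [0, 0] after the correction
    ```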

  9. Velocity-Aided Attitude Estimation for Helicopter Aircraft Using Microelectromechanical System Inertial-Measurement Units

    PubMed Central

    Lee, Sang Cheol; Hong, Sung Kyung

    2016-01-01

    This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed for using velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter. PMID:27973429

  10. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  11. Wavefront Curvature Sensing from Image Projections

    DTIC Science & Technology

    2006-09-01

    entrance pupil. The generalized pupil function, denoted P, provides a basic mathematical model for the optical field at the system pupil: P(x, y) ... pupil or aperture radius, RP, may be included in Zernike functions and windowing functions to give the notation more generality. Given some ... promises a much faster readout time from the CCD along with some amount of information useful for estimating pupil phase. A General Image Projection

  12. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  13. Estimation of Theaflavins (TF) and Thearubigins (TR) Ratio in Black Tea Liquor Using Electronic Vision System

    NASA Astrophysics Data System (ADS)

    Akuli, Amitava; Pal, Abhra; Ghosh, Arunangshu; Bhattacharyya, Nabarun; Bandhopadhyya, Rajib; Tamuly, Pradip; Gogoi, Nagen

    2011-09-01

    The quality of black tea is generally assessed using organoleptic tests by professional tea tasters. They determine the quality of black tea based on its appearance (in dry condition and during liquor formation), aroma and taste. Variation in the above parameters is actually contributed by a number of chemical compounds such as Theaflavins (TF), Thearubigins (TR), caffeine, linalool, geraniol, etc. Among these, TF and TR are the most important chemical compounds, which actually contribute to the formation of taste, colour and brightness in tea liquor. Estimation of TF and TR in black tea is generally done using a spectrophotometer, but the analysis requires rigorous and time-consuming sample preparation, and operation of the costly spectrophotometer requires expert manpower. To overcome these problems, an Electronic Vision System based on digital image processing has been developed. The system is faster, low cost and repeatable, and can estimate the TF and TR ratio of black tea liquor accurately. The data analysis is done using Principal Component Analysis (PCA), Multiple Linear Regression (MLR) and Multiple Discriminant Analysis (MDA). A correlation has been established between the colour of tea liquor images and the TF, TR ratio. This paper describes the newly developed E-Vision system, the experimental methods, the data analysis algorithms and, finally, the performance of the E-Vision System as compared to the results of the traditional spectrophotometer.
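
    A stripped-down sketch of the MLR step, regressing the TF/TR ratio on mean colour features extracted from liquor images, is given below; the feature choice and the synthetic data are illustrative and do not reproduce the E-Vision pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Synthetic training set: mean R, G, B of each liquor image (illustrative features)
    # and the TF/TR ratio measured by spectrophotometry for the same samples.
    rgb_means = rng.uniform(40, 200, size=(30, 3))
    tf_tr_ratio = (0.004 * rgb_means[:, 0] - 0.002 * rgb_means[:, 2]
                   + 0.1 + rng.normal(scale=0.02, size=30))

    model = LinearRegression().fit(rgb_means, tf_tr_ratio)
    print("R^2 on training data:", model.score(rgb_means, tf_tr_ratio))

    # Predict the TF/TR ratio for a new liquor image from its mean colour.
    new_image_rgb = np.array([[120.0, 90.0, 60.0]])
    print("predicted TF/TR ratio:", model.predict(new_image_rgb)[0])
    ```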

  14. 48 CFR 552.270-18 - Default in Delivery-Time Extensions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Default in Delivery-Time Extensions. 552.270-18 Section 552.270-18 Federal Acquisition Regulations System GENERAL SERVICES... leases, in excess of the aggregate rent and estimated real estate tax and operating cost adjustments for...

  15. 48 CFR 552.270-18 - Default in Delivery-Time Extensions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Default in Delivery-Time Extensions. 552.270-18 Section 552.270-18 Federal Acquisition Regulations System GENERAL SERVICES... leases, in excess of the aggregate rent and estimated real estate tax and operating cost adjustments for...

  16. 48 CFR 552.270-18 - Default in Delivery-Time Extensions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Default in Delivery-Time Extensions. 552.270-18 Section 552.270-18 Federal Acquisition Regulations System GENERAL SERVICES... leases, in excess of the aggregate rent and estimated real estate tax and operating cost adjustments for...

  17. 48 CFR 552.270-18 - Default in Delivery-Time Extensions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Default in Delivery-Time Extensions. 552.270-18 Section 552.270-18 Federal Acquisition Regulations System GENERAL SERVICES... leases, in excess of the aggregate rent and estimated real estate tax and operating cost adjustments for...

  18. 20180311 - Variability of LD50 Values from Rat Oral Acute Toxicity Studies: Implications for Alternative Model Development (SOT)

    EPA Science Inventory

    Alternative models developed for estimating acute systemic toxicity are generally evaluated using in vivo LD50 values. However, in vivo acute systemic toxicity studies can produce variable results, even when conducted according to accepted test guidelines. This variability can ma...

  19. Statistical Aspects of Reliability, Maintainability, and Availability.

    DTIC Science & Technology

    1987-10-01

    A total of 33 research reports were issued, and 35 papers were published in scientific journals or are in press. Research topics included optimal assembly of systems, multistate system theory, testing whether new is better than used, nonparametric survival function estimation, measuring information in censored models, generalizations of total positivity, and

  20. Electrically heated particulate filter restart strategy

    DOEpatents

    Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI

    2011-07-12

    A control system that controls regeneration of a particulate filter is provided. The system generally includes a propagation module that estimates a propagation status of combustion of particulate matter in the particulate filter. A regeneration module controls current to the particulate filter to re-initiate regeneration based on the propagation status.

  1. Variability of LD50 Values from Rat Oral Acute Toxicity Studies: Implications for Alternative Model Development

    EPA Science Inventory

    Alternative models developed for estimating acute systemic toxicity are generally evaluated using in vivo LD50 values. However, in vivo acute systemic toxicity studies can produce variable results, even when conducted according to accepted test guidelines. This variability can ma...

  2. 48 CFR 552.270-18 - Default in Delivery-Time Extensions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Default in Delivery-Time Extensions. 552.270-18 Section 552.270-18 Federal Acquisition Regulations System GENERAL SERVICES... leases, in excess of the aggregate rent and estimated real estate tax and operating cost adjustments for...

  3. A Systems Approach to the Estimation of Ecosystem and Human Health Stressors in Air, Land and Water

    EPA Science Inventory

    A model linkage paradigm, based on the nitrogen cascade, is introduced. This general paradigm is then adapted to specific multi-media nitrogen issues and specific models to be linked. An example linked modeling system addressing potential nitrogen responses to biofuel-driven co...

  4. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  5. Solid rocket motor cost model

    NASA Technical Reports Server (NTRS)

    Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.

    1972-01-01

    A systematic and standardized procedure for estimating life cycle costs of solid rocket motor booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
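
    Cost estimating relationships of this kind are often power laws fitted to analogous experience, for example cost = a * W^b for a component of weight W. The sketch below fits such a relationship by log-log regression to made-up data points and then applies it parametrically; all numbers are illustrative.

    ```python
    import numpy as np

    # Hypothetical analogous-experience data: component weight (kg) and cost ($M).
    weight = np.array([500.0, 1200.0, 3000.0, 8000.0, 15000.0])
    cost = np.array([2.1, 4.0, 7.9, 16.5, 26.0])

    # Fit cost = a * weight**b by linear regression in log-log space.
    b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
    a = np.exp(log_a)
    print(f"CER: cost [$M] = {a:.3f} * weight^{b:.2f}")

    # Parametric estimate for a new configuration (illustrative 5000 kg case).
    print(f"estimated cost for 5000 kg: ${a * 5000.0**b:.1f}M")
    ```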

  6. EDIN0613P weight estimating program. [for launch vehicles

    NASA Technical Reports Server (NTRS)

    Hirsch, G. N.

    1976-01-01

    The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program while remaining cognizant of the limits imposed upon output depth and accuracy by utilization of generalized input, this program concept can be a useful tool for estimating purposes at the conceptual design stage of a launch vehicle.

  7. Performance estimates for the Space Station power system Brayton Cycle compressor and turbine

    NASA Technical Reports Server (NTRS)

    Cummings, Robert L.

    1989-01-01

    The methods which have been used by the NASA Lewis Research Center for predicting Brayton Cycle compressor and turbine performance for different gases and flow rates are described. These methods were developed by NASA Lewis during the early days of Brayton cycle component development and they can now be applied to the task of predicting the performance of the Closed Brayton Cycle (CBC) Space Station Freedom power system. Computer programs are given for performing these calculations and data from previous NASA Lewis Brayton Compressor and Turbine tests is used to make accurate estimates of the compressor and turbine performance for the CBC power system. Results of these calculations are also given. In general, calculations confirm that the CBC Brayton Cycle contractor has made realistic compressor and turbine performance estimates.

  8. Software cost/resource modeling: Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. J.

    1980-01-01

    A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Force Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

  9. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  10. Excitation, response, and fatigue life estimation methods for the structural design of externally blown flaps

    NASA Technical Reports Server (NTRS)

    Ungar, E. E.; Chandiramani, K. L.; Barger, J. E.

    1972-01-01

    Means for predicting the fluctuating pressures acting on externally blown flap surfaces are developed on the basis of generalizations derived from non-dimensionalized empirical data. Approaches for estimation of the fatigue lives of skin-stringer and honeycomb-core sandwich flap structures are derived from vibration response analyses and panel fatigue data. Approximate expressions for fluctuating pressures, structural response, and fatigue life are combined to reveal the important parametric dependences. The two-dimensional equations of motion of multi-element flap systems are derived in general form, so that they can be specialized readily for any particular system. An introduction is presented of an approach to characterizing the excitation pressures and structural responses which makes use of space-time spectral concepts and promises to provide useful insights, as well as experimental and analytical savings.

  11. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
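
    A dense, single-processor sketch of shifted subspace iteration for the generalized eigenproblem K x = lambda M x is shown below; it illustrates the role of the shift in the repeated linear solves, while the paper's contribution, solving the banded systems in parallel, is not reproduced. The matrices, subspace size, and iteration count are small and illustrative.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve, eigh

    def shifted_subspace_iteration(K, M, sigma, p=4, iters=50):
        """Approximate the eigenpairs of K x = lam M x nearest the shift sigma."""
        n = K.shape[0]
        lu = lu_factor(K - sigma * M)          # one factorization, reused every iteration
        X = np.random.default_rng(0).standard_normal((n, p))
        for _ in range(iters):
            Z = lu_solve(lu, M @ X)            # shifted inverse-iteration step
            Kr, Mr = Z.T @ K @ Z, Z.T @ M @ Z  # Rayleigh-Ritz projection
            lams, Q = eigh(Kr, Mr)             # small generalized eigenproblem
            X = Z @ Q                          # M-orthonormal Ritz vectors
        return lams, X

    # Small banded test problem (illustrative): 1-D stiffness and mass matrices.
    n = 50
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.eye(n)
    lams, _ = shifted_subspace_iteration(K, M, sigma=0.0)   # sigma=0: smallest eigenvalues
    print("approximate smallest eigenvalues:", lams)
    ```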

  12. Neuro-fuzzy decoding of sensory information from ensembles of simultaneously recorded dorsal root ganglion neurons for functional electrical stimulation applications

    NASA Astrophysics Data System (ADS)

    Rigosa, J.; Weber, D. J.; Prochazka, A.; Stein, R. B.; Micera, S.

    2011-08-01

    Functional electrical stimulation (FES) is used to improve motor function after injury to the central nervous system. Some FES systems use artificial sensors to switch between finite control states. To optimize FES control of the complex behavior of the musculo-skeletal system in activities of daily life, it is highly desirable to implement feedback control. In theory, sensory neural signals could provide the required control signals. Recent studies have demonstrated the feasibility of deriving limb-state estimates from the firing rates of primary afferent neurons recorded in dorsal root ganglia (DRG). These studies used multiple linear regression (MLR) methods to generate estimates of limb position and velocity based on a weighted sum of firing rates in an ensemble of simultaneously recorded DRG neurons. The aim of this study was to test whether the use of a neuro-fuzzy (NF) algorithm (the generalized dynamic fuzzy neural networks (GD-FNN)) could improve the performance, robustness and ability to generalize from training to test sets compared to the MLR technique. NF and MLR decoding methods were applied to ensemble DRG recordings obtained during passive and active limb movements in anesthetized and freely moving cats. The GD-FNN model provided more accurate estimates of limb state and generalized better to novel movement patterns. Future efforts will focus on implementing these neural recording and decoding methods in real time to provide closed-loop control of FES using the information extracted from sensory neurons.

  13. [Geographical coverage of the Mexican Healthcare System and a spatial analysis of utilization of its General Hospitals in 1998].

    PubMed

    Hernández-Avila, Juan E; Rodríguez, Mario H; Rodríguez, Norma E; Santos, René; Morales, Evangelina; Cruz, Carlos; Sepúlveda-Amor, Jaime

    2002-01-01

    To describe the geographical coverage of the Mexican Healthcare System (MHS) services and to assess the utilization of its General Hospitals. A Geographic Information System (GIS) was used to include sociodemographic data by locality, the geographical location of all MHS healthcare services, and data on hospital discharge records. A maximum likelihood estimation model was developed to assess the utilization levels of 217 MHS General Hospitals. The model included data on human resources, additional infrastructure, and the population within a 25 km radius. In 1998, 10,806 localities with 72 million inhabitants had at least one public healthcare unit, and 97.2% of the population lived within 50 km of a healthcare unit; however, over 18 million people lived in rural localities without a healthcare unit. The mean annual hospital occupation rate was 48.5 +/- 28.5 per 100 bed/years, with high variability within and between states. Hospital occupation was significantly associated with the number of physicians in the unit, and in the Mexican Institute of Social Security units utilization was associated with additional health infrastructure, and with the population's poverty index. GIS analysis allows improved estimation of the coverage and utilization of MHS hospitals.

  14. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
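
    The Latin Hypercube Sampling step, drawing a small space-filling set of parameter samples whose count need not grow with the parameter dimension, can be sketched with SciPy's quasi-Monte Carlo module; the parameter names and ranges below are illustrative, not those of any particular GRAPE application.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Illustrative parameter ranges for the parallel filter models
    # (e.g., an unknown mass and damping coefficient being tracked).
    lower = np.array([0.5, 0.01])
    upper = np.array([2.0, 0.50])

    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit_samples = sampler.random(n=8)                 # 8 models regardless of dimension
    param_samples = qmc.scale(unit_samples, lower, upper)

    for k, theta in enumerate(param_samples):
        print(f"model {k}: mass={theta[0]:.3f}, damping={theta[1]:.3f}")
    ```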

  15. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.

  16. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  17. Estimating millet production for famine early warning: An application of crop simulation modelling using satellite and ground-based data in Burkina Faso

    USGS Publications Warehouse

    Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.

    1997-01-01

    Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.

  18. Basin Scale Estimates of Evapotranspiration Using GRACE and other Observations

    NASA Technical Reports Server (NTRS)

    Rodell, M.; Famiglietti, J. S.; Chen, J.; Seneviratne, S. I.; Viterbo, P.; Holl, S.; Wilson, C. R.

    2004-01-01

    Evapotranspiration is integral to studies of the Earth system, yet it is difficult to measure on regional scales. One estimation technique is a terrestrial water budget, i.e., total precipitation minus the sum of evapotranspiration and net runoff equals the change in water storage. Gravity Recovery and Climate Experiment (GRACE) satellite gravity observations are now enabling closure of this equation by providing the terrestrial water storage change. Equations are presented here for estimating evapotranspiration using observation based information, taking into account the unique nature of GRACE observations. GRACE water storage changes are first substantiated by comparing with results from a land surface model and a combined atmospheric-terrestrial water budget approach. Evapotranspiration is then estimated for 14 time periods over the Mississippi River basin and compared with output from three modeling systems. The GRACE estimates generally lay in the middle of the models and may provide skill in evaluating modeled evapotranspiration.
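
    The budget underlying the approach is simply ET = P - R - dS/dt over the basin and averaging period; a minimal calculation with illustrative monthly values might look like the following.

    ```python
    # Terrestrial water budget: ET = P - R - dS/dt, evaluated over one month.
    # All values are illustrative basin averages in mm per month, not Mississippi data.
    precipitation = 80.0        # P, from gauge or satellite products
    net_runoff = 25.0           # R, from stream gauges
    storage_change = 10.0       # dS, from GRACE terrestrial water storage anomalies

    evapotranspiration = precipitation - net_runoff - storage_change
    print(f"ET = {evapotranspiration:.0f} mm/month")
    ```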

  19. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.

  20. Tuning the control system of a nonlinear inverted pendulum by means of the new method of Lyapunov exponents estimation

    NASA Astrophysics Data System (ADS)

    Balcerzak, Marek; Dąbrowski, Artur; Pikunov, Danylo

    2018-01-01

    This paper presents a practical application of a new, simplified method of Lyapunov exponent estimation. The method has been applied to optimization of a real, nonlinear inverted pendulum system. The authors present how the algorithm for estimating the Largest Lyapunov Exponent (LLE) can be applied to evaluate control system performance. A new LLE-based control performance index is proposed. Equations of the fourth-order inverted pendulum system have been derived. The nonlinear friction of the regulated object has been identified by means of the nonlinear least squares method. Three different friction models have been tested: linear, cubic and Coulomb. The Differential Evolution (DE) algorithm has been used to search for the best set of parameters of the general linear regulator. This work shows that the proposed method is efficient and results in faster perturbation rejection, especially when disturbances are significant.
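
    A minimal illustration of largest-Lyapunov-exponent estimation by tracking the divergence of two nearby trajectories with periodic renormalization is given below; it is applied to the logistic map rather than to the pendulum model of the paper, and the step count and initial separation are illustrative.

    ```python
    import numpy as np

    def largest_lyapunov_logistic(r=4.0, n_steps=100000, d0=1e-9):
        """Estimate the largest Lyapunov exponent of the logistic map x -> r*x*(1-x)
        by following two nearby trajectories and renormalizing their separation."""
        x, y = 0.3, 0.3 + d0
        log_sum = 0.0
        for _ in range(n_steps):
            x = r * x * (1.0 - x)
            y = r * y * (1.0 - y)
            d = abs(y - x)
            if d < 1e-15:                       # guard against numerical coincidence
                d = 1e-15
            log_sum += np.log(d / d0)
            y = x + d0 if y >= x else x - d0    # renormalize the separation to d0
        return log_sum / n_steps

    # For r = 4 the exact value is ln(2), about 0.693.
    print("estimated LLE:", largest_lyapunov_logistic())
    ```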

  1. Predicting bunching costs for the Radio Horse 9 winch

    Treesearch

    Chris B. LeDoux; Bruce W. Kling; Patrice A. Harou

    1987-01-01

    Data from field studies and a prebunching cost simulator have been assembled and converted into a general equation that can be used to estimate the prebunching cost of the Radio Horse 9 winch. The methods can be used to estimate prebunching cost for bunching under the skyline corridor for swinging with cable systems, for bunching to skid trail edge to be picked up by a...

  2. Estimation of the Level of Cognitive Development of a Preschool Child Using the System of Situations with Mathematical Contents

    ERIC Educational Resources Information Center

    Gorev, Pavel M.; Bichurina, Svetlana Y.; Yakupova, Rufiya M.; Khairova, Irina V.

    2016-01-01

    Cognitive development of personality can be considered as one of the key directions of preschool education presented in the world practice, where preschool programs are educational ones, and preschool education is the first level of the general education. Thereby the purpose of the research is to create a model of reliable estimation of cognitive…

  3. Improving estimations of greenhouse gas transfer velocities by atmosphere-ocean couplers in Earth-System and regional models

    NASA Astrophysics Data System (ADS)

    Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.

    2015-09-01

    Earth-System and regional models forecasting climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator while neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis of novel couplers of the atmospheric and oceanographic model components. We tested performance with measured and simulated data from the European coastal ocean and found that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows calculus vectorization and parallel processing, improving computational speed roughly 12x on a single CPU core, an essential feature for Earth-System model applications.
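
    For context, the classical generalization criticized here is a wind-speed-only transfer-velocity law of the form k = a * U10^2 * (Sc/660)^(-1/2); the sketch below evaluates one commonly used quadratic coefficient purely as an illustration of the baseline that the proposed coupler refines.

    ```python
    def transfer_velocity_wind_only(u10, schmidt):
        """Classical wind-speed-only gas transfer velocity k (cm/h).

        Uses a quadratic wind-speed law, k = a * U10**2 * (Sc/660)**-0.5, with
        a = 0.251, a commonly used Wanninkhof-type coefficient. This is the kind
        of simple generalization that the proposed coupler augments with sea state,
        atmospheric stability, bottom current drag, rain and surfactants.
        """
        a = 0.251                     # cm h^-1 per (m s^-1)^2
        return a * u10**2 * (schmidt / 660.0) ** -0.5

    # Illustrative values: 8 m/s wind, CO2 Schmidt number near 660 in 20 degC seawater.
    print(f"k = {transfer_velocity_wind_only(8.0, 660.0):.1f} cm/h")
    ```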

  4. Smoothing Motion Estimates for Radar Motion Compensation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.

    2017-07-01

    Simple motion models for complex motion environments are often not adequate for keeping radar data coherent. Even perfect motion samples applied to imperfect models may lead to interim calculations exhibiting errors that lead to degraded processing results. Herein we discuss a specific issue involving calculating motion for groups of pulses, with measurements only available at pulse-group boundaries. Acknowledgements: This report was funded by General Atomics Aeronautical Systems, Inc. (GA-ASI) Mission Systems under Cooperative Research and Development Agreement (CRADA) SC08/01749 between Sandia National Laboratories and GA-ASI. General Atomics Aeronautical Systems, Inc. (GA-ASI), an affiliate of privately-held General Atomics, is a leading manufacturer of Remotely Piloted Aircraft (RPA) systems, radars, and electro-optic and related mission systems, including the Predator(r)/Gray Eagle(r)-series and Lynx(r) Multi-mode Radar.

  5. General practice funding underpins the persistence of the inverse care law: cross-sectional study in Scotland.

    PubMed

    McLean, Gary; Guthrie, Bruce; Mercer, Stewart W; Watt, Graham C M

    2015-12-01

    Universal access to health care, as provided in the NHS, does not ensure that patients' needs are met. To explore the relationships between multimorbidity, general practice funding, and workload by deprivation in a national healthcare system. Cross-sectional study using routine data from 956 general practices in Scotland. Estimated numbers of patients with multimorbidity, estimated numbers of consultations per 1000 patients, and payments to practices per patient are presented and analysed by deprivation decile at practice level. Levels of multimorbidity rose with practice deprivation. Practices in the most deprived decile had 38% more patients with multimorbidity compared with the least deprived (222.8 per 1000 patients versus 161.1; P<0.001) and over 120% more patients with combined mental-physical multimorbidity (113.0 per 1000 patients versus 51.5; P<0.001). Practices in the most deprived decile had 20% more consultations per annum compared with the least deprived (4616 versus 3846, P<0.001). There was no association between total practice funding and deprivation (Spearman ρ -0.09; P = 0.03). Although consultation rates increased with deprivation, the social gradients in multimorbidity were much steeper. There was no association between consultation rates and levels of funding. No evidence was found that general practice funding matches clinical need, as estimated by different definitions of multimorbidity. Consultation rates provide only a partial estimate of the work involved in addressing clinical needs and are poorly related to the prevalence of multimorbidity. In these circumstances, general practice is unlikely to mitigate health inequalities and may increase them. © British Journal of General Practice 2015.

  6. Feasibility of using LANDSAT images of vegetation cover to estimate effective hydraulic properties of soils

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1985-01-01

    Research activities conducted from February 1, 1985 to July 31, 1985 and preliminary conclusions regarding research objectives are summarized. The objective is to determine the feasibility of using LANDSAT data to estimate effective hydraulic properties of soils. The general approach is to apply the climatic-climax hypothesis (Eagleson, 1982) to natural water-limited vegetation systems using canopy cover estimated from LANDSAT data. Natural water-limited systems typically consist of inhomogeneous vegetation canopies interspersed with bare soils. The ground resolution associated with one pixel from LANDSAT MSS (or TM) data is generally greater than the scale of the plant canopy or canopy clusters. Thus a method for resolving percent canopy cover at a subpixel level must be established before the Eagleson hypothesis can be tested. Two formulations are proposed which extend existing methods of analyzing mixed pixels to naturally vegetated landscapes. The first method involves use of the normalized vegetation index. The second approach is a physical model based on radiative transfer principles. Both methods are to be analyzed for their feasibility on selected sites.
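
    A minimal sketch of the first proposed approach, assuming a simple linear mixing of the normalized vegetation index between bare-soil and full-canopy end members; the end-member values are placeholders, not values from the report.

        # Hedged sketch of sub-pixel canopy-cover estimation by linear unmixing
        # of a vegetation index; end-member values are placeholders.
        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red)

        def fractional_cover(ndvi_pixel, ndvi_bare_soil=0.15, ndvi_full_canopy=0.80):
            """Linear-mixture estimate of percent canopy cover within a pixel."""
            f = (ndvi_pixel - ndvi_bare_soil) / (ndvi_full_canopy - ndvi_bare_soil)
            return np.clip(f, 0.0, 1.0)

        pixels_nir = np.array([0.42, 0.30, 0.55])
        pixels_red = np.array([0.12, 0.20, 0.08])
        print(fractional_cover(ndvi(pixels_nir, pixels_red)))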

  7. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

    This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which may influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater than, and for UX Her, TW And and XZ Pup lower than, the corresponding theoretical predictions, but for all mentioned systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis generally showed that, once the previously estimated mass ratios of the components are corrected for some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.

  8. Traffic safety facts 1996 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1997-12-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  9. Traffic safety facts 2005 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2006-01-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  10. Traffic safety facts 2006 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2007-01-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  11. Traffic safety facts 2000 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2001-12-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  12. Traffic safety facts 2001 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2002-12-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  13. Traffic safety facts 1998 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1999-10-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  14. Traffic safety facts 2002 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2004-01-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  15. Traffic safety facts 2003 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2005-01-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  16. Traffic safety facts 1999 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2000-12-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  17. Traffic safety facts 1994 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    1995-08-01

    This annual report presents descriptive statistics about traffic crashes of all severities, from those that result in property damage to those that result in the loss of human life. Information from two of the National Highway Traffic Safety Administ...

  18. Utility of 222Rn as a passive tracer of subglacial distributed system drainage

    NASA Astrophysics Data System (ADS)

    Linhoff, Benjamin S.; Charette, Matthew A.; Nienow, Peter W.; Wadham, Jemma L.; Tedstone, Andrew J.; Cowton, Thomas

    2017-03-01

    Water flow beneath the Greenland Ice Sheet (GrIS) has been shown to include slow-inefficient (distributed) and fast-efficient (channelized) drainage systems, in response to meltwater delivery to the bed via both moulins and surface lake drainage. This partitioning between channelized and distributed drainage systems is difficult to quantify yet it plays an important role in bulk meltwater chemistry and glacial velocity, and thus subglacial erosion. Radon-222, which is continuously produced via the decay of 226Ra, accumulates in meltwater that has interacted with rock and sediment. Hence, elevated concentrations of 222Rn should be indicative of meltwater that has flowed through a distributed drainage system network. In the spring and summer of 2011 and 2012, we made hourly 222Rn measurements in the proglacial river of a large outlet glacier of the GrIS (Leverett Glacier, SW Greenland). Radon-222 activities were highest in the early melt season (10-15 dpm L-1), decreasing by a factor of 2-5 (3-5 dpm L-1) following the onset of widespread surface melt. Using a 222Rn mass balance model, we estimate that, on average, greater than 90% of the river 222Rn was sourced from distributed system meltwater. The distributed system 222Rn flux varied on diurnal, weekly, and seasonal time scales with highest fluxes generally occurring on the falling limb of the hydrograph and during expansion of the channelized drainage system. Using laboratory based estimates of distributed system 222Rn, the distributed system water flux generally ranged between 1-5% of the total proglacial river discharge for both seasons. This study provides a promising new method for hydrograph separation in glacial watersheds and for estimating the timing and magnitude of distributed system fluxes expelled at ice sheet margins.
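
    A stripped-down sketch of the separation idea, assuming a simple two-end-member mixing of 222Rn activities; the paper's full mass balance also accounts for decay and degassing losses, which are ignored here, and the activities below are placeholders.

        # Simplified two-end-member mixing sketch for radon-based hydrograph
        # separation; activities are placeholders, losses are neglected.
        def distributed_fraction(rn_river, rn_channelized, rn_distributed):
            """Fraction of discharge routed through the distributed system."""
            return (rn_river - rn_channelized) / (rn_distributed - rn_channelized)

        rn_river = 4.0          # dpm per litre, measured in the proglacial river
        rn_channelized = 0.2    # dpm per litre, fast/efficient drainage end member
        rn_distributed = 120.0  # dpm per litre, lab-based distributed end member
        print(distributed_fraction(rn_river, rn_channelized, rn_distributed))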

  19. Structural Properties and Estimation of Delay Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H. S.

    1975-01-01

    Two areas in the theory of delay systems were studied: structural properties and their applications to feedback control, and optimal linear and nonlinear estimation. The concepts of controllability, stabilizability, observability, and detectability were investigated. The property of pointwise degeneracy of linear time-invariant delay systems is considered. Necessary and sufficient conditions for three dimensional linear systems to be made pointwise degenerate by delay feedback were obtained, while sufficient conditions for this to be possible are given for higher dimensional linear systems. These results were applied to obtain solvability conditions for the minimum time output zeroing control problem by delay feedback. A representation theorem is given for conditional moment functionals of general nonlinear stochastic delay systems, and stochastic differential equations are derived for conditional moment functionals satisfying certain smoothness properties.

  20. A computer decision aid for medical prevention: a pilot qualitative study of the Personalized Estimate of Risks (EsPeR) system

    PubMed Central

    Colombet, Isabelle; Dart, Thierry; Leneveut, Laurence; Zunino, Sylvain; Ménard, Joël; Chatellier, Gilles

    2003-01-01

    Background Many preventable diseases such as ischemic heart diseases and breast cancer prevail at a large scale in the general population. Computerized decision support systems are one of the solutions for improving the quality of prevention strategies. Methods The system called EsPeR (Personalised Estimate of Risks) combines calculation of several risks with computerisation of guidelines (cardiovascular prevention, screening for breast cancer, colorectal cancer, uterine cervix cancer, and prostate cancer, diagnosis of depression and suicide risk). We present a qualitative evaluation of its ergonomics, as well as its understanding and acceptance by a group of general practitioners. We organised four focus groups, each including 6–11 general practitioners. Physicians worked on several structured clinical scenarios with the help of EsPeR, and three senior investigators led structured discussion sessions. Results The initial sessions identified several ergonomic flaws of the system that were easily corrected. Both the clinical scenarios and the discussion sessions identified several problems related to insufficient comprehension (expression of risks, definition of familial history of disease) and difficulty for the physicians in accepting some of the recommendations. Conclusion Educational, socio-professional and organisational components (i.e. time constraints for training and use of the EsPeR system during consultation), as well as acceptance of evidence-based decision-making, should be taken into account before launching computerised decision support systems or applying them in randomised trials. PMID:14641924

  1. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
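
    The sketch below is one plausible reading of such a geometric eye model, not the authors' exact formulation: the eye centre is taken as the midpoint of the two corners, the eyeball radius is scaled from the corner-to-corner distance by an assumed anthropometric ratio, and the horizontal gaze angle follows from the pupil offset.

        # Hedged geometric sketch of gaze-angle estimation from eye corners and
        # midpupil; the anthropometric ratio and pixel coordinates are assumptions.
        import math

        def gaze_angle_deg(corner_left, corner_right, midpupil,
                           radius_to_corner_ratio=0.55):
            cx = 0.5 * (corner_left[0] + corner_right[0])
            corner_dist = math.hypot(corner_right[0] - corner_left[0],
                                     corner_right[1] - corner_left[1])
            eye_radius = radius_to_corner_ratio * corner_dist   # assumed ratio
            offset = midpupil[0] - cx                           # horizontal offset
            return math.degrees(math.asin(max(-1.0, min(1.0, offset / eye_radius))))

        print(gaze_angle_deg((100, 120), (140, 121), (123, 118)))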

  2. Generalized Distributed Consensus-based Algorithms for Uncertain Systems and Networks

    DTIC Science & Technology

    2010-01-01

    Only fragments of this report's front matter were captured in the record: citations on discrete-time linear systems with Markovian jumping parameters and additive disturbances, state estimation for Markovian jump linear systems with delayed mode observations, and an appendix on discrete-time coupled matrix equations.

  3. Time and frequency applications.

    PubMed

    Hellwig, H

    1993-01-01

    An overview is given of the capabilities of atomic clocks and quartz crystal oscillators in terms of available precision of time and frequency signals. The generation, comparison, and dissemination of time and frequency is then discussed. The principal focus is to survey uses of time and frequency in navigation, communication, and science. The examples given include the Global Positioning System, a satellite-based global navigation system, and general and dedicated communication networks, as well as experiments in general relativity and radioastronomy. The number of atomic clocks and crystal oscillators that are in actual use worldwide is estimated.

  4. Manufacturing cost analysis of a parabolic dish concentrator (General Electric design) for solar thermal electric power systems in selected production volumes

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The manufacturing cost of a General Electric 12-meter-diameter concentrator was estimated. This parabolic dish concentrator for solar thermal systems was costed at annual production volumes of 100; 1,000; 5,000; 10,000; 50,000; 100,000; 400,000; and 1,000,000 units. Presented for each volume are the costs of direct labor, material, burden, tooling, capital equipment, and buildings. Also presented are the direct labor personnel and factory space requirements. All costs are based on early 1981 economics.

  5. Historical (1750–2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang

    Here, we present a new data set of annual historical (1750–2014) anthropogenic chemically reactive gases (CO, CH 4, NH 3, NO x, SO 2, NMVOCs), carbonaceous aerosols (black carbon – BC, and organic carbon – OC), and CO 2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.

  6. Historical (1750–2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    DOE PAGES

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; ...

    2018-01-29

    Here, we present a new data set of annual historical (1750–2014) anthropogenic chemically reactive gases (CO, CH 4, NH 3, NO x, SO 2, NMVOCs), carbonaceous aerosols (black carbon – BC, and organic carbon – OC), and CO 2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.

  7. Historical (1750-2014) anthropogenic emissions of reactive gases and aerosols from the Community Emissions Data System (CEDS)

    NASA Astrophysics Data System (ADS)

    Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; Klimont, Zbigniew; Janssens-Maenhout, Greet; Pitkanen, Tyler; Seibert, Jonathan J.; Vu, Linh; Andres, Robert J.; Bolt, Ryan M.; Bond, Tami C.; Dawidowski, Laura; Kholod, Nazar; Kurokawa, June-ichi; Li, Meng; Liu, Liang; Lu, Zifeng; Moura, Maria Cecilia P.; O'Rourke, Patrick R.; Zhang, Qiang

    2018-01-01

    We present a new data set of annual historical (1750-2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon - BC, and organic carbon - OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.
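
    The core bookkeeping behind such inventories can be illustrated as activity data multiplied by emission factors and summed by country, sector, and year; the sketch below is not CEDS source code and its numbers are invented.

        # Illustrative inventory arithmetic: emissions = activity * emission factor,
        # aggregated by country, sector, and year; all values are made up.
        import pandas as pd

        activity = pd.DataFrame({
            "country": ["USA", "USA", "CHN"],
            "sector":  ["road_transport", "power", "power"],
            "year":    [2014, 2014, 2014],
            "activity": [1.2e9, 8.5e8, 2.4e9],          # e.g. fuel consumed (GJ)
        })
        emission_factor = pd.DataFrame({
            "country": ["USA", "USA", "CHN"],
            "sector":  ["road_transport", "power", "power"],
            "ef_kt_per_unit": [2.1e-7, 4.0e-7, 6.5e-7],  # kt of SO2 per GJ
        })

        merged = activity.merge(emission_factor, on=["country", "sector"])
        merged["emissions_kt"] = merged["activity"] * merged["ef_kt_per_unit"]
        print(merged.groupby(["country", "sector", "year"])["emissions_kt"].sum())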

  8. Improved Temperature Dynamic Model of Turbine Subcomponents for Facilitation of Generalized Tip Clearance Control

    NASA Technical Reports Server (NTRS)

    Kypuros, Javier A.; Colson, Rodrigo; Munoz, Afredo

    2004-01-01

    This paper describes efforts conducted to improve dynamic temperature estimations of a turbine tip clearance system to facilitate the design of a generalized tip clearance controller. This work builds upon research previously conducted and presented, and focuses primarily on improving dynamic temperature estimations of the primary components affecting tip clearance (i.e., the rotor, blades, and casing/shroud). The temperature profiles estimated by the previous model iteration, specifically for the rotor and blades, were found to be inaccurate and, more importantly, insufficient to facilitate controller design. Some assumptions made to facilitate the previous results were not valid, and thus improvements are presented here to better match the physical reality. As will be shown, the improved temperature sub-models match a commercially validated model and are sufficiently simplified to aid in controller design.

  9. A general moment expansion method for stochastic kinetic models

    NASA Astrophysics Data System (ADS)

    Ale, Angelique; Kirk, Paul; Stumpf, Michael P. H.

    2013-05-01

    Moment approximation methods are gaining increasing attention for their use in the approximation of the stochastic kinetics of chemical reaction systems. In this paper we derive a general moment expansion method for any type of propensities and which allows expansion up to any number of moments. For some chemical reaction systems, more than two moments are necessary to describe the dynamic properties of the system, which the linear noise approximation is unable to provide. Moreover, also for systems for which the mean does not have a strong dependence on higher order moments, moment approximation methods give information about higher order moments of the underlying probability distribution. We demonstrate the method using a dimerisation reaction, Michaelis-Menten kinetics and a model of an oscillating p53 system. We show that for the dimerisation reaction and Michaelis-Menten enzyme kinetics system higher order moments have limited influence on the estimation of the mean, while for the p53 system, the solution for the mean can require several moments to converge to the average obtained from many stochastic simulations. We also find that agreement between lower order moments does not guarantee that higher moments will agree. Compared to stochastic simulations, our approach is numerically highly efficient at capturing the behaviour of stochastic systems in terms of the average and higher moments, and we provide expressions for the computational cost for different system sizes and orders of approximation. We show how the moment expansion method can be employed to efficiently quantify parameter sensitivity. Finally we investigate the effects of using too few moments on parameter estimation, and provide guidance on how to estimate if the distribution can be accurately approximated using only a few moments.
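
    As a minimal worked illustration of why such expansions must be closed at some order, consider a dimerisation reaction 2X → X2 with propensity a(x) = c x(x-1)/2, of the kind treated in the paper; the exact equation for the mean already involves the second moment:

        \frac{d\langle x\rangle}{dt}
          = -2\,\Bigl\langle \tfrac{1}{2}\,c\,x(x-1) \Bigr\rangle
          = -c\bigl(\langle x^{2}\rangle - \langle x\rangle\bigr)

    Each successive moment equation involves still higher moments, so the hierarchy is truncated (closed) at the chosen order.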

  10. Fisher information of accelerated two-qubit systems

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2018-02-01

    In this paper, Fisher information for an accelerated system initially prepared in the X-state is discussed. An analytical solution, which consists of three parts (classical, the average over all pure states, and a mixture of pure states), is derived for the general state and for the Werner state. It is shown that the Unruh acceleration has a depleting effect on the Fisher information. This depletion depends on the degree of entanglement of the initial state settings. For the X-state, the Fisher information remains constant over some intervals of the Unruh acceleration. In general, the possibility of estimating the state's parameters decreases as the acceleration increases. However, the precision of estimation can be maximized for certain values of the Unruh acceleration. We also investigate the contribution of the different parts of the Fisher information to the dynamics of the total Fisher information.

  11. Models of resource allocation optimization when solving the control problems in organizational systems

    NASA Astrophysics Data System (ADS)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

    A mathematical model for optimizing the allocation of resources to reduce the time needed for management decisions is presented, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem whose solution requires several specific sub-problems to be addressed: estimating the duration of each action as a function of the number of performers within the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general resource allocation problem are proposed.
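
    A toy sketch of the third sub-problem, assuming (purely for illustration) that actions run sequentially and that an action's duration is its workload divided by the size of the group assigned to it; the exhaustive search below is not the algorithm proposed in the paper.

        # Toy allocation of a fixed pool of performers among groups to minimise
        # total execution time; assumptions are illustrative only.
        from itertools import product

        def total_time(work, allocation):
            return sum(w / n for w, n in zip(work, allocation))

        def best_allocation(work, performers):
            best = None
            # enumerate all ways to give each group at least one performer
            for alloc in product(range(1, performers + 1), repeat=len(work)):
                if sum(alloc) != performers:
                    continue
                t = total_time(work, alloc)
                if best is None or t < best[1]:
                    best = (alloc, t)
            return best

        print(best_allocation(work=[8.0, 3.0, 5.0], performers=10))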

  12. Business interruption impacts of a terrorist attack on the electric power system of Los Angeles: customer resilience to a total blackout.

    PubMed

    Rose, Adam; Oladosu, Gbadebo; Liao, Shu-Yi

    2007-06-01

    Regional economies are highly dependent on electricity, thus making their power supply systems attractive terrorist targets. We estimate the largest category of economic losses from electricity outages-business interruption-in the context of a total blackout of electricity in Los Angeles. We advance the state of the art in the estimation of the two factors that strongly influence the losses: indirect effects and resilience. The results indicate that indirect effects in the context of general equilibrium analysis are moderate in size. The stronger factor, and one that pushes in the opposite direction, is resilience. Our analysis indicates that electricity customers have the ability to mute the potential shock to their business operations by as much as 86%. Moreover, market resilience lowers the losses, in part through the dampening of general equilibrium effects.

  13. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  14. A Portuguese value set for the SF-6D.

    PubMed

    Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna

    2010-08-01

    The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality adjusted life-year analysis. This study seeks to estimate system weights for the SF-6D for Portugal and to compare the results with the UK system weights. A sample of 55 health states defined by the SF-6D has been valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimation equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and the ability for predicting the SG-values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specification. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of under prediction in some states assigned to poor health. The results are consistent with the UK results. The models estimated provide preference-based quality of life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, findings should be treated with caution, given the small sample size, even knowing that they have been estimated at the individual level.

  15. Modified Petri net model sensitivity to workload manipulations

    NASA Technical Reports Server (NTRS)

    White, S. A.; Mackinnon, D. P.; Lyman, J.

    1986-01-01

    Modified Petri Nets (MPNs) are investigated as a workload modeling tool. The results of an exploratory study of the sensitivity of MPNs to work load manipulations in a dual task are described. Petri nets have been used to represent systems with asynchronous, concurrent and parallel activities (Peterson, 1981). These characteristics led some researchers to suggest the use of Petri nets in workload modeling where concurrent and parallel activities are common. Petri nets are represented by places and transitions. In the workload application, places represent operator activities and transitions represent events. MPNs have been used to formally represent task events and activities of a human operator in a man-machine system. Some descriptive applications demonstrate the usefulness of MPNs in the formal representation of systems. It is the general hypothesis herein that in addition to descriptive applications, MPNs may be useful for workload estimation and prediction. The results are reported of the first of a series of experiments designed to develop and test a MPN system of workload estimation and prediction. This first experiment is a screening test of MPN model general sensitivity to changes in workload. Positive results from this experiment will justify the more complicated analyses and techniques necessary for developing a workload prediction system.

  16. Mathematical foundations of hybrid data assimilation from a synchronization perspective

    NASA Astrophysics Data System (ADS)

    Penny, Stephen G.

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.

  17. Mathematical foundations of hybrid data assimilation from a synchronization perspective.

    PubMed

    Penny, Stephen G

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
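
    A minimal sketch of the hybrid idea discussed in these two records, assuming a simple linear blend of a flow-dependent ensemble covariance with a static climatological covariance before forming a Kalman-type gain; the dimensions, blending weight alpha, and all matrices are illustrative.

        # Hybrid background covariance sketch: blend ensemble and climatological
        # error estimates, then build the gain that couples model to observations.
        import numpy as np

        def hybrid_gain(ensemble, b_clim, h, r, alpha=0.5):
            """Kalman-type gain built from a hybrid background covariance."""
            anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
            p_ens = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
            p_hyb = alpha * p_ens + (1.0 - alpha) * b_clim
            s = h @ p_hyb @ h.T + r
            return p_hyb @ h.T @ np.linalg.inv(s)

        rng = np.random.default_rng(0)
        n, m, n_obs = 6, 4, 2                    # state dim, ensemble size, obs dim
        ensemble = rng.standard_normal((n, m))
        b_clim = np.eye(n)
        h = np.zeros((n_obs, n)); h[0, 0] = h[1, 3] = 1.0
        r = 0.1 * np.eye(n_obs)
        print(hybrid_gain(ensemble, b_clim, h, r).shape)   # (6, 2)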

  18. Descendant root volume varies as a function of root type: estimation of root biomass lost during uprooting in Pinus pinaster.

    PubMed

    Danjon, Frédéric; Caplan, Joshua S; Fortin, Mathieu; Meredieu, Céline

    2013-01-01

    Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately.

  19. Descendant root volume varies as a function of root type: estimation of root biomass lost during uprooting in Pinus pinaster

    PubMed Central

    Danjon, Frédéric; Caplan, Joshua S.; Fortin, Mathieu; Meredieu, Céline

    2013-01-01

    Root systems of woody plants generally display a strong relationship between the cross-sectional area or cross-sectional diameter (CSD) of a root and the dry weight of biomass (DWd) or root volume (Vd) that has grown (i.e., is descendent) from a point. Specification of this relationship allows one to quantify root architectural patterns and estimate the amount of material lost when root systems are extracted from the soil. However, specifications of this relationship generally do not account for the fact that root systems are comprised of multiple types of roots. We assessed whether the relationship between CSD and Vd varies as a function of root type. Additionally, we sought to identify a more accurate and time-efficient method for estimating missing root volume than is currently available. We used a database that described the 3D root architecture of Pinus pinaster root systems (5, 12, or 19 years) from a stand in southwest France. We determined the relationship between CSD and Vd for 10,000 root segments from intact root branches. Models were specified that did and did not account for root type. The relationships were then applied to the diameters of 11,000 broken root ends to estimate the volume of missing roots. CSD was nearly linearly related to the square root of Vd, but the slope of the curve varied greatly as a function of root type. Sinkers and deep roots tapered rapidly, as they were limited by available soil depth. Distal shallow roots tapered gradually, as they were less limited spatially. We estimated that younger trees lost an average of 17% of root volume when excavated, while older trees lost 4%. Missing volumes were smallest in the central parts of root systems and largest in distal shallow roots. The slopes of the curves for each root type are synthetic parameters that account for differentiation due to genetics, soil properties, or mechanical stimuli. Accounting for this differentiation is critical to estimating root loss accurately. PMID:24167506
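
    A hedged sketch of the fitting step described above, assuming a straight-line relationship between CSD and the square root of descendant volume fitted separately per root type and then applied to the diameters of broken root ends; the synthetic data and functional form are assumptions, not the study's actual model.

        # Per-root-type fit of sqrt(Vd) ~ CSD and back-transformed estimate of the
        # volume missing at broken root ends; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)

        def fit_per_type(csd, sqrt_vd, root_type):
            """Return {type: (intercept, slope)} for sqrt(Vd) ~ CSD."""
            params = {}
            for t in np.unique(root_type):
                mask = root_type == t
                slope, intercept = np.polyfit(csd[mask], sqrt_vd[mask], 1)
                params[t] = (intercept, slope)
            return params

        def missing_volume(broken_csd, broken_type, params):
            a = np.array([params[t][0] for t in broken_type])
            b = np.array([params[t][1] for t in broken_type])
            return np.sum((a + b * broken_csd) ** 2)   # back-transform sqrt(Vd) -> Vd

        # synthetic intact segments: shallow roots taper slowly, sinkers rapidly
        csd = rng.uniform(0.5, 6.0, 400)
        root_type = np.where(rng.random(400) < 0.5, "shallow", "sinker")
        true_slope = np.where(root_type == "shallow", 4.0, 1.5)
        sqrt_vd = true_slope * csd + rng.normal(0, 0.5, 400)

        params = fit_per_type(csd, sqrt_vd, root_type)
        print(missing_volume(np.array([2.0, 3.5]), np.array(["sinker", "shallow"]), params))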

  20. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is central to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hull computation in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering the conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that the proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
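
    The sketch below illustrates the two OpenCV routines named in the abstract (direct least-squares ellipse fitting and the convex hull) on a synthetic set of blob centres; it is not the authors' pipeline, and the runner-root refinement and pose recovery steps are omitted.

        # Synthetic example: interior interference points are discarded by the
        # convex hull, and an ellipse is fitted to the remaining LED-ring points.
        import cv2
        import numpy as np

        theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
        ring = np.stack([320.0 + 80.0 * np.cos(theta),
                         240.0 + 55.0 * np.sin(theta)], axis=1)
        clutter = np.array([[320.0, 240.0], [330.0, 250.0]])   # spurious reflections
        points = np.vstack([ring, clutter]).astype(np.float32)

        hull = cv2.convexHull(points)          # interior interference points drop out
        (cx, cy), (major, minor), angle = cv2.fitEllipse(hull)
        print((cx, cy), (major, minor), angle)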

  1. Quantifying the proportion of general practice and low-acuity patients in the emergency department.

    PubMed

    Nagree, Yusuf; Camarda, Vanessa J; Fatovich, Daniel M; Cameron, Peter A; Dey, Ian; Gosbell, Andrew D; McCarthy, Sally M; Mountain, David

    2013-06-17

    To accurately estimate the proportion of patients presenting to the emergency department (ED) who may have been suitable to be seen in general practice. Using data sourced from the Emergency Department Information Systems for the calendar years 2009 to 2011 at three major tertiary hospitals in Perth, Western Australia, we compared four methods for identifying general practice-type patients. These were the validated Sprivulis method, the widely used Australasian College for Emergency Medicine method, a discharge diagnosis method developed by the Tasmanian Department of Health and Human Services, and the Australian Institute of Health and Welfare (AIHW) method. General practice-type patient attendances to EDs, estimated using the four methods. All methods except the AIHW method showed that 10%-12% of patients attending tertiary EDs in Perth may have been suitable for general practice. These attendances comprised 3%-5% of total ED length of stay. The AIHW method produced different results (general practice-type patients accounted for about 25% of attendances, comprising 10%-11% of total ED length of stay). General practice-type patient attendances were not evenly distributed across the week, with proportionally more patients presenting during weekday daytime (08:00-17:00) and proportionally fewer overnight (00:00-08:00). This suggests that it is not a lack of general practitioners that drives patients to the ED, as weekday working hours are the time of greatest GP availability. The estimated proportion of general practice-type patients attending the EDs of Perth's major hospitals is 10%-12%, and this accounts for < 5% of the total ED length of stay. The AIHW methodology overestimates the actual proportion of general practice-type patient attendances.

  2. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

    This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), Proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW using SEER-SEM as an illustration of these principles when appropriate. Since Source Lines of Code (SLOC) is a major cost driver, a discussion of various SLOC data sources for the preparation of the estimates is given together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting will be presented as well as factors used in reconciling SLOC estimates from different code counters. When sufficient data is obtained, a mapping into the JPL Work Breakdown Structure (WBS) from the SEER-SEM output is illustrated. For across the board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions with the SLOC, data description, brief mission description and the most relevant SEER-SEM parameter values is given to illustrate an encapsulation of the used and calculated data involved in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. This system's incarnation is achieved via the C Language Integrated Production System (CLIPS) and will be addressed at the end of this paper.

  3. Traffic safety facts 2004 : a compilation of motor vehicle crash data from the fatality analysis reporting system and the general estimates system

    DOT National Transportation Integrated Search

    2005-01-01

    Fatal crash data from FARS and nonfatal crash data from GES are presented in this report in five chapters. Chapter 1, Trends, presents data from all years of FARS (1975 through 2004) and GES (1988 through 2004). The remaining chapters present d...

  4. A theoretical framework for convergence and continuous dependence of estimates in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    Numerical techniques for parameter identification in distributed-parameter systems are developed analytically. A general convergence and stability framework (for continuous dependence on observations) is derived for first-order systems on the basis of (1) a weak formulation in terms of sesquilinear forms and (2) the resolvent convergence form of the Trotter-Kato approximation. The extension of this framework to second-order systems is considered.

  5. GTFS for Estimating Transit Ridership and Supporting Multimodal Performance Measures

    DOT National Transportation Integrated Search

    2017-12-15

    This project demonstrates a potential avenue to use new data sources to support State and local agencies in measuring the use and effectiveness of their public transportation systems. General Transit Feed Specification (GTFS) data provided by transit...

  6. Pre-crash scenario typology for crash avoidance research

    DOT National Transportation Integrated Search

    2007-04-01

    This report defines a new pre-crash scenario typology for crash avoidance research based on the 2004 General Estimates System (GES) crash database, which consists of pre-crash scenarios depicting vehicle movements and dynamics as well as the critical...

  7. Experimental evaluation of the performance of pulsed two-color laser-ranging systems

    NASA Technical Reports Server (NTRS)

    Im, Kwaifong E.; Gardner, Chester S.; Abshire, James B.; Mcgarry, Jan F.

    1987-01-01

    Two-color laser-ranging systems can be used to estimate the atmospheric delay by measuring the difference in propagation times between two optical pulses transmitted at different wavelengths. This paper describes horizontal-path ranging experiments that were conducted using flat diffuse targets and cube-corner reflector arrays. Measurements of the timing accuracy of the cross-correlation estimator, atmospheric delay, received pulse shapes, and signal power spectra are presented. The results are in general agreement with theory and indicate that target speckle can be the dominant noise source when the target is small and is located far from the ranging system or when the target consists of a small number of cube-corner reflectors.

  8. Observability and Estimation of Distributed Space Systems via Local Information-Exchange Networks

    NASA Technical Reports Server (NTRS)

    Rahmani, Amirreza; Mesbahi, Mehran; Fathpour, Nanaz; Hadaegh, Fred Y.

    2008-01-01

    In this work, we develop an approach to formation estimation by explicitly characterizing formation's system-theoretic attributes in terms of the underlying inter-spacecraft information-exchange network. In particular, we approach the formation observer/estimator design by relaxing the accessibility to the global state information by a centralized observer/estimator- and in turn- providing an analysis and synthesis framework for formation observers/estimators that rely on local measurements. The novelty of our approach hinges upon the explicit examination of the underlying distributed spacecraft network in the realm of guidance, navigation, and control algorithmic analysis and design. The overarching goal of our general research program, some of whose results are reported in this paper, is the development of distributed spacecraft estimation algorithms that are scalable, modular, and robust to variations in the topology and link characteristics of the formation information exchange network. In this work, we consider the observability of a spacecraft formation from a single observation node and utilize the agreement protocol as a mechanism for observing formation states from local measurements. Specifically, we show how the symmetry structure of the network, characterized in terms of its automorphism group, directly relates to the observability of the corresponding multi-agent system. The ramification of this notion of observability over networks is then explored in the context of distributed formation estimation.
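
    The underlying observability question can be illustrated with the standard rank condition applied to agreement-protocol (consensus) dynamics measured at a single node; the 4-node path graph and the choice of measured node below are illustrative.

        # Observability rank test for consensus dynamics x' = -L x on a 4-node
        # path graph, measured at one node only.
        import numpy as np

        L = np.array([[ 1, -1,  0,  0],
                      [-1,  2, -1,  0],
                      [ 0, -1,  2, -1],
                      [ 0,  0, -1,  1]], dtype=float)
        A = -L                                 # agreement protocol
        C = np.zeros((1, 4)); C[0, 0] = 1.0    # measure only node 0 (an end node)

        n = A.shape[0]
        obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
        print("observable from node 0:", np.linalg.matrix_rank(obs) == n)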

  9. Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research

    PubMed Central

    Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.

    2017-01-01

    Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557

  10. An Estimation Method of System Voltage Sag Profile using Recorded Sag Data

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Sakashita, Tadashi

    The influence of voltage sag on electric equipment has become a significant issue because of the wider use of voltage-sensitive devices. In order to reduce the influence of voltage sags appearing on the customer side, it is necessary to know the level of receiving-voltage drop caused by lightning faults on transmission lines. However, it is hard to measure those sag levels directly at every load node. In this report, a new method for efficiently estimating the system voltage sag profile is proposed based on symmetrical coordinates. In the proposed method, limited recorded sag data, collected at substations in the power system, are used as the estimation condition. Because the number of recorded nodes is generally far smaller than the number of transmission routes, a fast solution method is developed that computes fault voltages only at the recorded nodes by applying the reciprocity theorem to the Y matrix. Furthermore, an effective screening process is incorporated, in which a limited set of candidate faulted transmission lines can be chosen. Demonstrative results are presented using the IEEJ East10 standard system and an actual 1700-bus system. The results show that estimation accuracy is acceptable with modest computational effort.

  11. The pEst version 2.1 user's manual

    NASA Technical Reports Server (NTRS)

    Murray, James E.; Maine, Richard E.

    1987-01-01

    This report is a user's manual for version 2.1 of pEst, a FORTRAN 77 computer program for interactive parameter estimation in nonlinear dynamic systems. The pEst program allows the user complete generality in defining the nonlinear equations of motion used in the analysis. The equations of motion are specified by a set of FORTRAN subroutines; a set of routines for a general aircraft model is supplied with the program and is described in the report. The report also briefly discusses the scope of the parameter estimation problem the program addresses. The report gives detailed explanations of the purpose and usage of all available program commands and a description of the computational algorithms used in the program.

  12. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
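
    The sketch below evaluates the GCV function directly through an SVD for a small Tikhonov-regularized deblurring problem; it does not use the Lanczos and Gauss-quadrature approximations proposed in the paper, and the blur operator, signal, and noise level are assumptions chosen for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy 1-D blurring operator (Gaussian PSF as a dense matrix) and noisy data.
        n = 128
        grid = np.arange(n)
        sigma_psf = 3.0
        H = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / sigma_psf) ** 2)
        H /= H.sum(axis=1, keepdims=True)
        x_true = (np.abs(grid - n / 2) < 15).astype(float)
        b = H @ x_true + 0.01 * rng.normal(size=n)

        # GCV for Tikhonov regularization, evaluated directly through the SVD of H.
        U, s, Vt = np.linalg.svd(H)
        beta = U.T @ b

        def gcv(lam):
            f = s**2 / (s**2 + lam)          # Tikhonov filter factors
            resid = np.sum(((1.0 - f) * beta) ** 2)
            denom = (n - np.sum(f)) ** 2     # trace of (I - influence matrix), squared
            return n * resid / denom

        lams = np.logspace(-8, 0, 60)
        lam_best = lams[np.argmin([gcv(l) for l in lams])]
        x_hat = Vt.T @ (s / (s**2 + lam_best) * beta)   # regularized restoration
        print("GCV-selected regularization parameter:", lam_best)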

  13. Don't soil your chances with solar energy: Experiments of natural dust accumulation on solar modules and the effect on light transmission

    NASA Astrophysics Data System (ADS)

    Boyle, Liza

    Dust accumulation, or soiling, on solar energy harvesting systems can cause significant losses that reduce the power output of the system, increase the pay-back time of the system, and reduce confidence in solar energy overall. Developing a method of estimating soiling losses could greatly improve estimates of solar energy system output, improve operation and maintenance of solar systems, and improve siting of solar energy systems. This dissertation aims to develop a soiling model by collecting ambient soiling data as well as other environmental data and fitting a model to these data. In general, a process-level approach is taken to estimating soiling. First, a comparison is made between the mass of deposited particulates and transmission loss. Transmission loss is the reduction in light that a solar system would see due to soiling, and mass accumulation represents the level of soiling in the system. This experiment is first conducted at two sites in the Front Range of Colorado and then expanded to three additional sites. Second, mass accumulation is examined as a function of airborne particulate matter (PM) concentrations, airborne size distributions, and meteorological data. In-depth analysis of this process step is done at the first two sites in Colorado, and a more general analysis is done at the three additional sites. This step is identified as the less well understood step, but the results still allow a general soiling model to be developed. Third, these two process steps are combined, and the spatial variability of these steps is examined. The three additional sites (an additional site in the Front Range of Colorado, a site in Albuquerque, New Mexico, and a site in Cocoa, Florida) represent a much more spatially and climatically diverse set of locations than the original two sites and provide a much broader sample space in which to develop the combined soiling model. Finally, a few additional parameters (precipitation, micro-meteorology, and some sampling artifacts) are examined briefly. This is to provide a broader context for these results and to help future researchers understand the strengths and weaknesses of this dissertation and the results presented within.

  14. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
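
    A minimal sketch of the first result: for a simple eigenvalue, the sensitivity with respect to the parent matrix is d(lambda)/dA = w v^T / (w^T v), with v and w the right and left eigenvectors. The code below checks this against forward finite differencing on a random matrix; it is an illustration of that standard formula, not a reproduction of the thesis derivations.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 5
        A = rng.normal(size=(n, n))

        # Right and left eigenvectors of the eigenvalue with largest real part.
        evals, V = np.linalg.eig(A)
        i = np.argmax(evals.real)
        lam, v = evals[i], V[:, i]
        wl, W = np.linalg.eig(A.T)
        j = np.argmin(np.abs(wl - lam))        # match the same eigenvalue of A^T
        w = W[:, j]

        # Analytic sensitivity of a simple eigenvalue: d(lambda)/dA = (w v^T) / (w^T v)
        J_analytic = np.outer(w, v) / (w @ v)

        # Forward finite-difference check, entry by entry.
        eps = 1e-7
        J_fd = np.zeros((n, n), dtype=complex)
        for r in range(n):
            for c in range(n):
                Ap = A.copy()
                Ap[r, c] += eps
                ep = np.linalg.eig(Ap)[0]
                J_fd[r, c] = (ep[np.argmin(np.abs(ep - lam))] - lam) / eps

        print("max abs difference between analytic and FD Jacobians:",
              np.abs(J_analytic - J_fd).max())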

  15. [Impact analysis of shuxuetong injection on abnormal changes of ALT based on generalized boosted models propensity score weighting].

    PubMed

    Yang, Wei; Yi, Dan-Hui; Xie, Yan-Ming; Yang, Wei; Dai, Yi; Zhi, Ying-Jie; Zhuang, Yan; Yang, Hu

    2013-09-01

    To estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index, that is, to explore whether Shuxuetong injection harms liver function in clinical settings and to provide clinical guidance for its safe application. Clinical information on traditional Chinese medicine (TCM) injections was gathered from the hospital information systems (HIS) of eighteen general hospitals. This is a retrospective cohort study, using abnormal changes in the ALT index as the outcome. A large number of confounding biases are taken into account through generalized boosted models (GBM) and a multiple logistic regression model (MLRM) to estimate the treatment effects of Shuxuetong injection on abnormal changes in the ALT index and to explore possible influencing factors. The advantages and application process of GBM are demonstrated with examples, which eliminate the biases from most confounding variables between groups. This serves to refine the estimation of the treatment effects of Shuxuetong injection on the ALT index, making the results more reliable. Based on large-scale clinical observational data from the HIS database, significant effects of Shuxuetong injection on abnormal changes in ALT were not found.
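
    The sketch below illustrates GBM-based propensity score weighting on simulated data, using scikit-learn's gradient boosting as the propensity model and inverse-probability-style weights for the treated-versus-control comparison. The covariates, exposure, and outcome are all invented; this is not the study's HIS data or its exact estimation pipeline.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(4)

        # Hypothetical cohort: X are baseline covariates, t is exposure to the
        # injection, y is an abnormal ALT change (all simulated for illustration).
        n = 5000
        X = rng.normal(size=(n, 5))
        p_treat = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
        t = rng.binomial(1, p_treat)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + 0.6 * X[:, 0]))))

        # GBM propensity model, then weights for the average treatment effect on
        # the treated: treated units get weight 1, controls get p / (1 - p).
        ps = GradientBoostingClassifier(random_state=0).fit(X, t).predict_proba(X)[:, 1]
        w = np.where(t == 1, 1.0, ps / (1.0 - ps))

        rate_treated = np.average(y[t == 1], weights=w[t == 1])
        rate_control = np.average(y[t == 0], weights=w[t == 0])
        print("weighted risk difference (treated - control):", rate_treated - rate_control)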

  16. deltaGseg: macrostate estimation via molecular dynamics simulations and multiscale time series analysis.

    PubMed

    Low, Diana H P; Motakis, Efthymios

    2013-10-01

    Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.

  17. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    PubMed

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.

  18. A Knowledge-based System for Estimating Incident Clearance Duration for Maryland : I-95 a Case Study for the Project of MD-17-SHA/UM/4-19 : “Development of a Traffic Management Decision Support Tool for Freeway Incident Traffic Management (FITM) Plan Deployment”

    DOT National Transportation Integrated Search

    2017-12-01

    For the incident response operations to be appreciated by the general public, it is essential that responsible highway agencies be capable of providing the estimated clearance duration of a detected incident at the level sufficiently reliable for mot...

  19. Estimating and Enhancing Public Transit Accessibility for People with Mobility Limitations

    DOT National Transportation Integrated Search

    2017-06-30

    This two-part study employs fine-scale performance measures and analytical techniques designed to evaluate and improve transit services for people experiencing disability. Part one puts forth a series of time-sensitive, general transit feed system (G...

  20. Exposure Related Dose Estimating Model

    EPA Science Inventory

    ERDEM is a physiologically based pharmacokinetic (PBPK) modeling system consisting of a general model and an associated front end. An actual model is defined when the user prepares an input command file. Such a command file defines the chemicals, compartments and processes that...

  1. Estimation of hydraulic conductivity in an alluvial system using temperatures.

    PubMed

    Su, Grace W; Jasperse, James; Seymour, Donald; Constantz, Jim

    2004-01-01

    Well water temperatures are often collected simultaneously with water levels; however, temperature data are generally considered only as a water quality parameter and are not utilized as an environmental tracer. In this paper, water levels and seasonal temperatures are used to estimate hydraulic conductivities in a stream-aquifer system. To demonstrate this method, temperatures and water levels are analyzed from six observation wells along an example study site, the Russian River in Sonoma County, California. The range in seasonal ground water temperatures in these wells varied from <0.2 degrees C in two wells to approximately 8 degrees C in the other four wells from June to October 2000. The temperature probes in the six wells are located at depths between 3.5 and 7.1 m relative to the river channel. Hydraulic conductivities are estimated by matching simulated ground water temperatures to the observed ground water temperatures. An anisotropy of 5 (horizontal to vertical hydraulic conductivity) generally gives the best fit to the observed temperatures. Estimated conductivities vary over an order of magnitude in the six locations analyzed. In some locations, a change in the observed temperature profile occurred during the study, most likely due to deposition of fine-grained sediment and organic matter plugging the streambed. A reasonable fit to this change in the temperature profile is obtained by decreasing the hydraulic conductivity in the simulations. This study demonstrates that seasonal ground water temperatures monitored in observation wells provide an effective means of estimating hydraulic conductivities in alluvial aquifers.

  2. Estimation of hydraulic conductivity in an alluvial system using temperatures

    USGS Publications Warehouse

    Su, G.W.; Jasperse, James; Seymour, D.; Constantz, J.

    2004-01-01

    Well water temperatures are often collected simultaneously with water levels; however, temperature data are generally considered only as a water quality parameter and are not utilized as an environmental tracer. In this paper, water levels and seasonal temperatures are used to estimate hydraulic conductivities in a stream-aquifer system. To demonstrate this method, temperatures and water levels are analyzed from six observation wells along an example study site, the Russian River in Sonoma County, California. The range in seasonal ground water temperatures in these wells varied from <0.2 °C in two wells to approximately 8 °C in the other four wells from June to October 2000. The temperature probes in the six wells are located at depths between 3.5 and 7.1 m relative to the river channel. Hydraulic conductivities are estimated by matching simulated ground water temperatures to the observed ground water temperatures. An anisotropy of 5 (horizontal to vertical hydraulic conductivity) generally gives the best fit to the observed temperatures. Estimated conductivities vary over an order of magnitude in the six locations analyzed. In some locations, a change in the observed temperature profile occurred during the study, most likely due to deposition of fine-grained sediment and organic matter plugging the streambed. A reasonable fit to this change in the temperature profile is obtained by decreasing the hydraulic conductivity in the simulations. This study demonstrates that seasonal ground water temperatures monitored in observation wells provide an effective means of estimating hydraulic conductivities in alluvial aquifers.

  3. ICU scoring systems allow prediction of patient outcomes and comparison of ICU performance.

    PubMed

    Becker, R B; Zimmerman, J E

    1996-07-01

    Too much time and effort are wasted in attempts to pass final judgment on whether systems for ICU prognostication are "good or bad" and whether they "do or do not" provide a simple answer to the complex and often unpredictable question of individual mortality in the ICU. A substantial amount of data supports the usefulness of general ICU prognostic systems in comparing ICU performance with respect to a wide variety of endpoints, including ICU and hospital mortality, duration of stay, and efficiency of resource use. Work in progress is analyzing both general resource use and specific therapeutic interventions. It also is time to fully acknowledge that statistics never can predict whether a patient will die with 100% accuracy. There always will be exceptions to the rule, and physicians frequently will have information that is not included in prognostic models. In addition, the values of both physicians and patients frequently lead to differences in how a probability is interpreted; for some, a 95% probability estimate means that death is near and, for others, this estimate represents a tangible 5% chance for survival. This means that physicians must learn how to integrate such estimates into their medical decisions. In doing so, it is our hope that prognostic systems are not viewed as oversimplifying or automating clinical decisions. Rather, such systems provide objective data on which physicians may ground a spectrum of decisions regarding either escalation or withdrawal of therapy in critically ill patients. These systems do not dehumanize our decision-making process but, rather, help eliminate physician reliance on emotional, heuristic, poorly calibrated, or overly pessimistic subjective estimates. No decision regarding patient care can be considered best if the facts upon which it is based are imprecise or biased. Future research will improve the accuracy of individual patient predictions but, even with the highest degree of precision, such predictions are useful only in support of, and not as a substitute for, good clinical judgment.

  4. Generalized continued fractions and ergodic theory

    NASA Astrophysics Data System (ADS)

    Pustyl'nikov, L. D.

    2003-02-01

    In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.

  5. An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1996-01-01

    Autocorrelation based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function along with using either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates themselves are random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher order lags to reduce the bias and standard deviation in spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides a robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform as compared to the closed system.
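
    A minimal sketch of the overdetermined idea: for a Gaussian-shaped spectrum, ln|R(m Ts)| is linear in m^2, so autocorrelation estimates at several lags can be combined in a least-squares fit for the spectral width. The simulated signal and lag count below are assumptions for illustration, not the dissertation's radar configuration.

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulate a complex signal with an approximately Gaussian spectrum
        # (a weather-radar-like return): width sigma_f, pulse spacing Ts.
        N, Ts, sigma_f, power = 256, 1e-3, 40.0, 1.0
        freqs = np.fft.fftfreq(N, Ts)
        shape = np.sqrt(np.exp(-freqs**2 / (2 * sigma_f**2)))
        signal = np.fft.ifft(shape * (rng.normal(size=N) + 1j * rng.normal(size=N)))
        signal *= np.sqrt(power * N) / np.linalg.norm(signal)

        # Estimated autocorrelation at lags 1..M.
        M = 6
        R_hat = np.array([np.mean(signal[m:] * np.conj(signal[:N - m]))
                          for m in range(1, M + 1)])

        # Overdetermined least squares: for a Gaussian spectrum,
        # ln|R(m Ts)| = ln P - 2 pi^2 sigma_f^2 Ts^2 m^2, fitted over several lags.
        m = np.arange(1, M + 1)
        A = np.column_stack([np.ones(M), m**2])
        coef, *_ = np.linalg.lstsq(A, np.log(np.abs(R_hat)), rcond=None)
        sigma_f_hat = np.sqrt(max(-coef[1], 0.0) / (2 * np.pi**2 * Ts**2))
        print("estimated spectral width (Hz):", sigma_f_hat)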

  6. A shuttle and space station manipulator system for assembly, docking, maintenance, cargo handling and spacecraft retrieval (preliminary design). Volume 3: Concept analysis. Part 2: Development program

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A preliminary estimate is presented of the resources required to develop the basic general purpose walking boom manipulator system. It is assumed that the necessary full scale zero g test facilities will be available on a no cost basis. A four year development effort is also assumed, and it is phased with an estimated shuttle development program since the shuttle will be developed prior to the space station. Based on delivery of one qualification unit and one flight unit, and without including any ground support equipment or flight test support, it is estimated (to within approximately ±25%) that a total of 3551 man-months of effort and $17,387,000 are required.

  7. A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation

    NASA Astrophysics Data System (ADS)

    Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi

    2017-12-01

    Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.
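
    The sketch below is a generic orthogonal matching pursuit on a random dictionary, included to illustrate the greedy recovery step; it is not the paper's SIF-OMP algorithm and does not model the subband fusion or the binary indicative vector.

        import numpy as np

        def omp(A, y, k):
            # Greedy orthogonal matching pursuit: pick k atoms of A to explain y.
            residual = y.copy()
            support = []
            x = np.zeros(A.shape[1], dtype=A.dtype)
            for _ in range(k):
                # Atom most correlated with the current residual.
                idx = int(np.argmax(np.abs(A.conj().T @ residual)))
                support.append(idx)
                # Least-squares re-fit on the selected support, then update residual.
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x

        rng = np.random.default_rng(6)
        m, n, k = 40, 120, 4
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        y = A @ x_true + 0.01 * rng.normal(size=m)

        x_hat = omp(A, y, k)
        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))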

  8. Modeling demand for public transit services in rural areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attaluri, P.; Seneviratne, P.N.; Javid, M.

    1997-05-01

    Accurate estimates of demand are critical for planning, designing, and operating public transit systems. Previous research has demonstrated that the expected demand in rural areas is a function of both demographic and transit system variables. Numerous models have been proposed to describe the relationship between the aforementioned variables. However, most of them are site specific and their validity over time and space is not reported or perhaps has not been tested. Moreover, input variables in some cases are extremely difficult to quantify. In this article, the estimation of demand using the generalized linear modeling technique is discussed. Two separate models, one for fixed-route and another for demand-responsive services, are presented. These models, calibrated with data from systems in nine different states, are used to demonstrate the appropriateness and validity of generalized linear models compared to the regression models. They explain over 70% of the variation in expected demand for fixed-route services and 60% of the variation in expected demand for demand-responsive services. It was found that the models are spatially transferable and that data for calibration are easily obtainable.
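
    The sketch below fits a Poisson generalized linear model for ridership counts with statsmodels on simulated data; the explanatory variables (population, vehicle-hours, fare) and coefficients are invented and do not reproduce the article's nine-state calibration data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)

        # Hypothetical rural transit data: ridership counts explained by service-area
        # population, vehicle-hours of service, and fare (all simulated).
        n = 200
        population = rng.uniform(2e3, 5e4, n)
        veh_hours = rng.uniform(1e3, 2e4, n)
        fare = rng.uniform(0.5, 3.0, n)
        mu = np.exp(-1.0 + 0.6 * np.log(population) + 0.4 * np.log(veh_hours) - 0.3 * fare)
        ridership = rng.poisson(mu)

        # Poisson GLM with a log link (the statsmodels default for Poisson).
        X = sm.add_constant(np.column_stack([np.log(population), np.log(veh_hours), fare]))
        glm = sm.GLM(ridership, X, family=sm.families.Poisson()).fit()
        print(glm.summary())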

  9. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The object of this paper is to introduce a new technique to derive the global modal parameters (i.e., system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High order models can be used without any numerical problems. The proposed method will be compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data will be used.

  10. Estimated 2008 groundwater potentiometric surface and predevelopment to 2008 water-level change in the Santa Fe Group aquifer system in the Albuquerque area, central New Mexico

    USGS Publications Warehouse

    Falk, Sarah E.; Bexfield, Laura M.; Anderholm, Scott K.

    2011-01-01

    The water-supply requirements of the Albuquerque metropolitan area of central New Mexico have historically been met almost exclusively by groundwater withdrawal from the Santa Fe Group aquifer system. Previous studies have indicated that the large quantity of groundwater withdrawal relative to recharge has resulted in water-level declines in the aquifer system throughout the metropolitan area. Analysis of the magnitude and pattern of water-level change can help improve understanding of how the groundwater system responds to withdrawals and variations in the management of the water supply and can support water-management agencies' efforts to minimize future water-level declines and improve sustainability. This report, prepared by the U.S. Geological Survey in cooperation with the Albuquerque Bernalillo County Water Utility Authority, presents the estimated groundwater potentiometric surface during winter (from December to March) of the 2008 water year and the estimated changes in water levels between predevelopment and water year 2008 for the production zone of the Santa Fe Group aquifer system in the Albuquerque and surrounding metropolitan and military areas. Hydrographs from selected wells are included to provide details of historical water-level changes. In general, water-level measurements used for this report were measured in small-diameter observation wells screened over short intervals and were considered to best represent the potentiometric head in the production zone-the interval of the aquifer, about 300 feet below land surface to 1,100 feet or more below land surface, in which production wells generally are screened. Water-level measurements were collected by various local and Federal agencies. The 2008 water year potentiometric surface map was created in a geographic information system, and the change in water-level elevation from predevelopment to water year 2008 was calculated. The 2008 water-level contours indicate that the general direction of groundwater flow is from the Rio Grande towards clusters of production wells in the east, north, and west. Water-level changes from predevelopment to 2008 are variable across the area. Hydrographs from piezometers on the east side of the river generally indicate a trend of decline in the annual highest water level through most of the period of record. Hydrographs from piezometers in the valley near the river and on the west side of the river indicate spatial variability in water-level trends.

  11. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights, the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

  12. Multishaker modal testing

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1987-01-01

    The major accomplishments of this research are: (1) the refinement and documentation of a multi-input, multi-output modal parameter estimation algorithm which is applicable to general linear, time-invariant dynamic systems; (2) the development and testing of an unsymmetric block-Lanczos algorithm for reduced-order modeling of linear systems with arbitrary damping; and (3) the development of a control-structure-interaction (CSI) test facility.

  13. Instructional Investment or Administrative Bloat: The Effects of Charter System Conversion on Resource Allocations and Staffing Patterns

    ERIC Educational Resources Information Center

    Kramer, Dennis A., II; Lane, Megan; Tanner, Melvin

    2017-01-01

    Despite the growing call for local autonomy and flexibility, few scholars have examined the role of school district-level flexibility on resource allocation and staffing patterns. Leveraging the charter system law within the State of Georgia, we utilize a generalized difference-in-differences approach to estimate the impact of flexibility of…

  14. Digital detection and processing of laser beacon signals for aircraft collision hazard warning

    NASA Technical Reports Server (NTRS)

    Sweet, L. M.; Miles, R. B.; Russell, G. F.; Tomeh, M. G.; Webb, S. G.; Wong, E. Y.

    1981-01-01

    A low-cost collision hazard warning system suitable for implementation in both general and commercial aviation is presented. Laser beacon systems are used as sources of accurate relative position information that are not dependent on communication between aircraft or with the ground. The beacon system consists of a rotating low-power laser beacon, detector arrays with special optics for wide angle acceptance and filtering of solar background light, microprocessors for proximity and relative trajectory computation, and pilot displays of potential hazards. The laser beacon system provides direct measurements of relative aircraft positions; using optimal nonlinear estimation theory, the measurements resulting from the current beacon sweep are combined with previous data to provide the best estimate of aircraft proximity, heading, minimum passing distance, and time to closest approach.

  15. Work Measurement as a Generalized Quantum Measurement

    NASA Astrophysics Data System (ADS)

    Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo

    2014-12-01

    We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: Work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, which technically speaking is denoted as a positive operator valued measure, reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.

  16. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  17. Health co-benefits in mortality avoidance from implementation of the mass rapid transit (MRT) system in Kuala Lumpur, Malaysia.

    PubMed

    Kwan, Soo Chen; Tainio, Marko; Woodcock, James; Hashim, Jamal Hisham

    2016-03-01

    The mass rapid transit (MRT) is the largest transport infrastructure project under the national key economic area (NKEA) in Malaysia. As urban rail is anticipated to be the future spine of public transport network in the Greater Kuala Lumpur city, it is important to mainstream climate change mitigation and public health benefits in the local transport development. This study quantifies the health co-benefits in terms of mortality among the urbanites when the first line of the 150 km MRT system in Kuala Lumpur commences by 2017. Using comparative health risk assessment, we estimated the potential health co-benefits from the establishment of the MRT system. We estimated the reduced CO2 emissions and air pollution (PM2.5) exposure reduction among the general population from the reduced use of motorized vehicles. Mortality avoided from traffic incidents involving motorcycles and passenger cars, and from increased physical activity from walking while using the MRT system was also estimated. A total of 363,130 tonnes of CO2 emissions could be reduced annually from the modal shift from cars and motorcycles to the MRT system. Atmospheric PM2.5 concentration could be reduced 0.61 μg/m3 annually (2%). This could avoid a total of 12 deaths, mostly from cardio-respiratory diseases among the city residents. For traffic injuries, 37 deaths could be avoided annually from motorcycle and passenger cars accidents especially among the younger age categories (aged 15-30). One additional death was attributed to pedestrian walking. The additional daily physical activity to access the MRT system could avoid 21 deaths among its riders. Most of the mortality avoided comes from cardiovascular diseases. Overall, a total of 70 deaths could be avoided annually among both the general population and the MRT users in the city. The implementation of the MRT system in Greater Kuala Lumpur could bring substantial health co-benefits to both the general population and the MRT users mainly from the avoidance of mortality from traffic injuries.

  18. Tidal friction and generalized Cassini's laws in the solar system. [for planetary spin axis rotation

    NASA Technical Reports Server (NTRS)

    Ward, W. R.

    1975-01-01

    The tidal drift toward a generalized Cassini state of rotation of the spin axis of a planet or satellite in a precessing orbit is described. Generalized Cassini's laws are applied to several solar system objects and the location of their spin axes estimated. Of those considered only the moon definitely occupies state 2 with the spin axis near to the normal of the invariable plane. Most objects appear to occupy state 1 with the spin axis near to the orbit normal. Iapetus could occupy either state depending on its oblateness. In addition, the resonant rotation of Mercury is found to have little effect on the tidal drift of its spin axis toward state 1.

  19. Single-shot quantum state estimation via a continuous measurement in the strong backaction regime

    NASA Astrophysics Data System (ADS)

    Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.

    2014-09-01

    We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.

  20. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  1. [Diagnostic value of integral scoring systems in assessing the severity of acute pancreatitis and patient's condition].

    PubMed

    Vinnik, Y S; Dunaevskaya, S S; Antufrieva, D A

    2015-01-01

    The aim of the study was to evaluate the diagnostic value of the specific and nonspecific scoring systems (Tolstoy-Krasnogorov score, Ranson, BISAP, Glasgow, MODS 2, APACHE II, and CTSI) used in urgent pancreatology for estimating the severity of acute pancreatitis and the patient's condition. 1550 case reports of patients who underwent inpatient surgical treatment at the Road Clinical Hospital at Krasnoyarsk station from 2009 to 2013 were analyzed. The diagnosis of severe acute pancreatitis and its complications was established on the basis of anamnestic data, physical examination, clinical indexes, ultrasonic examination, and computed tomography angiography. Specific and nonspecific scores (Tolstoy-Krasnogorov score, Ranson, Glasgow, BISAP, MODS 2, APACHE II, CTSI) were used to estimate the severity of acute pancreatitis and the patient's general condition. The effectiveness of these scoring systems was determined from several parameters: accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV). The most valuable score for estimating the severity of acute pancreatitis is BISAP (Se 98.10%); for estimating organ failure, MODS 2 (Sp 100%, PPV 100%) and APACHE II (Sp 100%, PPV 100%); for detecting signs of pancreatonecrosis, CTSI (Sp 100%, NPV 100%); for estimating the need for intensive care, MODS 2 (Sp 100%, PPV 100%, NPV 96.29%) and APACHE II (Sp 100%, PPV 100%, NPV 97.21%); and for predicting lethality, MODS 2 (Se 100%, Sp 98.14%, NPV 100%) and APACHE II (Se 95.00%, NPV 99.86%). The most effective scores for estimating the severity of acute pancreatitis are the Tolstoy-Krasnogorov, Ranson, Glasgow, and BISAP scores; the high specificity and positive predictive value of the MODS 2 and APACHE II scoring systems allow their use in clinical practice.

  2. Estimation and Mitigation of Channel Non-Reciprocity in Massive MIMO

    NASA Astrophysics Data System (ADS)

    Raeesi, Orod; Gokceoglu, Ahmet; Valkama, Mikko

    2018-05-01

    Time-division duplex (TDD) based massive MIMO systems rely on the reciprocity of the wireless propagation channels when calculating the downlink precoders based on uplink pilots. However, the effective uplink and downlink channels incorporating the analog radio front-ends of the base station (BS) and user equipments (UEs) exhibit non-reciprocity due to non-identical behavior of the individual transmit and receive chains. When the downlink precoder is not aware of such channel non-reciprocity (NRC), system performance can be significantly degraded due to NRC induced interference terms. In this work, we consider a general TDD-based massive MIMO system where frequency-response mismatches at both the BS and UEs, as well as the mutual coupling mismatch at the BS large-array system, all coexist and induce channel NRC. Based on the NRC-impaired signal models, we first propose a novel iterative estimation method for acquiring both the BS and UE side NRC matrices and then also propose a novel NRC-aware downlink precoder design which utilizes the obtained estimates. Furthermore, an efficient pilot signaling scheme between the BS and UEs is introduced in order to facilitate executing the proposed estimation method and the NRC-aware precoding technique in practical systems. Comprehensive numerical results indicate substantially improved spectral efficiency performance when the proposed NRC estimation and NRC-aware precoding methods are adopted, compared to the existing state-of-the-art methods.

  3. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2014-10-01

    A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
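
    A minimal EnKF sketch for joint state-parameter estimation, using logistic growth in place of a metabolic model: the state is augmented with the unknown rate parameter and updated from noisy observations with ensemble covariances. Ensemble size, noise levels, and the drift terms are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(8)

        # Logistic growth dx/dt = r x (1 - x/K); estimate the unknown rate r by
        # augmenting the state with the parameter and running an EnKF on noisy
        # observations of x.
        def step(x, r, dt=0.1):
            return x + dt * r * x * (1.0 - x / 10.0)

        r_true, x0, T, obs_noise = 0.8, 0.5, 80, 0.05
        x_truth, obs = x0, []
        for _ in range(T):
            x_truth = step(x_truth, r_true)
            obs.append(x_truth + obs_noise * rng.normal())

        # Ensemble of augmented states [x, r].
        Ne = 100
        ens = np.column_stack([np.full(Ne, x0), rng.uniform(0.1, 2.0, Ne)])
        for z in obs:
            # Forecast: propagate each member, add a small stochastic drift.
            ens[:, 0] = step(ens[:, 0], ens[:, 1]) + 0.01 * rng.normal(size=Ne)
            ens[:, 1] += 0.005 * rng.normal(size=Ne)
            # Analysis: Kalman update with ensemble covariances (observation is x only).
            y_ens = ens[:, 0] + obs_noise * rng.normal(size=Ne)   # perturbed predicted obs
            C = np.cov(ens.T)
            K = C[:, 0] / (C[0, 0] + obs_noise**2)                # gain for a scalar observation
            ens += np.outer(z - y_ens, K)

        print("posterior mean of the growth rate r:", ens[:, 1].mean())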

  4. Ecological Footprint Analysis (EFA) for the Chicago Metropolitan Area: Initial Estimation - slides

    EPA Science Inventory

    Because of its computational simplicity, Ecological Footprint Analysis (EFA) has been extensively deployed for assessing the sustainability of various environmental systems. In general, EFA aims at capturing the impacts of human activity on the environment by computing the amount...

  5. Fully decentralized estimation and control for a modular wheeled mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutambara, A.G.O.; Durrant-Whyte, H.F.

    2000-06-01

    In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model by using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control is thus obtained locally using reduced-order models with reduced communication. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.

  6. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  7. Generalized Support Software: Domain Analysis and Implementation

    NASA Technical Reports Server (NTRS)

    Stark, Mike; Seidewitz, Ed

    1995-01-01

    For the past five years, the Flight Dynamics Division (FDD) at NASA's Goddard Space Flight Center has been carrying out a detailed domain analysis effort and is now beginning to implement Generalized Support Software (GSS) based on this analysis. GSS is part of the larger Flight Dynamics Distributed System (FDDS), and is designed to run under the FDDS User Interface / Executive (UIX). The FDD is transitioning from a mainframe-based environment to systems running on engineering workstations. The GSS will be a library of highly reusable components that may be configured within the standard FDDS architecture to quickly produce low-cost satellite ground support systems. The estimate for the first release is that this library will contain approximately 200,000 lines of code. The main driver for developing generalized software is development cost and schedule improvement. The goal is to ultimately have at least 80 percent of all software required for a spacecraft mission (within the domain supported by the GSS) to be configured from the generalized components.

  8. A framework for scalable parameter estimation of gene circuit models using structural information.

    PubMed

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.

  9. Development of weight and cost estimates for lifting surfaces with active controls

    NASA Technical Reports Server (NTRS)

    Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.

    1976-01-01

    Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.

  10. Global stability and quadratic Hamiltonian structure in Lotka-Volterra and quasi-polynomial systems

    NASA Astrophysics Data System (ADS)

    Szederkényi, Gábor; Hangos, Katalin M.

    2004-04-01

    We show that the global stability of quasi-polynomial (QP) and Lotka-Volterra (LV) systems with the well-known logarithmic Lyapunov function is equivalent to the existence of a local generalized dissipative Hamiltonian description of the LV system with a diagonal quadratic form as a Hamiltonian function. The Hamiltonian function can be calculated and the quadratic dissipativity neighborhood of the origin can be estimated by solving linear matrix inequalities.
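
    A small numerical illustration of the stability condition: for a Lotka-Volterra interaction matrix A, the logarithmic Lyapunov function with diagonal weights d certifies stability when D A + A^T D is negative semidefinite. The interaction matrix and weights below are invented for the example.

        import numpy as np

        # Lotka-Volterra interaction matrix for a prey-predator pair with prey
        # self-limitation: dx_i/dt = x_i (r_i + sum_j A[i, j] x_j).
        A = np.array([[-0.5, -1.0],
                      [ 0.8,  0.0]])

        # Candidate diagonal weights for the logarithmic Lyapunov function
        # V(x) = sum_i d_i (x_i - x_i* - x_i* ln(x_i / x_i*)).
        d = np.array([0.8, 1.0])      # chosen so that d[0]*A[0,1] + d[1]*A[1,0] = 0
        Q = np.diag(d) @ A + A.T @ np.diag(d)

        # Stability in this Lyapunov sense holds when Q is negative semidefinite,
        # i.e. all eigenvalues of the symmetric matrix Q are <= 0.
        eigs = np.linalg.eigvalsh(Q)
        print("eigenvalues of D A + A^T D:", eigs)
        print("negative semidefinite:", bool(np.all(eigs <= 1e-12)))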

  11. Characterization of measurements in quantum communication. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chan, V. W. S.

    1975-01-01

    A characterization of quantum measurements by operator valued measures is presented. The generalized measurements include simultaneous approximate measurement of noncommuting observables. This characterization is suitable for solving problems in quantum communication. Two realizations of such measurements are discussed. The first is by adjoining an apparatus to the system under observation and performing a measurement corresponding to a self-adjoint operator in the tensor-product Hilbert space of the system and apparatus spaces. The second realization is by performing, on the system alone, sequential measurements that correspond to self-adjoint operators, basing the choice of each measurement on the outcomes of previous measurements. Simultaneous generalized measurements are found to be equivalent to a single finer-grained generalized measurement, and hence it is sufficient to consider the set of single measurements. An alternative characterization of generalized measurement is proposed. It is shown to be equivalent to the characterization by operator-valued measures, but it is potentially more suitable for the treatment of estimation problems. Finally, a study of the interaction between the information-carrying system and a measurement apparatus provides clues for the physical realizations of abstractly characterized quantum measurements.

  12. A hierarchical estimator development for estimation of tire-road friction coefficient

    PubMed Central

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
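
    The sketch below implements a GRNN as Nadaraya-Watson kernel regression that maps slip-related features to a friction coefficient, illustrating the small-excitation branch of the scheme in spirit; the training data, features, and kernel width are invented and do not come from the paper's CarSim setup.

        import numpy as np

        rng = np.random.default_rng(9)

        # A GRNN is essentially Nadaraya-Watson kernel regression: the prediction
        # is a Gaussian-kernel weighted average of stored training targets.
        class GRNN:
            def __init__(self, sigma=0.05):
                self.sigma = sigma

            def fit(self, X, y):
                self.X, self.y = np.asarray(X, float), np.asarray(y, float)
                return self

            def predict(self, Xq):
                Xq = np.atleast_2d(Xq)
                d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
                w = np.exp(-d2 / (2.0 * self.sigma**2))
                return (w @ self.y) / w.sum(axis=1)

        # Invented training set: slip ratio and normalized longitudinal force as
        # inputs, road friction coefficient mu as the target.
        slip = rng.uniform(0.0, 0.1, 400)
        mu_true = rng.choice([0.2, 0.5, 0.9], size=400)             # icy / wet / dry
        fx_norm = mu_true * (1.0 - np.exp(-25.0 * slip)) + 0.02 * rng.normal(size=400)
        X_train = np.column_stack([slip, fx_norm])

        model = GRNN(sigma=0.03).fit(X_train, mu_true)
        print("predicted mu for a query point:", model.predict([[0.05, 0.45]]))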

  13. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
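    The frequency-domain idea can be illustrated with a weighted least-squares fit of an FIR model to noisy frequency-response samples, as sketched below; the plant, weighting function, and model order are arbitrary stand-ins, and the thesis's actual estimator and uncertainty-bounding machinery are not reproduced.

```python
# Sketch: frequency-domain weighted least squares.  Given noisy samples of
# a frequency response G(e^{jw_k}), fit FIR coefficients b_m by minimizing
# sum_k W_k |G_k - sum_m b_m e^{-j w_k m}|^2.  Plant, weights and order
# below are arbitrary illustration choices.
import numpy as np

nb = 8                                        # FIR model order (assumed)
w = np.linspace(0.05, np.pi - 0.05, 100)      # frequency grid (rad/sample)

true_b = np.array([0.2, 0.5, 0.3, -0.1])      # hypothetical "true" plant
E_true = np.exp(-1j * np.outer(w, np.arange(len(true_b))))
G_meas = E_true @ true_b + 0.02 * (np.random.randn(len(w)) + 1j * np.random.randn(len(w)))

W = 1.0 / (1.0 + w)                           # emphasize low frequencies (assumed weight)

E = np.exp(-1j * np.outer(w, np.arange(nb)))  # regressor matrix of complex exponentials
A = np.sqrt(W)[:, None] * E
y = np.sqrt(W) * G_meas
b_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print("estimated FIR coefficients (real part):", np.round(b_hat.real, 3))
```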

  14. A hierarchical estimator development for estimation of tire-road friction coefficient.

    PubMed

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effectiveness of vehicle active safety systems depends on the friction force arising from the contact between the tires and the road surface. Therefore, adequate knowledge of the tire-road friction coefficient is of great importance for achieving good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on an unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using a general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting the road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  15. Controller certification: The generalized stability margin inference for a large number of MIMO controllers

    NASA Astrophysics Data System (ADS)

    Park, Jisang

    In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting the radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the computation of maximal MIMO stability margins b_{P̂,C}, minimal nu-gap metrics δ_ν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization. We modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error on the estimate is also studied. We suggest running the system for a certain amount of time prior to recording of each output data set. The assured upper bound of estimation error can be tuned by the amount of the pre-experiment.

  16. An affordable RBCC-powered 2-stage small orbital payload transportation systems concept based on test-proven hardware

    NASA Astrophysics Data System (ADS)

    Escher, William J. D.

    1998-01-01

    Deriving from the initial planning activity of early 1965, which led to NASA's Advanced Space Transportation Program (ASTP), an early-available airbreathing/rocket combined propulsion system powered ``ultralight payload'' launcher was defined at the conceptual design level. This system, named the ``W Vehicle,'' was targeted to be a ``second generation'' successor to the original Bantam Lifter class, all-rocket powered systems presently being pursued by NASA and a selected set of its contractors. While this all-rocket vehicle is predicated on a fully expendable approach, the W-Vehicle system was to be a fully reusable 2-stage vehicle. The general (original) goal of the Bantam class of launchers was to orbit a 100 kg payload for a recurring per-launch cost of less than one million dollars. Reusability, as is the case for larger vehicles focusing on single stage to orbit (SSTO) configurations, is considered the principal key to affordability. In the general context of a range of space transports, covering the range of 0.1 to 10 metric ton payloads, the W Vehicle concept, predicated mainly on ground- and flight-test proven hardware, is described in this paper, along with a nominal development schedule and budgetary estimate (recurring costs were not estimated).

  17. Groundwater flow and water budget in the surficial and Floridan aquifer systems in east-central Florida

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Tiedeman, Claire; O'Reilly, Andrew M.; Davis, Jeffrey B.; Burger, Patrick

    2012-01-01

    A numerical transient model of the surficial and Floridan aquifer systems in east-central Florida was developed to (1) increase the understanding of water exchanges between the surficial and the Floridan aquifer systems, (2) assess the recharge rates to the surficial aquifer system from infiltration through the unsaturated zone and (3) obtain a simulation tool that could be used by water-resource managers to assess the impact of changes in groundwater withdrawals on spring flows and on the potentiometric surfaces of the hydrogeologic units composing the Floridan aquifer system. The hydrogeology of east-central Florida was evaluated and used to develop and calibrate the groundwater flow model, which simulates the regional fresh groundwater flow system. The U.S. Geological Survey three-dimensional groundwater flow model, MODFLOW-2005, was used to simulate transient groundwater flow in the surficial, intermediate, and Floridan aquifer systems from 1995 to 2006. The East-Central Florida Transient model encompasses an actively simulated area of about 9,000 square miles. Although the model includes surficial processes-rainfall, irrigation, evapotranspiration (ET), runoff, infiltration, lake water levels, and stream water levels and flows-its primary purpose is to characterize and refine the understanding of groundwater flow in the Floridan aquifer system. Model-independent estimates of the partitioning of rainfall into ET, streamflow, and aquifer recharge are provided from a water-budget analysis of the surficial aquifer system. The interaction of the groundwater flow system with the surface environment was simulated using the Green-Ampt infiltration method and the MODFLOW-2005 Unsaturated-Zone Flow, Lake, and Streamflow-Routing Packages. The model is intended to simulate the part of the groundwater system that contains freshwater. The bottom and lateral boundaries of the model were established at the estimated depths where the chloride concentration is 5,000 milligrams per liter in the Floridan aquifer system. Potential flow across the interface represented by this chloride concentration is simulated by the General Head Boundary Package. During 1995 through 2006, there were no major groundwater withdrawals near the freshwater and saline-water interface, making the general head boundary a suitable feature to estimate flow through the interface. The east-central Florida transient model was calibrated using the inverse parameter estimation code, PEST. Steady-state models for 1999 and 2003 were developed to estimate hydraulic conductivity (K) using average annual heads and spring flows as observations. The spatial variation of K was represented using zones of constant values in some layers, and pilot points in other layers. Estimated K values were within one order of magnitude of aquifer performance test data. A simulation of the final two years (2005-2006) of the 12-year model, with the K estimates from the steady-state calibration, was used to guide the estimation of specific yield and specific storage values. The final model yielded head and spring-flow residuals that met the calibration criteria for the 12-year transient simulation. The overall mean residual for heads, defining residual as simulated minus measured value, was -0.04 foot. The overall root-mean square residual for heads was less than 3.6 feet for each year in the 1995 to 2006 simulation period. The overall mean residual for spring flows was -0.3 cubic foot per second. 
The spatial distribution of head residuals was generally random, with some minor indications of bias. Simulated average ET over the 1995 to 2006 period was 34.47 inches per year, compared to the calculated average ET rate of 36.39 inches per year from the model-independent water-budget analysis. Simulated average net recharge to the surficial aquifer system was 3.58 inches per year, compared with the calculated average of 3.39 inches per year from the model-independent water-budget analysis. Groundwater withdrawals from the Floridan aquifer system averaged about 920 million gallons per day, which is equivalent to about 2 inches per year over the model area and slightly more than half of the simulated average net recharge to the surficial aquifer system over the same period. Annual net simulated recharge rates to the surficial aquifer system were less than the total groundwater withdrawals from the Floridan aquifer system only during the below-average rainfall years of 2000 and 2006.
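    The reported equivalence between about 920 million gallons per day of withdrawals and roughly 2 inches per year over the approximately 9,000-square-mile model area can be checked with a quick unit conversion:

```python
# Quick unit check of the statement that ~920 million gallons per day of
# withdrawals over the ~9,000 square-mile model area corresponds to about
# 2 inches per year of equivalent depth (values taken from the abstract).
GAL_TO_FT3 = 0.133681            # cubic feet per US gallon
FT2_PER_MI2 = 5280.0 ** 2

withdrawal_ft3_per_yr = 920e6 * GAL_TO_FT3 * 365.25
area_ft2 = 9000.0 * FT2_PER_MI2

depth_in_per_yr = withdrawal_ft3_per_yr / area_ft2 * 12.0
print(f"equivalent depth: {depth_in_per_yr:.1f} inches/year")   # ~2.1 in/yr
```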

  18. Groundwater flow and water budget in the surficial and Floridan aquifer systems in east-central Florida

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Tiedeman, Claire; O'Reilly, Andrew M.; Davis, Jeffery B.; Burger, Patrick

    2012-01-01

    A numerical transient model of the surficial and Floridan aquifer systems in east-central Florida was developed to (1) increase the understanding of water exchanges between the surficial and the Floridan aquifer systems, (2) assess the recharge rates to the surficial aquifer system from infiltration through the unsaturated zone and (3) obtain a simulation tool that could be used by water-resource managers to assess the impact of changes in groundwater withdrawals on spring flows and on the potentiometric surfaces of the hydrogeologic units composing the Floridan aquifer system. The hydrogeology of east-central Florida was evaluated and used to develop and calibrate the groundwater flow model, which simulates the regional fresh groundwater flow system. The U.S. Geological Survey three-dimensional groundwater flow model, MODFLOW-2005, was used to simulate transient groundwater flow in the surficial, intermediate, and Floridan aquifer systems from 1995 to 2006. The east-central Florida transient model encompasses an actively simulated area of about 9,000 square miles. Although the model includes surficial processes-rainfall, irrigation, evapotranspiration, runoff, infiltration, lake water levels, and stream water levels and flows-its primary purpose is to characterize and refine the understanding of groundwater flow in the Floridan aquifer system. Model-independent estimates of the partitioning of rainfall into evapotranspiration, streamflow, and aquifer recharge are provided from a water-budget analysis of the surficial aquifer system. The interaction of the groundwater flow system with the surface environment was simulated using the Green-Ampt infiltration method and the MODFLOW-2005 Unsaturated-Zone Flow, Lake, and Streamflow-Routing Packages. The model is intended to simulate the part of the groundwater system that contains freshwater. The bottom and lateral boundaries of the model were established at the estimated depths where the chloride concentration is 5,000 milligrams per liter in the Floridan aquifer system. Potential flow across the interface represented by this chloride concentration is simulated by the General Head Boundary Package. During 1995 through 2006, there were no major groundwater withdrawals near the freshwater and saline-water interface, making the general head boundary a suitable feature to estimate flow through the interface. The east-central Florida transient model was calibrated using the inverse parameter estimation code, PEST. Steady-state models for 1999 and 2003 were developed to estimate hydraulic conductivity (K) using average annual heads and spring flows as observations. The spatial variation of K was represented using zones of constant values in some layers, and pilot points in other layers. Estimated K values were within one order of magnitude of aquifer performance test data. A simulation of the final two years (2005-2006) of the 12-year model, with the K estimates from the steady-state calibration, was used to guide the estimation of specific yield and specific storage values. The final model yielded head and spring-flow residuals that met the calibration criteria for the 12-year transient simulation. The overall mean residual for heads, defining residual as simulated minus measured value, was -0.04 foot. The overall root-mean square residual for heads was less than 3.6 feet for each year in the 1995 to 2006 simulation period. The overall mean residual for spring flows was -0.3 cubic foot per second. 
The spatial distribution of head residuals was generally random, with some minor indications of bias. Simulated average evapotranspiration (ET) over the 1995 to 2006 period was 34.5 inches per year, compared to the calculated average ET rate of 36.6 inches per year from the model-independent water-budget analysis. Simulated average net recharge to the surficial aquifer system was 3.6 inches per year, compared with the calculated average of 3.2 inches per year from the model-independent water-budget analysis. Groundwater withdrawals from the Floridan aquifer system averaged about 800 million gallons per day, which is equivalent to about 2 inches per year over the model area and slightly more than half of the simulated average net recharge to the surficial aquifer system over the same period. Annual net simulated recharge rates to the surficial aquifer system were less than the total groundwater withdrawals from the Floridan aquifer system only during the below-average rainfall years of 2000 and 2006.

  19. [Public free anonymous HIV testing centers: cost analysis and financing options].

    PubMed

    Dozol, Adrien; Tribout, Martin; Labalette, Céline; Moreau, Anne-Christine; Duteil, Christelle; Bertrand, Dominique; Segouin, Christophe

    2011-01-01

    The services of general interest provided by hospitals, such as free HIV clinics, have been funded since 2005 by a lump sum covering all costs. The allocation of the budget was initially determined based on historical and declarative data. However, the French Ministry of Health (MoH) recently outlined new rules for determining the allocation of financial resources and contracting hospitals for each type of services of general interest provided. The aim of this study was to estimate the annual cost of a public free anonymous HIV-testing center and to assess the budgetary implications of new financing systems. Three financing options were compared: the historic block grant; a mixed system recommended by the MoH associating a lump sum covering the recurring costs of an average center and a variable part based on the type and volume of services provided; and a fee-for-services system. For the purposes of this retrospective study, the costs and activity data of the HIV testing clinic of a public hospital located in the North of Paris were obtained for 2007. The costs were analyzed from the perspective of the hospital. The total cost was estimated at 555,698 euros. Personnel costs accounted for 31% of the total costs, while laboratory expenses accounted for 36% of the total costs. While the estimated deficit was 292,553 euros under the historic system, the financial balance of the clinic was found to be positive under a fee-for-services system. The budget allocated to the HIV clinic under the system recommended by the MoH covers most of the current expenses of the HIV clinic while meeting the requirements of free confidential care.

  20. Nonsymbolic number and cumulative area representations contribute shared and unique variance to symbolic math competence

    PubMed Central

    Lourenco, Stella F.; Bonny, Justin W.; Fernandez, Edmund P.; Rao, Sonia

    2012-01-01

    Humans and nonhuman animals share the capacity to estimate, without counting, the number of objects in a set by relying on an approximate number system (ANS). Only humans, however, learn the concepts and operations of symbolic mathematics. Despite vast differences between these two systems of quantification, neural and behavioral findings suggest functional connections. Another line of research suggests that the ANS is part of a larger, more general system of magnitude representation. Reports of cognitive interactions and common neural coding for number and other magnitudes such as spatial extent led us to ask whether, and how, nonnumerical magnitude interfaces with mathematical competence. On two magnitude comparison tasks, college students estimated (without counting or explicit calculation) which of two arrays was greater in number or cumulative area. They also completed a battery of standardized math tests. Individual differences in both number and cumulative area precision (measured by accuracy on the magnitude comparison tasks) correlated with interindividual variability in math competence, particularly advanced arithmetic and geometry, even after accounting for general aspects of intelligence. Moreover, analyses revealed that whereas number precision contributed unique variance to advanced arithmetic, cumulative area precision contributed unique variance to geometry. Taken together, these results provide evidence for shared and unique contributions of nonsymbolic number and cumulative area representations to formally taught mathematics. More broadly, they suggest that uniquely human branches of mathematics interface with an evolutionarily primitive general magnitude system, which includes partially overlapping representations of numerical and nonnumerical magnitude. PMID:23091023

  1. COMPUTERIZED SHAWNEE LIME/LIMESTONE SCRUBBING MODEL USERS MANUAL

    EPA Science Inventory

    The manual gives a general description of a computerized model for estimating design and cost of lime or limestone scrubber systems for flue gas desulfurization (FGD). It supplements PB80-123037 by extending the number of scrubber options which can be evaluated. It includes spray...

  2. 48 CFR 239.7409 - Special assembly.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Special assembly. 239.7409... Services 239.7409 Special assembly. (a) Special assembly is the designing, manufacturing, arranging... general use equipment. (b) Special assembly rates and charges shall be based on estimated costs. The...

  3. 48 CFR 239.7409 - Special assembly.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Special assembly. 239.7409... Services 239.7409 Special assembly. (a) Special assembly is the designing, manufacturing, arranging... general use equipment. (b) Special assembly rates and charges shall be based on estimated costs. The...

  4. 48 CFR 239.7409 - Special assembly.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Special assembly. 239.7409... Services 239.7409 Special assembly. (a) Special assembly is the designing, manufacturing, arranging... general use equipment. (b) Special assembly rates and charges shall be based on estimated costs. The...

  5. 48 CFR 239.7409 - Special assembly.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Special assembly. 239.7409... Services 239.7409 Special assembly. (a) Special assembly is the designing, manufacturing, arranging... general use equipment. (b) Special assembly rates and charges shall be based on estimated costs. The...

  6. 48 CFR 239.7409 - Special assembly.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Special assembly. 239.7409... Services 239.7409 Special assembly. (a) Special assembly is the designing, manufacturing, arranging... general use equipment. (b) Special assembly rates and charges shall be based on estimated costs. The...

  7. Driver/Vehicle Characteristics in Rear-End Precrash Scenarios Based on the General Estimates System (GES).

    DOT National Transportation Integrated Search

    1999-03-01

    This paper studies different driver and vehicle characteristics as they impact pre-crash scenarios of rear-end collisions. It gives a statistical description of the five most frequently occurring rear-end precrash scenarios based on vehicle and drive...

  8. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    PubMed

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  9. Medicaid's Role in Financing Health Care for Children with Behavioral Health Care Needs in the Special Education System: Implications of the Deficit Reduction Act

    ERIC Educational Resources Information Center

    Mandell, David S.; Machefsky, Aliza; Rubin, David; Feudtner, Chris; Pita, Susmita; Rosenbaum, Sara

    2008-01-01

    Background: Recent changes to Medicaid policy may have unintended consequences in the education system. This study estimated the potential financial impact of the Deficit Reduction Act (DRA) on school districts by calculating Medicaid-reimbursed behavioral health care expenditures for school-aged children in general and children in special…

  10. Model verification of large structural systems

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1977-01-01

    A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.

  11. Utilization of Norway’s Emergency Wards: The Second 5 Years after the Introduction of the Patient List System

    PubMed Central

    Goth, Ursula S.; Hammer, Hugo L.; Claussen, Bjørgulf

    2014-01-01

    Utilization of services is an important indicator for estimating access to healthcare. In Norway, the General Practitioner Scheme, a patient list system, was established in 2001 to enable a stable doctor-patient relationship. Although satisfaction with the system is generally high, people often choose a more accessible but inferior solution for routine care: emergency wards. The aim of the article is to investigate contact patterns in primary health care situations for the total population in urban and remote areas of Norway and for major immigrant groups in Oslo. The primary regression model had a cross-sectional study design analyzing 2,609,107 consultations in representative municipalities across Norway, estimating the probability of choosing the emergency ward in substitution to a general practitioner. In a second regression model comprising 625,590 consultations in Oslo, we calculated this likelihood for immigrants from the 14 largest groups. We noted substantial differences in emergency ward utilization between ethnic Norwegians both in rural and remote areas and among the various immigrant groups residing in Oslo. Oslo utilization of emergency ward services for the whole population declined, and so did this use among all immigrant groups after 2009. Other municipalities, while overwhelmingly ethnically Norwegian, showed diverse patterns including an increase in some and a decrease in others, results which we were unable to explain. PMID:24662997

  12. The Hyperloop as a Source of Interesting Estimation Questions

    NASA Astrophysics Data System (ADS)

    Allain, Rhett

    2014-03-01

    The Hyperloop is a conceptual high-speed transportation system proposed by Elon Musk. The basic idea uses passenger capsules inside a reduced-pressure tube. Even though the actual physics of dynamic air flow in a confined space can be complicated, there are a multitude of estimation problems that can be addressed. These back-of-the-envelope questions can be answered approximately by physicists of all levels as well as the general public, and they serve as a great example of the fundamental aspects of physics.
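    One such back-of-the-envelope question, worked under rough assumed numbers (a roughly 560 km route, a comfortable 0.5 g acceleration limit, and a cruise speed near 1200 km/h), is sketched below.

```python
# One example of the kind of back-of-the-envelope estimate the article has
# in mind: how long a ~560 km trip takes if the capsule accelerates at a
# comfortable ~0.5 g up to ~1200 km/h.  All numbers are rough assumptions.
g = 9.81                      # m/s^2
a = 0.5 * g                   # assumed comfortable acceleration
v_cruise = 1200 / 3.6         # m/s
L = 560e3                     # assumed route length, m

t_accel = v_cruise / a                     # time to reach cruise speed
d_accel = 0.5 * a * t_accel ** 2           # distance covered while accelerating
d_cruise = L - 2 * d_accel                 # accelerate, cruise, then brake
t_total = 2 * t_accel + d_cruise / v_cruise

print(f"accel/brake phases: {t_accel:.0f} s each, covering {d_accel/1e3:.1f} km each")
print(f"total trip time:   {t_total/60:.1f} minutes")
```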

  13. Defense Communications Agency Cost and Planning Factors Manual. Change 2.

    DTIC Science & Technology

    1985-09-23

    40. FISCAL-YEAR TIME PHASING OF COST ESTIMATE (to be published later); 41. DISCOUNTING, General ... Abbreviations excerpt: ft3, cubic foot (feet); ft3/min, cubic foot (feet) per minute; FY, fiscal year; FYDP, Five Year Defense Program; FYP, Five Year Program; G&A, general & administrative ... fiscal year 1 of the subsystem project plan. The remainder of the equipment and buildings and the training are to be contracted for and the system turned

  14. Electrically heated particulate filter propagation support methods and systems

    DOEpatents

    Gonze, Eugene V [Pinckney, MI; Ament, Frank [Troy, MI

    2011-06-07

    A control system that controls regeneration of a particulate filter is provided. The system generally includes a regeneration module that controls current to the particulate filter to initiate combustion of particulate matter in the particulate filter. A propagation module estimates a propagation status of the combustion of the particulate matter based on a combustion temperature. A temperature adjustment module controls the combustion temperature by selectively increasing a temperature of exhaust that passes through the particulate filter.

  15. Research study entitled advanced X-ray astrophysical observatory (AXAF). [system engineering for a total X-ray telescope assembly

    NASA Technical Reports Server (NTRS)

    Rasche, R. W.

    1979-01-01

    General background and overview material are presented along with data from studies performed to determine the sensitivity, feasibility, and required performance of systems for a total X-ray telescope assembly. Topics covered include: optical design, mirror support concepts, mirror weight estimates, the effects of 1 g on mirror elements, mirror assembly resonant frequencies, optical bench considerations, temperature control of the mirror assembly, and the aspect determination system.

  16. Power-law modeling based on least-squares minimization criteria.

    PubMed

    Hernández-Bermejo, B; Fairén, V; Sorribas, A

    1999-10-01

    The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-systems models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate-laws. The especial characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, their parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate-constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power-law mimics the target rate-law in a wider range of concentration values than the classical power-law. Although the implications of this alternative approach remain to be established, our results show that the predicted steady-state using the least-squares power-law is closest to the actual steady-state of the target system.
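    The contrast between the classical (local Taylor) power law and the least-squares power law can be reproduced with a small numerical experiment; the Michaelis-Menten parameters and concentration range below are arbitrary illustration values.

```python
# Sketch of the idea in the abstract: approximate a Michaelis-Menten rate
# law v = Vmax*S/(Km+S) by a power law v ~ alpha*S^g, fitting alpha and g
# by least squares in logarithmic space over a finite concentration range,
# and compare with the classical (local Taylor) power law anchored at S0.
import numpy as np

Vmax, Km, S0 = 1.0, 0.5, 1.0
S = np.linspace(0.2, 5.0, 200)
v = Vmax * S / (Km + S)

# least-squares power law over the whole range (fit in log-log space)
X = np.column_stack([np.ones_like(S), np.log(S)])
coef, *_ = np.linalg.lstsq(X, np.log(v), rcond=None)
alpha_ls, g_ls = np.exp(coef[0]), coef[1]

# classical power law: exponent equals the local log-log slope at S0
g_local = Km / (Km + S0)
alpha_local = (Vmax * S0 / (Km + S0)) / S0 ** g_local

print(f"least-squares power law:   v ~ {alpha_ls:.3f} * S^{g_ls:.3f}")
print(f"local (Taylor) power law:  v ~ {alpha_local:.3f} * S^{g_local:.3f}")
```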

  17. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems.

    PubMed

    Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P

    1999-10-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
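    The least-squares idea can be illustrated on a toy two-compartment linear model, as below; the rate constants and sampling are invented, and the sketch regresses finite-difference derivatives on the concentration curves rather than reproducing the STATFLUX procedure itself.

```python
# Toy illustration of least-squares transfer-coefficient estimation in a
# linear multiple-compartment model dC/dt = K*C: simulate two-compartment
# concentration curves, then recover K from finite-difference derivatives.
import numpy as np

K_true = np.array([[-0.30,  0.05],
                   [ 0.30, -0.15]])          # hypothetical transfer-rate matrix (1/day)
dt, n_steps = 0.1, 400
C = np.zeros((n_steps, 2))
C[0] = [1.0, 0.0]                            # all activity initially in compartment 1
for k in range(n_steps - 1):                 # forward-Euler "measurement" curves
    C[k + 1] = C[k] + dt * K_true @ C[k]

dCdt = np.gradient(C, dt, axis=0)            # numerical derivatives of the curves
K_hat_T, *_ = np.linalg.lstsq(C, dCdt, rcond=None)   # solves C @ K^T ~ dC/dt
print("estimated K:\n", np.round(K_hat_T.T, 3))
```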

  18. 48 CFR 28.106-6 - Furnishing information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Furnishing information. 28... information. (a) The surety on the bond, upon its written request, may be furnished information on the..., general information concerning the work progress, payments, and the estimated percentage of completion may...

  19. Methods for Estimating Payload/Vehicle Design Loads

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.

    1983-01-01

    Several methods compared with respect to accuracy, design conservatism, and cost. Objective of survey: reduce time and expense of load calculation by selecting approximate method having sufficient accuracy for problem at hand. Methods generally applicable to dynamic load analysis in other aerospace and other vehicle/payload systems.

  20. Using an Ocean of Data, Researchers Model Real-Life Benefits of Cancer Screening

    Cancer.gov

    Using the results of screening trials, the NCI Cancer Intervention and Surveillance Modeling Network is trying to estimate the true benefit of cancer screening in the general population and identify the optimal way to implement screening within the health care system.

  1. Digital receiver study and implementation

    NASA Technical Reports Server (NTRS)

    Fogle, D. A.; Lee, G. M.; Massey, J. C.

    1972-01-01

    Computer software was developed which makes it possible to use any general purpose computer with A/D conversion capability as a PSK receiver for low data rate telemetry processing. Carrier tracking, bit synchronization, and matched filter detection are all performed digitally. To aid in the implementation of optimum computer processors, a study of general digital processing techniques was performed which emphasized various techniques for digitizing general analog systems. In particular, the phase-locked loop was extensively analyzed as a typical non-linear communication element. Bayesian estimation techniques for PSK demodulation were studied. A hardware implementation of the digital Costas loop was developed.
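    As an illustration of the all-digital carrier tracking mentioned above, the sketch below implements a minimal digital Costas loop for a BPSK signal in NumPy; the carrier frequency, loop gains, and 0.7 rad phase offset are arbitrary, and the report's actual loop-filter design and Bayesian demodulator are not reproduced.

```python
# Minimal digital Costas loop tracking the carrier phase of a BPSK signal.
# Gains, rates and the phase offset are arbitrary illustration values.
import numpy as np

fc, phi0, n = 0.10, 0.7, 4000              # carrier (cycles/sample), true phase offset
bits = np.repeat(np.sign(np.random.randn(n // 50)), 50)   # BPSK symbols, 50 samples each
t = np.arange(n)
s = bits * np.cos(2 * np.pi * fc * t + phi0)

theta, i_f, q_f = 0.0, 0.0, 0.0            # NCO phase estimate and arm-filter states
alpha, mu = 0.05, 0.05                     # arm low-pass and loop gains
for k in range(n):
    c = 2 * np.cos(2 * np.pi * fc * k + theta)
    d = -2 * np.sin(2 * np.pi * fc * k + theta)
    i_f += alpha * (s[k] * c - i_f)        # in-phase arm, low-pass filtered
    q_f += alpha * (s[k] * d - q_f)        # quadrature arm, low-pass filtered
    theta += mu * i_f * q_f                # phase-error update ~ sin(2*(phi0 - theta))

print(f"true phase offset {phi0:.3f} rad, tracked estimate {theta % np.pi:.3f} rad")
```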

  2. A Robust State Estimation Framework Considering Measurement Correlations and Imperfect Synchronization

    DOE PAGES

    Zhao, Junbo; Wang, Shaobu; Mili, Lamine; ...

    2018-01-08

    Here, this paper develops a robust power system state estimation framework with the consideration of measurement correlations and imperfect synchronization. In the framework, correlations of SCADA and Phasor Measurements (PMUs) are calculated separately through unscented transformation and a Vector Auto-Regression (VAR) model. In particular, PMU measurements during the waiting period of two SCADA measurement scans are buffered to develop the VAR model with robustly estimated parameters using the projection statistics approach. The latter takes into account the temporal and spatial correlations of PMU measurements and provides redundant measurements to suppress bad data and mitigate imperfect synchronization. In cases where the SCADA and PMU measurements are not time synchronized, either the forecasted PMU measurements or the prior SCADA measurements from the last estimation run are leveraged to restore system observability. Then, a robust generalized maximum-likelihood (GM)-estimator is extended to integrate measurement error correlations and to handle the outliers in the SCADA and PMU measurements. Simulation results that stem from a comprehensive comparison with other alternatives under various conditions demonstrate the benefits of the proposed framework.
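    The GM-estimation step can be illustrated in isolation with a Huber-type iteratively reweighted least-squares solve of a small linear measurement model, as sketched below; the measurement matrix, noise levels, and injected bad datum are made up, and the paper's correlation handling and projection statistics are omitted.

```python
# Sketch of robust GM-estimation: solve z = H*x + e by iteratively
# reweighted least squares with Huber-type weights that downweight
# outlying measurements.  H, z and the bad datum are made up.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
H = rng.normal(size=(30, 2))                   # hypothetical measurement matrix
z = H @ x_true + 0.01 * rng.normal(size=30)
z[5] += 2.0                                    # one gross error (bad data)

def gm_estimate(H, z, c=1.5, iters=20):
    x = np.linalg.lstsq(H, z, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        r = z - H @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)       # Huber weights
        W = np.diag(w)
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    return x, w

x_hat, w = gm_estimate(H, z)
print("estimate:", np.round(x_hat, 3), " weight on bad datum:", round(w[5], 3))
```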

  3. Real-Time Algebraic Derivative Estimations Using a Novel Low-Cost Architecture Based on Reconfigurable Logic

    PubMed Central

    Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos

    2014-01-01

    Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, just to name a few of them. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. Besides, in this work, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033
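    A software-only sketch of the idea (without the FPGA architecture) is given below, using the standard first-order algebraic derivative formula over a sliding window; the window length, test signal, and noise level are arbitrary, and the paper's exact estimator expressions are not reproduced.

```python
# First-order algebraic (non-asymptotic) derivative estimator over a
# sliding window of length T:
#     y'(t) ~ (6 / T^3) * integral_0^T (T - 2*tau) * y(t - tau) d tau,
# discretized as a fixed FIR filter over the most recent N samples.
# Note the inherent lag: the estimate tracks y' near the window midpoint.
import numpy as np

def algebraic_derivative(y, dt, T=0.1):
    N = int(round(T / dt))
    tau = (np.arange(N) + 0.5) * dt              # midpoint rule over the window
    h = (6.0 / T**3) * (T - 2.0 * tau) * dt      # FIR weights on past samples
    dy = np.full_like(y, np.nan)
    for k in range(N - 1, len(y)):
        dy[k] = np.dot(h, y[k - np.arange(N)])   # y(t), y(t-dt), ..., y(t-(N-1)dt)
    return dy

dt = 0.001
t = np.arange(0.0, 2.0, dt)
y = t**2 + 0.01 * np.random.randn(t.size)        # noisy test signal, true y' = 2t
dy = algebraic_derivative(y, dt)

k = int(1.0 / dt)
print(f"estimate at t = 1 s: {dy[k]:.2f}  (true 2.00; window-midpoint value 1.90)")
```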

  4. A Robust State Estimation Framework Considering Measurement Correlations and Imperfect Synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Junbo; Wang, Shaobu; Mili, Lamine

    Here, this paper develops a robust power system state estimation framework with the consideration of measurement correlations and imperfect synchronization. In the framework, correlations of SCADA and Phasor Measurements (PMUs) are calculated separately through unscented transformation and a Vector Auto-Regression (VAR) model. In particular, PMU measurements during the waiting period of two SCADA measurement scans are buffered to develop the VAR model with robustly estimated parameters using the projection statistics approach. The latter takes into account the temporal and spatial correlations of PMU measurements and provides redundant measurements to suppress bad data and mitigate imperfect synchronization. In cases where the SCADA and PMU measurements are not time synchronized, either the forecasted PMU measurements or the prior SCADA measurements from the last estimation run are leveraged to restore system observability. Then, a robust generalized maximum-likelihood (GM)-estimator is extended to integrate measurement error correlations and to handle the outliers in the SCADA and PMU measurements. Simulation results that stem from a comprehensive comparison with other alternatives under various conditions demonstrate the benefits of the proposed framework.

  5. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification ( MR. SID). A combination of low-rank approximations, ℓ 1 and ℓ 2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.

  6. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification ( MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  7. A generalized groundwater fluctuation model based on precipitation for estimating water table levels of deep unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Shik Han, Weon; Kim, Kue-Young; Suk, Heejun; Beom Jo, Si

    2018-07-01

    A generalized water table fluctuation model based on precipitation was developed using a statistical conceptualization of unsaturated infiltration fluxes. A gamma distribution function was adopted as a transfer function due to its versatility in representing recharge rates with temporally dispersed infiltration fluxes, and a Laplace transformation was used to obtain an analytical solution. To prove the general applicability of the model, convergences with previous water table fluctuation models were shown as special cases. For validation, a few hypothetical cases were developed, where the applicability of the model to a wide range of unsaturated zone conditions was confirmed. For further validation, the model was applied to water table level estimations of three monitoring wells with considerably thick unsaturated zones on Jeju Island. The results show that the developed model represented the pattern of hydrographs from the two monitoring wells fairly well. The lag times from precipitation to recharge estimated from the developed system transfer function were found to agree with those from a conventional cross-correlation analysis. The developed model has the potential to be adopted for the hydraulic characterization of both saturated and unsaturated zones by being calibrated to actual data when extraneous and exogenous causes of water table fluctuation are limited. In addition, as it provides reference estimates, the model can be adopted as a tool for surveilling groundwater resources under hydraulically stressed conditions.
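    A stripped-down version of the convolution idea is sketched below: synthetic daily precipitation is convolved with a gamma-distribution transfer function and fed into a linear-reservoir water-table update; all parameter values (gamma shape and scale, storativity, recession constant) are invented for illustration and are not the calibrated Jeju Island values.

```python
# Sketch of the transfer-function idea: precipitation convolved with a
# gamma kernel gives a temporally dispersed recharge signal, which then
# drives a simple water-table update with linear recession.  All values
# below are invented illustration parameters.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
days = 730
precip = rng.choice([0.0, 5.0, 20.0], size=days, p=[0.7, 0.2, 0.1])   # mm/day, synthetic

shape, scale = 3.0, 10.0                 # assumed lag/dispersion through the vadose zone
t = np.arange(120)
kernel = gamma.pdf(t, a=shape, scale=scale)
kernel /= kernel.sum()

recharge = np.convolve(precip, kernel)[:days]     # dispersed recharge (mm/day)

k_r, S, h_base = 0.01, 0.05, 50.0        # recession constant, storativity, base level (m)
head = np.empty(days)
h = h_base
for k in range(days):
    h += recharge[k] / (S * 1000.0) - k_r * (h - h_base)   # recharge minus recession
    head[k] = h

print("simulated head after 2 years: %.2f m" % head[-1])
```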

  8. A Method for Making Cross-Comparable Estimates of the Benefits of Decision Support Technologies for Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Lee, David; Long, Dou; Etheridge, Mel; Plugge, Joana; Johnson, Jesse; Kostiuk, Peter

    1998-01-01

    We present a general method for making cross comparable estimates of the benefits of NASA-developed decision support technologies for air traffic management, and we apply a specific implementation of the method to estimate benefits of three decision support tools (DSTs) under development in NASA's advanced Air Transportation Technologies Program: Active Final Approach Spacing Tool (A-FAST), Expedite Departure Path (EDP), and Conflict Probe and Trial Planning Tool (CPTP). The report also reviews data about the present operation of the national airspace system (NAS) to identify opportunities for DST's to reduce delays and inefficiencies.

  9. Is extreme learning machine feasible? A theoretical assessment (part I).

    PubMed

    Liu, Xia; Lin, Shaobo; Fang, Jian; Xu, Zongben

    2015-01-01

    An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system whose connections with output neurons are adjustable, while the connections with and within hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems. It has, however, remained open whether this is true for general applications. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we provide an answer to the question by theoretically justifying the following: 1) for some suitable activation functions, such as polynomials, Nadaraya-Watson and sigmoid functions, the ELM-like systems can attain the theoretical generalization bound of the FNNs with all connections adjusted, i.e., they do not degrade the generalization capability of the FNNs even when the connections with and within hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve the theoretical bound can be estimated; and 3) whenever the activation function is taken as polynomial, the deduced hidden layer output matrix is of full column-rank, therefore the generalized inverse technique can be efficiently applied to yield the solution of an ELM-like system, and, furthermore, for the nonpolynomial case, the Tikhonov regularization can be applied to guarantee the weak regularity while not sacrificing the generalization capability. In Part II, however, we reveal a different aspect of the feasibility of ELM: there also exist some activation functions that make the corresponding ELM degrade the generalization capability. The obtained results underlie the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of the systems as well.
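    A minimal ELM-like system of the kind analyzed here is easy to write down: random, fixed input-to-hidden connections and a Tikhonov-regularized least-squares solve for the output weights, as sketched below on an invented toy regression task.

```python
# Minimal ELM-like system: hidden-layer weights are drawn at random and
# frozen; only the output weights are solved for, with Tikhonov (ridge)
# regularization.  The toy regression task and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

n_hidden, lam = 100, 1e-3
W = rng.normal(size=(2, n_hidden))          # random, fixed input-to-hidden weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer output matrix

# Tikhonov-regularized output weights: beta = (H^T H + lam*I)^-1 H^T y
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```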

  10. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
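    As a simplified stand-in for the double-seasonal models evaluated in the paper, the sketch below regresses log-transformed hourly demand on lags of 1, 24, and 168 hours (previous hour, same hour a day earlier, same hour a week earlier) using an invented synthetic series; it illustrates the double-seasonal structure but not the full seasonal ARIMA formulation or the adaptive parameter updating.

```python
# Simplified double-seasonal forecaster: regress log demand on lags
# 1, 24 and 168 hours.  The synthetic demand series is invented.
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24 * 7 * 20)                        # 20 weeks of hourly data
demand = (100 + 20 * np.sin(2 * np.pi * hours / 24)   # daily cycle
              + 10 * np.sin(2 * np.pi * hours / 168)  # weekly cycle
              + rng.normal(0, 3, hours.size))
y = np.log(demand)

lags = [1, 24, 168]
p = max(lags)
Y = y[p:]
X = np.column_stack([np.ones_like(Y)] + [y[p - L:-L] for L in lags])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# one-step-ahead forecast for the hour following the sample
x_next = np.r_[1.0, [y[-L] for L in lags]]
print("forecast demand:", np.exp(x_next @ coef))
```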

  11. A PDE-based methodology for modeling, parameter estimation and feedback control in structural and structural acoustic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    A problem of continued interest concerns the control of vibrations in a flexible structure and the related problem of reducing structure-borne noise in structural acoustic systems. In both cases, piezoceramic patches bonded to the structures have been successfully used as control actuators. Through the application of a controlling voltage, the patches can be used to reduce structural vibrations which in turn lead to methods for reducing structure-borne noise. A PDE-based methodology for modeling, estimating physical parameters, and implementing a feedback control scheme for problems of this type is discussed. While the illustrating example is a circular plate, the methodology is sufficiently general so as to be applicable in a variety of structural and structural acoustic systems.

  12. Potential estimates for the p-Laplace system with data in divergence form

    NASA Astrophysics Data System (ADS)

    Cianchi, A.; Schwarzacher, S.

    2018-07-01

    A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
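    For readers unfamiliar with the terminology, the Havin-Maz'ya-Wolff potential that appears in such pointwise bounds is commonly defined, for a locally finite nonnegative measure μ, as below; the precise potential and exponents used in the paper's divergence-form estimate are not reproduced here.

```latex
% Standard definition of the Wolff potential of a nonnegative measure \mu,
% for 1 < p < n and balls B(x,t) in R^n.
\[
  \mathbf{W}^{\mu}_{1,p}(x,R)
    \;=\; \int_{0}^{R}
      \left( \frac{\mu\bigl(B(x,t)\bigr)}{t^{\,n-p}} \right)^{\!\frac{1}{p-1}}
      \frac{\mathrm{d}t}{t}.
\]
```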

  13. High Pressure Electrolyzer System Evaluation

    NASA Technical Reports Server (NTRS)

    Prokopius, Kevin; Coloza, Anthony

    2010-01-01

    This report documents the continuing efforts to evaluate the operational state of a high-pressure PEM-based electrolyzer located at the NASA Glenn Research Center. This electrolyzer is a prototype system built by General Electric and refurbished by Hamilton Standard (now named Hamilton Sundstrand). It is capable of producing hydrogen and oxygen at an output pressure of 3000 psi. The electrolyzer has been in storage for a number of years. Evaluation and testing were performed to determine the state of the electrolyzer and provide an estimate of the cost for refurbishment. Pressure testing was performed using nitrogen gas through the oxygen ports to ascertain the status of the internal membranes and seals. It was determined that the integrity of the electrolyzer stack was good, as there were no appreciable leaks in the membranes or seals within the stack. In addition to the integrity testing, an itemized list and part cost estimate was produced for the components of the electrolyzer system. An evaluation of the system's present state and an estimate of the cost to bring it back to operational status was also produced.

  14. An increased estimate of the merger rate of double neutron stars from observations of a highly relativistic system.

    PubMed

    Burgay, M; D'Amico, N; Possenti, A; Manchester, R N; Lyne, A G; Joshi, B C; McLaughlin, M A; Kramer, M; Sarkissian, J M; Camilo, F; Kalogera, V; Kim, C; Lorimer, D R

    2003-12-04

    The merger of close binary systems containing two neutron stars should produce a burst of gravitational waves, as predicted by the theory of general relativity. A reliable estimate of the double-neutron-star merger rate in the Galaxy is crucial in order to predict whether current gravity wave detectors will be successful in detecting such bursts. Present estimates of this rate are rather low, because we know of only a few double-neutron-star binaries with merger times less than the age of the Universe. Here we report the discovery of a 22-ms pulsar, PSR J0737-3039, which is a member of a highly relativistic double-neutron-star binary with an orbital period of 2.4 hours. This system will merge in about 85 Myr, a time much shorter than for any other known neutron-star binary. Together with the relatively low radio luminosity of PSR J0737-3039, this timescale implies an order-of-magnitude increase in the predicted merger rate for double-neutron-star systems in our Galaxy (and in the rest of the Universe).

  15. Summary and evaluation of hydraulic property data available for the Hanford Site upper basalt confined aquifer system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spane, F.A. Jr.; Vermeul, V.R.

    Pacific Northwest Laboratory, as part of the Hanford Site Ground-Water Surveillance Project, examines the potential for offsite migration of contamination within the upper basalt confined aquifer system. For the past 40 years, hydrologic testing of the upper basalt confined aquifer has been conducted by a number of Hanford Site programs. Hydraulic property estimates are important for evaluating aquifer flow characteristics (i.e., ground-water flow patterns, flow velocity, transport travel time). Presented is the first comprehensive Hanford Site-wide summary of hydraulic properties for the upper basalt confined aquifer system (i.e., the upper Saddle Mountains Basalt). Available hydrologic test data were reevaluated using recently developed diagnostic test analysis methods. A comparison of calculated transmissivity estimates indicates that, for most test results, a general correspondence within a factor of two between reanalysis and previously reported test values was obtained. For a majority of the tests, previously reported values are greater than reanalysis estimates. This overestimation is attributed to a number of factors, including, in many cases, a misapplication of nonleaky confined aquifer analysis methods in previous analysis reports to tests that exhibit leaky confined aquifer response behavior. Results of the test analyses indicate a similar range for transmissivity values for the various hydrogeologic units making up the upper basalt confined aquifer. Approximately 90% of the calculated transmissivity values for upper basalt confined aquifer hydrogeologic units occur within the range of 10^0 to 10^2 m^2/d, with 65% of the calculated estimate values occurring between 10^1 and 10^2 m^2/d. These summary findings are consistent with the general range of values previously reported for basalt interflow contact zones and sedimentary interbeds within the Saddle Mountains Basalt.

  16. Cost Estimates Of Concentrated Photovoltaic Heat Sink Production

    DTIC Science & Technology

    2016-06-01

    ...steady year-round sunshine and in many cases high levels of direct normal irradiance (DNI). Beyond traditional PV, some climates favor rooftop solar ... water heating, but the majority of installed solar systems are PV (EIA, 2015). Solar power generation has great benefits for the DON considering the ... systems concentrate and focus sunlight onto a smaller focal point in order to take advantage of the highly efficient solar cells. Generally, PV...

  17. Optimal Sensor Scheduling for Multiple Hypothesis Testing

    DTIC Science & Technology

    1981-09-01

    ...Naval Research, under contract N00014-77-0532, is gratefully acknowledged. Laboratory for Information and Decision Systems, MIT, Room 35-213, Cambridge ... treat the more general problem [9,10]. However, two common threads connect these approaches: they obtain feedback laws mapping posterior distributions ... objective of a detection or identification algorithm is to produce correct estimates of the true state of a system. It is also beneficial if these...

  18. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for different directions of the robot end-effector in the system, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot end-effector effectively.

  19. Employment from Solar Energy: A Bright but Partly Cloudy Future.

    ERIC Educational Resources Information Center

    Smeltzer, K. K.; Santini, D. J.

    A comparison of quantitative and qualitative employment effects of solar and conventional systems can prove the increased employment postulated as one of the significant secondary benefits of a shift from conventional to solar energy use. Current quantitative employment estimates show solar technology-induced employment to be generally greater…

  20. 48 CFR 30.606 - Resolving cost impacts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    (a) General. (1) The CFAO shall coordinate with the affected contracting officers before negotiating and resolving the cost impact when the estimated cost impact on any of their contracts is at least...

  1. Forest carbon sinks in the Northern Hemisphere

    Treesearch

    Christine L. Goodale; Michael J. Apps; Richard A. Birdsey; Christopher B. Field; Linda S. Heath; Richard A. Houghton; Jennifer C. Jenkins; Gundolf H. Kohlmaier; Werner Kurz; Shirong Liu; Gert-Jan Nabuurs; Sten Nilsson; Anatoly Z. Shvidenko

    2002-01-01

    There is general agreement that terrestrial systems in the Northern Hemisphere provide a significant sink for atmospheric CO2; however, estimates of the magnitude and distribution of this sink vary greatly. National forest inventories provide strong, measurement-based constraints on the magnitude of net forest carbon uptake. We brought together...

  2. The essence of fire regime-condition class assessment

    Treesearch

    McKinley-Ben Miller

    2008-01-01

    The interagency Fire Regime Condition Class (FRCC) assessment process represents a contemporary and effective means of estimating the relative degree of difference, or "departure," between a subject landscape's current condition and its historic or reference ecological conditions. This process, generally applied to fire-adapted systems, is science-...

  3. Surface refractivity measurements at NASA spacecraft tracking sites

    NASA Technical Reports Server (NTRS)

    Schmid, P. E.

    1972-01-01

    High-accuracy spacecraft tracking requires tropospheric modeling which is generally scaled by either estimated or measured values of surface refractivity. This report summarizes the results of a worldwide surface-refractivity test conducted in 1968 in support of the Apollo program. The results are directly applicable to all NASA radio-tracking systems.

  4. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    PubMed Central

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  5. Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.

    1994-01-01

    A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of patches, however, change the geometry and material properties of the structure and involve unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.

  6. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering.

    PubMed

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-05-23

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.

  7. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of the IGGLOSS is its high-speed simulation: 9.5 x 10^6 gates/CPU second for nonfaulted circuits and 4.4 x 10^6 gates/CPU second for faulted circuits on a VAX 11/780 host computer.

  8. Space Operations Center system analysis. Volume 3, book 2: SOC system definition report, revision A

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Space Operations Center (SOC) orbital space station program operations are described. A work breakdown structure for the general purpose support equipment, construction and transportation support, and resupply and logistics support systems is given. The basis for the design of each element is presented, and a mass estimate for each element supplied. The SOC build-up operation, construction, flight support, and satellite servicing operations are described. Detailed programmatics and cost analysis are presented.

  9. Improved importance sampling technique for efficient simulation of digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evolutions of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evolutions are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
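    To make the contrast between plain Monte Carlo and importance sampling concrete, the sketch below estimates a small tail probability by biasing the sampling density and reweighting with the likelihood ratio. It is a generic, hypothetical illustration of the underlying idea only; it is not the CIS or IIS estimator derived in the paper, and the threshold and sample sizes are invented.

```python
import numpy as np

# Minimal importance-sampling sketch for a rare-event probability P(X > t), X ~ N(0,1).
rng = np.random.default_rng(42)
t = 4.0                      # hypothetical decision threshold; true P(X > t) is about 3.2e-5
n = 100_000

# Plain Monte Carlo: almost no samples fall beyond the threshold at this sample size.
x_mc = rng.standard_normal(n)
p_mc = np.mean(x_mc > t)

# Importance sampling: draw from the shifted density N(t, 1) and weight each sample by
# the likelihood ratio f(x)/g(x) = exp(-t*x + t**2/2).
x_is = rng.standard_normal(n) + t
w = np.exp(-t * x_is + 0.5 * t**2)
p_is = np.mean(w * (x_is > t))

print(f"MC estimate : {p_mc:.2e}")   # usually 0 or wildly off at this n
print(f"IS estimate : {p_is:.2e}")   # close to the true tail probability
```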

  10. Intelligent complementary sliding-mode control for LUSMS-based X-Y-theta motion control stage.

    PubMed

    Lin, Faa-Jeng; Chen, Syuan-Yi; Shyu, Kuo-Kai; Liu, Yen-Hung

    2010-07-01

    An intelligent complementary sliding-mode control (ICSMC) system using a recurrent wavelet-based Elman neural network (RWENN) estimator is proposed in this study to control the mover position of a linear ultrasonic motors (LUSMs)-based X-Y-theta motion control stage for the tracking of various contours. By the addition of a complementary generalized error transformation, the complementary sliding-mode control (CSMC) can efficiently reduce the guaranteed ultimate bound of the tracking error by half compared with the sliding-mode control (SMC) while using the saturation function. To estimate a lumped uncertainty on-line and replace the hitting control of the CSMC directly, the RWENN estimator is adopted in the proposed ICSMC system. In the RWENN, each hidden neuron employs a different wavelet function as an activation function to improve both the convergent precision and the convergent time compared with the conventional Elman neural network (ENN). The estimation laws of the RWENN are derived using the Lyapunov stability theorem to train the network parameters on-line. A robust compensator is also proposed to confront the uncertainties including approximation error, optimal parameter vectors, and higher-order terms in Taylor series. Finally, experimental results of tracking various contours show that the tracking performance of the ICSMC system is significantly improved compared with the SMC and CSMC systems.

  11. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
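    One of the error sources discussed above, a systematic error in the measured current, can be illustrated with a minimal open-loop Coulomb-counting sketch: a constant sensor bias turns into an SOC error that grows steadily until some model- or voltage-based feedback corrects it. All numbers are hypothetical and the cell model is deliberately trivial.

```python
import numpy as np

# Hypothetical 50 Ah cell discharged at 10 A for one hour, tracked by Coulomb counting.
capacity_As = 50.0 * 3600.0                  # capacity in ampere-seconds
dt = 1.0                                     # 1 s sampling
t = np.arange(0, 3600, dt)
true_current = 10.0 * np.ones_like(t)        # discharge current (discharge positive)
sensor_bias = 0.2                            # hypothetical +0.2 A current-sensor bias

soc_true = 0.9 - np.cumsum(true_current) * dt / capacity_As
soc_cc   = 0.9 - np.cumsum(true_current + sensor_bias) * dt / capacity_As

# The open-loop error grows linearly with time; here ~0.4 % after one hour.
print(f"SOC error after 1 h of Coulomb counting: {abs(soc_cc[-1] - soc_true[-1]):.4f}")
```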

  12. Systematic review of general thoracic surgery articles to identify predictors of operating room case durations.

    PubMed

    Dexter, Franklin; Dexter, Elisabeth U; Masursky, Danielle; Nussmeier, Nancy A

    2008-04-01

    Previous studies of operating room (OR) information systems data over the past two decades have shown how to predict case durations using the combination of scheduled procedure(s), individual surgeon and assistant(s), and type of anesthetic(s). We hypothesized that the accuracy of case duration prediction could be improved by the use of other electronic medical record data (e.g., patient weight or surgeon notes using standardized vocabularies). General thoracic surgery was used as a model specialty because much of its workload is elective (scheduled) and many of its cases are long. PubMed was searched for thoracic surgery papers reporting operative time, surgical time, etc. The systematic literature review identified 48 papers reporting statistically significant differences in perioperative times. There were multiple reports of differences in OR times based on the procedure(s), perioperative team including primary surgeon, and type of anesthetic, in that sequence of importance. All such detail may not be known when the case is originally scheduled and thus may require an updated duration the day before surgery. Although the use of these categorical data from OR systems can result in few historical data for estimating each case's duration, bias and imprecision of case duration estimates are unlikely to be affected. There was a report of a difference in case duration based on additional information. However, the incidence of the procedure for the diagnosis was so uncommon as to be unlikely to affect OR management. Matching findings of prior studies using OR information system data, multiple case series show that it is important to rely on the precise procedure(s), surgical team, and type of anesthetic when estimating case durations. OR information systems need to incorporate the statistical methods designed for small numbers of prior surgical cases. Future research should focus on the most effective methods to update the prediction of each case's duration as these data become available. The case series did not reveal additional data which could be cost-effectively integrated with OR information systems data to improve the accuracy of predicted durations for general thoracic surgery cases.

  13. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    PubMed

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.

  14. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
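    The selection-and-average step the abstract describes can be sketched schematically: keep only the spatially varying posterior parameter values whose ensemble spread is small, then average them into one global value. The sketch below illustrates that idea with synthetic numbers and an arbitrary spread threshold; it is not the authors' implementation.

```python
import numpy as np

# Synthetic stand-in for an ensemble filter's spatially varying posterior parameter
# estimates: at each grid point we have a posterior mean and an ensemble spread.
rng = np.random.default_rng(0)
n_grid = 1000
true_param = 2.0
post_spread = rng.uniform(0.05, 1.0, n_grid)                   # ensemble spread per grid point
post_mean = true_param + post_spread * rng.standard_normal(n_grid)

# ASA-style selection: "good" points are those with small ensemble spread
# (the 25% quantile threshold here is an arbitrary illustrative choice).
keep = post_spread < np.quantile(post_spread, 0.25)
theta_asa = post_mean[keep].mean()         # final global uniform posterior parameter
theta_plain = post_mean.mean()             # simple spatial average, for contrast

# Low-spread points are less noisy, so the adaptive average is typically the more
# reliable global estimate.
print(f"plain spatial average   : {theta_plain:.3f}")
print(f"adaptive spatial average: {theta_asa:.3f} (true value {true_param})")
```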

  15. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
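    The computational pattern the report describes, solving the likelihood score equations by repeated Newton-Raphson iterations, can be shown with a generic two-parameter example. The sketch below fits a gamma distribution by Newton-Raphson on its score vector and Hessian; it is not the NDMMF itself, and the data are synthetic.

```python
import numpy as np
from scipy.special import digamma, polygamma

# Newton-Raphson MLE for a gamma(shape a, rate b) model: solve the two simultaneous
# score equations by iterating theta_new = theta - H^{-1} * score.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.5, scale=1 / 1.5, size=5000)   # synthetic data, true (a, b) = (2.5, 1.5)
n, sx, slx = x.size, x.sum(), np.log(x).sum()

a, b = 1.0, 1.0                                      # starting values
for _ in range(50):
    score = np.array([n * np.log(b) - n * digamma(a) + slx,   # d logL / da
                      n * a / b - sx])                        # d logL / db
    hess = np.array([[-n * polygamma(1, a), n / b],
                     [n / b, -n * a / b**2]])
    step = np.linalg.solve(hess, score)
    a, b = a - step[0], b - step[1]
    if np.max(np.abs(step)) < 1e-10:
        break

print(f"MLE: shape={a:.3f}, rate={b:.3f}")           # should land close to (2.5, 1.5)
```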

  16. Using a Discrete-Choice Experiment Involving Cost to Value a Classification System Measuring the Quality-of-Life Impact of Self-Management for Diabetes.

    PubMed

    Rowen, Donna; Stevens, Katherine; Labeit, Alexander; Elliott, Jackie; Mulhern, Brendan; Carlton, Jill; Basarir, Hasan; Ratcliffe, Julie; Brazier, John

    2018-01-01

    To describe the use of a novel approach in health valuation of a discrete-choice experiment (DCE) including a cost attribute to value a recently developed classification system for measuring the quality-of-life impact (both health and treatment experience) of self-management for diabetes. A large online survey was conducted using DCE with cost on UK respondents from the general population (n = 1497) and individuals with diabetes (n = 405). The data were modeled using a conditional logit model with robust standard errors. The marginal rate of substitution was used to generate willingness-to-pay (WTP) estimates for every state defined by the classification system. Robustness of results was assessed by including interaction effects for household income. There were some logical inconsistencies and insignificant coefficients for the milder levels of some attributes. There were some differences in the rank ordering of different attributes for the general population and diabetic patients. The WTP to avoid the most severe state was £1118.53 per month for the general population and £2356.02 per month for the diabetic patient population. The results were largely robust. Health and self-management can be valued in a single classification system using DCE with cost. The marginal rate of substitution for key attributes can be used to inform cost-benefit analysis of self-management interventions in diabetes using results from clinical studies in which this new classification system has been applied. The method shows promise, but found large WTP estimates exceeding the cost levels used in the survey. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
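    The willingness-to-pay figures in this record rest on the marginal-rate-of-substitution calculation for a conditional logit: the WTP to avoid an attribute level is the (negated) ratio of that level's coefficient to the cost coefficient. The sketch below shows the arithmetic with invented coefficients and attribute names; these are not the study's estimates.

```python
# Marginal rate of substitution in a conditional logit: WTP = -beta_level / beta_cost.
# All coefficients and attribute labels below are hypothetical placeholders.
beta_cost = -0.004            # utility per GBP of monthly cost (negative by construction)
beta_levels = {
    "severe problems with diet": -1.10,
    "severe problems with hypo awareness": -0.85,
    "moderate problems with diet": -0.40,
}

for level, beta in beta_levels.items():
    wtp = -beta / beta_cost   # GBP per month to avoid this level
    print(f"WTP to avoid '{level}': £{wtp:,.2f} per month")
```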

  17. Extremal entanglement and mixedness in continuous variable systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-08-01

    We investigate the relationship between mixedness and entanglement for Gaussian states of continuous variable systems. We introduce generalized entropies based on Schatten p-norms to quantify the mixedness of a state and derive their explicit expressions in terms of symplectic spectra. We compare the hierarchies of mixedness provided by such measures with the one provided by the purity (defined as Tr ρ² for the state ρ) for generic n-mode states. We then review the analysis proving the existence of both maximally and minimally entangled states at given global and marginal purities, with the entanglement quantified by the logarithmic negativity. Based on these results, we extend such an analysis to generalized entropies, introducing and fully characterizing maximally and minimally entangled states for given global and local generalized entropies. We compare the different roles played by the purity and by the generalized p-entropies in quantifying the entanglement and the mixedness of continuous variable systems. We introduce the concept of average logarithmic negativity, showing that it allows a reliable quantitative estimate of continuous variable entanglement by direct measurements of global and marginal generalized p-entropies.

  18. An enhanced obstacle avoiding system for AUVs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conte, G.; Zanoli, S.M.

    1994-12-31

    This paper concerns the development of a sonar-based navigation and guidance system for underwater, unmanned vehicles. In particular, the authors describe and discuss an obstacle avoidance procedure that is capable of dealing with situations involving several obstacles. The main features of the system are the use of a Kalman filter, both for estimating data and for predicting the evolution of the observed scene, and the possibility of working at different levels of data abstraction. The system has shown satisfactory performance in dealing with moving obstacles in general situations.

  19. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
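    One standard way to write the "optimally weighted average of forward and backward propagating Kalman filters" is the two-filter (Fraser-Potter) combination below, stated under the assumption that the two estimates are effectively independent; it is a textbook form, not necessarily the exact algorithm used in this study.

```latex
% Two-filter (Fraser--Potter) smoother combination of the forward estimate
% (\hat{x}_f, P_f) and the backward estimate (\hat{x}_b, P_b):
\begin{align}
  P_s       &= \left(P_f^{-1} + P_b^{-1}\right)^{-1}, \\
  \hat{x}_s &= P_s\left(P_f^{-1}\hat{x}_f + P_b^{-1}\hat{x}_b\right).
\end{align}
% When the two covariances are comparable, P_s \approx P_f/2, which matches the
% "roughly halves the error covariance" behavior noted in the abstract.
```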

  20. Estimation of Dynamic Systems for Gene Regulatory Networks from Dependent Time-Course Data.

    PubMed

    Kim, Yoonji; Kim, Jaejik

    2018-06-15

    A dynamic system consisting of ordinary differential equations (ODEs) is a well-known tool for describing the dynamic nature of gene regulatory networks (GRNs), and the dynamic features of GRNs are usually captured through time-course gene expression data. Owing to high-throughput technologies, time-course gene expression data have complex structures such as heteroscedasticity, correlations between genes, and time dependence. Since gene experiments typically yield highly noisy data with small sample sizes, for a more accurate prediction of the dynamics, these complex structures should be taken into account in ODE models. Hence, this study proposes an ODE model considering such data structures and a fast and stable estimation method for the ODE parameters based on the generalized profiling approach with data smoothing techniques. The proposed method also provides statistical inference for the ODE estimator, and it is applied to a zebrafish retina cell network.
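    A much simpler relative of the estimation problem described above is plain trajectory-matching nonlinear least squares: integrate a candidate ODE, compare with the noisy time-course data, and adjust the parameters. The sketch below does exactly that for an invented two-gene toy network; it is not the generalized profiling method with data smoothing that the paper proposes.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy two-gene network: gene 2 represses gene 1, gene 1 activates gene 2.
def rhs(t, x, a1, b1, a2, b2):
    x1, x2 = x
    return [a1 / (1.0 + x2**2) - b1 * x1,
            a2 * x1 / (1.0 + x1) - b2 * x2]

t_obs = np.linspace(0, 10, 21)
true_p = np.array([1.0, 0.4, 0.8, 0.3])
y0 = [0.2, 0.1]
sol = solve_ivp(rhs, (0, 10), y0, t_eval=t_obs, args=tuple(true_p))
rng = np.random.default_rng(1)
y_obs = sol.y + 0.02 * rng.standard_normal(sol.y.shape)   # noisy "expression" data

def residuals(p):
    s = solve_ivp(rhs, (0, 10), y0, t_eval=t_obs, args=tuple(p))
    return (s.y - y_obs).ravel()

fit = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5, 0.5]), bounds=(0, 5))
print(np.round(fit.x, 3))          # should land near [1.0, 0.4, 0.8, 0.3]
```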

  1. Experimental verification of a GPC-LPV method with RLS and P1-TS fuzzy-based estimation for limiting the transient and residual vibration of a crane system

    NASA Astrophysics Data System (ADS)

    Smoczek, Jaroslaw

    2015-10-01

    The paper deals with the problem of reducing the residual vibration and limiting the transient oscillations of a flexible and underactuated system with respect to the variation of operating conditions. A comparative study of generalized predictive control (GPC) and a fuzzy scheduling scheme, developed based on the P1-TS fuzzy theory, a local pole placement method and interval analysis of closed-loop system polynomial coefficients, is addressed to the problem of flexible crane control. Two alternatives of a GPC-based method are proposed that enable this technique to be realized either with or without a sensor of payload deflection. The first control technique is based on the recursive least squares (RLS) method applied to estimate on-line the parameters of a linear parameter varying (LPV) model of the crane dynamic system. The second GPC-based approach relies on payload deflection feedback estimated using a pendulum model with parameters interpolated using the P1-TS fuzzy system. The feasibility and applicability of the developed methods were confirmed through experimental verification performed on a laboratory-scale overhead crane.
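    The first variant above hinges on recursive least squares (RLS) updating the parameters of an LPV model on-line. The sketch below shows a standard RLS loop with a forgetting factor identifying a simple second-order ARX model; the model structure and numbers are hypothetical and it is not the paper's crane model or GPC controller.

```python
import numpy as np

# Standard RLS with forgetting factor, estimating y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2].
rng = np.random.default_rng(3)
N = 2000
true_theta = np.array([1.5, -0.7, 0.1, 0.05])        # hypothetical stable plant
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    phi = np.array([y[k-1], y[k-2], u[k-1], u[k-2]])
    y[k] = phi @ true_theta + 0.01 * rng.standard_normal()

theta = np.zeros(4)
P = 1e4 * np.eye(4)                                  # large initial covariance = low confidence
lam = 0.995                                          # forgetting factor (< 1 tracks slow variation)
for k in range(2, N):
    phi = np.array([y[k-1], y[k-2], u[k-1], u[k-2]])
    K = P @ phi / (lam + phi @ P @ phi)              # gain
    theta = theta + K * (y[k] - phi @ theta)         # prediction-error update
    P = (P - np.outer(K, phi) @ P) / lam             # covariance update

print(np.round(theta, 3))                            # approaches [1.5, -0.7, 0.1, 0.05]
```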

  2. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manwaring, John, E-mail: manwaring.jd@pg.com; Rothe, Helga; Obringer, Cindy

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as they offer an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound were assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin, the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated mean residence time in the viable epidermis; e) the viable epidermis thickness; and f) the skin permeability coefficient. In a next step, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and the fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human skin explants and HaCaT • Systemic metabolism was modeled using hepatocyte cultures. • Toxicokinetically relevant parameters were applied to estimate systemic exposure. • There was a good agreement between in vitro and in vivo data.
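    The clearance-scaling chain summarized above can be written out with the standard well-stirred liver model: scale the in vitro Vmax/Km to a whole-liver intrinsic clearance, combine it with hepatic blood flow and the unbound fraction, and divide the internal dose by the resulting clearance to get an AUC (assuming hepatic clearance dominates). Every number in the sketch is a hypothetical placeholder, not a value from the study.

```python
# In-vitro-to-in-vivo scaling of hepatic clearance with the well-stirred liver model.
# All values are hypothetical placeholders.
vmax = 200.0                # pmol/min per 10^6 hepatocytes (in vitro)
km = 50.0                   # uM
hepatocellularity = 120e6   # cells per gram of liver (typical literature value)
liver_mass = 1800.0         # g
q_h = 1.45                  # hepatic blood flow, L/min
fu = 0.3                    # fraction unbound in blood

# Scaled intrinsic clearance, converted to L/min.
clint_invitro = vmax / km                                              # uL/min per 10^6 cells
clint = clint_invitro * (hepatocellularity / 1e6) * liver_mass * 1e-6  # L/min

# Well-stirred model: CL_h = Q_h * fu * CLint / (Q_h + fu * CLint).
cl_h = q_h * fu * clint / (q_h + fu * clint)

internal_dose = 0.5         # mg reaching the circulation unchanged (from the skin model)
auc = internal_dose / cl_h  # mg*min/L, assuming hepatic clearance dominates elimination
print(f"hepatic clearance: {cl_h:.3f} L/min, AUC: {auc:.1f} mg*min/L")
```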

  3. Improved solution accuracy for TDRSS-based TOPEX/Poseidon orbit determination

    NASA Technical Reports Server (NTRS)

    Doll, C. E.; Mistretta, G. D.; Hart, R. C.; Oza, D. H.; Bolvin, D. T.; Cox, C. M.; Nemesure, M.; Niklewski, D. J.; Samii, M. V.

    1994-01-01

    Orbit determination results are obtained by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) using a batch-least-squares estimator available in the Goddard Trajectory Determination System (GTDS) and an extended Kalman filter estimation system to process Tracking and Data Relay Satellite (TDRS) System (TDRSS) measurements. GTDS is the operational orbit determination system used by the FDD in support of the Ocean Topography Experiment (TOPEX)/Poseidon spacecraft navigation and health and safety operations. The extended Kalman filter was implemented in an orbit determination analysis prototype system, closely related to the Real-Time Orbit Determination System/Enhanced (RTOD/E) system. In addition, the Precision Orbit Determination (POD) team within the GSFC Space Geodesy Branch generated an independent set of high-accuracy trajectories to support the TOPEX/Poseidon scientific data. These latter solutions use the geodynamics (GEODYN) orbit determination system with laser ranging and Doppler Orbitography and Radiopositioning integrated by satellite (DORIS) tracking measurements. The TOPEX/Poseidon trajectories were estimated for November 7 through November 11, 1992, the timeframe under study. Independent assessments were made of the consistencies of solutions produced by the batch and sequential methods. The batch-least-squares solutions were assessed based on the solution residuals, while the sequential solutions were assessed based on primarily the estimated covariances. The batch-least-squares and sequential orbit solutions were compared with the definitive POD orbit solutions. The solution differences were generally less than 2 meters for the batch-least-squares and less than 13 meters for the sequential estimation solutions. After the sequential estimation solutions were processed with a smoother algorithm, position differences with POD orbit solutions of less than 7 meters were obtained. The differences among the POD, GTDS, and filter/smoother solutions can be traced to differences in modeling and tracking data types, which are being analyzed in detail.

  4. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
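    The core of such a scheme is the normalized LMS weight update. The sketch below shows it in a plain system-identification setting with a hypothetical FIR plant, without the variable-gain or noise-suppression extensions that the paper adds.

```python
import numpy as np

# Normalized LMS identifying an unknown FIR response (hypothetical plant).
rng = np.random.default_rng(7)
plant = np.array([0.8, -0.4, 0.2, 0.05])      # unknown FIR response to be identified
L, mu, eps = len(plant), 0.5, 1e-6

w = np.zeros(L)                                # adaptive filter weights
x_buf = np.zeros(L)                            # most recent inputs, newest first
for _ in range(5000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    d = plant @ x_buf + 0.01 * rng.standard_normal()   # desired (plant) output
    e = d - w @ x_buf                                   # a priori error
    w += (mu / (eps + x_buf @ x_buf)) * e * x_buf       # NLMS update

print(np.round(w, 3))                          # converges toward the plant response
```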

  5. Dynamic modeling, property investigation, and adaptive controller design of serial robotic manipulators modeled with structural compliance

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her

    1990-01-01

    Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first and second order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, which is defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms and is adaptive in nature; it can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.

  6. Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study

    USDA-ARS?s Scientific Manuscript database

    Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...

  7. Using a generalized version of the Titius-Bode relation to extrapolate the patterns seen in Kepler multi-exoplanet systems, and estimate the average number of planets in circumstellar habitable zones

    NASA Astrophysics Data System (ADS)

    Lineweaver, Charles H.

    2015-08-01

    The Titius-Bode (TB) relation’s successful prediction of the period of Uranus was the main motivation that led to the search for another planet between Mars and Jupiter. This search led to the discovery of the asteroid Ceres and the rest of the asteroid belt. The TB relation can also provide useful hints about the periods of as-yet-undetected planets around other stars. In Bovaird & Lineweaver (2013) [1], we used a generalized TB relation to analyze 68 multi-planet systems with four or more detected exoplanets. We found that the majority of exoplanet systems in our sample adhered to the TB relation to a greater extent than the Solar System does. Thus, the TB relation can make useful predictions about the existence of as-yet-undetected planets in Kepler multi-planet systems. These predictions are one way to correct for the main obstacle preventing us from estimating the number of Earth-like planets in the universe. That obstacle is the incomplete sampling of planets of Earth-mass and smaller [2-5]. In [6], we use a generalized Titius-Bode relation to predict the periods of 228 additional planets in 151 of these Kepler multiples. These Titius-Bode-based predictions suggest that there are, on average, 2±1 planets in the habitable zone of each star. We also estimate the inclination of the invariable plane for each system and prioritize our planet predictions by their geometric probability to transit. We highlight a short list of 77 predicted planets in 40 systems with a high geometric probability to transit, resulting in an expected detection rate of ~15 per cent, ~3 times higher than the detection rate of our previous Titius-Bode-based predictions. References: [1] Bovaird, T. & Lineweaver, C. H. (2013) MNRAS, 435, 1126-1138. [2] Dong, S. & Zhu, Z. (2013) ApJ, 778, 53. [3] Fressin, F. et al. (2013) ApJ, 766, 81. [4] Petigura, E. A. et al. (2013) PNAS, 110, 19273. [5] Silburt, A. et al. (2014) ApJ (arXiv:1406.6048v2). [6] Bovaird, T., Lineweaver, C. H. & Jacobsen, S. K. (2015, in press) MNRAS, arXiv:1412.6230v3.
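    The generalized Titius-Bode relation used in these papers treats the periods in a multi-planet system as approximately a geometric progression, P_n ≈ P_0·α^n, i.e. a straight line in log-period versus planet index. The sketch below fits that line to made-up periods and predicts a missing planet's period; it omits the papers' adjacency statistics, chi-squared fitting and insertion machinery.

```python
import numpy as np

# Fit log(P_n) = log(P_0) + n*log(alpha) to detected planets and predict the gap at n = 2.
periods = np.array([3.1, 5.9, 21.7, 41.5])     # days; hypothetical detected periods
n_index = np.array([0, 1, 3, 4])               # assumed indices, leaving n = 2 empty

slope, intercept = np.polyfit(n_index, np.log(periods), 1)
alpha, p0 = np.exp(slope), np.exp(intercept)
predicted_missing = p0 * alpha**2

print(f"alpha = {alpha:.2f}, P0 = {p0:.2f} d")
print(f"predicted period of the missing planet: {predicted_missing:.1f} d")
```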

  8. Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method

    NASA Astrophysics Data System (ADS)

    Kenderi, Gábor; Fidlin, Alexander

    2014-12-01

    The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.

  9. A Fault Tolerant System for an Integrated Avionics Sensor Configuration

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Lancraft, R. E.

    1984-01-01

    An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing an excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.

  10. Estimating hydraulic properties of the Floridan Aquifer System by analysis of earth-tide, ocean-tide, and barometric effects, Collier and Hendry Counties, Florida

    USGS Publications Warehouse

    Merritt, Michael L.

    2004-01-01

    Aquifers are subjected to mechanical stresses from natural, non-anthropogenic processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied. An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem. A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples. Methods are described for determining aquifer diffusivity by comparing the time-varying drawdown in an open well with periodic pressure-head oscillations in the aquifer, but the applicability of such methods might be limited in studies of the Floridan aquifer system.

  11. PEITH(Θ): perfecting experiments with information theory in Python with GPU support.

    PubMed

    Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane

    2018-04-01

    Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general-purpose Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory in order to derive which experiments are most informative for estimating all model parameters and/or performing model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
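    Frameworks of this kind typically rank candidate experiments by their expected information gain, the mutual information between parameters and simulated data. The sketch below is a generic nested Monte Carlo estimate of that quantity for a toy linear-Gaussian experiment with an analytic answer to check against; it is not PEITH(Θ)'s own code, API or models.

```python
import numpy as np
from scipy.stats import norm

# Nested Monte Carlo estimate of the expected information gain for y = x*theta + noise,
# theta ~ N(0,1), noise ~ N(0, sigma^2). The scalar "design" is x.
def expected_information_gain(x, sigma=1.0, n_outer=2000, n_inner=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)                     # prior draws
    y = x * theta + sigma * rng.standard_normal(n_outer)     # simulated data
    log_lik = norm.logpdf(y, loc=x * theta, scale=sigma)
    theta_inner = rng.standard_normal(n_inner)               # fresh prior draws
    # log p(y) approximated by log of the average likelihood over fresh prior samples
    log_marg = np.array([
        np.log(np.mean(norm.pdf(yi, loc=x * theta_inner, scale=sigma))) for yi in y
    ])
    return np.mean(log_lik - log_marg)

for design in (0.5, 1.0, 3.0):
    analytic = 0.5 * np.log(1 + design**2)                   # closed form for this toy model
    print(f"x = {design}: EIG ~ {expected_information_gain(design):.3f} "
          f"(analytic {analytic:.3f})")
```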

  12. Aircraft- and tower-based fluxes of carbon dioxide, latent, and sensible heat

    NASA Technical Reports Server (NTRS)

    Desjardins, R. L.; Hart, R. L.; Macpherson, J. I.; Schuepp, P. H.; Verma, S. B.

    1992-01-01

    Fluxes of carbon dioxide, water vapor, and sensible heat obtained over a grassland ecosystem, during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), using an aircraft- and two tower-based systems are compared for several days in 1987 and in 1989. The tower-based cospectral estimates of CO2, sensible heat, water vapor, and momentum, expressed as a function of wavenumber K times sampling height z, are relatively similar to the aircraft-based estimates for K x z greater than 0.1. A measurable contribution to the fluxes is observed by tower-based systems at K x z less than 0.01 but not by the aircraft-based system operating at an altitude of approximately 100 m over a 15 x 15 km area. Using all available simultaneous aircraft and tower data, flux estimates by both systems were shown to be highly correlated. As expected from the spatial variations of the greenness index, surface extrapolation of airborne flux estimates tended to lie between those of the two tower sites. The average fluxes obtained, on July 11, 1987, and August 4, 1989, by flying a grid pattern over the FIFE site agreed with the two tower data sets for CO2, but sensible and latent heat were smaller than those obtained by the tower-based systems. However, in general, except for a small underestimation due to the long wavelength contributions and due to flux divergence with height, the differences between the aircraft- and tower-based surface estimates of fluxes appear to be mainly attributable to differences in footprint, that is, differences in the area contributing to the surface flux estimates.

  13. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  14. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
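    The rotational-invariance idea both of these records build on is easiest to see in its simplest form: one-dimensional ESPRIT for point sources on a uniform linear array, sketched below with synthetic data. The papers' extension to 2D angles and incoherently/coherently distributed sources is substantially more involved; this is only the textbook building block.

```python
import numpy as np

def esprit_doa_1d(X, n_sources, d_over_lambda=0.5):
    """Estimate 1D DOAs (radians) from snapshots X (sensors x snapshots) of a ULA."""
    M, N = X.shape
    R = X @ X.conj().T / N                      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)        # ascending eigenvalues
    Us = eigvecs[:, -n_sources:]                # signal subspace
    U1, U2 = Us[:-1, :], Us[1:, :]              # two shift-invariant subarrays
    Psi = np.linalg.pinv(U1) @ U2               # rotational operator (LS solution)
    phases = np.angle(np.linalg.eigvals(Psi))   # phases 2*pi*(d/lambda)*sin(theta)
    return np.arcsin(phases / (2 * np.pi * d_over_lambda))

# Synthetic test: two point sources at -20 and 30 degrees, 8-element half-wavelength ULA.
rng = np.random.default_rng(0)
M, N = 8, 500
true_deg = np.array([-20.0, 30.0])
k = 2 * np.pi * 0.5 * np.sin(np.deg2rad(true_deg))                 # spatial frequencies
A = np.exp(1j * np.outer(np.arange(M), k))                         # ULA steering matrix
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
print(np.rad2deg(np.sort(esprit_doa_1d(X, 2))))                    # approx [-20, 30]
```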

  15. Constellation Program Life-cycle Cost Analysis Model (LCAM)

    NASA Technical Reports Server (NTRS)

    Prince, Andy; Rose, Heidi; Wood, James

    2008-01-01

    The Constellation Program (CxP) is NASA's effort to replace the Space Shuttle, return humans to the moon, and prepare for a human mission to Mars. The major elements of the Constellation Lunar sortie design reference mission architecture are shown. Unlike the Apollo Program of the 1960's, affordability is a major concern of United States policy makers and NASA management. To measure Constellation affordability, a total ownership cost life-cycle parametric cost estimating capability is required. This capability is being developed by the Constellation Systems Engineering and Integration (SE&I) Directorate, and is called the Lifecycle Cost Analysis Model (LCAM). The requirements for LCAM are based on the need to have a parametric estimating capability in order to do top-level program analysis, evaluate design alternatives, and explore options for future systems. By estimating the total cost of ownership within the context of the planned Constellation budget, LCAM can provide Program and NASA management with the cost data necessary to identify the most affordable alternatives. LCAM is also a key component of the Integrated Program Model (IPM), an SE&I developed capability that combines parametric sizing tools with cost, schedule, and risk models to perform program analysis. LCAM is used in the generation of cost estimates for system level trades and analyses. It draws upon the legacy of previous architecture level cost models, such as the Exploration Systems Mission Directorate (ESMD) Architecture Cost Model (ARCOM) developed for Simulation Based Acquisition (SBA), and ATLAS. LCAM is used to support requirements and design trade studies by calculating changes in cost relative to a baseline option cost. Estimated costs are generally low fidelity to accommodate available input data and available cost estimating relationships (CERs). LCAM is capable of interfacing with the Integrated Program Model to provide the cost estimating capability for that suite of tools.

  16. Suicide among people with epilepsy: A population-based analysis of data from the U.S. National Violent Death Reporting System, 17 states, 2003-2011.

    PubMed

    Tian, Niu; Cui, Wanjun; Zack, Matthew; Kobau, Rosemarie; Fowler, Katherine A; Hesdorffer, Dale C

    2016-08-01

    This study analyzed suicide data in the general population from the U.S. National Violent Death Reporting System (NVDRS) to investigate the suicide burden among those with epilepsy and risk factors associated with suicide, and to suggest measures to prevent suicide among people with epilepsy. The NVDRS is a multiple-state, population-based, active surveillance system that collects information on violent deaths, including suicide. Among people 10 years old and older, we identified 972 suicide cases with epilepsy and 81,529 suicide cases without epilepsy in 17 states from 2003 through 2011. We estimated their suicide rates, evaluated suicide risk among people with epilepsy, and investigated suicide risk factors specific to epilepsy by comparing those with and without epilepsy. In 16 of the 17 states providing continual data from 2005 through 2011, we also compared suicide trends in people with epilepsy (n=833) and without epilepsy (n=68,662). From 2003 through 2011, the estimated annual suicide mortality rate among people with epilepsy was 16.89 per 100,000 persons, 22% higher than that in the general population. Compared with those without epilepsy, those with epilepsy were more likely to have died from suicide in houses, apartments, or residential institutions (81% vs. 76%, respectively) and were twice as likely to poison themselves (38% vs. 17%) (P<0.01). More of those with epilepsy aged 40-49 died from suicide than comparably aged persons without epilepsy (29% vs. 22%) (P<0.01). The proportion of suicides among those with epilepsy increased steadily from 2005 through 2010, peaking significantly in 2010 before falling. For the first time, the suicide rate among people with epilepsy in a large U.S. general population was estimated, and the suicide risk exceeded that in the general population. Suicide prevention efforts should target people with epilepsy 40-49 years old. Additional preventive efforts include reducing the availability of, or exposure to, poisons, especially at home, and supporting other evidence-based programs to reduce the mental illness comorbidity associated with suicide. Published by Elsevier Inc.

  17. Tritium as an indicator of ground-water age in Central Wisconsin

    USGS Publications Warehouse

    Bradbury, Kenneth R.

    1991-01-01

    In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.

  18. Remotely piloted vehicle: Application of the GRASP analysis method

    NASA Technical Reports Server (NTRS)

    Andre, W. L.; Morris, J. B.

    1981-01-01

    The application of General Reliability Analysis Simulation Program (GRASP) to the remotely piloted vehicle (RPV) system is discussed. The model simulates the field operation of the RPV system. By using individual component reliabilities, the overall reliability of the RPV system is determined. The results of the simulations are given in operational days. The model represented is only a basis from which more detailed work could progress. The RPV system in this model is based on preliminary specifications and estimated values. The use of GRASP from basic system definition, to model input, and to model verification is demonstrated.

  19. A weak Galerkin least-squares finite element method for div-curl systems

    NASA Astrophysics Data System (ADS)

    Li, Jichun; Ye, Xiu; Zhang, Shangyou

    2018-06-01

    In this paper, we introduce a weak Galerkin least-squares method for solving the div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes, such as hybrid meshes, polytopal meshes, and meshes with hanging nodes. Error estimates of the finite element solution are derived. Numerical examples demonstrate the robustness and flexibility of the proposed method.

  20. A Prolog System for Converting VHDL-Based Models to Generalized Extraction System (GES) Rules

    DTIC Science & Technology

    1991-06-01


  1. Inverse problem studies of biochemical systems with structure identification of S-systems by embedding training functions in a genetic algorithm.

    PubMed

    Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D

    2016-05-01

    An efficient inverse problem approach for parameter estimation, state, and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. The use of multiple shooting and a decomposition approach as training functions is shown to handle noisy datasets and to provide computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by an S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA in comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells, now assuming limited availability of noisy data. Here, the flexibility of the approach to incorporate partial system information in the identification process is shown, and its effect on the accuracy and predictive ability of the estimated model is studied. The third example studies a phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from the S-system power law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
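
    For readers unfamiliar with the S-system canonical form, each state variable evolves as the difference of two power-law terms. The sketch below integrates a small two-variable S-system; the parameter values are illustrative placeholders, not the gene-regulatory parameters estimated in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def s_system(t, x, alpha, beta, g, h):
    """S-system canonical form:
    dx_i/dt = alpha_i * prod_j x_j**g_ij - beta_i * prod_j x_j**h_ij
    """
    prod_g = np.prod(x ** g, axis=1)     # production terms
    prod_h = np.prod(x ** h, axis=1)     # degradation terms
    return alpha * prod_g - beta * prod_h

# Illustrative two-variable network (placeholder kinetic orders and rates,
# not values from the paper); states stay positive for this choice.
alpha = np.array([2.0, 3.0])
beta = np.array([1.5, 2.0])
g = np.array([[0.0, -0.8],
              [0.5,  0.0]])
h = np.array([[0.6, 0.0],
              [0.0, 0.7]])

sol = solve_ivp(s_system, (0.0, 10.0), [1.0, 0.5],
                args=(alpha, beta, g, h), dense_output=True)
print(sol.y[:, -1])   # state at t = 10
```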

  2. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method that consistently estimates the group means, together with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
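
    The distinction the abstract draws can be illustrated with marginal standardization: rather than predicting at the mean covariate, predictions are averaged over the covariate distribution. The sketch below shows this idea for a logistic model on simulated data; it illustrates the general principle, not the authors' specific estimator or its variance formula.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: binary outcome, two treatment groups, one baseline covariate.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"group": rng.integers(0, 2, n),
                   "x": rng.normal(size=n)})
eta = -0.5 + 1.0 * df["group"] + 0.8 * df["x"]
df["y"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

fit = smf.glm("y ~ group + x", data=df, family=sm.families.Binomial()).fit()

# "Mean covariate" estimate: response predicted at the average x (can be
# biased for the population group mean under a nonlinear link).
at_mean = fit.predict(pd.DataFrame({"group": [0, 1], "x": [df["x"].mean()] * 2}))

# Standardized (marginal) estimate: predict for every subject as if assigned
# to each group, then average the predictions over the covariate distribution.
marginal = [fit.predict(df.assign(group=g)).mean() for g in (0, 1)]
print("at mean covariate:", at_mean.values)
print("marginal group means:", marginal)
```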

  3. Benchmarking real-time RGBD odometry for light-duty UAVs

    NASA Astrophysics Data System (ADS)

    Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.

    2016-06-01

    This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. The discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users with the information needed to better leverage RGBD odometry within the constraints of their systems.

  4. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    PubMed

    Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.

  5. Systems GMM estimates of the health care spending and GDP relationship: a note.

    PubMed

    Kumar, Saten

    2013-06-01

    This paper utilizes the systems generalized method of moments (GMM) [Arellano and Bover (1995) J Econometrics 68:29-51; Blundell and Bond (1998) J Econometrics 87:115-143] and panel Granger causality [Hurlin and Venet (2001) Granger causality tests in panel data models with fixed coefficients. Mimeo, University Paris IX] to investigate the health care spending and gross domestic product (GDP) relationship for Organisation for Economic Co-operation and Development (OECD) countries over the period 1960-2007. The system GMM estimates confirm that the contribution of real GDP to health spending is significant and positive. The panel Granger causality tests imply that a bi-directional causality exists between health spending and GDP. To this end, policies aimed at raising health spending will eventually improve the well-being of the population in the long run.

  6. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, Leonard F.; Neuzil, Christopher E.

    2007-01-01

    Although depletion of storage in low‐permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.
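
    As a rough illustration of the kind of simplified calculation involved, the classical one-dimensional diffusion solution gives the cumulative water released per unit area of a thick confining layer after a step head decline in the adjacent aquifer. The sketch below implements that textbook result; it is offered in the spirit of the simplified method and is not necessarily the authors' exact formulation.

```python
import numpy as np

def confining_layer_release(dh, K, Ss, t):
    """Cumulative water released per unit area from a thick (effectively
    semi-infinite) confining layer after a step head decline dh in the
    adjacent aquifer.

    Classical one-dimensional diffusion result:
        V(t) = 2 * dh * sqrt(K * Ss * t / pi)
    dh in m, K in m/s, Ss in 1/m, t in s -> V in m^3 per m^2 (i.e. m of water).
    This is a textbook approximation, not necessarily the authors' formulation.
    """
    return 2.0 * dh * np.sqrt(K * Ss * np.asarray(t, dtype=float) / np.pi)

# Example: 10 m of drawdown, K = 1e-9 m/s, Ss = 1e-5 1/m, after 50 years.
seconds = 50 * 365.25 * 86400
print(confining_layer_release(10.0, 1e-9, 1e-5, seconds), "m of water per unit area")
```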

  7. A parametric generalization of the Hayne estimator for line transect sampling

    USGS Publications Warehouse

    Burnham, Kenneth P.

    1979-01-01

    The Hayne model for line transect sampling is generalized by using an elliptical (rather than circular) flushing model for animal detection. By assuming the ratio of the major and minor axis lengths is constant for all animals, a model results which allows estimation of population density based directly upon sighting distances and sighting angles. The derived estimator of animal density is a generalization of the Hayne estimator for line transect sampling.
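
    For context, the classical Hayne estimator that is being generalized uses only the radial sighting distances; the elliptical model adds an angle-dependent correction. A minimal sketch of the classical estimator, with hypothetical sightings, follows.

```python
import numpy as np

def hayne_density(sighting_distances, transect_length):
    """Classical Hayne line-transect estimator that the paper generalizes.

    D_hat = (n / (2 * L)) * mean(1 / r_i)
    where r_i are the radial sighting (flushing) distances and L is the total
    transect length. Distances and L must share one length unit; the density
    comes out in animals per squared unit of that length.
    """
    r = np.asarray(sighting_distances, dtype=float)
    n = r.size
    return (n / (2.0 * transect_length)) * np.mean(1.0 / r)

# Example with hypothetical sightings (metres) along a 5 km transect.
print(hayne_density([12.0, 30.0, 22.0, 15.0, 40.0, 18.0], 5000.0), "animals per m^2")
```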

  8. Doing Justice? Criminal Offenders with Developmental Disabilities. Detailed Research Findings.

    ERIC Educational Resources Information Center

    Petersilia, Joan

    People with cognitive, intellectual, or developmental disabilities are a small but increasing portion of offenders in the criminal justice system. People with developmental disabilities are estimated to comprise 2-3% of the general population, but 4-10% of the prison population, and an even higher percentage of those in juvenile facilities and in…

  9. Sufficiency and Necessity Assumptions in Causal Structure Induction

    ERIC Educational Resources Information Center

    Mayrhofer, Ralf; Waldmann, Michael R.

    2016-01-01

    Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when…

  10. Technical Elements, Demonstration Projects, and Fiscal Models in Medicaid Managed Care for People with Developmental Disabilities.

    ERIC Educational Resources Information Center

    Kastner, Theodore A.; Walsh, Kevin K.; Criscione, Teri

    1997-01-01

    Presents a general model of the structure and functioning of managed care and describes elements (provider networks, fiscal elements, risk estimation, case-mix, management information systems, practice parameters, and quality improvement) critical to people with developmental disabilities. Managed care demonstration projects and a hypothetical…

  11. Blow-up for a three dimensional Keller-Segel model with consumption of chemoattractant

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Wu, Hao; Zheng, Songmu

    2018-04-01

    We investigate blow-up properties for the initial-boundary value problem of a Keller-Segel model with consumption of chemoattractant when the spatial dimension is three. Through a kinetic reformulation of the Keller-Segel system, we first derive some higher-order estimates and obtain certain blow-up criteria for the local classical solutions. These blow-up criteria generalize the results in [4,5] from the whole space R^3 to the case of a bounded smooth domain Ω ⊂ R^3. A lower global blow-up estimate on ‖n‖_{L^∞(Ω)} is also obtained based on our higher-order estimates. Moreover, we prove local non-degeneracy for blow-up points.

  12. Advanced Earth Observation System Instrumentation Study (AEOSIS)

    NASA Technical Reports Server (NTRS)

    White, R.; Grant, F.; Malchow, H.; Walker, B.

    1975-01-01

    Various types of measurements were studied for estimating the orbit and/or attitude of an Earth Observation Satellite. An investigation was made into the use of known ground targets in the earth sensor imagery, in combination with onboard star sightings and/or range and range rate measurements by ground tracking stations or tracking satellites (TDRSS), to estimate satellite attitude, orbital ephemeris, and gyro bias drift. Generalized measurement equations were derived for star measurements with a particular type of star tracker, and for landmark measurements with a multispectral scanner being proposed for an advanced Earth Observation Satellite. The use of infra-red horizon measurements to estimate the attitude and gyro bias drift of a geosynchronous satellite was explored.

  13. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261

  14. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
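
    The class of models being fitted can be made concrete with a simulator: a general BDP is specified by arbitrary state-dependent birth and death rate functions. The sketch below generates one sample path with a Gillespie-style algorithm; the logistic-style rate functions are hypothetical and are not the models analyzed in the paper.

```python
import numpy as np

def simulate_bdp(birth_rate, death_rate, x0, t_max, rng=None):
    """Simulate one path of a general birth-death process (Gillespie algorithm).

    birth_rate, death_rate : callables giving the per-state rates lambda(x), mu(x).
    Returns (times, states) at every jump up to t_max.
    """
    if rng is None:
        rng = np.random.default_rng()
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        lam, mu = birth_rate(x), death_rate(x)
        total = lam + mu
        if total == 0.0:          # absorbed (e.g. extinction with no immigration)
            break
        t += rng.exponential(1.0 / total)   # waiting time to the next event
        if t > t_max:
            break
        x += 1 if rng.random() < lam / total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

# Hypothetical logistic-growth rates, not the models fitted in the paper.
times, states = simulate_bdp(lambda x: 0.3 * x * max(0.0, 1 - x / 200),
                             lambda x: 0.1 * x, x0=10, t_max=50.0,
                             rng=np.random.default_rng(1))
print(states[-1])
```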

  15. A nonintrusive temperature measuring system for estimating deep body temperature in bed.

    PubMed

    Sim, S Y; Lee, W K; Baek, H J; Park, K S

    2012-01-01

    Deep body temperature is an important indicator that reflects a person's overall physiological state. Existing deep body temperature monitoring systems are too invasive to apply to awake patients for long periods. Therefore, we proposed a nonintrusive deep body temperature measuring system. To estimate deep body temperature nonintrusively, a dual-heat-flux probe and double-sensor probes were embedded in a neck pillow. When a patient uses the neck pillow to rest, the deep body temperature can be assessed using one of the thermometer probes embedded in the neck pillow. We could estimate deep body temperature in three different sleep positions. Also, to reduce the initial response time of the dual-heat-flux thermometer, which measures body temperature in the supine position, we applied a curve-fitting method to data from one subject and thereby obtained the deep body temperature within a minute. This result shows the possibility that the system can be used as a practical temperature monitoring system with an appropriate curve-fitting model. In a future study, we will try to establish a general fitting model that can be applied to all subjects. In addition, we plan to extract meaningful health information, such as sleep structure analysis, from the deep body temperature data acquired with this system.

  16. Laparoscopic cholecystectomy in a patient with Steinert myotonic dystrophy. Case report.

    PubMed

    Agrusa, A; Mularo, S; Alessi, R; Di Paola, P; Mularo, A; Amato, G; Romano, G

    2011-01-01

    Myotonic dystrophy (MD) is a serious multi-systemic autosomal dominant disease. The estimated incidence is 1 in every 8,000 births, with an estimated prevalence of between 2.1 and 14.3 cases per 100,000 inhabitants. Signs and symptoms vary from a severe form of congenital myopathy, present from birth and often fatal, to a classic form and a delayed form; the delayed form generally presents after the age of 50, its only sign is a cataract, and life expectancy is completely normal. We describe the clinical case of a 40-year-old woman with Steinert myotonic dystrophy who underwent laparoscopic cholecystectomy (under general anesthesia) for symptomatic gallbladder stones. The conduct of anesthesia in such patients must be carefully considered, as hypothermia, shivering, electrical and mechanical stimulation, and the drugs used can all trigger myotonia.

  17. New Method for Estimating Landslide Losses for Major Winter Storms in California.

    NASA Astrophysics Data System (ADS)

    Wills, C. J.; Perez, F. G.; Branum, D.

    2014-12-01

    We have developed a prototype system for estimating the economic costs of landslides due to winter storms in California. This system uses some of the basic concepts and estimates of the value of structures from the HAZUS program developed for FEMA. Using the only relatively complete landslide loss data set that we could obtain, data gathered by the City of Los Angeles in 1978, we have developed relations between landslide susceptibility and loss ratio for private property (represented as the value of wood-frame structures from HAZUS). The landslide loss ratios estimated from the Los Angeles data are calibrated using more generalized data from the 1982 storms in the San Francisco Bay area to develop relationships that can be used to estimate loss for any value of 2-day or 30-day rainfall averaged over a county. The current estimates for major storms are long projections from very small data sets, subject to very large uncertainties, and so provide only a rough estimate of the landslide damage to structures and infrastructure on hill slopes. More importantly, the system can be extended and improved with additional data and used to project landslide losses in future major winter storms. The key features of this system (the landslide susceptibility map, the relationship between susceptibility and loss ratio, and the calibration of estimates against losses in past storms) can all be improved with additional data. Most importantly, this study highlights the importance of comprehensive studies of landslide damage. Detailed surveys of landslide damage following future storms, including the locations and amounts of damage for all landslides within an area, are critical for building a well-calibrated system to project future landslide losses. Without an investment in post-storm landslide damage surveys, it will not be possible to improve estimates of the magnitude or distribution of landslide damage, which can range up to billions of dollars.

  18. The estimated reduction in the odds of loss-of-control type crashes for sport utility vehicles equipped with electronic stability control.

    PubMed

    Green, Paul E; Woodrooffe, John

    2006-01-01

    Using data from the NASS General Estimates System (GES), the method of induced exposure was used to assess the effects of electronic stability control (ESC) on loss-of-control type crashes for sport utility vehicles. Sport utility vehicles were classified into crash types generally associated with loss of control and crash types most likely not associated with loss of control. Vehicles were then compared according to whether ESC technology was present or absent. A generalized additive model was fit to assess the effects of ESC, driver age, and driver gender on the odds of loss of control. In addition, the effects of ESC on roads that were not dry were compared to the effects on roads that were dry. Overall, the estimated percentage reduction in the odds of a loss-of-control crash for sport utility vehicles equipped with ESC was 70.3%. Both genders and all age groups showed reduced odds of loss-of-control crashes, but there was no significant difference between males and females. With respect to driver age, the maximum percentage reduction of 73.6% occurred at age 27. The positive effects of ESC on roads that were not dry were significantly greater than on roads that were dry.
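
    The induced-exposure comparison can be reduced, in its crudest form, to a 2x2 odds ratio between ESC-equipped and non-equipped vehicles across loss-of-control and other crash types. The sketch below shows that unadjusted calculation with hypothetical counts; the study itself additionally adjusts for driver age, gender, and road surface with a generalized additive model.

```python
def esc_odds_reduction(loc_esc, nonloc_esc, loc_noesc, nonloc_noesc):
    """Unadjusted induced-exposure estimate of the ESC effect.

    Rows: ESC present / absent; columns: loss-of-control (LOC) vs other crashes.
    Returns the odds ratio and the implied percentage reduction in LOC odds.
    (Crude 2x2 version only; no covariate adjustment.)
    """
    odds_ratio = (loc_esc / nonloc_esc) / (loc_noesc / nonloc_noesc)
    return odds_ratio, 100.0 * (1.0 - odds_ratio)

# Hypothetical counts, not the GES tabulations used in the study.
odds_ratio, reduction = esc_odds_reduction(150, 2400, 800, 4800)
print(f"odds ratio {odds_ratio:.2f} -> {reduction:.1f}% reduction")
```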

  19. Are numbers grounded in a general magnitude processing system? A functional neuroimaging meta-analysis.

    PubMed

    Sokolowski, H Moriah; Fias, Wim; Bosah Ononye, Chuka; Ansari, Daniel

    2017-10-01

    It is currently debated whether numbers are processed using a number-specific system or a general magnitude processing system, also used for non-numerical magnitudes such as physical size, duration, or luminance. Activation likelihood estimation (ALE) was used to conduct the first quantitative meta-analysis of 93 empirical neuroimaging papers examining neural activation during numerical and non-numerical magnitude processing. Foci were compiled to generate probabilistic maps of activation for non-numerical magnitudes (e.g. physical size), symbolic numerical magnitudes (e.g. Arabic digits), and nonsymbolic numerical magnitudes (e.g. dot arrays). Conjunction analyses revealed overlapping activation for symbolic, nonsymbolic and non-numerical magnitudes in frontal and parietal lobes. Contrast analyses revealed specific activation in the left superior parietal lobule for symbolic numerical magnitudes. In contrast, small regions in the bilateral precuneus were specifically activated for nonsymbolic numerical magnitudes. No regions in the parietal lobes were activated for non-numerical magnitudes that were not also activated for numerical magnitudes. Therefore, numbers are processed using both a generalized magnitude system and format specific number regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627

  1. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity, and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it results in higher rms estimation errors but has a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency, and it provides relatively coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase along with a more refined estimate of the frequency, and in the process also reducing the number of cycle slips.

  2. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity, and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it results in higher rms estimation errors but has a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency, and it provides relatively coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimates of the phase along with a more refined estimate of the frequency, and in the process also reducing the number of cycle slips.

  3. A General Model for Estimating and Correcting the Effects of Nonindependence in Meta-Analysis.

    ERIC Educational Resources Information Center

    Strube, Michael J.

    A general model is described which can be used to represent the four common types of meta-analysis: (1) estimation of effect size by combining study outcomes; (2) estimation of effect size by contrasting study outcomes; (3) estimation of statistical significance by combining study outcomes; and (4) estimation of statistical significance by…

  4. Assessing the Impact of Observations on Numerical Weather Forecasts Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2012-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real-time, monitoring of the entire observing system. This talk provides a general overview of the adjoint method, including the theoretical basis and practical implementation of the technique. Results are presented from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. When performed in conjunction with standard observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies may be important for optimizing the use of the current observational network and defining requirements for future observing systems.

  5. Generalized shrunken type-GM estimator and its application

    NASA Astrophysics Data System (ADS)

    Ma, C. Z.; Du, Y. L.

    2014-03-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the generalized shrunken type-GM estimators, together with methods for computing them, is established by combining the GM estimator with biased estimators such as the ridge estimator, the principal components estimator, and the Liu estimator. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome multicollinearity in the coefficient matrix and the presence of outliers, but also control the influence of leverage points.
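
    One way to picture the combination the abstract describes is an iteratively reweighted ridge regression, in which a shrinkage (biased) estimator is paired with robust downweighting of outlying observations. The sketch below is only an illustration of that combination under Huber-type residual weights, not the authors' generalized shrunken type-GM estimator.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    """Huber-type weights: 1 inside the threshold, downweighted outside."""
    scale = np.median(np.abs(residuals)) / 0.6745 + 1e-12   # robust scale (MAD)
    u = np.abs(residuals) / scale
    return np.where(u <= c, 1.0, c / u)

def robust_ridge(X, y, k=1.0, n_iter=20):
    """Iteratively reweighted ridge regression: a simple illustration of
    combining a biased (ridge) estimator with robust weighting.
    Not the authors' exact generalized shrunken type-GM estimator.
    """
    n, p = X.shape
    w = np.ones(n)
    beta = np.zeros(p)
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X + k * np.eye(p), X.T @ W @ y)
        w = huber_weights(y - X @ beta)
    return beta

# Collinear design with one gross outlier (illustrative data only).
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[0] += 25.0                                   # outlier
print(robust_ridge(X, y, k=0.5))
```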

  6. On the rate of convergence of the alternating projection method in finite dimensional spaces

    NASA Astrophysics Data System (ADS)

    Galántai, A.

    2005-10-01

    Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141], we derive new estimates for the speed of the alternating projection method and its relaxed version in finite-dimensional Euclidean space. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases where the new estimates are weaker, numerical testing indicates that they approximate the original bounds quite well.
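
    For readers new to the method, alternating (von Neumann) projections repeatedly project a point onto two subspaces; the iterates converge to the projection onto their intersection at a linear rate governed by the principal angle between the subspaces. A minimal sketch with two planes in R^3 follows; the example subspaces are illustrative.

```python
import numpy as np

def project_onto_columnspace(A):
    """Return the orthogonal projector onto range(A)."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def alternating_projections(P1, P2, x0, n_iter=100):
    """von Neumann alternating projections onto two subspaces of R^n.
    The iterates converge to the projection of x0 onto the intersection of
    the subspaces, with speed governed by the angle between them.
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = P2 @ (P1 @ x)
    return x

# Two planes in R^3 sharing the x-axis (illustrative example).
P1 = project_onto_columnspace(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]))  # xy-plane
P2 = project_onto_columnspace(np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]))  # span{e1, e2+e3}
x0 = np.array([1.0, 2.0, 3.0])
print(alternating_projections(P1, P2, x0))   # approaches (1, 0, 0), the projection onto the shared axis
```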

  7. Physics-of-Failure Approach to Prognostics

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.

    2017-01-01

    As electric vehicles progressively become part of daily operations, a critical challenge lies in accurately predicting the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. In order to tackle and solve the prediction problem, it is essential to be aware of the current state and health of the system, especially since it is necessary to perform condition-based predictions. To be able to predict the future state of the system, knowledge of the current and future operations of the vehicle is also required. This presentation describes our approach to developing a system-level health monitoring safety indicator for different electronic components, which runs estimation and prediction algorithms to determine state of charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.

  8. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected system accuracy is discussed from a general point of view, and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.

  9. Results of solar electric thrust vector control system design, development and tests

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1973-01-01

    Efforts to develop and test a thrust vector control system (TVCS) for a solar-energy-powered ion engine array are described. The results of solar electric propulsion system technology (SEPST) III real-time tests of present versions of TVCS hardware, in combination with computer-simulated attitude dynamics of a solar electric multi-mission spacecraft (SEMMS) Phase A-type spacecraft configuration, are summarized. Work on an improved solar electric TVCS, based on the use of a state estimator, is described. SEPST III tests of TVCS hardware have generally proved successful, and the dynamic response of the system is close to predictions. It appears that, if TVCS electronic hardware can be effectively replaced by control computer software, a significant advantage in control capability and flexibility can be gained in future developmental testing, with practical implications for flight systems as well. Finally, it is concluded from computer simulations that TVCS stabilization using rate estimation promises a substantial performance improvement over the present design.

  10. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.

  11. Generalized fluctuation-dissipation theorem as a test of the Markovianity of a system

    NASA Astrophysics Data System (ADS)

    Willareth, Lucian; Sokolov, Igor M.; Roichman, Yael; Lindner, Benjamin

    2017-04-01

    We study how well a generalized fluctuation-dissipation theorem (GFDT) is suited to test whether a stochastic system is not Markovian. To this end, we simulate a stochastic non-equilibrium model of the mechanosensory hair bundle from the inner ear organ and analyze its spontaneous activity and response to external stimulation. We demonstrate that this two-dimensional Markovian system indeed obeys the GFDT, as long as i) the averaging ensemble is sufficiently large and ii) finite-size effects in estimating the conjugated variable and its susceptibility can be neglected. Furthermore, we test the GFDT also by looking only at a one-dimensional projection of the system, the experimentally accessible position variable. This reduced system is certainly non-Markovian and the GFDT is somewhat violated but not as drastically as for the equilibrium fluctuation-dissipation theorem. We explore suitable measures to quantify the violation of the theorem and demonstrate that for a set of limited experimental data it might be difficult to decide whether the system is Markovian or not.

  12. Tools and techniques for developing policies for complex and uncertain systems.

    PubMed

    Bankes, Steven C

    2002-05-14

    Agent-based models (ABM) are examples of complex adaptive systems, which can be characterized as those systems for which no model less complex than the system itself can accurately predict in detail how the system will behave at future times. Consequently, the standard tools of policy analysis, based as they are on devising policies that perform well on some best estimate model of the system, cannot be reliably used for ABM. This paper argues that policy analysis by using ABM requires an alternative approach to decision theory. The general characteristics of such an approach are described, and examples are provided of its application to policy analysis.

  13. Observations of Heavy Rainfall in a Post Wildland Fire Area Using X-Band Polarimetric Radar

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Matrosov, S. Y.; Gochis, D. J.; Kennedy, P.; Moody, J. A.

    2011-12-01

    Polarimetric X-band radar systems have been used increasingly over the last decade for rainfall measurements. Since X-band radar systems are generally less costly, more mobile, and have narrower beam widths (for the same antenna size) than those operating at lower frequencies (e.g., C- and S-band), they can be used for "gap-filling" purposes in areas where high-resolution rainfall measurements are needed and existing operational radar systems lack adequate coverage and/or resolution for accurate quantitative precipitation estimation (QPE). The main drawback of X-band systems is attenuation of radar signals, which is significantly stronger than at the lower frequencies used by "traditional" precipitation radars. The use of correction schemes based on polarimetric data can, to a certain degree, overcome this drawback when attenuation does not cause total signal extinction. This presentation will focus on examining the use of high-resolution data from the NOAA Earth System Research Laboratory (ESRL) mobile X-band dual-polarimetric radar for the purpose of estimating precipitation in a post-wildland fire area. The NOAA radar was deployed in the summer of 2011 to examine the impact of gap-filling radar on QPE and the resulting hydrologic response during heavy rain events in the Colorado Front Range, in collaboration with colleagues from the National Center for Atmospheric Research (NCAR), Colorado State University (CSU), and the U.S. Geological Survey (USGS). A network of rain gauges installed by NCAR, the Denver Urban Drainage Flood Control District (UDFCD), and the USGS is used to compare with the radar estimates. Supplemental data from NEXRAD and the CSU-CHILL dual-polarimetric radar are also used to compare with the NOAA X-band radar and the rain gauges. It will be shown that rainfall rates and accumulations estimated from specific differential phase measurements (KDP) at X-band are in good agreement with the measurements from the gauge network during heavy rain and rain/hail mixture events. The X-band radar measurements were also generally successful in capturing the high spatial variability of the convective rainfall that caused post-fire debris flows.
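
    KDP-based rainfall estimation typically takes the power-law form R = a * KDP^b, with coefficients that depend on wavelength and the assumed drop-size distribution. The sketch below shows that relation and a simple accumulation over scans; the coefficient values are illustrative placeholders, not the relation used with the NOAA ESRL radar.

```python
import numpy as np

def rain_rate_from_kdp(kdp_deg_per_km, a=17.0, b=0.8):
    """Power-law R(KDP) estimator commonly used with polarimetric radar:
        R [mm/h] = a * KDP^b,  with KDP in deg/km.
    The coefficients a and b are illustrative placeholders of roughly
    X-band magnitude, not the relation used in the study.
    """
    kdp = np.maximum(np.asarray(kdp_deg_per_km, dtype=float), 0.0)
    return a * kdp ** b

# Accumulate rainfall over a sequence of 2-minute scans (illustrative values).
kdp_series = [0.5, 1.2, 2.0, 1.5, 0.8]          # deg/km at a gauge location
dt_hours = 2.0 / 60.0
accumulation_mm = np.sum(rain_rate_from_kdp(kdp_series) * dt_hours)
print(f"{accumulation_mm:.1f} mm")
```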

  14. On the quantification of the dissolved hydroxyl radicals in the plasma-liquid system using the molecular probe method

    NASA Astrophysics Data System (ADS)

    Ma, Yupengxue; Gong, Xinning; He, Bangbang; Li, Xiaofei; Cao, Dianyu; Li, Junshuai; Xiong, Qing; Chen, Qiang; Chen, Bing Hui; Huo Liu, Qing

    2018-04-01

    Hydroxyl (OH) radical is one of the most important reactive species produced by plasma-liquid interactions, and the OH in liquid phase (dissolved OH radical, OHdis) takes effect in many plasma-based applications due to its high reactivity. Therefore, the quantification of the OHdis in a plasma-liquid system is of great importance, and a molecular probe method usually used for the OHdis detection might be applied. Herein, we investigate the validity of using the molecular probe method to estimate the [OHdis] in the plasma-liquid system. Dimethyl sulfoxide is used as the molecular probe to estimate the [OHdis] in an air plasma-liquid system, and usually the estimation of [OHdis] is deduced by quantifying the OHdis-induced derivative, the formaldehyde (HCHO). The analysis indicates that the true concentration of the OHdis should be estimated from the sum of three terms: the formed HCHO, the existing OH scavengers, and the H2O2 formed from the OHdis. The results show that the measured [HCHO] needs to be corrected since the HCHO consumption is not negligible in the plasma-liquid system. We conclude from the results and the analysis that the molecular probe method generally underestimates the [OHdis] in the plasma-liquid system. If one wants to obtain the true concentration of the OHdis in the plasma-liquid system, one needs to know the consumption behavior of the OHdis-induced derivatives, the information of the OH scavengers (such as hydrated electron, atomic hydrogen besides the molecular probe), and also the knowledge of the H2O2 formed from the OHdis.

  15. Doppler ultrasound-based measurement of tendon velocity and displacement for application toward detecting user-intended motion.

    PubMed

    Stegman, Kelly J; Park, Edward J; Dechev, Nikolai

    2012-07-01

    The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity for the purpose of controlling a prosthetic device. This feasibility study aims to determine if the proposed technique using Doppler ultrasound is able to accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with an experiment on two different tendon-mimicking materials, using a commercial ultrasound scanner and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner and are processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective, and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies to test the technique on human arm tendon anatomy, and later on live human test subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
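
    Recovering velocity from the scanner's audio output rests on the standard Doppler equation, with displacement obtained by integrating the velocity estimates. The sketch below shows that conversion; the carrier frequency, insonation angle, and Doppler shifts are hypothetical rather than the study's actual settings.

```python
import numpy as np

def doppler_velocity(f_shift_hz, f_carrier_hz=7.5e6, angle_deg=60.0, c=1540.0):
    """Standard Doppler equation: v = c * f_d / (2 * f0 * cos(theta)).
    c is the speed of sound in soft tissue (m/s); the carrier frequency and
    insonation angle here are illustrative, not the scanner settings used
    in the study.
    """
    return c * f_shift_hz / (2.0 * f_carrier_hz * np.cos(np.radians(angle_deg)))

# Displacement follows by integrating the velocity estimates over time.
fs = 100.0                                                  # velocity estimates per second
f_shifts = np.array([200.0, 250.0, 300.0, 280.0, 220.0])    # Hz (hypothetical)
v = doppler_velocity(f_shifts)                              # m/s
displacement_mm = np.cumsum(v) / fs * 1e3
print(v, displacement_mm)
```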

  16. Modeling the injury prevention impact of mandatory alcohol ignition interlock installation in all new US vehicles.

    PubMed

    Carter, Patrick M; Flannagan, Carol A C; Bingham, C Raymond; Cunningham, Rebecca M; Rupp, Jonathan D

    2015-05-01

    We estimated the injury prevention impact and cost savings associated with alcohol interlock installation in all new US vehicles. We identified fatal and nonfatal injuries associated with drinking driver vehicle crashes from the Fatality Analysis Reporting System and National Automotive Sampling System's General Estimates System data sets (2006-2010). We derived the estimated impact of universal interlock installation using an estimate of the proportion of alcohol-related crashes that were preventable in vehicles < 1 year-old. We repeated this analysis for each subsequent year, assuming a 15-year implementation. We applied existing crash-induced injury cost metrics to approximate economic savings, and we used a sensitivity analysis to examine results with varying device effectiveness. Over 15 years, 85% of crash fatalities (> 59 000) and 84% to 88% of nonfatal injuries (> 1.25 million) attributed to drinking drivers would be prevented, saving an estimated $342 billion in injury-related costs, with the greatest injury and cost benefit realized among recently legal drinking drivers. Cost savings outweighed installation costs after 3 years, with the policy remaining cost effective provided device effectiveness remained above approximately 25%. Alcohol interlock installation in all new vehicles is likely a cost-effective primary prevention policy that will substantially reduce alcohol-involved crash fatalities and injuries, especially among young vulnerable drivers.

  17. Design of Supersonic Transport Flap Systems for Thrust Recovery at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Mann, Michael J.; Carlson, Harry W.; Domack, Christopher S.

    1999-01-01

    A study of the subsonic aerodynamics of hinged flap systems for supersonic cruise commercial aircraft has been conducted using linear attached-flow theory that has been modified to include an estimate of attainable leading edge thrust and an approximate representation of vortex forces. Comparisons of theoretical predictions with experimental results show that the theory gives a reasonably good and generally conservative estimate of the performance of an efficient flap system and provides a good estimate of the leading and trailing-edge deflection angles necessary for optimum performance. A substantial reduction in the area of the inboard region of the leading edge flap has only a minor effect on the performance and the optimum deflection angles. Changes in the size of the outboard leading-edge flap show that performance is greatest when this flap has a chord equal to approximately 30 percent of the wing chord. A study was also made of the performance of various combinations of individual leading and trailing-edge flaps, and the results show that aerodynamic efficiencies as high as 85 percent of full suction are predicted.

  18. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  19. The use of DRG for identifying clinical trials centers with high recruitment potential: a feasibility study.

    PubMed

    Aegerter, Philippe; Bendersky, Noelle; Tran, Thi-Chien; Ropers, Jacques; Taright, Namik; Chatellier, Gilles

    2014-01-01

    Recruitment of large samples of patients is crucial for the evidence level and efficacy of clinical trials (CT). Clinical Trial Recruitment Support Systems (CTRSS) used to estimate patient recruitment are generally specific to Hospital Information Systems, and few have been evaluated on a large number of trials. Our aim was to assess, on a large number of CTs, the usefulness of commonly available data such as Diagnosis Related Groups (DRG) databases for estimating potential recruitment. We used the DRG database of a large French multicenter medical institution (1.2 million inpatient stays and 400 new trials each year). Eligibility criteria of protocols were broken down into atomic entities (diagnosis, procedures, treatments...) and then translated into codes and operators recorded in a standardized form. A program parsed the forms and generated requests on the DRG database. A large majority of selection criteria could be coded, and the final estimates of the number of eligible patients were close to the observed numbers (median difference = 25). Such a system could be part of the feasibility evaluation and center selection process before the start of a clinical trial.

  20. Ground-Water Recharge in Minnesota

    USGS Publications Warehouse

    Delin, G.N.; Falteisek, J.D.

    2007-01-01

    'Ground-water recharge' broadly describes the addition of water to the ground-water system. Most water recharging the ground-water system moves relatively rapidly to surface-water bodies and sustains streamflow, lake levels, and wetlands. Over the long term, recharge is generally balanced by discharge to surface waters, to plants, and to deeper parts of the ground-water system. However, this balance can be altered locally as a result of pumping, impervious surfaces, land use, or climate changes that could result in increased or decreased recharge. * Recharge rates to unconfined aquifers in Minnesota typically are about 20-25 percent of precipitation. * Ground-water recharge is least (0-2 inches per year) in the western and northwestern parts of the State and increases to greater than 6 inches per year in the central and eastern parts of the State. * Water-level measurement frequency is important in estimating recharge. Measurements made less frequently than about once per week resulted in as much as a 48 percent underestimation of recharge compared with estimates based on an hourly measurement frequency. * High-quality, long-term, continuous hydrologic and climatic data are important in estimating recharge rates.

  1. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    PubMed Central

    Ollenschläger, Malte; Roth, Nils; Klucken, Jochen

    2017-01-01

    Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
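
    As an illustration of the double-integration stage of such a pipeline (not any particular method benchmarked in the paper), the sketch below integrates gravity-compensated, world-frame accelerations over one stride and removes linear velocity drift under a zero-velocity assumption at the stride boundaries; the sampling rate and signals are synthetic.

```python
import numpy as np

def integrate_stride(acc_world, fs, g=9.81):
    """Double-integrate world-frame accelerations over one stride.

    Assumes acc_world is an (N, 3) array already rotated into a world frame
    (orientation estimation is a separate step) and that the foot is at rest
    at both stride boundaries (zero-velocity assumption).
    """
    dt = 1.0 / fs
    a = acc_world.copy()
    a[:, 2] -= g                      # remove gravity from the vertical axis

    # First integration: velocity (trapezoidal rule)
    vel = np.vstack([np.zeros(3), np.cumsum(0.5 * (a[1:] + a[:-1]) * dt, axis=0)])

    # Linear dedrifting: force the stride-end velocity back to zero
    drift = np.outer(np.linspace(0.0, 1.0, len(vel)), vel[-1])
    vel -= drift

    # Second integration: position
    pos = np.vstack([np.zeros(3), np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt, axis=0)])
    return pos

# Synthetic example: a 1 s stride sampled at 200 Hz
fs = 200
t = np.linspace(0, 1, fs)
acc = np.zeros((fs, 3))
acc[:, 0] = np.sin(2 * np.pi * t)     # hypothetical forward acceleration
acc[:, 2] = 9.81                      # gravity only in the vertical channel
print(integrate_stride(acc, fs)[-1])  # stride-end displacement estimate
```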

  2. Vehicle-based Methane Mapping Helps Find Natural Gas Leaks and Prioritize Leak Repairs

    NASA Astrophysics Data System (ADS)

    von Fischer, J. C.; Weller, Z.; Roscioli, J. R.; Lamb, B. K.; Ferrara, T.

    2017-12-01

    Recently, mobile methane sensing platforms have been developed to detect and locate natural gas (NG) leaks in urban distribution systems and to estimate their size. Although this technology has already been used in targeted deployment for prioritization of NG pipeline infrastructure repair and replacement, one open question regarding this technology is how effective the resulting data are for prioritizing infrastructure repair and replacement. To answer this question we explore the accuracy and precision of the natural gas leak location and emission estimates provided by methane sensors placed on Google Street View (GSV) vehicles. We find that the vast majority (75%) of methane emitting sources detected by these mobile platforms are NG leaks and that the location estimates are effective at identifying the general location of leaks. We also show that the emission rate estimates from mobile detection platforms are able to effectively rank NG leaks for prioritizing leak repair. Our findings establish that mobile sensing platforms are an efficient and effective tool for improving the safety and reducing the environmental impacts of low-pressure NG distribution systems by reducing atmospheric methane emissions.

  3. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  4. Investigation of design considerations for a complex demodulation filter

    NASA Technical Reports Server (NTRS)

    Stoughton, J. W.

    1984-01-01

    The digital design of an adaptive digital filter to be employed in the processing of microwave remote sensor data was developed. In particular, a complex demodulation approach was developed to provide narrowband power estimation for a proposed Doppler scatterometer system. This scatterometer was considered for application in the proposed National Oceanographic survey satellite, an improvement on SEASAT features. A generalized analysis of the complex demodulation scheme is presented for the digital architecture component of the proposed system.
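
    The general complex-demodulation idea, independent of the specific scatterometer hardware, is to mix the band of interest to baseband with a complex exponential, low-pass filter the result, and read the narrowband power off the filtered envelope; a minimal sketch with hypothetical signal parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def narrowband_power(x, fs, f_center, bw):
    """Estimate narrowband power around f_center via complex demodulation."""
    t = np.arange(len(x)) / fs
    baseband = x * np.exp(-2j * np.pi * f_center * t)   # mix the band of interest to DC
    b, a = butter(4, bw / 2, btype="low", fs=fs)        # low-pass sets the analysis bandwidth
    y = filtfilt(b, a, baseband.real) + 1j * filtfilt(b, a, baseband.imag)
    return np.mean(np.abs(y) ** 2)

# Example: a tone at 1.2 kHz in noise, analysed in a 100 Hz band
fs = 10_000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * 1200 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)
print(narrowband_power(x, fs, f_center=1200.0, bw=100.0))
```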

  5. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
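
    A minimal numerical sketch of this kind of steady-state analysis (not the paper's advection or baroclinic-wave examples): for a small, arbitrary linear time-invariant system, the steady-state forecast covariance is obtained from the discrete algebraic Riccati equation, the analysis covariance follows from the Kalman update, and its eigenvalue spectrum indicates whether a few modes dominate.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, eigh

# Small linear, time-invariant system: x_{k+1} = A x_k + w,  y_k = H x_k + v
n = 6
rng = np.random.default_rng(0)
A = 0.9 * np.eye(n) + 0.02 * rng.standard_normal((n, n))   # arbitrary stable dynamics
H = np.eye(2, n)                                           # observe only 2 of 6 state variables
Q = 0.01 * np.eye(n)                                       # model error covariance
R = 0.1 * np.eye(2)                                        # observation error covariance

# Steady-state forecast (a priori) covariance from the filter Riccati equation (dual form)
P_f = solve_discrete_are(A.T, H.T, Q, R)

# Corresponding steady-state analysis (a posteriori) covariance
S = H @ P_f @ H.T + R
K = P_f @ H.T @ np.linalg.inv(S)
P_a = (np.eye(n) - K @ H) @ P_f

# Eigendecomposition: a steep spectrum indicates a low-dimensional representation
evals, evecs = eigh(P_a)
print("analysis-error variance spectrum:", evals[::-1])
```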

  6. System Error Budgets, Target Distributions and Hitting Performance Estimates for General-Purpose Rifles and Sniper Rifles of 7.62 x 51 mm and Larger Calibers

    DTIC Science & Technology

    1990-05-01

    Report documentation page fragment (approved for public release; distribution unlimited). Readers interested in the Red Book should obtain a copy of the Engineering Design Handbook, Army Weapon System Analysis, Part One, DARCOM-P 706-101, November 1977; the companion volume, Army Weapon System Analysis, Part Two, DARCOM-P 706-102, October 1979, also makes worthwhile study.

  7. Health risks of energy systems.

    PubMed

    Krewitt, W; Hurley, F; Trukenmüller, A; Friedrich, R

    1998-08-01

    Health risks from fossil, renewable and nuclear reference energy systems are estimated following a detailed impact pathway approach. Using a set of appropriate air quality models and exposure-effect functions derived from the recent epidemiological literature, a methodological framework for risk assessment has been established and consistently applied across the different energy systems, including the analysis of consequences from a major nuclear accident. A wide range of health impacts resulting from increased air pollution and ionizing radiation is quantified, and the transferability of results derived from specific power plants to a more general context is discussed.

  8. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter in the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least-squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) values and are shown to be consistent with them. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the analysis of rainfall-runoff data of the current method. The WLS method is more robust than the OLS method and generates results that differ from and are more precise than those of the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
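
    A minimal sketch of the regression step only (not the authors' event-categorization criteria or weighting scheme): event runoff depth is regressed on event rainfall depth, the slope approximating the EIA fraction, first by ordinary least squares and then by weighted least squares with weights chosen for an assumed error variance that grows with event size; the storm events are synthetic.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic storm events: rainfall depth P (mm) and runoff depth Q (mm)
rng = np.random.default_rng(1)
P = rng.uniform(2, 40, 60)
true_f, initial_loss = 0.18, 1.5                 # hypothetical EIA fraction and initial loss
Q = true_f * np.maximum(P - initial_loss, 0)
Q += rng.normal(0, 0.02 * P)                     # noise grows with event size (heteroscedastic)

X = sm.add_constant(P)                           # the intercept absorbs the initial-loss term

ols = sm.OLS(Q, X).fit()                         # ordinary least squares
wls = sm.WLS(Q, X, weights=1.0 / P**2).fit()     # weights ~ 1/variance, assuming var proportional to P^2

print("OLS EIA fraction:", ols.params[1])
print("WLS EIA fraction:", wls.params[1])
```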

  9. Estimating the dilemma strength for game systems. Comment on "Universal scaling for the dilemma strength in evolutionary games", by Z. Wang et al.

    NASA Astrophysics Data System (ADS)

    Chen, Xiaojie

    2015-09-01

    The puzzle of cooperation exists widely in the realistic world, including biological, social, and engineering systems. How to solve the cooperation puzzle has received considerable attention in recent years [1]. Evolutionary game theory provides a common mathematical framework to study the problem of cooperation. In principle, these practical biological, social, or engineering systems can be described by complex game models composed of multiple autonomous individuals with mutual interactions. And generally there exists a dilemma for the evolution of cooperation in the game systems.

  10. HIV Care Continuum Applied to the US Department of Veterans Affairs: HIV Virologic Outcomes in an Integrated Health Care System.

    PubMed

    Backus, Lisa; Czarnogorski, Maggie; Yip, Gale; Thomas, Brittani P; Torres, Marisa; Bell, Tierney; Ross, David

    2015-08-01

    The Department of Veterans Affairs (VA), the largest integrated HIV care provider in the United States (US), used the HIV Care Continuum to compare clinical care within the VA HIV population with the general US HIV population and to identify areas for improvement. National data from the VA's HIV Clinical Case Registry were used to construct measures along the Continuum for Veterans in VA care diagnosed with HIV by June 2013 and alive by December 31, 2013. Comparisons were made to recent estimates for the same measures for the US HIV population. Additional comparisons were performed for demographic subgroups of sex, race/ethnicity, and age. Of 25,480 Veterans diagnosed with HIV, 77.4% were engaged in care compared with 46.3% in the US population diagnosed with HIV (P < 0.001). Seventy-three percent of Veterans diagnosed with HIV received antiretroviral therapy compared with 43% of the US population diagnosed with HIV (P < 0.001). Nearly two-thirds (65.3%) of HIV-diagnosed Veterans had suppressed HIV viral loads compared with 35.0% of the US population diagnosed with HIV (P < 0.001). The VA health care system performed better at every stage of the HIV Care Continuum compared with the general US estimates. Comparable high rates with some variation were noted among the demographic groups in the VA cohort. The high viral suppression rate in VA, which was almost double the estimate for the HIV-diagnosed US population, demonstrates that improved outcomes along the HIV Care Continuum can be achieved in a comprehensive integrated health care system.

  11. Experimental estimation of transmissibility matrices for industrial multi-axis vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom

    2018-07-01

    Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
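
    A single-input/single-output simplification of the transmissibility estimate (the paper treats the full multi-directional matrix and combined floor and shaker excitation): an H1-type frequency response function computed from cross- and auto-spectral densities of measured floor and payload accelerations, shown here with synthetic signals.

```python
import numpy as np
from scipy.signal import butter, csd, lfilter, welch

def transmissibility_frf(floor_acc, payload_acc, fs, nperseg=4096):
    """H1 estimate of the transmissibility FRF: S_xy / S_xx."""
    f, Sxy = csd(floor_acc, payload_acc, fs=fs, nperseg=nperseg)
    _, Sxx = welch(floor_acc, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx

# Synthetic example: the payload responds as a low-pass filtered copy of the floor motion
fs = 2000
floor = np.random.default_rng(2).standard_normal(fs * 60)
b, a = butter(2, 5.0, fs=fs)                  # hypothetical 5 Hz isolation corner frequency
payload = lfilter(b, a, floor)

f, T = transmissibility_frf(floor, payload, fs)
print(f[:5], np.abs(T[:5]))
```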

  12. Flexible Approaches to Computing Mediated Effects in Generalized Linear Models: Generalized Estimating Equations and Bootstrapping

    ERIC Educational Resources Information Center

    Schluchter, Mark D.

    2008-01-01

    In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
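
    A minimal sketch of one common way to compute a product-of-coefficients mediated effect for clustered data with statsmodels GEE, with a cluster bootstrap for the interval; the column names, synthetic data, and exchangeable working correlation are assumptions for illustration, not details taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def gee_coef(formula, df, param):
    """Fit a GEE with exchangeable working correlation and return one coefficient."""
    model = smf.gee(formula, groups="cluster", data=df,
                    family=sm.families.Gaussian(),
                    cov_struct=sm.cov_struct.Exchangeable())
    return model.fit().params[param]

def mediated_effect(df):
    a = gee_coef("M ~ X", df, "X")      # effect of X on the mediator M
    b = gee_coef("Y ~ M + X", df, "M")  # effect of M on Y, adjusting for X
    return a * b                        # product-of-coefficients mediated effect

# Synthetic clustered data (hypothetical column names X, M, Y, cluster)
rng = np.random.default_rng(3)
n_clusters, m = 50, 6
cluster = np.repeat(np.arange(n_clusters), m)
u = np.repeat(rng.normal(0, 0.5, n_clusters), m)          # cluster-level random effect
X = rng.normal(size=n_clusters * m)
M = 0.6 * X + u + rng.normal(size=X.size)
Y = 0.5 * M + 0.2 * X + u + rng.normal(size=X.size)
df = pd.DataFrame({"X": X, "M": M, "Y": Y, "cluster": cluster})

def cluster_bootstrap_sample(df, rng):
    ids = rng.choice(df["cluster"].unique(), df["cluster"].nunique(), replace=True)
    parts = [df[df.cluster == c].assign(cluster=i) for i, c in enumerate(ids)]
    return pd.concat(parts, ignore_index=True)

est = mediated_effect(df)
boot = [mediated_effect(cluster_bootstrap_sample(df, rng)) for _ in range(200)]
print(est, np.percentile(boot, [2.5, 97.5]))
```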

  13. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  14. Can storage reduce electricity consumption? A general equation for the grid-wide efficiency impact of using cooling thermal energy storage for load shifting

    NASA Astrophysics Data System (ADS)

    Deetjen, Thomas A.; Reimers, Andrew S.; Webber, Michael E.

    2018-02-01

    This study estimates changes in grid-wide energy consumption caused by load shifting via cooling thermal energy storage (CTES) in the building sector. It develops a general equation for relating generator fleet fuel consumption to building cooling demand as a function of ambient temperature, relative humidity, transmission and distribution current, and baseline power plant efficiency. The results present a graphical sensitivity analysis that can be used to estimate how shifting load from cooling demand to cooling storage could affect overall grid-wide energy consumption. In particular, because power plants, air conditioners, and transmission systems all have higher efficiencies at cooler ambient temperatures, it is possible to identify operating conditions such that CTES increases system efficiency rather than decreasing it, as is typical for conventional storage approaches. A case study of the Dallas-Fort Worth metro area in Texas, USA shows that using CTES to shift daytime cooling load to nighttime cooling storage can reduce annual system-wide primary fuel consumption by 17.6 MWh for each MWh of installed CTES capacity. The study concludes that, under the right circumstances, cooling thermal energy storage can reduce grid-wide energy consumption, challenging the perception of energy storage as a net energy consumer.

  15. Patient-based estimation of organ dose for a population of 58 adult patients across 13 protocol categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahbaee, Pooyan, E-mail: psahbae@ncsu.edu; Segars, W. Paul; Samei, Ehsan

    2014-07-15

    Purpose: This study aimed to provide a comprehensive patient-specific organ dose estimation across a multiplicity of computed tomography (CT) examination protocols. Methods: A validated Monte Carlo program was employed to model a common CT system (LightSpeed VCT, GE Healthcare). The organ and effective doses were estimated from 13 commonly used body and neurological CT examinations. The dose estimation was performed on 58 adult computational extended cardiac-torso phantoms (35 male, 23 female, mean age 51.5 years, mean weight 80.2 kg). The organ dose normalized by CTDIvol (h factor) and effective dose normalized by the dose length product (DLP) (k factor) were calculated from the results. A mathematical model was derived for the correlation of the h and k factors with patient size across the protocols. Based on this mathematical model, a dose estimation iPhone operating system application was designed and developed to be used as a tool to estimate dose to the patients for a variety of routinely used CT examinations. Results: The organ dose results across all the protocols showed an exponential decrease with patient body size. The correlation was generally strong for the organs which were fully or partially located inside the scan coverage (Pearson sample correlation coefficient (r) of 0.49). The correlation was weaker for organs outside the scan coverage, for which the distance between the organ and the irradiation area was a stronger predictor of dose to the organ. For body protocols, the effective dose before and after normalization by DLP decreased exponentially with increasing patient's body diameter (r > 0.85). The exponential relationship between effective dose and patient's body diameter was significantly weaker for neurological protocols (r < 0.41), where the trunk length was a slightly stronger predictor of effective dose (0.15 < r < 0.46). Conclusions: While the most accurate estimation of a patient dose requires specific modeling of the patient anatomy, a first-order approximation of organ and effective doses from routine CT scan protocols can be reasonably obtained using size-specific factors. Estimation accuracy is generally poor for organs outside the scan range and for neurological protocols. The dose calculator designed in this study can be used to conveniently estimate and report the dose values for a patient across a multiplicity of CT scan protocols.
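
    A minimal sketch of the size-dependence model the abstract describes: the CTDIvol-normalized organ dose (h factor) is fitted against effective patient diameter with an exponential, h(d) = a·exp(-b·d), via a log-linear least-squares fit; the data below are synthetic, not the phantom results.

```python
import numpy as np

# Hypothetical data: effective patient diameter (cm) vs CTDIvol-normalized organ dose (h factor)
rng = np.random.default_rng(4)
diameter = rng.uniform(22, 40, 58)                       # 58 phantoms, as in the abstract
alpha_true, beta_true = 3.0, 0.05
h = alpha_true * np.exp(-beta_true * diameter) * rng.lognormal(0, 0.05, diameter.size)

# Exponential model h = alpha * exp(-beta * d)  ->  ln h = ln alpha - beta * d (linear fit)
slope, intercept = np.polyfit(diameter, np.log(h), 1)
alpha_hat, beta_hat = np.exp(intercept), -slope
print(f"h(d) = {alpha_hat:.2f} * exp(-{beta_hat:.3f} * d)")

# Using the fitted curve: estimate organ dose for a given patient and scan
ctdi_vol = 12.0                                          # mGy, hypothetical scanner output
d_patient = 30.0                                         # cm
organ_dose = ctdi_vol * alpha_hat * np.exp(-beta_hat * d_patient)
print(f"estimated organ dose: {organ_dose:.1f} mGy")
```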

  16. Comparison of Rates of Death Having any Death-Certificate Mention of Heart, Kidney, or Liver Disease Among Persons Diagnosed with HIV Infection with those in the General US Population, 2009-2011.

    PubMed

    Whiteside, Y Omar; Selik, Richard; An, Qian; Huang, Taoying; Karch, Debra; Hernandez, Angela L; Hall, H Irene

    2015-01-01

    Compare age-adjusted rates of death due to liver, kidney, and heart diseases during 2009-2011 among US residents diagnosed with HIV infection with those in the general population. Numerators were numbers of records of multiple-cause mortality data from the national vital statistics system with an ICD-10 code for the disease of interest (any mention, not necessarily the underlying cause), divided into those 1) with and 2) without an additional code for HIV infection. Denominators were 1) estimates of persons living with diagnosed HIV infection from national HIV surveillance system data and 2) general population estimates from the US Census Bureau. We compared age-adjusted rates overall (unstratified by sex, race/ethnicity, or region of residence) and stratified by demographic group. Overall, compared with the general population, persons diagnosed with HIV infection had higher age-adjusted rates of death reported with hepatitis B (rate ratio [RR]=42.6; 95% CI: 34.7-50.7), hepatitis C (RR=19.4; 95% CI: 18.1-20.8), liver disease excluding hepatitis B or C (RR=2.1; 95% CI: 1.8-2.3), kidney disease (RR=2.4; 95% CI: 2.2-2.6), and cardiomyopathy (RR=1.9; 95% CI: 1.6-2.3), but lower rates of death reported with ischemic heart disease (RR=0.6; 95% CI: 0.6-0.7) and heart failure (RR=0.8; 95% CI: 0.6-0.9). However, the differences in rates of death reported with the heart diseases were insignificant in some demographic groups. Persons with HIV infection have a higher risk of death with liver and kidney diseases reported as causes than the general population.

  17. Estimated 2012 groundwater potentiometric surface and drawdown from predevelopment to 2012 in the Santa Fe Group aquifer system in the Albuquerque metropolitan area, central New Mexico

    USGS Publications Warehouse

    Powell, Rachel I.; McKean, Sarah E.

    2014-01-01

    Historically, the water-supply requirements of the Albuquerque metropolitan area of central New Mexico were met almost exclusively by groundwater withdrawal from the Santa Fe Group aquifer system. In response to water-level declines, the Albuquerque Bernalillo County Water Utility Authority (ABCWUA) began diverting water from the San Juan-Chama Drinking Water Project in December 2008 to reduce the use of groundwater to meet municipal demand. Modifications in the demand for water and the source of the supply of water for the Albuquerque metropolitan area have resulted in a variable response in the potentiometric surface of the production zone (the interval of the aquifer, from within about 200 feet below the water table to 900 feet or more, in which supply wells generally are screened) of the Santa Fe Group aquifer system. Analysis of the magnitude and spatial distribution of water-level change can help improve the understanding of how the groundwater system responds to withdrawals and variations in the management of the water supply and can support water-management agencies’ efforts to minimize future water-level declines and improve sustainability. The U.S. Geological Survey (USGS), in cooperation with the ABCWUA, has developed an estimate of the 2012 potentiometric surface of the production zone of the Santa Fe Group aquifer system in the Albuquerque metropolitan area. This potentiometric surface is the latest in a series of reports depicting the potentiometric surface of the area. This report presents the estimated potentiometric surface during winter (from December to March) of water year 2012 and the estimated changes in potentiometric surface between predevelopment (pre-1961) and water year 2012 for the production zone of the Santa Fe Group aquifer system in the Albuquerque metropolitan area. Hydrographs from selected piezometers are included to provide details of historical water-level changes. In general, water-level measurements used for this report were collected in small-diameter observation wells screened over short intervals near the middle of the production zone and were considered to best represent the potentiometric head in the production zone. The water-level measurements were collected by various local and Federal agencies. The water year 2012 potentiometric surface map was created in a geographic information system, and the change in water-level altitude from predevelopment to water year 2012 was calculated. The 2012 potentiometric surface indicates that the general direction of groundwater flow is from the Rio Grande towards clusters of supply wells in the east, north, and west. Water-level changes from predevelopment to 2012 were variable across the Albuquerque metropolitan area. Estimated drawdown from 2008 was spatially variable across the Albuquerque metropolitan area. Hydrographs from piezometers on the east side of the river indicate an increase in the annual highest water-level measurement from 2008 to 2012. Hydrographs from piezometers in the northwest part of the study area indicate either steady decline of the water-level altitude over the period of record or recently variable trends in which water-level altitudes increased for a number of years but have declined since water year 2012.

  18. A statistical methodology for estimating transport parameters: Theory and applications to one-dimensional advective-dispersive systems

    USGS Publications Warehouse

    Wagner, Brian J.; Gorelick, Steven M.

    1986-01-01

    A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.

  19. Estimated Ground-Water Withdrawals from the Death Valley Regional Flow System, Nevada and California, 1913-98

    USGS Publications Warehouse

    Moreo, Michael T.; Halford, Keith J.; La Camera, Richard J.; Laczniak, Randell J.

    2003-01-01

    Ground-water withdrawals from 1913 through 1998 from the Death Valley regional flow system have been compiled to support a regional, three-dimensional, transient ground-water flow model. Withdrawal locations and depths of production intervals were estimated and associated errors were reported for 9,300 wells. Withdrawals were grouped into three categories: mining, public-supply, and commercial water use; domestic water use; and irrigation water use. In this report, groupings were based on the method used to estimate pumpage. Cumulative ground-water withdrawals from 1913 through 1998 totaled 3 million acre-feet, most of which was used to irrigate alfalfa. Annual withdrawal for irrigation ranged from 80 to almost 100 percent of the total pumpage. About 75,000 acre-feet was withdrawn for irrigation in 1998. Annual irrigation withdrawals generally were estimated as the product of irrigated acreage and application rate. About 320 fields totaling 11,000 acres were identified in six hydrographic areas. Annual application rates for high water-use crops ranged from 5 feet in Penoyer Valley to 9 feet in Pahrump Valley. The uncertainty in the estimates of ground-water withdrawals was attributed primarily to the uncertainty of application rate estimates. Annual ground-water withdrawal was estimated at about 90,000 acre-feet in 1998 with an assigned uncertainty bounded by 60,000 to 130,000 acre-feet.
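
    The core calculation described for irrigation withdrawals is the product of irrigated acreage and application rate; a minimal worked example below uses the application rates quoted in the abstract (5 and 9 feet per year for Penoyer and Pahrump Valleys) with hypothetical acreages and an illustrative uncertainty range driven by the application-rate uncertainty, since acreage and uncertainty values are not taken from the report.

```python
# Irrigation withdrawal = irrigated acreage (acres) x application rate (feet/year),
# giving acre-feet per year. Acreages and rate uncertainties below are hypothetical.
valleys = {
    #  name             acres  rate (ft/yr)  rate uncertainty (ft/yr)
    "Penoyer Valley": (4000,   5.0,          1.0),
    "Pahrump Valley": (5000,   9.0,          1.5),
}

total, low, high = 0.0, 0.0, 0.0
for name, (acres, rate, d_rate) in valleys.items():
    withdrawal = acres * rate
    total += withdrawal
    low += acres * (rate - d_rate)
    high += acres * (rate + d_rate)
    print(f"{name}: {withdrawal:,.0f} acre-ft/yr")

print(f"total: {total:,.0f} acre-ft/yr (range {low:,.0f} to {high:,.0f})")
```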

  20. [The readiness of the young teacher for the job].

    PubMed

    Ruskova, R

    1992-01-01

    The study examines the professional readiness of young teachers with respect to their psychological state. It covers subjective individual determinants: attitude toward the profession, professional choice and stability, professional skills, and satisfaction. The investigation is part of a broader complex study. The method is directed primarily at the teachers' self-assessment of the structural system of pedagogic activity and includes a supplementary questionnaire revealing the motivational side of the self-assessment scales. The subjects are primary school teachers with one to five years of service, the period during which adaptation to the profession is completed. The investigation includes 40 teachers from the cities of Sofia and Burgas. The general conclusion is that young teachers show professional readiness to meet the requirements of the job. Their self-assessment corresponds to adaptive behaviour, and the choice of profession has a considerable effect on professional stability. The generally low satisfaction is not a sign of maladaptation, but it does suggest a lack of stimuli for personal development and improvement.

  1. Long-term morbidity, mortality, and economics of rheumatoid arthritis.

    PubMed

    Wong, J B; Ramey, D R; Singh, G

    2001-12-01

    To estimate the morbidity, mortality, and lifetime costs of care for rheumatoid arthritis (RA). We developed a Markov model based on the Arthritis, Rheumatism, and Aging Medical Information System Post-Marketing Surveillance Program cohort, involving 4,258 consecutively enrolled RA patients who were followed up for 17,085 patient-years. Markov states of health were based on drug treatment and Health Assessment Questionnaire scores. Costs were based on resource utilization, and utilities were based on visual analog scale-based general health scores. The cohort had a mean age of 57 years, 76.4% were women, and the mean duration of disease was 11.8 years. Compared with a life expectancy of 22.0 years for the general population, this cohort had a life expectancy of 18.6 years and 11.3 quality-adjusted life years. Lifetime direct medical care costs were estimated to be $93,296. Higher costs were associated with higher disability scores. A Markov model can be used to estimate lifelong morbidity, mortality, and costs associated with RA, providing a context in which to consider the potential value of new therapies for the disease.
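
    A minimal Markov cohort sketch of the general modeling approach described (health states, annual transition probabilities, per-state costs and utilities, cycled until the cohort dies); the states, probabilities, costs, and utilities below are invented for illustration and are not the values estimated from the surveillance cohort.

```python
import numpy as np

# Hypothetical 4-state annual Markov model: mild, moderate, severe disability, dead
states = ["mild", "moderate", "severe", "dead"]
P = np.array([            # annual transition probabilities (each row sums to 1)
    [0.85, 0.10, 0.02, 0.03],
    [0.05, 0.80, 0.10, 0.05],
    [0.00, 0.05, 0.87, 0.08],
    [0.00, 0.00, 0.00, 1.00],
])
annual_cost = np.array([2000.0, 5000.0, 12000.0, 0.0])   # direct medical costs per state ($/yr)
utility = np.array([0.80, 0.60, 0.40, 0.0])              # quality-of-life weights per state

cohort = np.array([1.0, 0.0, 0.0, 0.0])                  # everyone starts in "mild"
life_years = qalys = lifetime_cost = 0.0
for year in range(60):                                   # cycle until the cohort is (almost) all dead
    cohort = cohort @ P
    life_years += cohort[:3].sum()
    qalys += cohort @ utility
    lifetime_cost += cohort @ annual_cost

print(f"life expectancy = {life_years:.1f} y, QALYs = {qalys:.1f}, "
      f"lifetime cost = ${lifetime_cost:,.0f}")
```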

  2. Peak-flow characteristics of Wyoming streams

    USGS Publications Warehouse

    Miller, Kirk A.

    2003-01-01

    Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.

  3. Reliability and Validity in Hospital Case-Mix Measurement

    PubMed Central

    Pettengill, Julian; Vertrees, James

    1982-01-01

    There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909

  4. Descriptive epidemiology of cervical dystonia.

    PubMed

    Defazio, Giovanni; Jankovic, Joseph; Giel, Jennifer L; Papapetropoulos, Spyridon

    2013-01-01

    Cervical dystonia (CD), the most common form of adult-onset focal dystonia, has a heterogeneous clinical presentation with variable clinical features, leading to difficulties and delays in diagnosis. Owing to the lack of reviews specifically focusing on the frequency of primary CD in the general population, we performed a systematic literature search to examine its prevalence/incidence and analyze methodological differences among studies. We performed a systematic literature search to examine the prevalence data of primary focal CD. Sixteen articles met our methodological criteria. Because the reported prevalence estimates were found to vary widely across studies, we analyzed methodological differences and other factors to determine whether true differences exist in prevalence rates among geographic areas (and by gender and age distributions), as well as to facilitate recommendations for future studies. Prevalence estimates ranged from 20-4,100 cases/million. Generally, studies that relied on service-based and record-linkage system data likely underestimated the prevalence of CD, whereas population-based studies suffered from over-ascertainment. The more methodologically robust studies yielded a range of estimates of 28-183 cases/million. Despite the varying prevalence estimates, an approximate 2:1 female:male ratio was consistent among many studies. Three studies estimated incidence, ranging from 8-12 cases/million person-years. Although several studies have attempted to estimate the prevalence and incidence of CD, there is a need for additional well-designed epidemiological studies on primary CD that include large populations; use defined CD diagnostic criteria; and stratify for factors such as age, gender, and ethnicity.

  5. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case in which the transformation parameters are large and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
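
    For contrast with the WTLS formulation, the sketch below shows the familiar closed-form ordinary least-squares solution of the 7-parameter similarity transformation via an SVD-based Procrustes fit; it handles arbitrarily large rotations and scale without linearization, but it treats only the target coordinates as noisy, which is exactly the limitation the paper's WTLS approach removes. The data are synthetic.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst = s * R @ src + t.

    Closed-form Procrustes/Horn solution (ordinary least squares); the paper's
    WTLS approach additionally models errors in the source coordinates.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Synthetic check: recover a known transformation
rng = np.random.default_rng(5)
src = rng.normal(size=(20, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = 1.5 * src @ R_true.T + np.array([10.0, -5.0, 2.0]) + rng.normal(0, 0.01, (20, 3))

s, R, t = similarity_transform(src, dst)
print(s, t)
```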

  6. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also favorable computational efficiency. PMID:29194393

  7. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also favorable computational efficiency.

  8. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
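
    A minimal sketch of the core idea for the special case of a linear physical law: independently measured or estimated parameters are adjusted as little as possible (in a covariance-weighted sense) so that the law is satisfied exactly. The paper's method is more general (nonlinear, possibly nonsmooth laws, with finite-sample guarantees); the flow-balance law, covariances, and numbers below are hypothetical.

```python
import numpy as np

def fuse_with_linear_law(theta_meas, Sigma, A, b):
    """Adjust measured parameters to satisfy the linear law A @ theta = b
    with the minimal Mahalanobis-weighted change (constrained least squares)."""
    # Lagrange-multiplier solution: theta* = theta - Sigma A^T (A Sigma A^T)^{-1} (A theta - b)
    residual = A @ theta_meas - b
    correction = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, residual)
    return theta_meas - correction

# Hypothetical example: three flows that must balance, f1 + f2 - f3 = 0
theta_meas = np.array([10.3, 5.1, 14.9])       # independent sensor readings
Sigma = np.diag([0.2**2, 0.1**2, 0.3**2])      # assumed measurement error variances
A = np.array([[1.0, 1.0, -1.0]])
b = np.array([0.0])

theta_fused = fuse_with_linear_law(theta_meas, Sigma, A, b)
print(theta_fused, "law residual:", A @ theta_fused - b)
```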

  9. Alternative sampling designs and estimators for annual surveys

    Treesearch

    Paul C. Van Deusen

    2000-01-01

    Annual forest inventory systems in the United States have generally converged on sampling designs that: (1) measure equal proportions of the total number of plots each year; and (2) call for the plots to be systematically dispersed. However, there will inevitably be a need to deviate from the basic design to respond to special requests, natural disasters, and budgetary...

  10. Analysis of Unit Costs in a University. The Fribourg Example. Program on Institutional Management in Higher Education.

    ERIC Educational Resources Information Center

    Pasquier, Jacques; Sachse, Matthias

    Costing principles are applied to a university by estimating unit costs and their component factors for the university's different inputs, activities, and outputs. The information system used is designed for Fribourg University but could be applicable to other Swiss universities and could serve Switzerland's universities policy. In general, it…

  11. A Web-Based System for Early Detection of Symptoms of Depression

    ERIC Educational Resources Information Center

    Pandya, Bhairavi D.

    2013-01-01

    Background: According to data reported by the World Health Organization, depression is a common disorder, affecting about 121 million people worldwide. The Centers for Disease Control and Prevention report that in the US an estimated 10% of the general population will experience a depressive episode in a given year. Delay in diagnosis and subsequent delay in treatment…

  12. Uncertainty in modeled upper ocean heat content change

    NASA Astrophysics Data System (ADS)

    Tokmakian, Robin; Challenor, Peter

    2014-02-01

    This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.

  13. Real-time estimation of differential piston at the LBT

    NASA Astrophysics Data System (ADS)

    Böhm, Michael; Pott, Jörg-Uwe; Sawodny, Oliver; Herbst, Tom; Kürster, Martin

    2014-07-01

    In this paper, we present and compare different strategies to minimize the effects of telescope vibrations on the differential piston (OPD) for LINC/NIRVANA at the LBT using an accelerometer feedforward compensation approach. We summarize why this technology is of importance for LINC/NIRVANA, but also for future telescopes and instruments. We outline the estimation problem in general and its specifics at the LBT. Model based estimation and broadband filtering techniques can be used to solve the estimation task, each having its own advantages and disadvantages, which will be discussed. Simulation results and measurements at the LBT are shown to motivate and support our choice of the estimation algorithm for the instrument LINC/NIRVANA. We explain our laboratory setup aimed at imitating the vibration behaviour at the LBT in general, and the M2 as main contributor in particular, and we demonstrate the controller's ability to suppress vibrations in the frequency range of 8 Hz to 60 Hz. In this range, telescope vibrations are the most dominant disturbance to the optical path. For our measurements, we introduce a disturbance time series which has a frequency spectrum comparable to what can be measured at the LBT on a typical night. We show promising experimental results, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (RMS), which is significantly better than any currently commissioned system.

  14. A Class of Factor Analysis Estimation Procedures with Common Asymptotic Sampling Properties

    ERIC Educational Resources Information Center

    Swain, A. J.

    1975-01-01

    Considers a class of estimation procedures for the factor model. The procedures are shown to yield estimates possessing the same asymptotic sampling properties as those from estimation by maximum likelihood or generalized least squares, both special members of the class. General expressions for the derivatives needed for Newton-Raphson…

  15. Ground target recognition using rectangle estimation.

    PubMed

    Grönwall, Christina; Gustafsson, Fredrik; Millnert, Mille

    2006-11-01

    We propose a ground target recognition method based on 3-D laser radar data. The method handles general 3-D scattered data. It is based on the fact that man-made objects of complex shape can be decomposed into a set of rectangles. The ground target recognition method consists of four steps: 3-D size and orientation estimation, target segmentation into parts of approximately rectangular shape, identification of segments that represent the target's functional/main parts, and target matching with CAD models. The core in this approach is rectangle estimation. The performance of the rectangle estimation method is evaluated statistically using Monte Carlo simulations. A case study on tank recognition is shown, where 3-D data from four fundamentally different types of laser radar systems are used. Although the approach is tested on rather few examples, we believe that the approach is promising.

  16. Comparison of thermal and microwave paleointensity estimates in specimens that violate Thellier's laws

    NASA Astrophysics Data System (ADS)

    Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.

    2017-12-01

    Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time, is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks however, contain larger, multi-domain (MD) or interacting single domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy. The diverging estimates appeared to be either the result of using OT instead of the perpendicular method or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable Hlab values. MW estimates are also generally lower than thermal estimates using the same protocol.

  17. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
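
    A minimal two-stage sketch in the spirit of the multistage adaptive Lasso described: an initial Lasso fit supplies weights, and the weighted ℓ1 problem is solved by rescaling the design columns. This covers only the squared-error loss, not the paper's general convex-loss framework, and the tuning values and data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p, s = 100, 200, 5                       # high-dimensional setting: p >> n, s true nonzeros
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = [3.0, -2.0, 1.5, -1.0, 2.5]
y = X @ beta_true + rng.normal(0, 0.5, n)

# Stage 1: ordinary Lasso
stage1 = Lasso(alpha=0.1).fit(X, y)
w = 1.0 / (np.abs(stage1.coef_) + 1e-3)     # adaptive weights (small for large coefficients)

# Stage 2: weighted L1 penalty, implemented by rescaling columns by 1/w
X_w = X / w
stage2 = Lasso(alpha=0.1).fit(X_w, y)
beta_hat = stage2.coef_ / w                 # map back to the original scale

print("selected predictors:", np.flatnonzero(np.abs(beta_hat) > 1e-6))
```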

  18. Calculating the free energy of transfer of small solutes into a model lipid membrane: Comparison between metadynamics and umbrella sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo

    2015-10-14

    We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.

  19. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices may depend on unmeasurable system states that will be estimated. For the latter case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum-of-squares (SOS) constraints, which can be solved via SOSTOOLS and a semi-definite programming solver. Illustrative examples show the validity and applicability of the proposed results.

  20. Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements

    DOE PAGES

    Wang, Liming; Huang, Jiaji; Yuan, Xin; ...

    2015-09-17

    The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced, to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.

  1. Estimation of population mean under systematic sampling

    NASA Astrophysics Data System (ADS)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
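    For readers unfamiliar with ratio estimation, the sketch below draws a 1-in-k systematic sample from a synthetic population and applies the classical ratio estimator of the population mean; it does not reproduce the paper's generalized estimator or its non-response adjustments, and all data and constants are illustrative.

    ```python
    import numpy as np

    def systematic_sample(N, n, rng):
        """Indices of a 1-in-k systematic sample of size n from N units."""
        k = N // n
        start = rng.integers(0, k)
        return np.arange(start, N, k)[:n]

    def ratio_estimator(y_s, x_s, X_bar):
        """Classical ratio estimator of the mean of y using an auxiliary
        variable x whose population mean X_bar is known."""
        return y_s.mean() * X_bar / x_s.mean()

    # toy population in which y is roughly proportional to x
    rng = np.random.default_rng(7)
    N = 1000
    x = rng.uniform(10, 50, size=N)
    y = 2.0 * x + rng.normal(0, 5, size=N)
    idx = systematic_sample(N, n=50, rng=rng)
    print("true mean:", y.mean(), "ratio estimate:", ratio_estimator(y[idx], x[idx], x.mean()))
    ```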

  2. Potential Improvements to Remote Primary Productivity Estimation in the Southern California Current System

    NASA Astrophysics Data System (ADS)

    Jacox, M.; Edwards, C. A.; Kahru, M.; Rudnick, D. L.; Kudela, R. M.

    2012-12-01

    A 26-year record of depth-integrated primary productivity (PP) in the Southern California Current System (SCCS) is analyzed with the goal of improving satellite net primary productivity estimates. The ratio of integrated primary productivity to surface chlorophyll correlates strongly with surface chlorophyll concentration (chl0). However, chl0 does not correlate with chlorophyll-specific productivity, and appears to be a proxy for vertical phytoplankton distribution rather than phytoplankton physiology. Modest improvements in PP model performance are achieved by tuning existing algorithms for the SCCS, particularly by empirical parameterization of photosynthetic efficiency in the Vertically Generalized Production Model. Much larger improvements are enabled by improving the accuracy of subsurface chlorophyll and light profiles. In a simple vertically resolved production model, substitution of in situ surface data for remote sensing estimates offers only marginal improvements in model r2 and total log10 root mean squared difference, while inclusion of in situ chlorophyll and light profiles improves these metrics significantly. Autonomous underwater gliders, capable of measuring subsurface fluorescence on long-term, long-range deployments, significantly improve PP model fidelity in the SCCS. We suggest their use (and that of other autonomous profilers such as Argo floats) in conjunction with satellites as a way forward for improved PP estimation in coastal upwelling systems.

  3. Exploiting passive polarimetric imagery for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Vimal Thilak Krishna, Thilakam

    Polarization is a property of light or electromagnetic radiation that conveys information about the orientation of the transverse electric and magnetic fields. The polarization of reflected light complements other electromagnetic radiation attributes such as intensity, frequency, or spectral characteristics. A passive polarization based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. The polarization due to surface reflections from such objects contains information about the targets that can be exploited in remote sensing applications such as target detection, target classification, object recognition and shape extraction/recognition. In recent years, there has been renewed interest in the use of passive polarization information in remote sensing applications. The goal of our research is to design image processing algorithms for remote sensing applications by utilizing physics-based models that describe the polarization imparted by optical scattering from an object. In this dissertation, we present a method to estimate the complex index of refraction and reflection angle from multiple polarization measurements. This method employs a polarimetric bidirectional reflectance distribution function (pBRDF) that accounts for polarization due to specular scattering. The parameters of interest are derived by utilizing a nonlinear least squares estimation algorithm, and computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Furthermore, laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle. We also study the use of extracted index of refraction as a feature vector in designing two important image processing applications, namely image segmentation and material classification so that the resulting systems are largely invariant to illumination source location. This is in contrast to most passive polarization-based image processing algorithms proposed in the literature that employ quantities such as Stokes vectors and the degree of polarization and which are not robust to changes in illumination conditions. The estimated index of refraction, on the other hand, is invariant to illumination conditions and hence can be used as an input to image processing algorithms. The proposed estimation framework also is extended to the case where the position of the observer (camera) moves between measurements while that of the source remains fixed. Finally, we explore briefly the topic of parameter estimation for a generalized model that accounts for both specular and volumetric scattering. A combination of simulation and experimental results are provided to evaluate the effectiveness of the above methods.

  4. Reliability Analysis of Systems Subject to First-Passage Failure

    NASA Technical Reports Server (NTRS)

    Lutes, Loren D.; Sarkani, Shahram

    2009-01-01

    An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
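    The report's premise, that the likelihood of failure is estimated rather than eliminated, can be illustrated with a crude Monte Carlo estimate of a first-passage probability; the random walk below is only a stand-in for the uncertain dynamic response, and the step size and capacity threshold are assumed values.

    ```python
    import numpy as np

    def first_passage_probability(threshold, n_steps=1000, n_trials=20000, seed=1):
        """Fraction of simulated response histories whose peak absolute value
        crosses the capacity threshold within the observation window."""
        rng = np.random.default_rng(seed)
        increments = 0.05 * rng.standard_normal((n_trials, n_steps))
        peak = np.abs(np.cumsum(increments, axis=1)).max(axis=1)
        return float(np.mean(peak > threshold))

    print("P(first-passage failure) ~", first_passage_probability(threshold=3.0))
    ```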

  5. Sensor/Response Coordination In A Tactical Self-Protection System

    NASA Astrophysics Data System (ADS)

    Steinberg, Alan N.

    1988-08-01

    This paper describes a model for integrating information acquisition functions into a response planner within a tactical self-defense system. This model may be used in defining requirements in such applications for sensor systems and for associated processing and control functions. The goal of information acquisition in a self-defense system is generally not that of achieving the best possible estimate of the threat environment, but rather that of providing a resolution of that environment sufficient to support response decisions. We model the information acquisition problem as that of achieving a partition among possible world states such that the final partition maps into the system's repertoire of possible responses.

  6. ODIN system technology module library, 1972 - 1973

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Watson, D. A.; Glatt, C. R.; Jones, R. T.; Galipeau, J.; Phoa, Y. T.; White, R. J.

    1978-01-01

    ODIN/RLV is a digital computing system for the synthesis and optimization of reusable launch vehicle preliminary designs. The system consists of a library of technology modules in the form of independent computer programs and an executive program, ODINEX, which operates on the technology modules. The technology module library contains programs for estimating all major military flight vehicle system characteristics, for example, geometry, aerodynamics, economics, propulsion, inertia and volumetric properties, trajectories and missions, steady state aeroelasticity and flutter, and stability and control. A general system optimization module, a computer graphics module, and a program precompiler are available as user aids in the ODIN/RLV program technology module library.

  7. GPACC program cost work breakdown structure-dictionary. General purpose aft cargo carrier study, volume 3

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The results of detailed cost estimates and economic analysis performed on the updated Model 101 configuration of the general purpose Aft Cargo Carrier (ACC) are given. The objective of this economic analysis is to provide the National Aeronautics and Space Administration (NASA) with information on the economics of using the ACC on the Space Transportation System (STS). The detailed cost estimates for the ACC are presented by a work breakdown structure (WBS) to ensure that all elements of cost are considered in the economic analysis and related subsystem trades. Costs reported by WBS provide NASA with a basis for comparing competing designs and provide detailed cost information that can be used to forecast phase C/D planning for new projects or programs derived from preliminary conceptual design studies. The scope covers all STS and STS/ACC launch vehicle cost impacts for delivering payloads to a 160 NM low Earth orbit (LEO).

  8. Optimized parameter estimation in the presence of collective phase noise

    NASA Astrophysics Data System (ADS)

    Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried

    2016-11-01

    We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations. We identify the optimal rotation angle for different measurement times. Second, we show that subshot noise sensitivity—up to the Heisenberg limit—can be reached in the presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting for a general symmetric Dicke state at both inputs and discuss possible experimental realizations of differential interferometry.

  9. Launching Science: Science Opportunities Provided by NASA's Constellation System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    In 2004 NASA began implementation of the first phases of a new space exploration policy. This implementation effort included the development of a new human-carrying spacecraft, known as Orion; the Altair lunar lander; and two new launch vehicles, the Ares I and Ares V rockets, collectively called the Constellation System (described in Chapter 5 of this report). The Altair lunar lander, which is in the very preliminary concept stage, is not discussed in detail in the report. In 2007 NASA asked the National Research Council (NRC) to evaluate the science opportunities enabled by the Constellation System. To do so, the NRC established the Committee on Science Opportunities Enabled by NASA's Constellation System. In general, the committee interpreted "Constellation-enabled" broadly, to include not only mission concepts that required Constellation, but also those that could be significantly enhanced by Constellation. The committee intends this report to be a general overview of the topic of science missions that might be enabled by Constellation, a sort of textbook introduction to the subject. The mission concepts that are reviewed in this report should serve as general examples of kinds of missions, and the committee's evaluation should not be construed as an endorsement of the specific teams that developed the mission concepts or of their proposals. Additionally, NASA has a well-developed process for establishing scientific priorities by asking the NRC to conduct a "decadal survey" for a particular discipline. Any scientific mission that eventually uses the Constellation System will have to be properly evaluated by means of this decadal survey process. The committee was impressed with the scientific potential of many of the proposals that it evaluated. However, the committee notes that the Constellation System has been justified by NASA and selected in order to enable human exploration beyond low Earth orbit, not to enable science missions. Virtually all of the science mission concepts that could take advantage of Constellation's unique capabilities are likely to be prohibitively expensive. Several times in the past NASA has begun ambitious space science missions that ultimately proved too expensive for the agency to pursue. Examples include the Voyager-Mars mission and the Prometheus program and its Jupiter Icy Moons Orbiter spacecraft (both examples are discussed in Chapter 1). Finding: The scientific missions reviewed by the committee as appropriate for launch on an Ares V vehicle fall, with few exceptions, into the "flagship" class of missions. The preliminary cost estimates, based on mission concepts that at this time are not very detailed, indicate that the costs of many of the missions analyzed will be above $5 billion (in current dollars). The Ares V costs are not included in these estimates. All of the costs discussed in this report are presented in current-year (2008) dollars, not accounting for potential inflation that could occur between now and the decade in which these missions might be pursued. In general, preliminary cost estimates for proposed missions are, for many reasons, significantly lower than the final costs. Given the large cost estimates for many of the missions assessed in this report, the potentially large impacts on NASA's budget by many of these missions are readily apparent.

  10. Using Kriging with a heterogeneous measurement error to improve the accuracy of extreme precipitation return level estimation

    NASA Astrophysics Data System (ADS)

    Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting

    2018-07-01

    Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at sites with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first one is based on the ordinary Kriging method without measurement error, and the second one smooths the estimated location and scale parameters of the generalized extreme value distribution by the universal Kriging method. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method using a series of records of reduced length. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of the Ministry of Water Resources of China is made.
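    A minimal sketch of the at-site step is given below: fit a generalized extreme value (GEV) distribution to annual maxima and bootstrap the return level. It substitutes maximum likelihood for the paper's L-moment fit and an ordinary bootstrap for the stationary bootstrap, and the synthetic rainfall record is purely illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def gev_return_level(annual_max, T):
        """Fit a GEV by maximum likelihood and return the T-year return level."""
        shape, loc, scale = stats.genextreme.fit(annual_max)
        return stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

    def bootstrap_ci(annual_max, T, n_boot=500, alpha=0.1, seed=0):
        """Ordinary bootstrap confidence interval for the return level."""
        rng = np.random.default_rng(seed)
        levels = [gev_return_level(rng.choice(annual_max, size=len(annual_max)), T)
                  for _ in range(n_boot)]
        return np.quantile(levels, [alpha / 2, 1 - alpha / 2])

    # usage with a synthetic 50-year record of annual maximum daily rainfall (mm)
    amax = stats.genextreme.rvs(-0.1, loc=80, scale=25, size=50, random_state=42)
    print("100-yr level:", gev_return_level(amax, 100), "90% CI:", bootstrap_ci(amax, 100))
    ```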

  11. Methodology for Estimating ton-Miles of Goods Movements for U.S. Freight Mulitimodal Network System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    2013-01-01

    Ton-miles is a commonly used measure of freight transportation output. Estimation of ton-miles in the U.S. transportation system requires freight flow data at a disaggregated level (either link flows, path flows, or origin-destination flows between small geographic areas). However, the sheer magnitude of the freight data system, as well as industrial confidentiality concerns in the Census survey, limits the freight data that is made available to the public. Through the years, the Center for Transportation Analysis (CTA) of the Oak Ridge National Laboratory (ORNL) has been working on the development of comprehensive national and regional freight databases and network flow models. One of the main products of this effort is the Freight Analysis Framework (FAF), a public database released by ORNL. FAF provides to the general public a multidimensional matrix of freight flows (weight and dollar value) on the U.S. transportation system between states, major metropolitan areas, and the remainders of states. Recently, the CTA research team developed a methodology to estimate ton-miles by mode of transportation between the 2007 FAF regions. This paper describes the data disaggregation methodology. The method relies on the estimation of disaggregation factors that are related to measures of production, attractiveness, and average shipment distances by mode of service. Production and attractiveness of counties are captured by total employment payroll. Likely mileages for shipments between counties are calculated using a geographic database, i.e., the CTA multimodal network system. Results of validation experiments demonstrate the validity of the method. Moreover, the 2007 FAF ton-miles estimates are consistent with the major freight data programs for rail and water movements.

  12. Methods for Factor Screening in Computer Simulation Experiments

    DTIC Science & Technology

    1979-03-01

    …are generally of two types: 1. Factors that are controllable or subject to design in the "real world" system being modeled, such as inventory…

  13. Understanding cost growth during operations of planetary missions: An explanation of changes

    NASA Astrophysics Data System (ADS)

    McNeill, J. F.; Chapman, E. L.; Sklar, M. E.

    In the development of project cost estimates for interplanetary missions, considerable focus is generally given to the development of cost estimates for the development of ground, flight, and launch systems, i.e., Phases B, C, and D. Depending on the project team, efforts expended to develop cost estimates for operations (Phase E) may be relatively less rigorous than those devoted to estimates for ground and flight systems development. Furthermore, the project team may be challenged to develop a solid estimate of operations cost in the early stages of mission development, e.g., Concept Study Report or Systems Requirement Review (CSR/SRR), Preliminary Design Review (PDR), as mission-specific peculiarities that impact cost may not be well understood. In addition, the methodology generally used to develop Phase E cost is engineering build-up, also known as “grass roots.” Phase E can include cost and schedule risks that are not anticipated at the time of the major milestone reviews prior to launch. If not incorporated into the engineering build-up cost method for Phase E, this may translate into an estimation of the complexity of operations and overall cost estimates that are not mature and, at worst, insufficient. As a result, projects may find themselves with thin reserves during cruise and on-orbit operations, or project overruns prior to the end of mission. This paper examines a set of interplanetary missions in an effort to better understand the reasons for cost and staffing growth in Phase E. The method used in the study is discussed, and the major findings are summarized as the Phase E Explanation of Change (EoC). Research for the study entailed the review of project materials, including Estimates at Completion (EAC) for Phase E and staffing profiles; major project milestone reviews, e.g., CSR, PDR, Critical Design Review (CDR); the interviewing of select project and mission management; and review of Phase E replan materials. From this work, a detailed picture is constructed of why cost grew during the operations phase, even to the level of specific events in the life of the missions. As a next step, the Phase E EoC results were gleaned and synthesized to produce leading indicators, i.e., identifiable signs of cost and staffing growth that may be present as early as PDR or CDR. Both qualitative and quantitative approaches were used to determine the leading indicators. These leading indicators will be reviewed and a practical method for their use will be discussed.

  14. Information security of power enterprises of North-Arctic region

    NASA Astrophysics Data System (ADS)

    Sushko, O. P.

    2018-05-01

    The role of information technologies in providing technological security for energy enterprises is a component of the economic security of the northern Arctic region in general. By applying instruments and methods of information-protection modelling to the business processes of energy enterprises in the northern Arctic region (such as Arkhenergo and Komienergo), the authors analysed and identified the most frequent information security risks. Using the analytic hierarchy process based on weighting-factor estimates, the information risks of the energy enterprises' technological processes were ranked. The economic estimation of information security within an energy enterprise considers variables (risks) adjusted by these weighting factors. Investments in the information security systems of energy enterprises in the northern Arctic region are related to the installation of necessary security elements; current operating expenses on business-process protection systems represent materialized economic damage.

  15. Use of photovoltaic detector for photocatalytic activity estimation

    NASA Astrophysics Data System (ADS)

    Das, Susanta Kumar; Satapathy, Pravakar; Rao, P. Sai Shruti; Sabar, Bilu; Panda, Rudrashish; Khatua, Lizina

    2018-05-01

    Photocatalysis is a very important process and has numerous applications. Generally, to estimate the photocatalytic activity of a newly grown material, its reaction rate constant with respect to some standard commercial TiO2 nanoparticles, such as Degussa P25, is evaluated. Here a photovoltaic detector in conjunction with a laser is used to determine this rate constant. The method is tested using Zinc Orthotitanate (Zn2TiO4) nanoparticles prepared by solid-state reaction, and the reaction rate constant is found to be six times higher than that of P25. The value is close to that obtained with a conventional system. Our proposed system is much more cost-effective than the conventional one and has the potential to perform real-time monitoring of photocatalytic activity.

  16. An improved Rosetta pedotransfer function and evaluation in earth system models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Schaap, M. G.

    2017-12-01

    Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an alternative for predicting those parameters. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with the bootstrap re-sampling method, allowing the estimation of van Genuchten water retention parameters (van Genuchten, 1980, abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of uni-variate and bi-variate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content was reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, diverse PTFs are used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, while van Genuchten [1980]-based PTFs are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of diverse PTFs used in different disciplines of earth system modeling. PTFs are evaluated based on different soil and environmental characteristics, such as soil textural data, soil organic carbon, soil pH, precipitation, and soil temperature. This analysis provides more quantitative estimation-error information for PTF predictions across disciplines of earth system modeling.
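    For reference, the quantity these PTFs predict is the van Genuchten (1980) retention curve; a minimal sketch of that curve follows, with a loam-like parameter set chosen only for illustration rather than taken from Rosetta output.

    ```python
    import numpy as np

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """van Genuchten (1980) water retention curve: volumetric water content
        as a function of matric head h (units consistent with 1/alpha)."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

    # illustrative parameter set for a loam-like soil (example values only)
    h = np.logspace(0, 4, 5)  # matric head, cm
    print(van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56))
    ```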

  17. Defence electronics industry profile, 1990-1991

    NASA Astrophysics Data System (ADS)

    The defense electronics industry profiled in this review comprises an estimated 150 Canadian companies that develop, manufacture, and repair radio and communications equipment, radars for surveillance and navigation, air traffic control systems, acoustic and infrared sensors, computers for navigation and fire control, signal processors and display units, special-purpose electronic components, and systems engineering and associated software. Canadian defense electronics companies generally serve market niches and end users of their products are limited to the military, government agencies, or commercial airlines. Geographically, the industry is concentrated in Ontario and Quebec, where about 91 percent of the industry's production and employment is found. In 1989, the estimated revenue of the industry was $2.36 billion, and exports totalled an estimated $1.4 billion. Strengths and weaknesses of the industry are discussed in terms of such factors as the relatively small size of Canadian companies, the ability of Canadian firms to access research and development opportunities and export markets in the United States, the dependence on foreign-made components, and international competition.

  18. Robust Multivariable Estimation of the Relevant Information Coming from a Wheel Speed Sensor and an Accelerometer Embedded in a Car under Performance Tests

    PubMed Central

    Hernandez, Wilmar

    2005-01-01

    In the present paper, robust and optimal multivariable estimation techniques are used to estimate the response of both a wheel speed sensor and an accelerometer placed in a car under performance tests. In this case, the disturbances and noises corrupting the relevant information coming from the sensors' outputs are so severe that their negative influence on the electrical systems degrades the general performance of the car. In short, this is a safety-related problem that deserves our full attention. Therefore, in order to diminish the negative effects of the disturbances and noises on the car's electrical and electromechanical systems, an optimum observer is used. The experimental results show a satisfactory improvement in the signal-to-noise ratio of the relevant signals and demonstrate the importance of fusing several intelligent sensor design techniques when designing the intelligent sensors that today's cars need.

  19. Community incidence of pathogen-specific gastroenteritis: reconstructing the surveillance pyramid for seven pathogens in seven European Union member states.

    PubMed

    Haagsma, J A; Geenen, P L; Ethelberg, S; Fetsch, A; Hansdotter, F; Jansen, A; Korsgaard, H; O'Brien, S J; Scavia, G; Spitznagel, H; Stefanoff, P; Tam, C C; Havelaar, A H

    2013-08-01

    By building reconstruction models for a case of gastroenteritis in the general population moving through different steps of the surveillance pyramid we estimated that millions of illnesses occur annually in the European population, leading to thousands of hospitalizations. We used data on the healthcare system in seven European Union member states in relation to pathogen characteristics that influence healthcare seeking. Data on healthcare usage were obtained by harmonized cross-sectional surveys. The degree of under-diagnosis and underreporting varied by pathogen and country. Overall, underreporting and under-diagnosis were estimated to be lowest for Germany and Sweden, followed by Denmark, The Netherlands, UK, Italy and Poland. Across all countries, the incidence rate was highest for Campylobacter spp. and Salmonella spp. Incidence estimates resulting from the pyramid reconstruction approach are adjusted for biases due to different surveillance systems and are therefore a better basis for international comparisons than reported data.
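    The pyramid-reconstruction logic amounts to scaling reported cases by the reciprocal of the probability of passing each surveillance step; the toy sketch below illustrates that multiplier chain with made-up probabilities, not the harmonized survey values used in the study.

    ```python
    # Hypothetical surveillance-pyramid reconstruction for one pathogen and
    # country; every number below is invented for illustration.
    reported_cases = 1200
    p_seeks_healthcare = 0.25        # ill person consults a physician
    p_specimen_tested = 0.60         # a stool sample is requested and tested
    p_positive_case_reported = 0.80  # a positive result reaches surveillance

    multiplier = 1.0 / (p_seeks_healthcare * p_specimen_tested * p_positive_case_reported)
    community_cases = reported_cases * multiplier
    print(f"multiplier: {multiplier:.1f}, estimated community cases: {community_cases:.0f}")
    ```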

  20. Energy Conversion Alternatives Study (ECAS), General Electric Phase 1. Volume 3: Energy conversion subsystems and components. Part 3: Gasification, process fuels, and balance of plant

    NASA Technical Reports Server (NTRS)

    Boothe, W. A.; Corman, J. C.; Johnson, G. G.; Cassel, T. A. V.

    1976-01-01

    Results are presented of an investigation of gasification and clean fuels from coal. Factors discussed include: coal and coal transportation costs; clean liquid and gas fuel process efficiencies and costs; and cost, performance, and environmental intrusion elements of the integrated low-Btu coal gasification system. Cost estimates for the balance-of-plant requirements associated with advanced energy conversion systems utilizing coal or coal-derived fuels are included.

  1. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  2. Transceiver optics for interplanetary communications

    NASA Astrophysics Data System (ADS)

    Roberts, W. T.; Farr, W. H.; Rider, B.; Sampath, D.

    2017-11-01

    In-situ interplanetary science missions constantly push the spacecraft communications systems to support successively higher downlink rates. However, the highly restrictive mass and power constraints placed on interplanetary spacecraft significantly limit the bandwidth increases achievable with current radio frequency (RF) technology. To overcome these limitations, we have evaluated the ability of free-space optical communications systems to make substantial gains in downlink bandwidth while holding to the mass and power limits allocated to current state-of-the-art Ka-band communications systems. A primary component of such an optical communications system is the optical assembly, comprising the optical support structure, optical elements, baffles, and outer enclosure. We wish to estimate the total mass that such an optical assembly might require and assess what form it might take. Finally, to ground this generalized study, we produce a conceptual design and use it to verify its ability to achieve the required downlink gain, estimate its specific optical and opto-mechanical requirements, and evaluate the feasibility of producing the assembly.

  3. Military markets for solar thermal electric power systems

    NASA Technical Reports Server (NTRS)

    Hauger, J. S.

    1980-01-01

    The Department of Defense maintains an inventory of over 1,800 MW of engine-generators 15 KW and larger, with an estimated procurement rate of over 140 MW/year. Nearly the entire requirement could be met by advanced heat engines of the types being developed as point-focussing, distributed receiver power plants. A conceptual system consisting of a heat engine which efficiently burns liquid fossil or synthetic fuels, with a 'solarization kit' for conversion to hybrid solar operation could meet existing DOD requirements for new systems which are quieter, lighter, and multi-fueled. An estimated 24 percent (33 MW/year) or more could operationally benefit from the solar option. Baseline cost projections indicate levelized energy cost goals of 210 to 120 mills/KWh (15 to 1000 KW systems). Fuel cost escalation is the major factor affecting the value of the solar option. A baseline calculation for fuel at $0.59/gal in spring, 1979, escalating at 8 percent above general inflation indicates a value of $2700/KWe for a solarization kit.

  4. Probabilistic risk analysis of building contamination.

    PubMed

    Bolster, D T; Tartakovsky, D M

    2008-10-01

    We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.
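    The fault-tree step can be sketched with two helpers for independent basic events; the gate structure, component names, and probabilities below are hypothetical and only show how component failure probabilities roll up to a top-event risk.

    ```python
    def or_gate(probs):
        """Probability that at least one independent basic event occurs."""
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    def and_gate(probs):
        """Probability that all independent basic events occur."""
        p_all = 1.0
        for q in probs:
            p_all *= q
        return p_all

    # hypothetical top event: occupants exposed if a contaminant source is
    # present AND (filtration fails OR ventilation control fails)
    p_source, p_filter_fail, p_vent_fail = 0.05, 0.02, 0.01
    p_top = and_gate([p_source, or_gate([p_filter_fail, p_vent_fail])])
    print(f"P(building contamination) = {p_top:.5f}")
    ```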

  5. Anticipating the Chaotic Behaviour of Industrial Systems Based on Stochastic, Event-Driven Simulations

    NASA Astrophysics Data System (ADS)

    Bruzzone, Agostino G.; Revetria, Roberto; Simeoni, Simone; Viazzo, Simone; Orsoni, Alessandra

    2004-08-01

    In logistics and industrial production, managers must deal with the impact of stochastic events to improve performance and reduce costs. In fact, production and logistics systems are generally designed by treating some parameters as deterministic. While this assumption is mostly used for preliminary prototyping, it is sometimes also retained during the final design stage, especially for estimated parameters (i.e., market request). The proposed methodology can determine the impact of stochastic events on the system by evaluating the chaotic threshold level. Such an approach, based on the application of a new and innovative methodology, can be implemented to find the conditions under which chaos makes the system uncontrollable. Starting from problem identification and risk assessment, several classification techniques are used to carry out an effect analysis and contingency-plan estimation. In this paper the authors illustrate the methodology with respect to a real industrial case: a production problem related to the logistics of distributed chemical processing.

  6. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  7. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided by the Web-based tool. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these eight selected statistics are provided for the streamgage.
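    The at-site flood-frequency idea described above (fitting a Pearson Type III distribution to the logarithms of annual peaks) can be sketched as follows; this plain maximum-likelihood fit omits the expected moments algorithm, regional skew weighting, and low-outlier screening used in the study, and the peak-discharge record is synthetic.

    ```python
    import numpy as np
    from scipy import stats

    def aep_discharge(annual_peaks, aep):
        """Fit a Pearson Type III distribution to log10 annual peak discharges
        and return the discharge with the given annual exceedance probability."""
        logq = np.log10(np.asarray(annual_peaks, dtype=float))
        skew, loc, scale = stats.pearson3.fit(logq)
        return 10.0 ** stats.pearson3.ppf(1.0 - aep, skew, loc=loc, scale=scale)

    # usage with a hypothetical 60-year annual peak-discharge record (cfs)
    rng = np.random.default_rng(3)
    peaks = 10.0 ** (3.0 + 0.3 * rng.standard_normal(60))
    print("1-percent AEP (100-year) discharge:", round(aep_discharge(peaks, 0.01)))
    ```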

  8. Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2005-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy as the Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
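    A minimal sketch of the tuning idea follows: a standard Kalman measurement update, then a blend between the unconstrained estimate and a box-constrained (clipped) one, weighted by how well the innovation agrees with its predicted covariance. It illustrates the concept only, not the paper's algorithm; the scalar health state, constraint bounds, and weighting rule are assumptions.

    ```python
    import numpy as np

    def kf_update(x, P, z, H, R):
        """Standard Kalman measurement update; returns the unconstrained state,
        covariance, innovation, and innovation covariance."""
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P, y, S

    def constrained_estimate(x_unc, y, S, lo, hi):
        """Blend the unconstrained estimate with a clipped one; the weight is
        driven by a normalized innovation statistic (about 1 when consistent)."""
        nis = float(y @ np.linalg.inv(S) @ y) / len(y)
        w = np.exp(-max(nis - 1.0, 0.0))   # high confidence -> follow unconstrained
        return w * x_unc + (1.0 - w) * np.clip(x_unc, lo, hi)

    # toy usage: one health parameter measured directly, constrained to [0, 1]
    x, P = np.array([0.9]), np.array([[0.04]])
    H, R = np.array([[1.0]]), np.array([[0.01]])
    x_unc, P, y, S = kf_update(x, P, np.array([1.2]), H, R)
    print(constrained_estimate(x_unc, y, S, lo=0.0, hi=1.0))
    ```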

  9. Parameter estimation and actuator characteristics of hybrid magnetic bearings for axial flow blood pump applications.

    PubMed

    Lim, Tau Meng; Cheng, Shanbao; Chua, Leok Poh

    2009-07-01

    Axial flow blood pumps are generally smaller than centrifugal pumps. This is very beneficial because they can provide a better anatomical fit in the chest cavity, as well as lower the risk of infection. This article discusses the design, levitated responses, and parameter estimation of the dynamic characteristics of a compact hybrid magnetic bearing (HMB) system for axial flow blood pump applications. The rotor/impeller of the pump is driven by a three-phase permanent magnet brushless and sensorless motor. It is levitated by two HMBs at both ends in five degrees of freedom with proportional-integral-derivative controllers, among which four radial directions are actively controlled and one axial direction is passively controlled. A frequency-domain parameter estimation technique with statistical analysis is adopted to validate the stiffness and damping coefficients of the HMB system. A specially designed test rig facilitated the estimation of the bearing's coefficients in air, in both the radial and axial directions. Experimental estimation showed that the dynamic characteristics of the HMB system are dominated by the frequency-dependent stiffness coefficients. By injecting a multifrequency excitation force signal onto the rotor through the HMBs, the experimental results show that the maximum-displacement linear operating range is 20% of the static eccentricity with respect to the rotor and stator gap clearance. The actuator gain was also successfully calibrated, which may allow the parameter estimation technique developed in this study to be extended to the identification and monitoring of the pump's dynamic properties under normal operating conditions with fluid.

  10. Foot placement relies on state estimation during visually guided walking.

    PubMed

    Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S

    2017-02-01

    As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping relationship between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately place our foot on the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.

  11. Ground-water discharge determined from estimates of evapotranspiration, Death Valley regional flow system, Nevada and California

    USGS Publications Warehouse

    Laczniak, Randell J.; Smith, J. LaRue; Elliott, Peggy E.; DeMeo, Guy A.; Chatigny, Melissa A.; Roemer, Gaius J.

    2001-01-01

    The Death Valley regional flow system (DVRFS) is one of the larger ground-water flow systems in the southwestern United States and includes much of southern Nevada and the Death Valley region of eastern California. Centrally located within the ground-water flow system is the Nevada Test Site (NTS). The NTS, a large tract covering about 1,375 square miles, historically has been used for testing nuclear devices and currently is being studied as a potential repository for the long-term storage of high-level nuclear waste generated in the United States. The U.S. Department of Energy, as mandated by Federal and State regulators, is evaluating the risk associated with contaminants that have been or may be introduced into the subsurface as a consequence of any past or future activities at the NTS. Because subsurface contaminants can be transported away from the NTS by ground water, components of the ground-water budget are of great interest. One such component is regional ground-water discharge. Most of the ground water leaving the DVRFS is limited to local areas where geologic and hydrologic conditions force ground water upward toward the surface to discharge at springs and seeps. Available estimates of ground-water discharge are based primarily on early work done as part of regional reconnaissance studies. These early efforts covered large, geologically complex areas and often applied substantially different techniques to estimate ground-water discharge. This report describes the results of a study that provides more consistent, accurate, and scientifically defensible measures of regional ground-water losses from each of the major discharge areas of the DVRFS. Estimates of ground-water discharge presented in this report are based on a rigorous quantification of local evapotranspiration (ET). The study identifies areas of ongoing ground-water ET, delineates different ET areas based on similarities in vegetation and soil-moisture conditions, and determines an ET rate for each delineated area. Each area, referred to as an ET unit, generally consists of one or more assemblages of local phreatophytes or a unique moist soil environment. Ten ET units are identified throughout the DVRFS based on differences in spectral-reflectance characteristics. Spectral differences are determined from satellite imagery acquired June 21, 1989, and June 13, 1992. The units identified include areas of open playa, moist bare soils, sparse to dense vegetation, and open water. ET rates estimated for each ET unit range from a few tenths of a foot per year for open playa to nearly 9 feet per year for open water. Mean annual ET estimates are computed for each discharge area by summing estimates of annual ET from each ET unit within a discharge area. The estimate of annual ET from each ET unit is computed as the product of an ET unit's acreage and estimated ET rate. Estimates of mean annual ET range from 450 acre-feet in the Franklin Well area to 30,000 acre-feet in Sarcobatus Flat. Ground-water discharge is estimated as annual ET minus that part of ET attributed to local precipitation. Mean annual ground-water discharge estimates range from 350 acre-feet in the Franklin Well area to 18,000 acre-feet in Ash Meadows. Generally, these estimates are greater for the northern discharge areas (Sarcobatus Flat and Oasis Valley) and less for the southern discharge areas (Franklin Lake, Shoshone area, and Tecopa/ California Valley area) than those previously reported.
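    The bookkeeping behind these figures (annual ET as the product of each ET unit's acreage and rate, summed and then reduced by precipitation-supported ET) is simple enough to sketch; every acreage, rate, and the precipitation term below is invented for illustration and is not a value from the report.

    ```python
    # Hypothetical ET units for one discharge area: name -> (acres, ET rate in ft/yr)
    et_units = {
        "open water":       (120, 8.8),
        "dense vegetation": (900, 3.5),
        "moist bare soil": (1500, 1.2),
        "open playa":      (4000, 0.3),
    }
    precip_supported_et = 2500  # acre-feet/yr of ET met by local precipitation

    annual_et = sum(acres * rate for acres, rate in et_units.values())  # acre-ft/yr
    groundwater_discharge = annual_et - precip_supported_et
    print(f"annual ET: {annual_et:.0f} acre-ft; discharge: {groundwater_discharge:.0f} acre-ft")
    ```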

  12. Comparing cancer screening estimates: Behavioral Risk Factor Surveillance System and National Health Interview Survey.

    PubMed

    Sauer, Ann Goding; Liu, Benmei; Siegel, Rebecca L; Jemal, Ahmedin; Fedewa, Stacey A

    2018-01-01

    Cancer screening prevalence from the Behavioral Risk Factor Surveillance System (BRFSS), designed to provide state-level estimates, and the National Health Interview Survey (NHIS), designed to provide national estimates, are used to measure progress in cancer control. The extent to which recent cancer screening estimates vary by key demographic characteristics has not been previously described in detail. We examined national prevalence estimates for recommended breast, cervical, and colorectal cancer screening using data from the 2012 and 2014 BRFSS and the 2010 and 2013 NHIS. Treating the NHIS estimates as the reference, direct differences (DD) were calculated by subtracting NHIS estimates from BRFSS estimates. Relative differences were computed by dividing the DD by the NHIS estimates. Two-sample, two-tailed t-tests were performed to test for statistically significant differences. BRFSS screening estimates were higher than those from NHIS for breast (78.4% versus 72.5%; DD=5.9%, p<0.0001); colorectal (65.5% versus 57.6%; DD=7.9%, p<0.0001); and cervical (83.4% versus 81.8%; DD=1.6%, p<0.0001) cancers. DDs were generally higher in racial/ethnic minorities than in whites, in the least educated than in the most educated persons, and in uninsured than in insured persons. For example, the colorectal cancer screening DD for whites was 7.3% compared to ≥8.9% for blacks and Hispanics. Despite higher prevalence estimates in BRFSS compared to NHIS, each survey has a unique and important role in providing information to track cancer screening utilization among various populations. Awareness of these differences and their potential causes is important when comparing the surveys and determining the best application for each data source. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. High-resolution photo-mosaic time-series imagery for monitoring human use of an artificial reef.

    PubMed

    Wood, Georgina; Lynch, Tim P; Devine, Carlie; Keller, Krystle; Figueira, Will

    2016-10-01

    Successful marine management relies on understanding patterns of human use. However, obtaining data can be difficult and expensive given the widespread and variable nature of activities conducted. Remote camera systems are increasingly used to overcome cost limitations of conventional labour-intensive methods. Still, most systems face trade-offs between the spatial extent and resolution over which data are obtained, limiting their application. We trialed a novel methodology, CSIRO Ruggedized Autonomous Gigapixel System (CRAGS), for time series of high-resolution photo-mosaic (HRPM) imagery to estimate fine-scale metrics of human activity at an artificial reef located 1.3 km from shore. We compared estimates obtained using the novel system to those produced with a web camera that concurrently monitored the site. We evaluated the effect of day type (weekday/weekend) and time of day on each of the systems and compared to estimates obtained from binocular observations. In general, both systems delivered similar estimates for the number of boats observed and to those obtained by binocular counts; these results were also unaffected by the type of day (weekend vs. weekday). CRAGS was able to determine additional information about the user type and party size that was not possible with the lower resolution webcam system. However, there was an effect of time of day as CRAGS suffered from poor image quality in early morning conditions as a result of fixed camera settings. Our field study provides proof of concept of use of this new cost-effective monitoring tool for the remote collection of high-resolution large-extent data on patterns of human use at high temporal frequency.

  14. 7 CFR 58.135 - Bacterial estimate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., GENERAL SPECIFICATIONS FOR APPROVED PLANTS AND STANDARDS FOR GRADES OF DAIRY PRODUCTS 1 General Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate...

  15. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.

    2017-07-01

    Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the maximum likelihood estimator, by using a continuum set of moment conditions in the GMM framework. However, this computation can take a very long time because it requires optimizing a regularization parameter. These calculations are typically processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are identified in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
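    The outer-loop strategy in the paper parallelizes the search over the regularization parameter with OpenMP; as a language-neutral illustration, the sketch below does the analogous thing with a Python process pool, and the toy objective stands in for the expensive C-GMM criterion.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def objective_for_alpha(alpha):
        """Placeholder for the costly inner C-GMM evaluation performed for one
        candidate regularization parameter (a toy quadratic, not the real criterion)."""
        grid = np.linspace(0.0, 1.0, 200_000)   # stand-in inner work
        return alpha, float(np.mean((grid - alpha) ** 2))

    if __name__ == "__main__":
        alphas = np.linspace(0.01, 1.0, 32)     # outer loop over the regularization
        with ProcessPoolExecutor() as pool:     # parameter, evaluated in parallel
            results = list(pool.map(objective_for_alpha, alphas))
        best_alpha, _ = min(results, key=lambda r: r[1])
        print("selected regularization parameter:", best_alpha)
    ```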

  16. Study of a homotopy continuation method for early orbit determination with the Tracking and Data Relay Satellite System (TDRSS)

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Huang, C.

    1986-01-01

    A recent mathematical technique for solving systems of equations is applied in a very general way to the orbit determination problem. The study of this technique, the homotopy continuation method, was motivated by the possible need to perform early orbit determination with the Tracking and Data Relay Satellite System (TDRSS), using range and Doppler tracking alone. Basically, a set of six tracking observations is continuously transformed from a set with known solution to the given set of observations with unknown solutions, and the corresponding orbit state vector is followed from the a priori estimate to the solutions. A numerical algorithm for following the state vector is developed and described in detail. Numerical examples using both real and simulated TDRSS tracking are given. A prototype early orbit determination algorithm for possible use in TDRSS orbit operations was extensively tested, and the results are described. Preliminary studies of two extensions of the method are discussed: generalization to a least-squares formulation and generalization to an exhaustive global method.
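
    A minimal sketch of the continuation idea using a Newton homotopy, which deforms a system with a known solution into the target system while the root is tracked; the toy two-equation system below is purely illustrative and is not the TDRSS orbit determination problem.

      # Newton homotopy H(x, t) = F(x) - (1 - t) * F(x0):
      # at t = 0 the known point x0 is a root; at t = 1 we recover the target system F(x) = 0.
      import numpy as np
      from scipy.optimize import root


      def F(x):
          # Toy nonlinear system standing in for the observation equations.
          return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                           np.exp(x[0]) + x[1] - 1.0])


      x0 = np.array([1.0, 1.0])          # a priori estimate (known root of the deformed system)
      F0 = F(x0)
      x = x0.copy()
      for t in np.linspace(0.0, 1.0, 21):            # follow the root as t goes from 0 to 1
          sol = root(lambda z: F(z) - (1.0 - t) * F0, x)
          x = sol.x                                   # warm-start the next continuation step
      print("continuation solution:", x, "residual:", F(x))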

  17. An introduction to analyzing dichotomous outcomes in a longitudinal setting: a NIDRR traumatic brain injury model systems communication.

    PubMed

    Pretz, Christopher R; Ketchum, Jessica M; Cuthbert, Jeffery P

    2014-01-01

    An untapped wealth of temporal information is captured within the Traumatic Brain Injury Model Systems National Database. Utilization of appropriate longitudinal analyses can provide an avenue toward unlocking the value of this information. This article highlights two statistical methods for assessing change over time when noncontinuous outcomes are of interest, focusing on dichotomous responses. Specifically, the intent of this article is to familiarize the rehabilitation community with the application of generalized estimating equations and generalized linear mixed models as used in longitudinal studies. An introduction to each method is provided, and similarities and differences between the two are discussed. In addition, to reinforce the ideas and concepts embodied in each approach, we illustrate each method using examples based on data from the Rocky Mountain Regional Brain Injury System.

  18. Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh

    NASA Astrophysics Data System (ADS)

    Mortuza, M. R.; Demissie, Y.; Li, H. Y.

    2014-12-01

    The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and the associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimates of the frequency of occurrence of such extreme precipitation events are thus important for designing hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to 1-day, 2-day and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools information from climatologically similar sites to make reliable quantile estimates, provided that the pooling group is homogeneous and of reasonable size. We used the region of influence (ROI) approach along with an L-moment-based homogeneity measure to identify the homogeneous pooling group for each site. Five 3-parameter distributions (Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are considered for a thorough selection of models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using Bayesian model averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as to explore spatio-temporal variations of extreme precipitation and the associated risk.
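
    A minimal sketch of at-site GEV fitting via sample L-moments (with Hosking's approximation for the shape parameter); the study's regional pooling, ROI grouping, and uncertainty quantification are not shown, and the synthetic annual-maximum series is only a placeholder.

      # Fit a GEV distribution to an annual-maximum series by the method of L-moments
      # (Hosking & Wallis). Quantiles then follow from the fitted parameters.
      import numpy as np
      from scipy.special import gamma


      def sample_l_moments(x):
          x = np.sort(np.asarray(x, dtype=float))
          n = len(x)
          j = np.arange(1, n + 1)
          b0 = x.mean()
          b1 = np.sum((j - 1) / (n - 1) * x) / n
          b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
          l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
          return l1, l2, l3 / l2                      # mean, L-scale, L-skewness


      def gev_from_l_moments(l1, l2, t3):
          c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
          k = 7.8590 * c + 2.9554 * c ** 2            # Hosking's approximation for the shape
          alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
          xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
          return xi, alpha, k


      def gev_quantile(F, xi, alpha, k):
          return xi + alpha * (1.0 - (-np.log(F)) ** k) / k


      rng = np.random.default_rng(1)
      annual_max = 60.0 + 25.0 * rng.gumbel(size=50)  # placeholder 1-day annual maxima (mm)
      xi, alpha, k = gev_from_l_moments(*sample_l_moments(annual_max))
      for T in (10, 50, 100):                          # return periods in years
          print(f"{T:>4}-yr rainfall is about {gev_quantile(1 - 1 / T, xi, alpha, k):.1f} mm")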

  19. Determining the folding and binding free energy of DNA-based nanodevices and nanoswitches using urea titration curves

    PubMed Central

    Idili, Andrea

    2017-01-01

    Abstract DNA nanotechnology takes advantage of the predictability of DNA interactions to build complex DNA-based functional nanoscale structures. However, when DNA functional and responsive units based on non-canonical DNA interactions are employed, it becomes quite challenging to predict, understand and control their thermodynamics. In response to this limitation, here we demonstrate the use of isothermal urea titration experiments to estimate the free energy involved in a set of DNA-based systems ranging from unimolecular DNA-based nanoswitches to more complex DNA folds (e.g. aptamers) and nanodevices. We propose a set of fitting equations that allow analysis of the urea titration curves of these DNA responsive units based on Watson–Crick and non-canonical interactions (stem-loop, G-quadruplex and triplex structures) and correct estimation of their folding and binding free energies under different experimental conditions. The results described herein will pave the way toward the use of urea titration experiments in the field of DNA nanotechnology to achieve easier and more reliable thermodynamic characterization of DNA-based functional responsive units. More generally, our results will be useful for characterizing other complex supramolecular systems based on different biopolymers. PMID:28605461
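
    A minimal sketch of fitting an isothermal urea melt with a generic two-state linear-extrapolation model (folded and unfolded baselines plus dG([urea]) = dG_H2O - m*[urea]); the paper's own fitting equations for bimolecular and triplex systems are more elaborate, and the synthetic data below are placeholders.

      # Two-state linear-extrapolation fit of a urea titration curve with curve_fit.
      import numpy as np
      from scipy.optimize import curve_fit

      RT = 0.593  # kcal/mol at 25 C


      def two_state(urea, a_f, b_f, a_u, b_u, dG_H2O, m):
          K = np.exp(-(dG_H2O - m * urea) / RT)       # unfolding equilibrium constant
          f_u = K / (1.0 + K)                         # fraction unfolded
          return (a_f + b_f * urea) * (1.0 - f_u) + (a_u + b_u * urea) * f_u


      # Placeholder data: fluorescence signal vs urea concentration (M).
      urea = np.linspace(0.0, 8.0, 25)
      rng = np.random.default_rng(2)
      signal = two_state(urea, 1.0, -0.01, 0.2, 0.005, 5.0, 1.2) + rng.normal(0, 0.01, urea.size)

      p0 = [1.0, 0.0, 0.2, 0.0, 4.0, 1.0]             # rough initial guesses
      popt, pcov = curve_fit(two_state, urea, signal, p0=p0)
      print(f"dG_H2O = {popt[4]:.2f} kcal/mol, m = {popt[5]:.2f} kcal/mol/M")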

  20. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a single disease-related target than to phenotypic data of the drug molecule against the disease system as a whole, and are therefore often less effective for discovering drugs used to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct importance of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than complete inhibition of a single target. We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological steps in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds that combines network efficiency analysis with scoring functions from molecular docking.
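
    A minimal sketch of the network-efficiency side of the approach, assuming a toy undirected reaction graph: global efficiency is computed with networkx, and the fragility of each node is scored as the drop in efficiency when that node is removed. The node names and edges are placeholders, not the authors' clotting-cascade network, and the docking-score combination step is not shown.

      # Rank nodes of a toy reaction network by the loss of global efficiency
      # caused by their removal (a simple fragility measure).
      import networkx as nx

      # Placeholder cascade-like graph; the real analysis uses the human clotting cascade.
      edges = [("TF", "VIIa"), ("VIIa", "Xa"), ("IXa", "Xa"), ("VIIIa", "IXa"),
               ("Xa", "IIa"), ("Va", "IIa"), ("IIa", "Fibrin")]
      G = nx.Graph(edges)

      base = nx.global_efficiency(G)
      fragility = {}
      for node in G.nodes:
          H = G.copy()
          H.remove_node(node)
          fragility[node] = base - nx.global_efficiency(H)

      for node, drop in sorted(fragility.items(), key=lambda kv: -kv[1]):
          print(f"{node:>7}: efficiency drop {drop:.3f}")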

  1. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a single disease-related target than to phenotypic data of the drug molecule against the disease system as a whole, and are therefore often less effective for discovering drugs used to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct importance of different targets in a biological process and the view that partial inhibition of several targets can be more efficient than complete inhibition of a single target. Methodology We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological steps in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds that combines network efficiency analysis with scoring functions from molecular docking. PMID:21445339

  2. Prevalence of chronic medical conditions among inmates in the Texas prison system.

    PubMed

    Harzke, Amy J; Baillargeon, Jacques G; Pruitt, Sandi L; Pulvino, John S; Paar, David P; Kelley, Michael F

    2010-05-01

    Given the rapid growth and aging of the US prison population in recent years, the disease profile and health care needs of inmates have far-reaching public health implications. Although numerous studies have examined infectious disease prevalence and treatment in incarcerated populations, little is known about the prevalence of non-infectious chronic medical conditions in US prison populations. The purpose of this study was to estimate the prevalence of selected non-infectious chronic medical conditions among inmates in the Texas prison system. The study population consisted of the total census of inmates who were incarcerated in the Texas Department of Criminal Justice for any duration from September 1, 2006 through August 31, 2007 (N=234,031). Information on medical diagnoses was obtained from a system-wide electronic medical record system. Overall crude prevalence estimates for the selected conditions were as follows: hypertension, 18.8%; asthma, 5.4%; diabetes, 4.2%; ischemic heart disease, 1.7%; chronic obstructive pulmonary disease, 0.96%; and cerebrovascular disease, 0.23%. Nearly one quarter (24.5%) of the study population had at least one of the selected conditions. Except for asthma, crude prevalence estimates of the selected conditions increased monotonically with age. Nearly two thirds (64.6%) of inmates who were 55 years of age or older had at least one of the selected conditions. Except for diabetes, crude prevalence estimates for the selected conditions were lower among Hispanic inmates than among non-Hispanic White inmates and African American inmates. Although age-standardized prevalence estimates for the selected conditions did not appear to exceed age-standardized estimates from the US general population, a large number of inmates were affected by one or more of these conditions. As the prison population continues to grow and to age, the burden of these conditions on correctional and community health care systems can be expected to increase.

  3. The temporal relationship between drug supply indicators: an audit of international government surveillance systems.

    PubMed

    Werb, Dan; Kerr, Thomas; Nosyk, Bohdan; Strathdee, Steffanie; Montaner, Julio; Wood, Evan

    2013-09-30

    Illegal drug use continues to be a major threat to community health and safety. We used international drug surveillance databases to assess the relationship between multiple long-term estimates of illegal drug price and purity. We systematically searched for longitudinal measures of illegal drug supply indicators to assess the long-term impact of enforcement-based supply reduction interventions. Data from identified illegal drug surveillance systems were analysed using an a priori defined protocol in which we sought to present annual estimates beginning in 1990. Data were then subjected to trend analyses. Data were obtained from government surveillance systems assessing price, purity and/or seizure quantities of illegal drugs; systems with at least 10 years of longitudinal data assessing price, purity/potency or seizures were included. We identified seven regional/international metasurveillance systems with longitudinal measures of price or purity/potency that met eligibility criteria. In the USA, the average inflation-adjusted and purity-adjusted prices of heroin, cocaine and cannabis decreased by 81%, 80% and 86%, respectively, between 1990 and 2007, whereas average purity increased by 60%, 11% and 161%, respectively. Similar trends were observed in Europe, where during the same period the average inflation-adjusted price of opiates and cocaine decreased by 74% and 51%, respectively. In Australia, the average inflation-adjusted price of cocaine decreased 14%, while the inflation-adjusted price of heroin and cannabis both decreased 49% between 2000 and 2010. During this time, seizures of these drugs in major production regions and major domestic markets generally increased. With few exceptions and despite increasing investments in enforcement-based supply reduction efforts aimed at disrupting global drug supply, illegal drug prices have generally decreased while drug purity has generally increased since 1990. These findings suggest that expanding efforts at controlling the global illegal drug market through law enforcement are failing.

  4. A normative price for energy from an electricity generation system: An Owner-dependent Methodology for Energy Generation (system) Assessment (OMEGA). Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Mcmaster, K. M.

    1981-01-01

    The utility-owned solar electric system methodology is generalized and updated. The net present value of the system is determined by consideration of all financial benefits and costs (including a specified return on investment). Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero. While the model was designed for photovoltaic generators with a possible thermal energy byproduct, its applicability is not limited to such systems. The resulting owner-dependent methodology for energy generation system assessment consists of a few equations that can be evaluated without the aid of a high-speed computer.
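
    A minimal sketch of the break-even idea: write the net present value as discounted revenues minus discounted costs and solve NPV = 0 for the energy price. All figures are placeholders, and the OMEGA methodology's actual cash-flow terms (taxes, return on investment, residual value) are not reproduced.

      # Break-even energy price: solve NPV(price) = 0 with a scalar root finder.
      import numpy as np
      from scipy.optimize import brentq

      years = np.arange(1, 21)                 # 20-year system life (placeholder)
      energy = np.full(years.size, 2.0e6)      # kWh delivered per year (placeholder)
      om_cost = np.full(years.size, 30_000.0)  # annual O&M cost, $ (placeholder)
      capital = 1.5e6                          # up-front capital cost, $ (placeholder)
      rate = 0.08                              # discount rate reflecting the required return


      def npv(price):
          disc = (1.0 + rate) ** years
          return np.sum((price * energy - om_cost) / disc) - capital


      breakeven = brentq(npv, 0.0, 10.0)       # $/kWh bracket known to contain the root
      print(f"break-even energy price is about ${breakeven:.3f}/kWh")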

  5. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements carry information about the parameter estimates for up to three hours. These observations have been verified by parameter estimation of the minimal model. The standard errors of the estimates and a crude Monte Carlo process also confirm these observations. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. The Effectiveness of Child Restraint Systems for Children Aged 3 Years or Younger During Motor Vehicle Collisions: 1996 to 2005

    PubMed Central

    Anderson, Craig L.

    2009-01-01

    Objectives. We estimated the effectiveness of child restraints in preventing death during motor vehicle collisions among children 3 years or younger. Methods. We conducted a matched cohort study using Fatality Analysis Reporting System data from 1996 to 2005. We estimated death risk ratios using conditional Poisson regression, bootstrapping, multiple imputation, and a sensitivity analysis of misclassification bias. We examined possible effect modification by selected factors. Results. The estimated death risk ratios comparing child safety seats with no restraint were 0.27 (95% confidence interval [CI] = 0.21, 0.34) for infants, 0.24 (95% CI = 0.19, 0.30) for children aged 1 year, 0.40 (95% CI = 0.32, 0.51) for those aged 2 years, and 0.41 (95% CI = 0.33, 0.52) for those aged 3 years. Estimated safety seat effectiveness was greater during rollover collisions, in rural environments, and in light trucks. We estimated seat belts to be as effective as safety seats in preventing death for children aged 2 and 3 years. Conclusions. Child safety seats are highly effective in reducing the risk of death during severe traffic collisions and generally outperform seat belts. Parents should be encouraged to use child safety seats in favor of seat belts. PMID:19059860

  7. Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.

    PubMed

    Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H

    2006-11-15

    Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.

  8. Brucellosis Seropositivity in Animals and Humans in Ethiopia: A Meta-analysis

    PubMed Central

    Tadesse, Getachew

    2016-01-01

    Background The objectives of this study were to assess the heterogeneity of estimates and to estimate the seroprevalence of brucellosis in animals and humans in Ethiopia. Methods/Principal findings Data from 70 studies covering 75879 animals and 2223 humans were extracted. The Rose Bengal Plate Test (RBPT) and Complement Fixation Test (CFT) in series were the most frequently used serological tests. A random effects model was used to calculate pooled prevalence estimates. The overall true prevalences of brucellosis seropositivity in goats and sheep were estimated at 5.3% (95%CI = 3.5, 7.5) and 2.7% (95%CI = 1.8, 3.4), respectively, and at 2.9% for each of camels and cattle. The prevalence was higher in post-pubertal than in pre-pubertal animals (OR = 3.1, 95% CI = 2.6, 3.7) and in the pastoral than in the mixed crop-livestock production system (OR = 2.8, 95%CI = 2.5, 3.2). The incidence rates of brucellosis in humans of pastoral and sedentary system origins were estimated at 160 and 28 per 100 000 person-years, respectively. Conclusions The seroprevalence of brucellosis is higher in goats than in other species. Its occurrence underscores its importance in the country in general and in the pastoral system in particular. Public awareness creation could reduce the transmission of Brucella spp. from animals to humans, and the potential of livestock vaccination as a means of control of brucellosis needs to be assessed. PMID:27792776
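
    A minimal sketch of random-effects pooling of prevalence estimates with the DerSimonian-Laird estimator, applied to raw proportions with binomial variances; the study's handling of test sensitivity and specificity (true prevalence) and any transformations are not reproduced, and the three studies below are placeholders.

      # DerSimonian-Laird random-effects pooling of seroprevalence estimates.
      import numpy as np

      # Placeholder study data: positives and sample sizes.
      pos = np.array([40, 12, 55])
      n = np.array([600, 450, 900])

      p = pos / n
      v = p * (1 - p) / n                      # within-study variance of each proportion

      w = 1.0 / v                              # fixed-effect (inverse-variance) weights
      p_fixed = np.sum(w * p) / np.sum(w)
      Q = np.sum(w * (p - p_fixed) ** 2)       # Cochran's Q heterogeneity statistic
      k = len(p)
      tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

      w_re = 1.0 / (v + tau2)                  # random-effects weights
      p_re = np.sum(w_re * p) / np.sum(w_re)
      se_re = np.sqrt(1.0 / np.sum(w_re))
      print(f"pooled prevalence = {100 * p_re:.2f}% "
            f"(95% CI {100 * (p_re - 1.96 * se_re):.2f}-{100 * (p_re + 1.96 * se_re):.2f}%)")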

  9. Brucellosis Seropositivity in Animals and Humans in Ethiopia: A Meta-analysis.

    PubMed

    Tadesse, Getachew

    2016-10-01

    The objectives of this study were to assess the heterogeneity of estimates and to estimate the seroprevalence of brucellosis in animals and humans in Ethiopia. Data from 70 studies covering 75879 animals and 2223 humans were extracted. The Rose Bengal Plate Test (RBPT) and Complement Fixation Test (CFT) in series were the most frequently used serological tests. A random effects model was used to calculate pooled prevalence estimates. The overall true prevalences of brucellosis seropositivity in goats and sheep were estimated at 5.3% (95%CI = 3.5, 7.5) and 2.7% (95%CI = 1.8, 3.4), respectively, and at 2.9% for each of camels and cattle. The prevalence was higher in post-pubertal than in pre-pubertal animals (OR = 3.1, 95% CI = 2.6, 3.7) and in the pastoral than in the mixed crop-livestock production system (OR = 2.8, 95%CI = 2.5, 3.2). The incidence rates of brucellosis in humans of pastoral and sedentary system origins were estimated at 160 and 28 per 100 000 person-years, respectively. The seroprevalence of brucellosis is higher in goats than in other species. Its occurrence underscores its importance in the country in general and in the pastoral system in particular. Public awareness creation could reduce the transmission of Brucella spp. from animals to humans, and the potential of livestock vaccination as a means of control of brucellosis needs to be assessed.

  10. Application of the precipitation-runoff modeling system to the Ah- shi-sle-pah Wash watershed, San Juan County, New Mexico

    USGS Publications Warehouse

    Hejl, H.R.

    1989-01-01

    The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and from 0.20 to 0.03 in/h for the two general soil groups in the calibrations. Simulated runoff volumes using 7 of the 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharges had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)

  11. Estimation of Instantaneous Gas Exchange in Flow-Through Respirometry Systems: A Modern Revision of Bartholomew's Z-Transform Method

    PubMed Central

    Pendar, Hodjat; Socha, John J.

    2015-01-01

    Flow-through respirometry systems provide accurate measurement of gas exchange over long periods of time. However, these systems have limitations in tracking rapid changes. When an animal infuses a metabolic gas into the respirometry chamber in a short burst, diffusion and airflow in the chamber gradually alter the original signal before it arrives at the gas analyzer. For single or multiple bursts, the recorded signal is smeared or mixed, which may result in dramatically altered recordings compared to the emitted signal. Recovering the original metabolic signal is a difficult task because of the inherent ill-conditioning of the problem. Here, we present two new methods to recover the fast dynamics of metabolic patterns from recorded data. We first re-derive the equations of the well-known Z-transform method (ZT method) to show the source of imprecision in this method. Then, we develop a new model of analysis for respirometry systems based on the experimentally determined impulse response, which is the response of the system to a very short unit input. As a result, we present a major modification of the ZT method (dubbed the 'EZT method') by using a new model for the impulse response, enhancing its precision in recovering the true metabolic signals. The second method, the generalized Z-transform (GZT) method, was then developed by generalizing the EZT method; it can be applied to any flow-through respirometry system with any arbitrary impulse response. Experiments verified that the accuracy of recovering the true metabolic signals is significantly improved by the new methods. These new methods can be used more broadly for input estimation in a variety of physiological systems. PMID:26466361
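
    A minimal sketch of the first-order washout correction that underlies the classic "instantaneous" (Z-transform) idea, assuming a single well-mixed chamber with volume V and flow rate f so that the recovered input is the measured trace plus tau times its time derivative, with tau = V/f; the paper's EZT and GZT methods instead use an experimentally measured impulse response, which is not reproduced here.

      # Recover a fast input signal from a smeared chamber measurement, assuming
      # first-order mixing: V * dC/dt = u(t) - f * C(t)  =>  u(t) = f * (C + tau * dC/dt).
      import numpy as np

      V = 0.5          # chamber volume, L (placeholder)
      f = 0.25         # flow rate, L/min (placeholder)
      tau = V / f
      t = np.linspace(0.0, 30.0, 3001)
      dt = t[1] - t[0]

      # Placeholder "true" input: two short gas bursts (arbitrary units).
      u_true = np.where((t > 5) & (t < 5.5), 1.0, 0.0) + np.where((t > 12) & (t < 12.5), 1.0, 0.0)

      # Forward-simulate the smeared chamber response C(t).
      C = np.zeros_like(t)
      for i in range(1, t.size):
          C[i] = C[i - 1] + dt * (u_true[i - 1] - f * C[i - 1]) / V

      # Invert: instantaneous estimate = f * (C + tau * dC/dt).
      u_est = f * (C + tau * np.gradient(C, t))
      print("peak of smeared trace  :", (f * C).max())
      print("peak of recovered input:", u_est.max())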

  12. Lateral eddy diffusivity estimates from simulated and observed drifter trajectories: a case study for the Agulhas Current system

    NASA Astrophysics Data System (ADS)

    Rühs, Siren; Zhurbas, Victor; Durgadoo, Jonathan V.; Biastoch, Arne

    2017-04-01

    The Lagrangian description of fluid motion by sets of individual particle trajectories is extensively used to characterize connectivity between distinct oceanic locations. One important factor influencing the connectivity is the average rate of particle dispersal, generally quantified as Lagrangian diffusivity. In addition to Lagrangian observing programs, Lagrangian analyses are performed by advecting particles with the simulated flow field of ocean general circulation models (OGCMs). However, depending on the spatio-temporal model resolution, not all scale-dependent processes are explicitly resolved in the simulated velocity fields. Consequently, the dispersal of advective Lagrangian trajectories has been assumed not to be sufficiently diffusive compared to observed particle spreading. In this study we present a detailed analysis of the spatially variable lateral eddy diffusivity characteristics of advective drifter trajectories simulated with realistically forced OGCMs and compare them with estimates based on observed drifter trajectories. The extended Agulhas Current system around South Africa, known for its intricate mesoscale dynamics, serves as a test case. We show that a state-of-the-art eddy-resolving OGCM indeed features theoretically derived dispersion characteristics for diffusive regimes and realistically represents Lagrangian eddy diffusivity characteristics obtained from observed surface drifter trajectories. The estimates for the maximum and asymptotic lateral single-particle eddy diffusivities obtained from the observed and simulated drifter trajectories show a good agreement in their spatial pattern and magnitude. We further assess the sensitivity of the simulated lateral eddy diffusivity estimates to the temporal and lateral OGCM output resolution and examine the impact of the different eddy diffusivity characteristics on the Lagrangian connectivity between the Indian Ocean and the South Atlantic.
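
    A minimal sketch of a single-particle (absolute) lateral eddy diffusivity estimate from drifter trajectories, taken as one quarter of the time derivative of the ensemble variance of residual displacements in two dimensions; the study's pseudo-track generation from OGCM velocities and its regional binning are not shown, and the random-walk trajectories below are placeholders.

      # Lateral single-particle eddy diffusivity from an ensemble of trajectories:
      # for 2-D isotropic diffusion <|x'(t)|^2> = 4*K*t, so K = 0.25 * d<|x'(t)|^2>/dt,
      # where x' is the displacement after removing the ensemble-mean drift.
      import numpy as np

      rng = np.random.default_rng(3)
      n_traj, n_t, dt = 500, 200, 3600.0            # 500 drifters, hourly positions (s)
      K_true = 1.0e3                                 # m^2/s used to synthesize placeholder tracks

      # Random-walk placeholder trajectories with a uniform mean drift.
      steps = rng.normal(0.0, np.sqrt(2.0 * K_true * dt), size=(n_traj, n_t, 2))
      drift = np.array([0.1, 0.05]) * dt             # mean current, m per time step
      pos = np.cumsum(steps + drift, axis=1)

      disp = pos - pos.mean(axis=0, keepdims=True)   # residual displacement x'(t)
      var = (disp ** 2).sum(axis=2).mean(axis=0)     # <|x'(t)|^2> over the ensemble
      K_est = 0.25 * np.gradient(var, dt)
      print(f"estimated lateral diffusivity: {K_est[-50:].mean():.0f} m^2/s (true {K_true:.0f})")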

  13. Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation

    NASA Astrophysics Data System (ADS)

    Nyabeze, W. R.

    A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS) to integrate the definition, measurement, and calculation of parameter values for spatial features presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.

  14. Lifetime Prevalence of Investigating Child Maltreatment Among US Children.

    PubMed

    Kim, Hyunil; Wildeman, Christopher; Jonson-Reid, Melissa; Drake, Brett

    2017-02-01

    To estimate the lifetime prevalence of official investigations for child maltreatment among children in the United States, we used the National Child Abuse and Neglect Data System Child Files (2003-2014) and Census data to develop synthetic cohort life tables that estimate the cumulative prevalence of reported childhood maltreatment. We extend previous work, which explored only confirmed rates of maltreatment, and add new estimates of maltreatment by subtype, age, and ethnicity. We estimate that 37.4% of all children experience a child protective services investigation by age 18 years. Consistent with previous literature, we found a higher rate for African American children (53.0%) and the lowest rate for Asians/Pacific Islanders (10.2%). Child maltreatment investigations are more common than is generally recognized when viewed across the lifespan. Building on other recent work, our data suggest a critical need for increased preventive and treatment resources in the area of child maltreatment.
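
    A minimal sketch of the synthetic-cohort life-table calculation: age-specific rates of first investigation are treated as hazards, and the cumulative lifetime prevalence is one minus the product of the age-specific survival probabilities. The rates below are placeholders, not NCANDS values.

      # Synthetic-cohort life table: cumulative risk by age 18 from age-specific
      # first-investigation rates (events per child per year of age).
      import numpy as np

      # Placeholder age-specific first-investigation rates for ages 0..17.
      hazard = np.array([0.045, 0.032, 0.030, 0.028, 0.027, 0.026, 0.025, 0.024, 0.023,
                         0.022, 0.021, 0.020, 0.019, 0.018, 0.017, 0.016, 0.014, 0.012])

      survival = np.cumprod(1.0 - hazard)       # probability of no investigation through each age
      cumulative = 1.0 - survival
      for age in (0, 5, 12, 17):
          print(f"cumulative prevalence by end of age {age:>2}: {100 * cumulative[age]:.1f}%")
      print(f"lifetime (by age 18) prevalence: {100 * cumulative[-1]:.1f}%")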

  15. Estimation of nonpoint source loadings of phosphorus for lakes in the Puget Sound region, Washington

    USGS Publications Warehouse

    Gilliom, Robert J.

    1983-01-01

    Control of eutrophication of lakes in watersheds undergoing development is facilitated by estimates of the amounts of phosphorus (P) that reach the lakes from areas under various types of land use. Using a mass-balance model, the author calculated P loadings from present-day P concentrations measured in lake water and from other easily measured physical characteristics in a total of 28 lakes in drainage basins that contain only forest and residential land. The loadings from background sources (forest-land drainage and bulk precipitation) to each of the lakes were estimated by methods developed in a previous study. Differences between estimated present-day P loadings and loadings from background sources were attributed to changes in land use. The mean increase in annual P yield resulting from conversion of forest to residential land use was 7 kilograms per square kilometer, not including septic tank system contributions. Calculated loadings from septic systems were found to correlate best with the number of near-shore dwellings around each lake in 1940. The regression equation expressing this relationship explained 36 percent of the sample variance. There was no significant correlation between estimated septic tank system P loadings and number of dwellings present in 1960 or 1970. The evidence indicates that older systems might contribute more phosphorus to lakes than newer systems, and that there may be substantial time lags between septic system installation and significant impacts on lake-water P concentrations. For lakes in basins that contain agricultural land, the P loading attributable to agriculture can be calculated as the difference between the estimated total loading and the sum of estimated loadings from nonagricultural sources. A comprehensive system for evaluating errors in all loading estimates is presented. The empirical relationships developed allow preliminary approximations of the cumulative impact development has had on P loading and the amounts of P loading from generalized land-use categories for Puget Sound lowland lakes. In addition, the sensitivity of a lake to increased loading can be evaluated using the mass-balance model. The data required are presently available for most lakes. Estimates of P loading are useful in developing water-quality goals, setting priorities for lake studies, and designing studies of individual lakes. The suitability of a method for management of individual lakes will often be limited by relatively high levels of uncertainty, especially if the method is used to evaluate relatively small increases in P loading.
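
    A minimal sketch of back-calculating annual phosphorus loading from an observed lake concentration with a generic Vollenweider-type steady-state mass balance, L = P * z * (rho + sigma); the study's actual model, background-loading estimates, and septic-system regression are not reproduced, and all numbers are placeholders.

      # Steady-state lake phosphorus mass balance:
      #   P = L / (z * (rho + sigma))   =>   L = P * z * (rho + sigma)
      # P: lake P concentration (mg/m^3), z: mean depth (m),
      # rho: flushing rate (1/yr), sigma: settling rate (1/yr), L: areal loading (mg/m^2/yr).

      def areal_p_loading(p_mg_m3, mean_depth_m, flushing_per_yr, settling_per_yr):
          return p_mg_m3 * mean_depth_m * (flushing_per_yr + settling_per_yr)


      # Placeholder lake: 20 mg/m^3 P, 8 m mean depth, flushed 0.5 times/yr, sigma = 0.7/yr.
      total = areal_p_loading(20.0, 8.0, 0.5, 0.7)
      background = 90.0                              # placeholder background loading, mg/m^2/yr
      print(f"estimated total loading : {total:.0f} mg P per m^2 of lake per year")
      print(f"attributed to land use  : {total - background:.0f} mg P per m^2 per year")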

  16. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. SSA Sensor Calibration Best Practices

    NASA Astrophysics Data System (ADS)

    Johnson, T.

    Best practices for calibrating orbit determination sensors in general and space situational awareness (SSA) sensors in particular are presented. These practices were developed over the last ten years within AGI and most recently applied to over 70 sensors in AGI's Commercial Space Operations Center (ComSpOC) and the US Air Force Space Command (AFSPC) Space Surveillance Network (SSN) to evaluate and configure new sensors and perform on-going system calibration. They are generally applicable to any SSA sensor and leverage some unique capabilities of an SSA estimation approach using an optimal sequential filter and smoother. Real world results are presented and analyzed.

  18. LakeVOC; A Deterministic Model to Estimate Volatile Organic Compound Concentrations in Reservoirs and Lakes

    USGS Publications Warehouse

    Bender, David A.; Asher, William E.; Zogorski, John S.

    2003-01-01

    This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations was conducted to verify LakeVOC's representation of mixing, dilution, and gas exchange in a hypothetical lake, and two additional simulations of lake volume and MTBE concentrations were made for an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion of an actual reservoir when daily input parameters were used. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion. This may be because, as the time scale is increased from daily to weekly to monthly, the averaging of model inputs causes a loss of detail in the model estimates.
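
    A minimal sketch of the two-film air-water exchange flux used conceptually in models of this kind, assuming an overall water-side transfer velocity that combines the water-film and air-film resistances through a dimensionless Henry's law constant; the parameter values are placeholders, and the coupled epilimnion/hypolimnion equations solved by LakeVOC are not reproduced.

      # Two-film model of air-water VOC exchange:
      #   1/K_ol = 1/k_w + 1/(k_a * H)     (H = dimensionless Henry constant, C_air/C_water)
      #   flux out of the water column F = K_ol * (C_w - C_a / H)

      def voc_flux(c_water, c_air, k_w, k_a, H):
          K_ol = 1.0 / (1.0 / k_w + 1.0 / (k_a * H))   # overall transfer velocity, m/d
          return K_ol * (c_water - c_air / H)          # g/m^2/d when C is in g/m^3


      # Placeholder MTBE-like case: 5 ug/L in water, negligible concentration in air.
      flux = voc_flux(c_water=5.0e-3, c_air=0.0, k_w=1.0, k_a=300.0, H=0.02)
      print(f"volatilization flux is about {1e3 * flux:.2f} mg per m^2 per day")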

  19. Prediction of Flutter Boundary Using Flutter Margin for The Discrete-Time System

    NASA Astrophysics Data System (ADS)

    Dwi Saputra, Angga; Wibawa Purabaya, R.

    2018-04-01

    Flutter testing in a wind tunnel is generally conducted at subcritical speeds to avoid damage. Hence, the flutter speed has to be predicted from the behavior of stability criteria estimated as functions of dynamic pressure or flight speed, and a reliable method for predicting the flutter boundary is therefore important. This paper summarizes the flutter testing of a cantilever wing model in a subsonic wind tunnel. The model has two degrees of freedom: a bending mode and a torsion mode. The dynamic responses were measured by two accelerometers mounted on the leading edge and the center of the wing tip, and the measurements were repeated as the wind speed increased. The dynamic responses were used to determine the flutter margin parameter for the discrete-time system, and the flutter boundary of the model was estimated by extrapolating this parameter against the dynamic pressure. The flutter margin for the discrete-time system performs better for flutter prediction than the modal parameters. For a model with two degrees of freedom experiencing classical flutter, the flutter margin for the discrete-time system gives satisfactory predictions of the flutter boundary in a subsonic wind tunnel test.
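
    A minimal sketch of the extrapolation step: a flutter-margin-type stability parameter estimated at several subcritical dynamic pressures is fitted against q, and the flutter boundary is taken where the fit crosses zero. The values are placeholders, and the discrete-time flutter margin itself (computed from the identified modal parameters) is not derived here.

      # Extrapolate a stability parameter F(q) to the flutter boundary F = 0.
      import numpy as np

      q = np.array([0.8, 1.0, 1.2, 1.4, 1.6])        # dynamic pressure, kPa (placeholder)
      F = np.array([0.92, 0.78, 0.61, 0.47, 0.30])   # flutter margin estimates (placeholder)

      coef = np.polyfit(q, F, 1)                     # linear trend of the margin with q
      q_flutter = -coef[1] / coef[0]                 # root of slope*q + intercept = 0
      print(f"predicted flutter dynamic pressure is about {q_flutter:.2f} kPa")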

  20. Nonlinear Decoupling Control With ANFIS-Based Unmodeled Dynamics Compensation for a Class of Complex Industrial Processes.

    PubMed

    Zhang, Yajun; Chai, Tianyou; Wang, Hong; Wang, Dianhui; Chen, Xinkai

    2018-06-01

    Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops with heavy nonlinear nature. These make it very difficult to obtain an accurate model. As a result, the conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation is proposed in this paper for a class of complex industrial processes. At first, a nonlinear multivariable decoupling controller with UD compensation is introduced. Different from the existing methods, the decomposition estimation algorithm using ANFIS is employed to estimate the UD, and the desired estimating and decoupling control effects are achieved. Second, the proposed method does not require the complicated switching mechanism which has been commonly used in the literature. This significantly simplifies the obtained decoupling algorithm and its realization. Third, based on some new lemmas and theorems, the conditions on the stability and convergence of the closed-loop system are analyzed to show the uniform boundedness of all the variables. This is then followed by the summary on experimental tests on a heavily coupled nonlinear twin-tank system that demonstrates the effectiveness and the practicability of the proposed method.

  1. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
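
    A minimal sketch of the standard nudging term that the time-delay method generalizes, shown on the Lorenz-63 system with only the first component observed; the coupling gain, observation interval, and model are placeholders, and the shallow-water and time-delay machinery of the paper is not reproduced.

      # Standard nudging: d(x_est)/dt = f(x_est) + g * (y_obs - x_est) on observed components.
      import numpy as np
      from scipy.integrate import solve_ivp

      SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0


      def lorenz(t, s):
          x, y, z = s
          return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]


      # Generate a "truth" run and observations of the x component only.
      t_obs = np.arange(0.0, 20.0, 0.05)
      truth = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_obs, rtol=1e-8).y
      x_obs = truth[0]                                       # observed component


      def nudged(t, s, gain=10.0):
          dx, dy, dz = lorenz(t, s)
          x_o = np.interp(t, t_obs, x_obs)                   # interpolate the observation record
          return [dx + gain * (x_o - s[0]), dy, dz]


      est = solve_ivp(nudged, (0, 20), [-5.0, -5.0, 20.0], t_eval=t_obs, rtol=1e-8).y
      err = np.abs(est - truth).mean(axis=1)
      print("mean absolute error (x, y, z):", np.round(err, 3))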

  2. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Zhe; Rey, Daniel; Ye, Jingxin

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  3. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
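
    A minimal sketch contrasting the common variance-matching rescaling with a triple-collocation-based scaling factor estimated from covariances with a third independent product; the variable names and synthetic data are placeholders, and the talk's analytical optimality argument is not reproduced.

      # Rescale observations into a model's climatology before assimilation: compare the
      # common variance-matching factor with a triple-collocation (TC) based factor.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000
      truth = np.sin(np.linspace(0.0, 60.0, n)) + 0.3 * rng.standard_normal(n)

      model = 0.8 * truth + 0.25 * rng.standard_normal(n) + 0.10   # model estimate (placeholder)
      obs = 1.5 * truth + 0.80 * rng.standard_normal(n) - 0.30     # noisy satellite retrieval
      third = 1.1 * truth + 0.35 * rng.standard_normal(n)          # independent third product

      vm = model.std() / obs.std()                                 # variance matching (noise-inflated)
      tc = np.cov(model, third)[0, 1] / np.cov(obs, third)[0, 1]   # TC: signal-based ratio
      print(f"true signal ratio {0.8 / 1.5:.3f}, variance matching {vm:.3f}, triple collocation {tc:.3f}")

      obs_rescaled = model.mean() + tc * (obs - obs.mean())        # observations mapped into model space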

  4. Distribution of malaria exposure in endemic countries in Africa considering country levels of effective treatment.

    PubMed

    Penny, Melissa A; Maire, Nicolas; Bever, Caitlin A; Pemberton-Ross, Peter; Briët, Olivier J T; Smith, David L; Gething, Peter W; Smith, Thomas A

    2015-10-05

    Malaria prevalence, clinical incidence, treatment, and transmission rates are dynamically interrelated. Prevalence is often considered a measure of malaria transmission, but treatment of clinical malaria reduces prevalence, and consequently also infectiousness to the mosquito vector and onward transmission. The impact of the frequency of treatment on prevalence in a population is generally not considered. This can lead to potential underestimation of malaria exposure in settings with good health systems. Furthermore, these dynamical relationships between prevalence, treatment, and transmission have not generally been taken into account in estimates of burden. Using prevalence as an input, estimates of disease incidence and transmission [as the distribution of the entomological inoculation rate (EIR)] for Plasmodium falciparum have now been made for 43 countries in Africa using both empirical relationships (that do not allow for treatment) and OpenMalaria dynamic micro-simulation models (that explicitly include the effects of treatment). For each estimate, prevalence inputs were taken from geo-statistical models fitted for the year 2010 by the Malaria Atlas Project to all available observed prevalence data. National level estimates of the effectiveness of case management in treating clinical attacks were used as inputs to the estimation of both EIR and disease incidence by the dynamic models. When coverage of effective treatment is taken into account, higher country level estimates of average EIR and thus higher disease burden, are obtained for a given prevalence level, especially where access to treatment is high, and prevalence relatively low. These methods provide a unified framework for comparison of both the immediate and longer-term impacts of case management and of preventive interventions.

  5. Robust state estimation for uncertain fuzzy bidirectional associative memory networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.

    2013-09-01

    This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.

  6. Quantifying rainfall-derived inflow and infiltration in sanitary sewer systems based on conductivity monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo

    2018-03-01

    Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were successfully estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
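
    A minimal sketch of the conductivity mass-balance idea as a two-component end-member mixing calculation: observed flow is split into a wastewater component and a dilute RDII component from their conductivities. The paper's full method additionally decomposes RDI and RII using Fourier analysis of dry-weather patterns and least-squares fitting, which is not shown; all values are placeholders.

      # Two-component mixing: Q_obs = Q_ww + Q_rdii and
      # C_obs * Q_obs = C_ww * Q_ww + C_rdii * Q_rdii  =>  solve for Q_rdii.
      import numpy as np

      C_ww, C_rdii = 1200.0, 150.0   # conductivity of wastewater and of rain-derived water (uS/cm)

      # Placeholder wet-weather time series of observed flow (L/s) and conductivity (uS/cm).
      Q_obs = np.array([55.0, 70.0, 110.0, 160.0, 140.0, 95.0, 70.0])
      C_obs = np.array([1150.0, 1000.0, 760.0, 560.0, 620.0, 830.0, 1010.0])

      frac_rdii = (C_ww - C_obs) / (C_ww - C_rdii)     # volumetric fraction of RDII
      Q_rdii = frac_rdii * Q_obs
      Q_ww = Q_obs - Q_rdii
      print("RDII flow (L/s):", np.round(Q_rdii, 1))
      print("total RDII volume share:", f"{Q_rdii.sum() / Q_obs.sum():.1%}")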

  7. Investigation of air transportation technology at Princeton University, 1988-1989

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.

    1990-01-01

    The Air Transportation Technology Program at Princeton University, a program emphasizing graduate and undergraduate student research, proceeded along several avenues during the past year. A study of optimal trajectories for penetration of microbursts when encounter is unavoidable was conducted. The emphasis of current wind shear research is on developing an expert system for wind shear avoidance. A knowledge-based reconfigurable flight control system that is implemented with the Pascal programming language using parallel microprocessors was developed. This expert system could be considered a prototype for a failure-tolerant control system that can be constructed using existing hardware. Development of a real-time cockpit simulator continued during the year. The simulator provides a single-person crew station with both conventional and advanced control devices; it currently is programmed to simulate the Navion single-engine general aviation airplane. Alternatives for the air traffic control system giving particular attention to the institutional structure of the FAA are analyzed. A simple numerical procedure for estimating the stochastic robustness of control systems is being investigated. The revitalization of the general aviation industry is also discussed.

  8. The use of generalized estimating equations in the analysis of motor vehicle crash data.

    PubMed

    Hutchings, Caroline B; Knight, Stacey; Reading, James C

    2003-01-01

    The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
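
    A minimal sketch of fitting a crash-level nested model with GEEs in statsmodels, assuming a data frame with hypothetical columns injured, belt_use, speed, and a crash_id cluster identifier; the 1996 Report-to-Congress model specification and the two-level (occupants within vehicles within crashes) nesting are not reproduced, since standard GEE software handles a single clustering level.

      # One level of nesting (occupants within crashes) via GEE with an exchangeable
      # working correlation; the column names below are hypothetical.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      n_crash, per_crash = 400, 3
      crash_id = np.repeat(np.arange(n_crash), per_crash)
      belt_use = rng.integers(0, 2, crash_id.size)
      speed = rng.normal(45, 12, crash_id.size)
      crash_effect = np.repeat(rng.normal(0, 0.8, n_crash), per_crash)   # shared within-crash risk
      logit = -1.0 - 1.2 * belt_use + 0.03 * (speed - 45) + crash_effect
      injured = rng.random(crash_id.size) < 1 / (1 + np.exp(-logit))

      df = pd.DataFrame(dict(injured=injured.astype(int), belt_use=belt_use,
                             speed=speed, crash_id=crash_id))

      model = smf.gee("injured ~ belt_use + speed", groups="crash_id", data=df,
                      family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit()
      print(result.summary())
      print("seat belt odds ratio:", np.exp(result.params["belt_use"]).round(2))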

  9. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    PubMed

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology.
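
    A minimal sketch of ODE parameter estimation with one of the compared meta-heuristics, differential evolution, applied to a toy two-compartment model rather than the actual Rab5/Rab7 cut-out-switch model; the objective is a simple sum of squared errors on noisy synthetic data.

      # Estimate ODE parameters (k1, k2) by minimizing a sum-of-squares objective
      # with SciPy's differential evolution (a global, population-based meta-heuristic).
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution

      t_meas = np.linspace(0.0, 10.0, 25)


      def simulate(k1, k2):
          def rhs(t, y):
              a, b = y
              return [-k1 * a, k1 * a - k2 * b]      # toy A -> B -> degradation chain
          sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_meas, rtol=1e-8)
          return sol.y


      rng = np.random.default_rng(6)
      true = simulate(0.9, 0.3)
      data = true + rng.normal(0.0, 0.02, true.shape)   # noisy pseudo-experimental measurements


      def sse(params):
          return float(np.sum((simulate(*params) - data) ** 2))


      result = differential_evolution(sse, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=0, tol=1e-8)
      print("estimated k1, k2:", np.round(result.x, 3))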

  10. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    PubMed Central

    2011-01-01

    Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, namely the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717), to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions. Since the model and conditions are representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within systems biology. PMID:21989196

  11. W-band spaceborne radar observations of atmospheric river events

    NASA Astrophysics Data System (ADS)

    Matrosov, S. Y.

    2010-12-01

    While the main objective of the world's first W-band radar aboard the CloudSat satellite is to provide vertically resolved information on clouds, it has proved to be a valuable tool for observing precipitation. The CloudSat radar is generally able to resolve precipitating cloud systems in their vertical entirety. Although measurements from the liquid hydrometeor layer containing rainfall are strongly attenuated, special retrieval approaches can be used to estimate rainfall parameters. These approaches are based on vertical gradients of the observed radar reflectivity factor rather than on absolute estimates of reflectivity. Concurrent independent estimates of ice cloud parameters in the same vertical column allow characterization of precipitating systems and provide information on the coupling between clouds and the rainfall they produce. The potential of CloudSat for observing atmospheric river events affecting the West Coast of North America is evaluated. It is shown that spaceborne radar measurements can provide high-resolution information on the height of the freezing level, thus separating areas of rainfall and snowfall. CloudSat precipitation rate estimates complement information from surface-based radars. Observations of atmospheric rivers at different locations above the ocean and during landfall help in understanding the evolution and structure of atmospheric rivers.
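
    The gradient-based idea can be sketched in a few lines: in a rain layer where the intrinsic reflectivity is roughly constant with height, the vertical slope of the measured (attenuated) reflectivity approximates the two-way attenuation, which at W-band is roughly proportional to rain rate. The profile values and the attenuation-rain coefficient below are purely illustrative, not those used in the CloudSat retrievals.

      import numpy as np

      # Synthetic measured W-band reflectivity profile in a rain layer below the freezing level.
      # A downward-looking spaceborne beam accumulates two-way attenuation, so the measured
      # reflectivity decreases toward the surface even if the intrinsic reflectivity is constant.
      h_km = np.array([0.5, 1.0, 1.5, 2.0])        # height above the surface, km
      zm_dbz = np.array([3.0, 6.0, 9.0, 12.0])     # measured reflectivity, dBZ

      slope = np.polyfit(h_km, zm_dbz, 1)[0]       # dZm/dh, dB per km (two-way attenuation gradient)
      k_one_way = 0.5 * slope                      # one-way specific attenuation, dB/km

      # Assumed near-linear W-band attenuation-rain relation k = a * R, with a ~ 1 dB/km per mm/h;
      # the coefficient is an illustrative assumption, not the study's value.
      a = 1.0
      rain_rate = k_one_way / a
      print(f"estimated rain rate ~ {rain_rate:.1f} mm/h")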

  12. Global identification of stochastic dynamical systems under different pseudo-static operating conditions: The functionally pooled ARMAX case

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2017-01-01

    The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and to be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on extending the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the optimality of the PE estimator, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
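
    To give a feel for the functionally pooled idea (in its simpler FP-AR form rather than the FP-ARMAX case treated in the paper), the sketch below pools records from several operating conditions, expands the AR coefficients on a polynomial basis of the scheduling variable, and estimates the projection coefficients in a single least-squares fit; the model order, basis, and coefficient functions are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      na, basis_order, N = 2, 3, 500
      sched = np.linspace(0.0, 1.0, 6)                    # scheduling variable, one value per record

      def basis(m):                                       # polynomial basis functions G_j(m)
          return np.array([m ** j for j in range(basis_order)])

      def true_a(m):                                      # synthetic "true" coefficient functions a_i(m)
          return np.array([-1.5 + 0.4 * m, 0.7 - 0.3 * m * m])

      # Simulate one AR(2) record per operating condition
      records = []
      for m in sched:
          a = true_a(m)
          y, e = np.zeros(N), rng.normal(0, 0.1, N)
          for t in range(na, N):
              y[t] = -a @ y[t - na:t][::-1] + e[t]
          records.append((m, y))

      # Pool all records into one regression: y[t] = sum_i sum_j (-y[t-i] G_j(m)) a_ij + e[t]
      rows, targets = [], []
      for m, y in records:
          G = basis(m)
          for t in range(na, N):
              rows.append(np.concatenate([-y[t - i] * G for i in range(1, na + 1)]))
              targets.append(y[t])

      theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
      print(theta.reshape(na, basis_order))               # estimated projection coefficients a_ij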

  13. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous feature of digital processing. In many cases it introduces significant difficulties in state estimation and, in consequence, in control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation in stochastic systems with quantised, discrete-time observations that is free of these drawbacks. We formulate a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The comparison is substantially in favour of our approach, with a mean squared error more than 20 times lower. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for a high order of approximation, the state estimate is very close to the true process value. These results open possibilities for further analysis, especially for more complex processes.
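
    As a point of reference for the problem setting, the sketch below implements a Kalman-filter baseline of the kind the paper compares against: a discrete Kalman filter run on quantised observations of an Ornstein-Uhlenbeck process, with the quantisation error crudely treated as additive noise of variance q^2/12. It is not the Galerkin-approximated optimal filter proposed in the paper, and the parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      a, sigma, dt, q, n = 1.0, 1.0, 0.05, 0.5, 400     # illustrative OU and quantiser parameters

      F = np.exp(-a * dt)                               # exact OU transition over one sample
      Q = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * dt))
      R = q**2 / 12                                     # quantisation error treated as white noise

      # Simulate the OU process and its quantised discrete-time observations
      x = np.zeros(n)
      for k in range(1, n):
          x[k] = F * x[k - 1] + rng.normal(0, np.sqrt(Q))
      y = q * np.round(x / q)

      # Scalar Kalman filter on the quantised measurements
      xh, P, est = 0.0, 1.0, np.zeros(n)
      for k in range(n):
          xh, P = F * xh, F * P * F + Q                 # predict
          K = P / (P + R)                               # gain
          xh, P = xh + K * (y[k] - xh), (1 - K) * P     # update
          est[k] = xh

      print("baseline Kalman MSE:", np.mean((est - x) ** 2))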

  14. Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery

    NASA Astrophysics Data System (ADS)

    Chu, Tianxing; Chen, Ruizhi; Landivar, Juan A.; Maeda, Murilo M.; Yang, Chenghai; Starek, Michael J.

    2016-07-01

    This paper explores the potential of using unmanned aircraft system (UAS)-based visible-band images to assess cotton growth. By applying the structure-from-motion algorithm, cotton plant height (ph) and canopy cover (cc) information was retrieved from point cloud-based digital surface models (DSMs) and orthomosaic images. Both UAS-based ph and cc follow a sigmoid growth pattern, as confirmed by ground-based studies. By applying an empirical model that converts cotton ph to cc, the estimated cc shows strong correlation (R2=0.990) with the observed cc. An attempt at modeling cotton yield was carried out using the ph and cc information obtained on June 26, 2015, the date when the sigmoid growth curves for both ph and cc began to decline in slope. In a cross-validation test, the correlations between the ground-measured yield and the estimates derived from ph and/or cc were compared. In general, the yield estimate that combines ph and cc agrees most closely with the observed yield, while the cc-based estimate produces the second-strongest correlation, regardless of model complexity.
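
    A minimal example of the growth-curve step, assuming a standard three-parameter logistic function and made-up height measurements (the study's actual UAS-derived values are not reproduced here):

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(t, K, r, t0):
          # Three-parameter logistic growth: asymptote K, rate r, inflection time t0
          return K / (1.0 + np.exp(-r * (t - t0)))

      days = np.array([20, 30, 40, 50, 60, 70, 80, 90])                      # days after planting (hypothetical)
      height_m = np.array([0.10, 0.18, 0.32, 0.55, 0.78, 0.92, 0.98, 1.00])  # plant height, m (hypothetical)

      popt, _ = curve_fit(sigmoid, days, height_m, p0=[1.0, 0.1, 55])
      K, r, t0 = popt
      # t0 marks the inflection point, after which the curve's slope declines --
      # analogous to the date the study used for yield modeling.
      print(f"asymptote={K:.2f} m, rate={r:.3f}/day, inflection day={t0:.1f}")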

  15. Inverse estimation of parameters for an estuarine eutrophication model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework for estimating parameter values of the eutrophication model by assimilating concentration data for these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable for aiding model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. Numerical experiments with short-period model simulations using different hypothetical data sets and long-period model simulations using limited hypothetical data sets demonstrated that the inverse model can satisfactorily estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing important questions such as the uniqueness of the parameter estimates and the data requirements for model calibration. Because of the complexity of the eutrophication system, the speed of convergence may degrade. Two major factors that cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
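
    The flavour of such experiments with hypothetical data can be conveyed with a much smaller model: generate "observations" from a toy two-variable nutrient-phytoplankton system with known parameters, then recover the parameters by minimising the model-data misfit with a gradient-based optimiser. The model, parameters, and data below are hypothetical and only illustrate the identical-twin idea, not the study's variational formulation or eight-state-variable model.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      # Toy nutrient (N) - phytoplankton (P) model; uptake rate mu and loss rate m are
      # the parameters to be recovered from concentration data.
      def rhs(t, y, mu, m):
          N, P = y
          uptake = mu * N / (1.0 + N) * P
          return [-uptake + m * P, uptake - m * P]

      t_obs = np.linspace(0, 30, 40)
      true = (0.8, 0.2)
      obs = solve_ivp(rhs, (0, 30), [5.0, 0.1], t_eval=t_obs, args=true).y
      obs = obs + np.random.default_rng(2).normal(0, 0.05, obs.shape)   # hypothetical noisy data

      def misfit(theta):
          sol = solve_ivp(rhs, (0, 30), [5.0, 0.1], t_eval=t_obs, args=tuple(theta))
          return np.sum((sol.y - obs) ** 2) if sol.success else 1e6

      res = minimize(misfit, x0=[0.5, 0.5], method="L-BFGS-B",
                     bounds=[(0.01, 2.0), (0.01, 2.0)])
      print("recovered parameters:", res.x)     # compare with the 'true' values (0.8, 0.2)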

  16. Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Maldonado, Alex; Bolcar, Matthew

    2011-01-01

    An optical design is presented for a measurement system used to assess the impact of surface errors originating from diamond turning artifacts. Diamond turning artifacts are common by-products of optical surface shaping using the diamond turning process (a diamond-tipped cutting tool used in a lathe configuration). Assessing and evaluating the errors imparted by diamond turning (including other surface errors attributed to optical manufacturing techniques) can be problematic and generally requires the use of an optical interferometer. Commercial interferometers can be expensive when compared to the simple optical setup developed here, which is used in combination with an image-based sensing technique (phase retrieval). Phase retrieval is a general term used in optics to describe the estimation of optical imperfections or aberrations. This turnkey system uses only image-based data and has minimal hardware requirements. The system is straightforward to set up, easy to align, and can provide nanometer accuracy on the measurement of optical surface defects.
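
    To illustrate what image-based phase retrieval means in practice, the sketch below runs a basic Gerchberg-Saxton / error-reduction loop that estimates a pupil-plane phase error from a single simulated focal-plane intensity and a known aperture. It is a generic textbook illustration with made-up values, not the algorithm or optical prescription of the system described here, and single-plane retrieval of this kind is subject to well-known ambiguities (piston, twin image).

      import numpy as np

      n = 128
      yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      pupil = (xx**2 + yy**2 <= 0.5**2).astype(float)                        # known circular aperture
      true_phase = 2.0 * np.pi * (0.3 * xx + 0.2 * (xx**2 + yy**2)) * pupil  # synthetic surface error

      focal_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))       # "measured" image amplitude

      phase = np.zeros((n, n))                                   # initial guess of the pupil phase
      for _ in range(300):
          G = np.fft.fft2(pupil * np.exp(1j * phase))            # apply pupil constraint, go to image plane
          G = focal_amp * np.exp(1j * np.angle(G))               # replace amplitude with the measurement
          phase = np.angle(np.fft.ifft2(G))                      # return to the pupil, keep only the phase

      residual = np.linalg.norm(np.abs(np.fft.fft2(pupil * np.exp(1j * phase))) - focal_amp)
      print("relative focal-plane amplitude residual:", residual / np.linalg.norm(focal_amp))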

  17. Revisiting competition in a classic model system using formal links between theory and data.

    PubMed

    Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J

    2012-09-01

    Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and limited compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be as strongly affected by interspecific competition with a poor competitor (traditionally defined) as by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
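
    A stripped-down version of the general approach, estimating intra- and interspecific competition coefficients by regressing a focal species' per-capita growth rate on conspecific and heterospecific densities, is sketched below with synthetic data in which the two effects are equal by construction; the study's field design and fitted models are considerably richer.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 60
      N_self = rng.uniform(1, 50, n)                 # focal-species density (synthetic)
      N_other = rng.uniform(1, 50, n)                # competitor density (synthetic)

      r, a_intra, a_inter = 0.8, 0.02, 0.02          # synthetic "truth": equal intra/inter effects
      growth = r - a_intra * N_self - a_inter * N_other + rng.normal(0, 0.05, n)

      # Linear response surface: per-capita growth = r - a_intra*N_self - a_inter*N_other
      X = np.column_stack([np.ones(n), N_self, N_other])
      coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
      print("intrinsic rate, intraspecific, interspecific:", coef[0], -coef[1], -coef[2])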

  18. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
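
    The estimator structure described above can be caricatured in one dimension: write the partition function as an analytically known harmonic factor times a Monte Carlo average of the coupling contribution, and evaluate that average by importance sampling from a Gaussian mixture. The potential, temperature, and mixture parameters below are invented for illustration and are unrelated to the paper's model systems.

      import numpy as np

      rng = np.random.default_rng(0)
      beta, omega = 1.0, 1.0

      def v_coupling(x):                             # toy anharmonic "coupling" term
          return 0.3 * x**4 - 0.5 * x

      def norm_pdf(x, mu, sd):
          return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

      # Harmonic reference density p_h ~ N(0, 1/(beta*omega^2)); its partition function is analytic,
      # so only the coupling factor E_{p_h}[exp(-beta * V_c)] needs Monte Carlo.
      sd_h = 1.0 / (np.sqrt(beta) * omega)

      # Two-component Gaussian mixture sampling distribution (a modelling choice)
      w, mus, sds = np.array([0.5, 0.5]), np.array([-0.8, 0.8]), np.array([0.7, 0.7])

      n = 20000
      idx = rng.choice(2, size=n, p=w)
      x = rng.normal(mus[idx], sds[idx])
      q = w[0] * norm_pdf(x, mus[0], sds[0]) + w[1] * norm_pdf(x, mus[1], sds[1])

      # Importance-sampled estimate of the coupling factor and its statistical error
      weights = norm_pdf(x, 0.0, sd_h) / q * np.exp(-beta * v_coupling(x))
      print("coupling factor:", weights.mean(), "+/-", weights.std(ddof=1) / np.sqrt(n))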

  19. Enhancing the performance of coherent OTDR systems with polarization diversity complementary codes.

    PubMed

    Dorize, Christian; Awwad, Elie

    2018-05-14

    Monitoring the optical phase change in a fiber enables a wide range of applications in which fast phase variations are induced by acoustic signals or by vibrations in general. However, the quality of the estimated fiber response strongly depends on the method used to modulate the light sent into the fiber and to capture the variations of the optical field. In this paper, we show that distributed optical fiber sensing systems can advantageously exploit techniques from the telecommunication domain, such as those used in coherent optical transmission, to enhance their performance in detecting mechanical events, while offering a simpler setup than widespread pulse-cloning or spectral-sweep based schemes with acousto-optic modulators. We periodically capture an overall fiber Jones matrix estimate thanks to a novel probing technique that applies two mutually orthogonal complementary (Golay) pairs of binary sequences simultaneously in phase and quadrature on two orthogonal polarization states. A perfect channel response estimate of the sensor array is achieved, subject to conditions detailed in the paper, thus enhancing the sensitivity and bandwidth of coherent ϕ-OTDR systems. High sensitivity, linear response, and bandwidth coverage up to 18 kHz are demonstrated with a sensor array composed of 10 fiber Bragg gratings (FBGs).
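
    The property that makes complementary (Golay) pairs attractive for this kind of probing is easy to verify numerically: the sidelobes of the two sequences' autocorrelations cancel exactly when summed, leaving a single peak, which is what permits a clean channel-response estimate. The sketch below only demonstrates that property; the paper's dual-polarization, in-phase/quadrature probing scheme is considerably more involved.

      import numpy as np

      def golay_pair(m):
          # Standard recursive construction: a' = [a, b], b' = [a, -b]
          a, b = np.array([1.0]), np.array([1.0])
          for _ in range(m):
              a, b = np.concatenate([a, b]), np.concatenate([a, -b])
          return a, b

      a, b = golay_pair(6)                              # length-64 binary (+/-1) sequences
      corr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
      peak = corr[len(a) - 1]
      sidelobes = np.max(np.abs(np.delete(corr, len(a) - 1)))
      print(len(a), peak, sidelobes)
      # -> peak of 2*64 = 128 at zero lag; all other lags cancel (to floating-point precision)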

  20. Estimation of hospital efficiency--do different definitions and casemix measures for hospital output affect the results?

    PubMed

    Vitikainen, Kirsi; Street, Andrew; Linna, Miika

    2009-02-01

    Hospital efficiency has been the subject of numerous health economics studies, but there is little evidence on how the chosen output and casemix measures affect the efficiency results. The aim of this study is to examine the robustness of efficiency results with respect to these factors. A comparison is made between activity and episode output measures, and between two different output grouping systems (Classic and FullDRG). Non-parametric data envelopment analysis is used as the analysis technique. The data consist of all public acute care hospitals in Finland in 2005 (n=40). Efficiency estimates were not found to be highly sensitive to the choice between episode and activity descriptions of output, but were more sensitive to the choice of DRG grouping system. Estimates are most sensitive to scale assumptions, with evidence of decreasing returns to scale in larger hospitals. Episode measures are generally to be preferred to activity measures because they better capture the patient pathway, while FullDRGs are preferred to Classic DRGs, particularly because they describe outpatient output better. Attention should be paid to reducing the extent of scale inefficiency in Finland.
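
    For readers unfamiliar with data envelopment analysis, the sketch below solves the input-oriented, constant-returns-to-scale envelopment LP for a handful of hypothetical hospitals (two inputs, one output); the data, and the restriction to constant returns to scale, are purely illustrative and unrelated to the Finnish hospital data analysed in the study.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical inputs (e.g. beds, staff) and a single output (e.g. casemix-weighted episodes)
      X = np.array([[200, 300, 150, 400],
                    [800, 900, 500, 1500]], float)
      Y = np.array([[5000, 6000, 4000, 7000]], float)
      n = X.shape[1]

      def ccr_efficiency(o):
          # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
          c = np.zeros(n + 1)
          c[0] = 1.0
          A_ub, b_ub = [], []
          for i in range(X.shape[0]):                  # sum_j lam_j * x_ij <= theta * x_io
              A_ub.append(np.concatenate([[-X[i, o]], X[i]]))
              b_ub.append(0.0)
          for r in range(Y.shape[0]):                  # sum_j lam_j * y_rj >= y_ro
              A_ub.append(np.concatenate([[0.0], -Y[r]]))
              b_ub.append(-Y[r, o])
          bounds = [(None, None)] + [(0, None)] * n
          res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
          return res.x[0]

      for o in range(n):
          print(f"hospital {o}: efficiency = {ccr_efficiency(o):.3f}")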
