Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to that of the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
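To make the idea concrete, here is a minimal sketch (not the authors' implementation) of an iterative Fourier accumulator for nonuniformly sampled data: each new sample updates all N frequency bins in O(N), weighting the sample by its irregular time interval. The frequency grid, interval weighting, and test signal are illustrative assumptions.

```python
import numpy as np

def make_rft(freqs):
    """Return an O(N)-per-sample iterative Fourier accumulator for
    nonuniformly sampled data (N = number of frequencies)."""
    S = np.zeros(len(freqs), dtype=complex)   # running spectral sums
    state = {"t_prev": None}

    def update(t, x):
        # Weight each sample by its (nonuniform) time interval so the
        # running sum approximates the continuous-time Fourier integral.
        dt = 0.0 if state["t_prev"] is None else t - state["t_prev"]
        state["t_prev"] = t
        S[:] += x * np.exp(-2j * np.pi * freqs * t) * max(dt, 1e-9)
        return np.abs(S) ** 2                 # crude PSD estimate at the N frequencies

    return update

# Example: irregular, heart-beat-like sample times carrying a 0.1 Hz oscillation
rng = np.random.default_rng(0)
t = np.cumsum(0.8 + 0.4 * rng.random(200))    # nonuniform RR-like intervals (s)
x = np.sin(2 * np.pi * 0.1 * t)
rft = make_rft(np.linspace(0.01, 0.5, 50))    # HRV-band frequency grid
for ti, xi in zip(t, x):
    psd = rft(ti, xi)                         # updated in O(N) per beat
```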
Defense Communications Agency Cost and Planning Factors Manual. Revised
1983-03-01
the Time-Phased Fiscal Year Funding Schedule. Using estimated leadtimes required for each identifiable milestone, estimate the funding to be incurred...for each fiscal year, making sure to back off the time required for the conceptual phase, the procurement phase, and the training and operational...39-1 (To be published later) 40. FISCAL-YEAR TIME PHASING OF COST ESTIMATE 40-1 (To be published later) 41. DISCOUNTING
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the human resources (HR) requirements of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative study was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR were estimated by using the evidence-based adjusted service target approach for health workforce planning; then, comparisons between existing and estimated HR were made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times), and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
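The adjusted-service-target style of calculation summarized above amounts to dividing the total patient-care time implied by best-practice activities by the productive time available per staff member. A toy sketch with invented figures (patient counts, activity minutes, and available working time are placeholders, not the study's data):

```python
# Hypothetical figures; only the arithmetic mirrors the service-target approach.
patients_per_clinic = 3000           # diabetes patients to be served per clinic
minutes_per_patient_year = {         # assumed best-practice time per patient per year
    "family_doctor": 120,
    "nutritionist": 90,
    "social_worker": 60,
}
available_minutes_per_staff_year = 80_000   # assumed productive time per full-time worker

required_staff = {
    role: patients_per_clinic * minutes / available_minutes_per_staff_year
    for role, minutes in minutes_per_patient_year.items()
}
print(required_staff)   # fractional full-time equivalents required per clinic
```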
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.
1991-01-01
The results are presented of a study to identify potential real time remote computational applications to support monitoring HRV flight test experiments along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and data bases together with real time flight data to generate and present high level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
.... Estimated Time per Response: 2 hours. Frequency of Response: On occasion reporting requirement. Obligation...,200 responses. Estimated Time per Response: 1 to 1.5 hours. Frequency of Response: On occasion... Time per Response: 2 to 5 hours. Frequency of Response: On occasion reporting requirement; Third party...
Contracted time and expanded space: The impact of circumnavigation on judgements of space and time.
Brunec, Iva K; Javadi, Amir-Homayoun; Zisch, Fiona E L; Spiers, Hugo J
2017-09-01
The ability to estimate distance and time to spatial goals is fundamental for survival. In cases where a region of space must be navigated around to reach a location (circumnavigation), the distance along the path is greater than the straight-line Euclidean distance. To explore how such circumnavigation impacts on estimates of distance and time, we tested participants on their ability to estimate travel time and Euclidean distance to learned destinations in a virtual town. Estimates for approximately linear routes were compared with estimates for routes requiring circumnavigation. For all routes, travel times were significantly underestimated, and Euclidean distances overestimated. For routes requiring circumnavigation, travel time was further underestimated and the Euclidean distance further overestimated. Thus, circumnavigation appears to enhance existing biases in representations of travel time and distance.
ERIC Educational Resources Information Center
Bridgeman, Brent; Laitusis, Cara Cahalan; Cline, Frederick
2007-01-01
The current study used three data sources to estimate time requirements for different item types on the now current SAT Reasoning Test™. First, we estimated times from a computer-adaptive version of the SAT® (SAT CAT) that automatically recorded item times. Second, we observed students as they answered SAT questions under strict time limits and…
Goff, M L; Win, B H
1997-11-01
The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.
A Role for Memory in Prospective Timing informs Timing in Prospective Memory
Waldum, Emily R; Sahakyan, Lili
2014-01-01
Time-based prospective memory (TBPM) tasks require the estimation of time in passing – known as prospective timing. Prospective timing is said to depend on an attentionally-driven internal clock mechanism, and is thought to be unaffected by memory for interval information (for reviews see, Block, Hancock, & Zakay, 2010; Block & Zakay, 1997). A prospective timing task that required a verbal estimate following the entire interval (Experiment 1) and a TBPM task that required production of a target response during the interval (Experiment 2) were used to test an alternative view that episodic memory does influence prospective timing. In both experiments, participants performed an ongoing lexical decision task of fixed duration while a varying number of songs were played in the background. Experiment 1 results revealed that verbal time estimates became longer the more songs participants remembered from the interval, suggesting that memory for interval information influences prospective time estimates. In Experiment 2, participants who were asked to perform the TBPM task without the aid of an external clock made their target responses earlier as the number of songs increased, indicating that prospective estimates of elapsed time increased as more songs were experienced. For participants who had access to a clock, changes in clock-checking coincided with the occurrence of song boundaries, indicating that participants used both song information and clock information to estimate time. Finally, ongoing task performance and verbal reports in both experiments further substantiate a role for episodic memory in prospective timing. PMID:22984950
NASA Technical Reports Server (NTRS)
West, M. E.
1992-01-01
A real-time estimation filter which reduces sensitivity to system variations and reduces the amount of preflight computation is developed for the instrument pointing subsystem (IPS). The IPS is a three-axis stabilized platform developed to point various astronomical observation instruments aboard the shuttle. Currently, the IPS utilizes a linearized Kalman filter (LKF), with premission defined gains, to compensate for system drifts and accumulated attitude errors. Since the a priori gains are generated for an expected system, variations result in a suboptimal estimation process. This report compares the performance of three real-time estimation filters with the current LKF implementation. An extended Kalman filter and a second-order Kalman filter are developed to account for the system nonlinearities, while a linear Kalman filter implementation assumes that the nonlinearities are negligible. The performance of each of the four estimation filters is compared with respect to accuracy, stability, settling time, robustness, and computational requirements. It is shown that, for the current IPS pointing requirements, the linear Kalman filter provides improved robustness over the LKF with lower computational requirements than the two real-time nonlinear estimation filters.
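For readers unfamiliar with the filters being compared, the following is a generic discrete-time linear Kalman filter predict/update cycle, not the IPS implementation; the state model, measurement model, and noise covariances are placeholder assumptions.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete-time linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy attitude-plus-drift example (placeholder model, not the IPS dynamics)
F = np.array([[1.0, 1.0], [0.0, 1.0]])      # attitude, drift rate
H = np.array([[1.0, 0.0]])                  # attitude measurement only
Q = 1e-6 * np.eye(2)
R = np.array([[1e-3]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.01]), F, H, Q, R)
```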
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth's requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission, we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress on including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since light from the exoplanets themselves is incoherent with the starlight, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise ratio between the planets and speckles improves. Having established a purely focal-plane-based wavefront estimation technique, we discuss a sensor fusion concept in which alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
NASA Astrophysics Data System (ADS)
Fu, Deqian; Gao, Lisheng; Jhang, Seong Tae
2012-04-01
Mobile computing devices have many limitations, such as relatively small user interfaces and slow computing speeds. Augmented reality applications often require face pose estimation, which can serve as both a human-computer interaction (HCI) and an entertainment tool. Real-time implementation of head pose estimation on resource-limited mobile platforms must satisfy these constraints while retaining sufficient estimation accuracy. The proposed face pose estimation method meets this objective. Experimental results on a test Android mobile device show that the method runs in real time and delivers satisfactory accuracy.
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectories of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the AA requirements of each animal, taking into account the intake and growth changes of the animal.
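The mechanistic component described above applies factorial equations of the general form requirement = maintenance + deposition/efficiency. A hedged sketch follows; the coefficients are illustrative placeholders, not the calibrated values used by the authors or by InraPorc.

```python
def sid_lys_requirement(dfi_kg, adg_kg, lys_per_kg_gain=7.0,
                        efficiency=0.72, maint_per_kg_dfi=0.4):
    """Toy factorial estimate of daily SID lysine requirement (g/d).

    maintenance ~ proportional to feed intake (endogenous losses),
    deposition  ~ lysine retained in daily gain divided by utilization efficiency.
    All coefficients are illustrative assumptions, not calibrated values.
    """
    maintenance = maint_per_kg_dfi * dfi_kg
    deposition = (lys_per_kg_gain * adg_kg) / efficiency
    return maintenance + deposition

# Example: a pig eating 2.2 kg/d and gaining 0.95 kg/d
req_g_per_day = sid_lys_requirement(dfi_kg=2.2, adg_kg=0.95)
ratio_g_per_kg_feed = req_g_per_day / 2.2   # concentration to offer in the daily feed
```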
Coding “What” and “When” in the Archer Fish Retina
Vasserman, Genadiy; Shamir, Maoz; Ben Simon, Avi; Segev, Ronen
2010-01-01
Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by the archer fish retinal ganglion cell. We found that stimulus identity, “what”, can be estimated from the responses of best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy. PMID:21079682
Milani, Alessandra; Mazzocco, Ketti; Stucchi, Sara; Magon, Giorgio; Pravettoni, Gabriella; Passoni, Claudia; Ciccarelli, Chiara; Tonali, Alessandra; Profeta, Teresa; Saiani, Luisa
2017-02-01
Few resources are available to quantify clinical trial-associated workload, needed to guide staffing and budgetary planning. The aim of the study is to describe a tool to measure clinical trials nurses' workload expressed in time spent to complete core activities. Clinical trials nurses drew up a list of nursing core activities, integrating results from literature searches with personal experience. The final 30 core activities were timed for each research nurse by an outside observer during daily practice in May and June 2014. Average times spent by nurses for each activity were calculated. The "Nursing Time Required by Clinical Trial-Assessment Tool" was created as an electronic sheet that combines the average times per specified activities and mathematic functions to return the total estimated time required by a research nurse for each specific trial. The tool was tested retrospectively on 141 clinical trials. The increasing complexity of clinical research requires structured approaches to determine workforce requirements. This study provides a tool to describe the activities of a clinical trials nurse and to estimate the associated time required to deliver individual trials. The application of the proposed tool in clinical research practice could provide a consistent structure for clinical trials nursing workload estimation internationally.
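The tool described above is essentially a weighted sum of observed average activity times. A minimal sketch, with hypothetical activity names, times, and frequencies standing in for the 30 observed core activities:

```python
# Hypothetical average activity times (minutes); the real tool uses 30 observed activities.
avg_minutes = {"screening": 30, "consent": 45, "randomisation": 20,
               "study_visit": 60, "data_entry": 25, "adverse_event_report": 40}

def trial_nursing_time(frequencies, avg_minutes=avg_minutes):
    """Total estimated nursing time (hours) for one trial, given how many
    times each core activity is expected to occur."""
    return sum(avg_minutes[a] * n for a, n in frequencies.items()) / 60.0

# Example trial: 40 patients screened, 25 enrolled, 6 visits per enrolled patient, etc.
hours = trial_nursing_time({"screening": 40, "consent": 25, "randomisation": 25,
                            "study_visit": 150, "data_entry": 150,
                            "adverse_event_report": 10})
```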
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
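Under the simplifying assumption of constant annual cash flows, the definition above reduces to dividing the investment by the net annual savings. A small illustrative sketch (the dollar figures are invented):

```python
def simple_payback_years(investment, annual_savings, annual_nonfuel_costs=0.0):
    """Years until cumulative (savings - non-fuel/non-water costs) equals the
    investment, assuming constant annual cash flows (a simplification of the
    CFR definition, which accumulates year by year)."""
    net = annual_savings - annual_nonfuel_costs
    if net <= 0:
        return float("inf")   # savings never recover the investment
    return investment / net

# Example: $12,000 retrofit saving $2,500/yr in energy with $300/yr added maintenance
years = simple_payback_years(12_000, 2_500, 300)   # about 5.5 years
```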
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
10 CFR 436.23 - Estimated simple payback time.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Methodology and Procedures for Life Cycle Cost Analyses § 436.23 Estimated simple payback time. The estimated simple payback time is the number of years required for the cumulative value of energy or water cost savings less future non-fuel or non-water costs to equal the investment costs of the building energy or...
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
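A schematic of the reasoning behind the lower bound, assuming a single parameter and generic constants rather than the paper's exact statement: the quantum Cramér-Rao inequality, combined with the at-most-quadratic growth in time of the quantum Fisher information, yields a minimum evolution time for tolerance δ.

```latex
\[
  \delta \;\ge\; \frac{1}{\sqrt{F_Q(\theta)}}, \qquad
  F_Q(\theta) \;\le\; 4\,t^{2}\,\lVert \partial_\theta H \rVert^{2}
  \quad\Longrightarrow\quad
  t \;\gtrsim\; \frac{1}{2\,\delta\,\lVert \partial_\theta H \rVert}.
\]
```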
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this constitutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
A GIS TECHNIQUE FOR ESTIMATING NATURAL ATTENUATION RATES AND MASS BALANCES
Regulatory approval of monitored natural attenuation (MNA) as a component for site remediation often requires a demonstration that contaminant mass has decreased significantly over time. Successful approval of MNA also typically requires an estimate of past and future n...
Estimating times of extinction in the fossil record
Marshall, Charles R.
2016-01-01
Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions. PMID:27122005
Estimating times of extinction in the fossil record.
Wang, Steve C; Marshall, Charles R
2016-04-01
Because the fossil record is incomplete, the last fossil of a taxon is a biased estimate of its true time of extinction. Numerous methods have been developed in the palaeontology literature for estimating the true time of extinction using ages of fossil specimens. These methods, which typically give a confidence interval for estimating the true time of extinction, differ in the assumptions they make and the nature and amount of data they require. We review the literature on such methods and make some recommendations for future directions.
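One of the simpler methods in this literature is the classical confidence interval of Strauss and Sadler, which assumes fossil horizons are uniformly and independently distributed through the true range. The sketch below implements only that special case; it is not necessarily the authors' preferred method, and the example data are invented.

```python
def classical_extinction_ci(horizons, confidence=0.95):
    """Classical confidence bound on the true extinction level, assuming fossil
    horizons are uniformly and independently distributed through the true range
    (a Strauss-and-Sadler-style calculation; a simplification of the reviewed methods)."""
    h = sorted(horizons)                 # stratigraphic positions of the finds
    n = len(h)
    observed_range = h[-1] - h[0]
    # extension beyond the last (youngest) occurrence at the given confidence
    extension = observed_range * ((1.0 - confidence) ** (-1.0 / (n - 1)) - 1.0)
    return h[-1], h[-1] + extension

# Example: positions (metres above a datum) of ten fossil finds
last_find, upper_bound = classical_extinction_ci([2, 5, 9, 14, 18, 23, 30, 34, 40, 41])
```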
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time systems and discrete-time systems are derived. Using this accurate estimation of spacecraft attitude, a state-variable feedback controller may be designed to achieve (or satisfy) demanding system performance requirements.
Budgeting Facilities Operation Costs Using the Facilities Operation Model
2011-06-01
practices that today’s modern buildings have built into them. Several factors can change from the time the requirement is generated to when actual...information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and...BOS required $4.2 billion. In FY2012, it is estimated it will reach $4.6 billion. Unlike sustainment and modernization, failure to fund facility
Cloutier, L; Pomar, C; Létourneau Montminy, M P; Bernier, J F; Pomar, J
2015-04-01
The implementation of precision feeding in growing-finishing facilities requires accurate estimates of the animals' nutrient requirements. The objective of the current study was to validate a method for estimating the real-time individual standardized ileal digestible (SID) lysine (Lys) requirements of growing-finishing pigs and the ability of this method to estimate the Lys requirements of pigs with different feed intake and growth patterns. Seventy-five pigs from a terminal cross and 72 pigs from a maternal cross were used in two 28-day experimental phases beginning at 25.8 (±2.5) and 73.3 (±5.2) kg BW, respectively. Treatments were randomly assigned to pigs within each experimental phase according to a 2×4 factorial design in which the two genetic lines and four dietary SID Lys levels (70%, 85%, 100% and 115% of the requirements estimated by the factorial method developed for precision feeding) were the main factors. Individual pigs' Lys requirements were estimated daily using a factorial approach based on their feed intake, BW and weight gain patterns. From 25 to 50 kg BW, this method slightly underestimated the pigs' SID Lys requirements, given that maximum protein deposition and weight gain were achieved at 115% of SID Lys requirements. However, the best gain-to-feed ratio (G:F) was obtained at a level of 85% or more of the estimated Lys requirement. From 70 to 100 kg, the method adequately estimated the pigs' individual requirements, given that maximum performance was achieved at 100% of Lys requirements. Terminal line pigs ate more (P=0.04) during the first experimental phase and tended to eat more (P=0.10) during the second phase than the maternal line pigs, but both genetic lines had similar ADG and protein deposition rates during the two phases. The factorial method used in this study to estimate individual daily SID Lys requirements was able to accommodate the small genetic differences in feed intake, and it was concluded that this method can be used in precision feeding systems without adjustments. However, the method's ability to accommodate large genetic differences in feed intake and protein deposition patterns needs to be studied further.
43 CFR 11.73 - Quantification phase-resource recoverability analysis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... analysis. (a) Requirement. The time needed for the injured resources to recover to the state that the... been acquired to baseline levels shall be estimated. The time estimated for recovery or any lesser period of time as determined in the Assessment Plan must be used as the recovery period for purposes of...
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
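A minimal sketch of the conditional-modeling step using the lifelines package, where delayed entry handles the left truncation; the column names and data are invented, and the paper's inverse-probability-weighting step for recovering the marginal distribution is only noted in a comment, not implemented.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy left-truncated survival data: subjects enter observation at 'entry'
# (delayed entry) and are followed to 'time'; 'age' is an illustrative covariate.
df = pd.DataFrame({
    "entry": [1.0, 2.5, 0.5, 3.0, 1.5, 0.8],
    "time":  [4.0, 6.0, 2.0, 7.5, 5.0, 3.1],
    "event": [1, 0, 1, 1, 0, 1],
    "age":   [61, 70, 55, 66, 59, 72],
})

cph = CoxPHFitter()
# entry_col adjusts the risk sets for left truncation in the partial likelihood.
cph.fit(df, duration_col="time", event_col="event", entry_col="entry")
cph.print_summary()
# The fitted conditional model could then be combined with inverse probability
# weights on the truncation variable to recover a marginal survival curve.
```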
Time On Station Requirements: Costs, Policy Change, and Perceptions
2016-12-01
Travel Management Office (2016)...Table 3. Time it took spouses to find...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA MBA PROFESSIONAL REPORT TIME ON STATION REQUIREMENTS: COSTS, POLICY CHANGE, AND...reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching
76 FR 60853 - Agency Information Collection Activities: Documents Required Aboard Private Aircraft
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-30
... respondents or record keepers from the collection of information (a total of capital/startup costs and.... Estimated Number of Respondents: 120,000. Estimated Number of Annual Responses: 120,000. Estimated Time per...
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented for onboard, real-time use.
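A self-contained sketch of frequency-domain equation-error estimation under assumptions of my own (a placeholder two-state model, batch rather than recursive Fourier transforms, and Euler simulation); it illustrates the idea of solving jωX(ω) ≈ AX(ω) + BU(ω) by least squares rather than reproducing the flight-test implementation.

```python
import numpy as np

# Simulated short-period-like data (placeholder two-state model, not F-18 values)
dt = 0.02
t = np.arange(0, 10, dt)
A_true = np.array([[-1.2, 0.9], [-4.0, -1.5]])
B_true = np.array([[0.0], [-6.0]])
u = np.sin(2 * np.pi * (0.1 * t + 0.05 * t ** 2))[:, None]   # slow frequency sweep input
x = np.zeros((len(t), 2))
for k in range(len(t) - 1):                     # crude Euler simulation
    x[k + 1] = x[k] + dt * (A_true @ x[k] + B_true @ u[k])

# Fourier transforms of states and inputs on a chosen frequency band
freqs = np.arange(0.1, 2.0, 0.04)               # Hz, rigid-body band
E = np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) * dt
X, U = E @ x, E @ u                             # (in practice updated recursively)

# Equation error in the frequency domain: jw*X = [X U] @ theta, solved by least squares
jw = 2j * np.pi * freqs[:, None]
regressors = np.hstack([X, U])
theta, *_ = np.linalg.lstsq(regressors, jw * X, rcond=None)
A_hat, B_hat = theta[:2].T.real, theta[2:].T.real   # recovered model parameters
```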
Estimating Surgical Procedure Times Using Anesthesia Billing Data and Operating Room Records.
Burgette, Lane F; Mulcahy, Andrew W; Mehrotra, Ateev; Ruder, Teague; Wynn, Barbara O
2017-02-01
The median time required to perform a surgical procedure is important in determining payment under Medicare's physician fee schedule. Prior studies have demonstrated that the current methodology of using physician surveys to determine surgical times results in overstated times. To measure surgical times more accurately, we developed and validated a methodology using anesthesia billing data and operating room (OR) records. We estimated surgical times using Medicare 2011 anesthesia claims and New York Statewide Planning and Research Cooperative System 2011 OR times. Estimated times were validated using data from the National Surgical Quality Improvement Program. We compared our time estimates to those used by Medicare in the fee schedule. We estimate surgical times via piecewise linear median regression models. Using 3.0 million observations of anesthesia and OR times, we estimated surgical time for 921 procedures. The correlation between these time estimates and directly measured surgical time from the validation database was 0.98. Our estimates of surgical time were shorter than the Medicare fee schedule estimates for 78 percent of procedures. Anesthesia and OR times can be used to measure surgical time and thereby improve the payment for surgical procedures in the Medicare fee schedule.
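For illustration, a plain (not piecewise) median regression of operating-room time on anesthesia time using statsmodels; the data are simulated placeholders, not Medicare or SPARCS records.

```python
import numpy as np
import statsmodels.api as sm

# Toy data: anesthesia time as a predictor of operating-room time (minutes)
rng = np.random.default_rng(1)
anesthesia_min = rng.uniform(30, 300, 500)
or_min = 0.8 * anesthesia_min - 15 + rng.normal(0, 10, 500)

X = sm.add_constant(anesthesia_min)
median_fit = sm.QuantReg(or_min, X).fit(q=0.5)   # median (not mean) regression
print(median_fit.params)                         # intercept and slope estimates
```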
Benefits of invasion prevention: Effect of time lags, spread rates, and damage persistence
Rebecca S. Epanchin-Niell; Andrew M. Liebhold
2015-01-01
Quantifying economic damages caused by invasive species is crucial for cost-benefit analyses of biosecurity measures. Most studies focus on short-term damage estimates, but evaluating exclusion or prevention measures requires estimates of total anticipated damages from the time of establishment onward. The magnitude of such damages critically depends on the timing of...
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Investment opportunity : the FPL low-cost solar dry kiln
George B. Harpole
1988-01-01
Two equations are presented that may be used to estimate a maximum investment limit and working capital requirements for the FPL low-cost solar dry kiln systems. The equations require data for drying cycle time, green lumber cost, and kiln-dried lumber costs. Results are intended to provide a preliminary estimate.
40 CFR 98.416 - Data reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (16) Where missing data have been estimated pursuant to § 98.415, the reason the data were missing, the length of time the data were missing, the method used to estimate the missing data, and the... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Data reporting requirements. 98.416...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
.... Estimated Total Burden Hours: 222,924. Estimated Cost (Operation and Maintenance): $0. IV. Public... costs) is minimal, collection instruments are clearly understood, and OSHA's estimate of the information... of OSHA's estimate of the burden (time and costs) of the information collection requirements...
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
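The computational saving comes from replacing the finely binned integral in the point-process log-likelihood with a q-point quadrature rule. A minimal sketch under a log-linear intensity assumption, with an invented two-function basis and spike times:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def loglik_continuous(beta, spike_times, T, basis, q=60):
    """Continuous-time point-process log-likelihood
        sum_i log lambda(t_i) - integral_0^T lambda(t) dt,
    with the integral evaluated by q-point Gauss-Legendre quadrature
    instead of a fine time discretization."""
    lam = lambda t: np.exp(basis(t) @ beta)      # log-linear intensity model
    nodes, weights = leggauss(q)                 # quadrature nodes on [-1, 1]
    t_q = 0.5 * T * (nodes + 1.0)                # map nodes to [0, T]
    integral = 0.5 * T * np.sum(weights * lam(t_q))
    return np.sum(np.log(lam(np.asarray(spike_times)))) - integral

# Example: constant + slow cosine basis (placeholder, not a hippocampal place-field model)
basis = lambda t: np.stack([np.ones_like(t), np.cos(2 * np.pi * t / 10.0)], axis=-1)
ll = loglik_continuous(np.array([-2.0, 0.3]),
                       spike_times=[0.4, 1.7, 3.2, 8.9], T=10.0, basis=basis)
```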
NASA Astrophysics Data System (ADS)
Grenn, Michael W.
This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q, which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy H_R is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as H_R and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. The H_R may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on H_R, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in H_R and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from their current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for estimation of ΔE.
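One possible reading of the multiplicity-based entropy is sketched below, treating P as the number of ways R indistinguishable requirements can be distributed over N quality levels and taking H_R = log2(P); this is an illustrative interpretation, not the dissertation's exact formulation.

```python
from math import comb, log2

def requirements_entropy(R, N):
    """Toy multiplicity-based entropy: P counts the ways R indistinguishable
    requirements can be distributed over N quality levels (a Bose-Einstein-style
    multiset count), and H_R = log2(P). Illustrative only."""
    P = comb(R + N - 1, R)      # number of possible distributions
    return log2(P)

# As requirements accumulate quality attributes and collapse toward the top
# quality level, the effective multiplicity (and hence H_R) decreases.
H_initial = requirements_entropy(R=200, N=10)
H_later = requirements_entropy(R=200, N=3)    # fewer levels still effectively occupied
```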
Approaches and Data Quality for Global Precipitation Estimation
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Nelkin, E. J.
2015-12-01
The space and time scales on which precipitation varies are small compared to the satellite coverage that we have, so it is necessary to merge "all" of the available satellite estimates. Differing retrieval capabilities from the various satellites require inter-calibration for the satellite estimates, while "morphing", i.e., Lagrangian time interpolation, is used to lengthen the period over which time interpolation is valid. Additionally, estimates from geostationary-Earth-orbit infrared data are plentiful, but of sufficiently lower quality compared to low-Earth-orbit passive microwave estimates that they are only used when needed. Finally, monthly surface precipitation gauge data can be used to reduce bias and improve patterns of occurrence for monthly satellite data, and short-interval satellite estimates can be improved with a simple scaling such that they sum to the monthly satellite-gauge combination. The presentation will briefly consider some of the design decisions for practical computation of the Global Precipitation Measurement (GPM) mission product Integrated Multi-satellitE Retrievals for GPM (IMERG), then examine design choices that maximize value for end users. For example, data fields are provided in the output file that provide insight into the basis for the estimated precipitation, including error, sensor providing the estimate, precipitation phase (solid/liquid), and intermediate precipitation estimates. Another important initiative is successive computations for the same data date/time at longer latencies as additional data are received, which for IMERG is currently done at 6 hours, 16 hours, and 3 months after observation time. Importantly, users require long records for each latency, which runs counter to the data archiving practices at most archive sites. As well, the assignment of Digital Object Identifiers (DOI's) for near-real-time data sets (at 6 and 16 hours for IMERG) is not a settled issue.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
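The classical (static) step that the dynamic method builds on is a per-pixel least-squares solve of the Lambertian image formation model from at least three images with known lighting. A minimal sketch with synthetic placeholder images:

```python
import numpy as np

def surface_normals(images, light_dirs):
    """Classical Lambertian photometric stereo: for each pixel, solve
    I = L @ (albedo * n) in the least-squares sense from >= 3 images
    captured under known lighting directions."""
    L = np.asarray(light_dirs, dtype=float)           # (k, 3), k >= 3
    I = np.stack([im.reshape(-1) for im in images])   # (k, num_pixels)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)         # (3, num_pixels) = albedo * n
    albedo = np.linalg.norm(G, axis=0) + 1e-12
    N = (G / albedo).T.reshape(images[0].shape + (3,))
    return N, albedo.reshape(images[0].shape)

# Example with three synthetic 4x4 images (pixel values are placeholders)
rng = np.random.default_rng(2)
imgs = [rng.random((4, 4)) for _ in range(3)]
lights = [(0, 0, 1), (1, 0, 1), (0, 1, 1)]
normals, albedo = surface_normals(imgs, lights)
```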
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
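A common form of the weight-cost relationship mentioned above is a power-law cost estimating relationship fitted in log-log space. A toy sketch with invented historical data points:

```python
import numpy as np

# Hypothetical historical projects: dry mass (kg) and development cost ($M)
weight = np.array([150, 320, 800, 1500, 2600, 4000])
cost = np.array([45, 90, 190, 310, 520, 700])

# Power-law cost estimating relationship: cost = a * weight**b,
# fitted as a straight line in log-log space.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

predicted_cost = a * 1000**b   # estimate for a 1000 kg conceptual design ($M)
```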
Advance Technology Satellites in the Commercial Environment. Volume 2: Final Report
NASA Technical Reports Server (NTRS)
1984-01-01
A forecast of transponder requirements was obtained. Certain assumptions about system configurations are implicit in this process. The factors included are interpolation of baseline year values to produce yearly figures, estimation of satellite capture, effects of peak hours and the time-zone staggering of peak hours, circuit requirements for an acceptable grade of service, capacity of satellite transponders (including various compression methods where applicable), and requirements for spare transponders in orbit. The geographical distribution of traffic requirements was estimated.
Li, Zhan; Guiraud, David; Andreu, David; Benoussaad, Mourad; Fattal, Charles; Hayashibe, Mitsuhiro
2016-06-22
Functional electrical stimulation (FES) is a neuroprosthetic technique for restoring lost motor function of spinal cord injured (SCI) patients and motor-impaired subjects by delivering short electrical pulses to their paralyzed muscles or motor nerves. FES induces action potentials in muscles or nerves, so that muscle activity is characterized by the synchronous recruitment of motor units, whose compound electromyographic (EMG) signal is called the M-wave. The recorded evoked EMG (eEMG) can be employed to predict the resultant joint torque, and modeling of FES-induced joint torque based on eEMG is an essential step toward providing the necessary prediction of the expected muscle response before achieving accurate joint torque control by FES. Previous works on FES-induced torque tracking were mainly based on offline analysis. However, toward personalized clinical rehabilitation applications, real-time FES systems are essentially required, considering the subject-specific muscle responses to electrical stimulation. This paper proposes a wireless portable stimulator used for estimating/predicting joint torque based on real-time processing of eEMG. A Kalman filter and a recurrent neural network (RNN) are embedded into the real-time FES system for identification and estimation. Prediction results on 3 able-bodied subjects and 3 SCI patients demonstrate promising performance. As estimators, both the Kalman filter and RNN approaches show clinically feasible results for estimation/prediction of joint torque from eEMG signals only; moreover, the RNN requires less computation. The proposed real-time FES system establishes a platform for estimating and assessing the mechanical output, the electromyographic recordings and associated models. It will contribute to opening a new modality for personalized portable neuroprosthetic control toward consolidated personal healthcare for motor-impaired patients.
Time frequency requirements for radio interferometric earth physics
NASA Technical Reports Server (NTRS)
Thomas, J. B.; Fliegel, H. F.
1973-01-01
Two systems of VLBI (Very Long Baseline Interferometry) are now applicable to earth physics: an intercontinental baseline system using antennas of the NASA Deep Space Network, now observing at one-month intervals to determine UT1 for spacecraft navigation; and a shorter baseline system called ARIES (Astronomical Radio Interferometric Earth Surveying), to be used to measure crustal movement in California for earthquake hazards estimation. On the basis of experience with the existing DSN system, a careful study has been made to estimate the time and frequency requirements of both the improved intercontinental system and of ARIES. Requirements for the two systems are compared and contrasted.
Using GIS to Estimate Lake Volume from Limited Data
Estimates of lake volume are necessary for estimating residence time or modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of ...
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
Noren, S.R.; Udevitz, M.S.; Jay, C.V.
2012-01-01
Pacific walruses Odobenus rosmarus divergens use sea ice as a platform for resting, nursing, and accessing extensive benthic foraging grounds. The extent of summer sea ice in the Chukchi Sea has decreased substantially in recent decades, causing walruses to alter habitat use and activity patterns which could affect their energy requirements. We developed a bioenergetics model to estimate caloric demand of female walruses, accounting for maintenance, growth, activity (active in-water and hauled-out resting), molt, and reproductive costs. Estimates for non-reproductive females 0–12 yr old (65−810 kg) ranged from 16359 to 68960 kcal d−1 (74−257 kcal d−1 kg−1) for years with readily available sea ice for which we assumed animals spent 83% of their time in water. This translated into the energy content of 3200–5960 clams per day, equivalent to 7–8% and 14–9% of body mass per day for 5–12 and 2–4 yr olds, respectively. Estimated consumption rates of 12 yr old females were minimally affected by pregnancy, but lactation had a large impact, increasing consumption rates to 15% of body mass per day. Increasing the proportion of time in water to 93%, as might happen if walruses were required to spend more time foraging during ice-free periods, increased daily caloric demand by 6–7% for non-lactating females. We provide the first bioenergetics-based estimates of energy requirements for walruses and a first step towards establishing bioenergetic linkages between demography and prey requirements that can ultimately be used in predicting this population’s response to environmental change.
The author describes a statistical model that can be used to account for the error in estimating the required chloramine concentration times time (C x T) to inactivate Cryptosporidium oocysts with ozone followed by chloramine in drinking water. The safety factor described in the ...
48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., “Payments Under Time and Materials and Labor-Hour Contracts,” include in the cost proposal the estimated... to reflect the Government's estimate of the offeror's probable costs. Any inconsistency, whether real... hours are the workable hours required by the Government and do not include release time (i.e., holidays...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-03
... worker to obtain and post information for hoists. Total Burden Hours: 20,957. Estimated Cost (Operation... information is in the desired format, reporting burden (time and costs) is minimal, collection instruments are... accuracy of OSHA's estimate of the burden (time and costs) of the information collection requirements...
Asymptotic stability estimates near an equilibrium point
NASA Astrophysics Data System (ADS)
Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2017-07-01
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as 1H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
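The two-exponential cooling model underlying the nomogram can be inverted numerically for the time since death. The sketch below uses commonly quoted Henssge-style constants for ambient temperatures below about 23 °C; treat the constants, and the absence of corrective factors, as assumptions of this illustration rather than a forensic procedure.

```python
import numpy as np
from scipy.optimize import brentq

def cooling_ratio(t_hours, body_mass_kg):
    """Standardised cooling ratio Q(t) of a two-exponential (Henssge-style) model;
    the constants are commonly quoted values for ambient <= 23 C and should be
    treated as assumptions in this sketch."""
    B = -1.2815 * body_mass_kg ** -0.625 + 0.0284
    return 1.25 * np.exp(B * t_hours) - 0.25 * np.exp(5 * B * t_hours)

def estimate_time_since_death(rectal_c, ambient_c, body_mass_kg):
    """Invert Q(t) = (T_rectal - T_ambient) / (37.2 - T_ambient) for t (hours)."""
    q_measured = (rectal_c - ambient_c) / (37.2 - ambient_c)
    return brentq(lambda t: cooling_ratio(t, body_mass_kg) - q_measured, 0.01, 60.0)

# Example: 30 C rectal temperature, 18 C ambient, 75 kg body mass
t_est = estimate_time_since_death(30.0, 18.0, 75.0)   # approximate hours since death
```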
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ill suited to capturing rapid events: only a single photon can be counted per laser pulse, which imposes long acquisition times and requires low fluorescence emission rates to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low photon count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
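As an illustration of the decay-fitting step discussed above, the sketch below fits a double-exponential decay to simulated, Poisson-noised photon counts with a nonlinear least-squares routine. It is a minimal example under assumed lifetimes and bin settings, and it omits the instrument-response deconvolution and Laguerre approaches compared in the paper.

```python
# Minimal sketch: least-squares fit of a double-exponential fluorescence decay
# to simulated photon-count data (no instrument-response deconvolution).
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, tau2, amp):
    # a1: fractional weight of the first component; amp: total amplitude
    return amp * (a1 * np.exp(-t / tau1) + (1 - a1) * np.exp(-t / tau2))

rng = np.random.default_rng(0)
t = np.linspace(0, 12.5, 42)               # ns; 42 time bins, as in the binning study
true = double_exp(t, 0.7, 0.4, 3.0, 5000)  # assumed short and long lifetimes (ns)
counts = rng.poisson(true)                 # photon counting gives Poisson noise

popt, pcov = curve_fit(double_exp, t, counts,
                       p0=[0.5, 0.5, 2.0, counts[0]],
                       bounds=([0, 0.01, 0.01, 0], [1, 10, 10, np.inf]))
print("estimated a1, tau1, tau2:", popt[:3])
```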
DOT National Transportation Integrated Search
2001-07-01
International trade occurs in physical space and moving goods requires time. This paper examines the importance of time as a trade barrier, estimates the magnitude of time costs, and relates these to patterns of trade and the international organizati...
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
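For context, the sketch below shows the classical least-squares photometric stereo computation the abstract builds on: recovering an albedo-scaled surface normal from three intensities under known lighting directions. The lighting directions, normal, and albedo are made-up values, and the multi-tap sensor acquisition itself is not modeled.

```python
# Minimal sketch of classical Lambertian photometric stereo: recover a surface
# normal (scaled by albedo) from >= 3 intensity measurements under known,
# non-coplanar lighting directions. The multi-tap acquisition is not modeled.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],       # known (approximately unit) lighting directions
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
n_true = np.array([0.2, -0.3, 0.933])   # ground-truth unit normal for this pixel
albedo = 0.8
I = albedo * L @ n_true                 # Lambertian intensities (no shadows)

g, *_ = np.linalg.lstsq(L, I, rcond=None)   # least-squares solve of I = L @ (albedo * n)
albedo_est = np.linalg.norm(g)
n_est = g / albedo_est
print("estimated albedo:", albedo_est)
print("estimated normal:", n_est)
```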
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success and lower life-cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-21
.... Estimated Cost (Operation and Maintenance): $0. IV. Public Participation--Submission of Comments on This... costs) is minimal, collection instruments are clearly understood, and OSHA's estimate of the information... of OSHA's estimate of the burden (time and costs) of the information collection requirements...
Design of a two-level power system linear state estimator
NASA Astrophysics Data System (ADS)
Yang, Tao
The availability of synchro-phasor data has raised the possibility of a linear state estimator if the inputs are only complex currents and voltages and if there are enough such measurements to meet observability and redundancy requirements. Moreover, the new digital substations can perform some of the computation at the substation itself, resulting in a more accurate two-level state estimator. The objective of this research is to develop a two-level linear state estimator processing synchro-phasor data and estimating the states at both the substation level and the control center level. Both the mathematical algorithms that are different from those in the present state estimation procedure and the layered architecture of databases, communications and application programs that are required to support this two-level linear state estimator are described in this dissertation. In addition, because the availability of phasor measurements at substations will increase gradually, this research also describes how the state estimator can be enhanced to handle both the traditional state estimator and the proposed linear state estimator simultaneously. This provides a way to immediately utilize the benefits in those parts of the system where such phasor measurements become available and provides a pathway to transition to the smart grid of the future. The design procedure of the two-level state estimator is applied to two study systems. The first study system is the IEEE-14 bus system. The second one is the 179 bus Western Electricity Coordinating Council (WECC) system. The static database for the substations is constructed from the power flow data of these systems and the real-time measurement database is produced by a power system dynamic simulating tool (TSAT). Time-skew problems that may be caused by communication delays are also considered and simulated. We used the Network Simulator (NS) tool to simulate a simple communication system and analyse its time delay performance. These time delays were too small to affect the results, especially since the measurement data is time-stamped and the state estimator for these small systems could be run with subsecond frequency. Keywords: State Estimation, Synchro-Phasor Measurement, Distributed System, Energy Control Center, Substation, Time-skew
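When all inputs are synchrophasors, the measurement model is linear, z = Hx + e, and the estimate is the weighted least-squares solution x_hat = (H^T W H)^-1 H^T W z. The sketch below illustrates only that core computation with a made-up H, weights, and measurements; it is not the dissertation's two-level architecture, and a real implementation works with complex (or rectangular-form) phasor quantities.

```python
# Minimal sketch of linear weighted least-squares state estimation, z = H x + e.
# H, the weights, and the measurement values below are illustrative only.
import numpy as np

H = np.array([[1.0, 0.0],      # measurement model (illustrative)
              [0.0, 1.0],
              [1.0, -1.0]])
W = np.diag([1 / 0.01**2, 1 / 0.01**2, 1 / 0.02**2])  # inverse measurement variances
z = np.array([1.02, 0.98, 0.05])                      # measured values (per unit)

# Weighted least-squares solution: x_hat = (H^T W H)^-1 H^T W z
G = H.T @ W @ H                                       # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
print("estimated states:", x_hat)
```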
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalski, J. R.; Townsend, Richard L.; Seaburg, Adam
2013-05-01
The purpose of this compliance study was to estimate dam passage survival of subyearling Chinook salmon at The Dalles Dam during summer 2012. Under the 2008 Federal Columbia River Power System Biological Opinion, dam passage survival is required to be greater than or equal to 0.93 and estimated with a standard error (SE) less than or equal to 0.015. The study also estimated survival from the forebay 2 km upstream of the dam and through the tailrace to 2 km downstream of the dam, forebay residence time, tailrace egress time, spill passage efficiency (SPE), and fish passage efficiency (FPE), as required by the 2008 Columbia Basin Fish Accords.
Vandergoot, C.S.; Bur, M.T.; Powell, K.A.
2008-01-01
Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions. © Copyright by the American Fisheries Society 2008.
Autonomous Object Characterization with Large Datasets
2015-10-18
desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has...were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite...can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal ...such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver
On-field mounting position estimation of a lidar sensor
NASA Astrophysics Data System (ADS)
Khan, Owes; Bergelt, René; Hardt, Wolfram
2017-10-01
In order to retrieve a highly accurate view of their environment, autonomous cars are often equipped with LiDAR sensors. These sensors deliver a three dimensional point cloud in their own co-ordinate frame, where the origin is the sensor itself. However, the common co-ordinate system required by HAD (Highly Autonomous Driving) software systems has its origin at the center of the vehicle's rear axle. Thus, a transformation of the acquired point clouds to car co-ordinates is necessary, and thereby the determination of the exact mounting position of the LiDAR system in car coordinates is required. Unfortunately, directly measuring this position is a time-consuming and error-prone task. Therefore, different approaches have been suggested for its estimation which mostly require an exhaustive test-setup and are again time-consuming to prepare. When preparing a high number of LiDAR mounted test vehicles for data acquisition, most approaches fall short due to time or money constraints. In this paper we propose an approach for mounting position estimation which features an easy execution and setup, thus making it feasible for on-field calibration.
Effect of wet bulb depression on heat sterilization time of slash pine lumber
William T. Simpson
For international trade, heat sterilization of wood products is often required to prevent the spread of insects and pathogens. Application of heat sterilization requires estimates of the time necessary to heat the center of the wood configuration to the temperature required to kill the insect or other pest. The nature of the heating medium was found to have a...
Time maintenance system for the BMDO MSX spacecraft
NASA Technical Reports Server (NTRS)
Hermes, Martin J.
1994-01-01
The Johns Hopkins University Applied Physics Laboratory (APL) is responsible for designing and implementing a clock maintenance system for the Ballistic Missile Defense Organization's (BMDO) Midcourse Space Experiment (MSX) spacecraft. The MSX spacecraft has an on-board clock that will be used to control execution of time-dependent commands and to time tag all science and housekeeping data received from the spacecraft. MSX mission objectives have dictated that this spacecraft time, UTC(MSX), maintain a required accuracy with respect to UTC(USNO) of +/- 10 ms with a +/- 1 ms desired accuracy. APL's atomic time standards and the downlinked spacecraft time were used to develop a time maintenance system that will estimate the current MSX clock time offset during an APL pass and make estimates of the clock's drift and aging using the offset estimates from many passes. Using this information, the clock's accuracy will be maintained by uplinking periodic clock correction commands. The resulting time maintenance system is a combination of offset measurement, command/telemetry, and mission planning hardware and computing assets. All assets provide necessary inputs for deciding when corrections to the MSX spacecraft clock must be made to maintain its required accuracy without inhibiting other mission objectives. The MSX time maintenance system is described as a whole and the clock offset measurement subsystem, a unique combination of precision time maintenance and measurement hardware controlled by a Macintosh computer, is detailed. Simulations show that the system estimates the MSX clock offset to less than +/- 33 microseconds.
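A simple way to picture the drift and aging estimation described above is a polynomial fit to per-pass offset estimates, extrapolated to decide when a correction command is needed. The offsets, fit order, and thresholds below are illustrative assumptions, not the MSX system's actual processing.

```python
# Minimal sketch: fit clock drift (and a quadratic aging term) to a series of
# per-pass offset estimates, then predict when a correction is needed.
# The offsets and thresholds below are made up for illustration.
import numpy as np

t_days = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 7.0])      # pass times
offset_ms = np.array([0.1, 0.6, 1.2, 1.7, 2.9, 4.2])   # UTC(MSX) - UTC(USNO)

# offset(t) ~ a*t^2 + b*t + c  -> b ~ drift (ms/day), a ~ aging
a, b, c = np.polyfit(t_days, offset_ms, 2)
print(f"drift ~ {b:.3f} ms/day, aging ~ {a:.4f} ms/day^2")

# Predict the first time the 10 ms accuracy requirement would be exceeded.
t_future = np.linspace(0, 60, 601)
pred = np.polyval([a, b, c], t_future)
exceed = t_future[np.abs(pred) > 10.0]
if exceed.size:
    print("correction needed before day", exceed[0])
```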
Why are You Late?: Investigating the Role of Time Management in Time-Based Prospective Memory
Waldum, Emily R; McDaniel, Mark A.
2016-01-01
Time-based prospective memory tasks (TBPM) are those that are to be performed at a specific future time. Contrary to typical laboratory TBPM tasks (e.g., “hit the “z” key every 5 minutes”), many real-world TBPM tasks require more complex time-management processes. For instance to attend an appointment on time, one must estimate the duration of the drive to the appointment and then utilize this estimate to create and execute a secondary TBPM intention (e.g., “I need to start driving by 1:30 to make my 2:00 appointment on time”). Future under- and overestimates of drive time can lead to inefficient TBPM performance with the former lending to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults and further to determine how such biases along with additional time management components including planning and plan fidelity influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies, and as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. PMID:27336325
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to insure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. 
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME
Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...
Estimating psychiatric manpower requirements based on patients' needs.
Faulkner, L R; Goldman, C R
1997-05-01
To provide a better understanding of the complexities of estimating psychiatric manpower requirements, the authors describe several approaches to estimation and present a method based on patients' needs. A five-step method for psychiatric manpower estimation is used, with estimates of data pertinent to each step, to calculate the total psychiatric manpower requirements for the United States. The method is also used to estimate the hours of psychiatric service per patient per year that might be available under current psychiatric practice and under a managed care scenario. Depending on assumptions about data at each step in the method, the total psychiatric manpower requirements for the U.S. population range from 2,989 to 358,696 full-time-equivalent psychiatrists. The number of available hours of psychiatric service per patient per year is 14.1 hours under current psychiatric practice and 2.8 hours under the managed care scenario. The key to psychiatric manpower estimation lies in clarifying the assumptions that underlie the specific method used. Even small differences in assumptions mean large differences in estimates. Any credible manpower estimation process must include discussions and negotiations between psychiatrists, other clinicians, administrators, and patients and families to clarify the treatment needs of patients and the roles, responsibilities, and job description of psychiatrists.
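The sensitivity to assumptions noted above is easy to see with a toy version of the calculation: available hours per patient per year is simply total direct-care hours divided by the treated population. All numbers below are made up for illustration, not the paper's actual inputs.

```python
# Toy calculation showing how hours of psychiatric service per patient per year
# follow from a few assumptions (all values illustrative).
n_psychiatrists = 40_000          # full-time equivalents
clinical_hours_fte = 1_500        # direct-care hours per FTE per year
patients_treated = 4_000_000      # patients seen per year

hours_per_patient = n_psychiatrists * clinical_hours_fte / patients_treated
print(f"{hours_per_patient:.1f} hours per patient per year")

# Halving the treated population or doubling FTEs doubles the figure, which is
# why small changes in assumptions produce large changes in estimates.
```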
DOT National Transportation Integrated Search
2008-08-01
ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...
System modeling of the Thirty Meter Telescope alignment and phasing system
NASA Astrophysics Data System (ADS)
Dekens, Frank G.; Seo, Byoung-Joon; Troy, Mitchell
2014-08-01
We have developed a system model using the System Modeling Language (SysML) for the Alignment and Phasing System (APS) on the Thirty Meter Telescope (TMT). APS is a Shack-Hartmann wave-front sensor that will be used to measure the alignment and phasing of the primary mirror segments, and the alignment of the secondary and tertiary mirrors. The APS system model contains the flow-down of the Level 1 TMT requirements to APS (Level 2) requirements, and from there to the APS sub-systems (Level 3) requirements. The model also contains the operating modes and scenarios for various activities, such as maintenance alignment, post-segment exchange alignment, and calibration activities. The requirements flow-down is captured in SysML requirements diagrams, and we describe the process of maintaining the DOORS database as the single-source-of-truth for requirements, while using the SysML model to capture the logic and notes associated with the flow-down. We also use the system model to capture any needed communications from APS to other TMT systems, and between the APS sub-systems. The operations are modeled using SysML activity diagrams, and will be used to specify the APS interface documents. The modeling tool can simulate the top level activities to produce sequence diagrams, which contain all the communications between the system and subsystem needed for that activity. By adding time estimates for the lowest level APS activities, a robust estimate for the total time on-sky that APS requires to align and phase the telescope can be obtained. This estimate will be used to verify that the time APS requires on-sky meets the Level 1 TMT requirements.
Estimating aboveground biomass of mariola (Parthenium incanum) from plant dimensions
Carlos Villalobos
2007-01-01
The distribution and abundance of plant biomass in space and time are important properties of rangeland ecosystem. Land managers and researchers require reliable shrub weight estimates to evaluate site productivity, food abundance, treatment effects, and stocking rates. Rapid, nondestructive methods are needed to estimate shrub biomass in semi-arid ecosystems. Shrub...
40 CFR 98.126 - Data reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... fluorinated GHG emitted from equipment leaks (metric tons). (d) Reporting for missing data. Where missing data have been estimated pursuant to § 98.125, you must report the reason the data were missing, the length of time the data were missing, the method used to estimate the missing data, and the estimates of...
40 CFR 98.126 - Data reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... fluorinated GHG emitted from equipment leaks (metric tons). (d) Reporting for missing data. Where missing data have been estimated pursuant to § 98.125, you must report the reason the data were missing, the length of time the data were missing, the method used to estimate the missing data, and the estimates of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
.... Estimated Cost (Operation and Maintenance): $54,197 IV. Public Participation--Submission of Comments on This... costs) is minimal, collection instruments are clearly understood, and OSHA's estimate of the information... accuracy of OSHA's estimate of the burden (time and costs) of the information collection requirements...
Code of Federal Regulations, 2012 CFR
2012-01-01
... for decommissioning costs and on a demonstration that the applicant or licensee passes the financial... of at least $50 million, or at least 30 times the total current decommissioning cost estimate (or the... least 100 times the total current decommissioning cost estimate (or the current amount required if...
Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean
NASA Astrophysics Data System (ADS)
Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.
2018-02-01
The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
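The quoted platform counts can be roughly checked by dividing the 35°S-70°S band into boxes the size of each correlation footprint; the sketch below does that arithmetic, ignoring the land/ocean mask, so the counts are approximate box totals rather than the paper's optimal layouts.

```python
# Rough check of the quoted platform counts: divide the 35S-70S band into
# boxes the size of the correlation footprint (ocean fraction ignored).
lat_extent, lon_extent = 35.0, 360.0     # degrees (35S to 70S, all longitudes)

def n_boxes(dlon, dlat):
    return (lon_extent / dlon) * (lat_extent / dlat)

print("inventory (20 x 6 deg):", round(n_boxes(20, 6)))    # ~105 platforms
print("carbon flux (30 x 6 deg):", round(n_boxes(30, 6)))  # ~70 platforms
print("heat flux (90 x 10 deg):", round(n_boxes(90, 10)))  # ~14 platforms
print("600-float array (~7 x 3 deg):", round(n_boxes(7, 3)))
```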
Estimating Development Cost of an Interactive Website Based Cancer Screening Promotion Program
Lairson, David R.; Chung, Tong Han; Smith, Lisa G.; Springston, Jeffrey K.; Champion, Victoria L.
2015-01-01
Objectives The aim of this study was to estimate the initial development costs for an innovative talk show format tailored intervention delivered via the interactive web, for increasing cancer screening in women 50 to 75 who were non-adherent to screening guidelines for colorectal cancer and/or breast cancer. Methods The cost of the intervention development was estimated from a societal perspective. Micro costing methods plus vendor contract costs were used to estimate cost. Staff logs were used to track personnel time. Non-personnel costs include all additional resources used to produce the intervention. Results Development cost of the interactive web-based intervention was $0.39 million, of which 77% was direct cost. About 98% of the cost was incurred in personnel time cost, contract cost and overhead cost. Conclusions The new web-based disease prevention medium required substantial investment in health promotion and media specialist time. The development cost was primarily driven by the high level of human capital required. The cost of intervention development is important information for assessing and planning future public and private investments in web-based health promotion interventions. PMID:25749548
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-01
... of a Supported Direct FDA Work Hour for FY 2013 FDA is required to estimate 100 percent of its costs... operating costs. A. Estimating the Full Cost per Direct Work Hour in FY 2011 In general, the starting point for estimating the full cost per direct work hour is to estimate the cost of a full-time-equivalent...
Why are you late? Investigating the role of time management in time-based prospective memory.
Waldum, Emily R; McDaniel, Mark A
2016-08-01
Time-based prospective memory tasks (TBPM) are those that are to be performed at a specific future time. Contrary to typical laboratory TBPM tasks (e.g., hit the Z key every 5 min), many real-world TBPM tasks require more complex time-management processes. For instance, to attend an appointment on time, one must estimate the duration of the drive to the appointment and then use this estimate to create and execute a secondary TBPM intention (e.g., "I need to start driving by 1:30 to make my 2:00 appointment on time"). Future under- and overestimates of drive time can lead to inefficient TBPM performance with the former lending to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults and, further, to determine how such biases along with additional time management components including planning and plan fidelity influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies, and as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Gomes Junior, Saint Clair Santos; Almeida, Rosimary Terezinha
2009-02-01
To develop a simulation model using public data to estimate the cancer care infrastructure required by the public health system in the state of São Paulo, Brazil. Public data from the Unified Health System database regarding cancer surgery, chemotherapy, and radiation therapy, from January 2002-January 2004, were used to estimate the number of cancer cases in the state. The percentages recorded for each therapy in the Hospital Cancer Registry of Brazil were combined with the data collected from the database to estimate the need for services. Mixture models were used to identify subgroups of cancer cases with regard to the length of time that chemotherapy and radiation therapy were required. A simulation model was used to estimate the infrastructure required taking these parameters into account. The model indicated the need for surgery in 52.5% of the cases, radiation therapy in 42.7%, and chemotherapy in 48.5%. The mixture models identified two subgroups for radiation therapy and four subgroups for chemotherapy with regard to mean usage time for each. These parameters allowed the following estimated infrastructure needs to be made: 147 operating rooms, 2 653 operating beds, 297 chemotherapy chairs, and 102 radiation therapy devices. These estimates suggest the need for a 1.2-fold increase in the number of chemotherapy services and a 2.4-fold increase in the number of radiation therapy services when compared with the parameters currently used by the public health system. A simulation model, such as the one used in the present study, permits better distribution of health care resources because it is based on specific, local needs.
A Bayesian perspective on magnitude estimation.
Petzschner, Frederike H; Glasauer, Stefan; Stephan, Klaas E
2015-05-01
Our representation of the physical world requires judgments of magnitudes, such as loudness, distance, or time. Interestingly, magnitude estimates are often not veridical but subject to characteristic biases. These biases are strikingly similar across different sensory modalities, suggesting common processing mechanisms that are shared by different sensory systems. However, the search for universal neurobiological principles of magnitude judgments requires guidance by formal theories. Here, we discuss a unifying Bayesian framework for understanding biases in magnitude estimation. This Bayesian perspective enables a re-interpretation of a range of established psychophysical findings, reconciles seemingly incompatible classical views on magnitude estimation, and can guide future investigations of magnitude estimation and its neurobiological mechanisms in health and in psychiatric diseases, such as schizophrenia. Copyright © 2015 Elsevier Ltd. All rights reserved.
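One concrete instance of the biases discussed here is the central-tendency effect that falls out of Gaussian prior-likelihood combination: the posterior mean is a precision-weighted average, so small magnitudes are overestimated and large ones underestimated. The sketch below illustrates this with arbitrary parameter values; it is not the authors' specific model.

```python
# Sketch of the central-tendency bias under a Gaussian prior/likelihood model:
# the posterior mean is a precision-weighted average of the noisy measurement
# and the prior over the stimulus range. Parameter values are illustrative.
import numpy as np

mu_prior, var_prior = 10.0, 9.0     # prior over stimulus magnitudes (e.g., seconds)
var_noise = 4.0                     # sensory noise variance

def posterior_mean(measurement):
    w = (1 / var_noise) / (1 / var_noise + 1 / var_prior)   # weight on the measurement
    return w * measurement + (1 - w) * mu_prior

for true_mag in [4.0, 10.0, 16.0]:
    samples = np.random.default_rng(1).normal(true_mag, np.sqrt(var_noise), 10_000)
    est = np.mean([posterior_mean(m) for m in samples])
    print(f"true {true_mag:5.1f} -> mean estimate {est:5.2f}")  # pulled toward 10
```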
Wavefront correction with Kalman filtering for the WFIRST-AFTA coronagraph instrument
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.
2015-09-01
The only way to characterize most exoplanets spectrally is via direct imaging. For example, the Coronagraph Instrument (CGI) on the proposed Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA) mission plans to image and characterize several cool gas giants around nearby stars. The integration time on these faint exoplanets will be many hours to days. A crucial assumption for mission planning is that the time required to dig a dark hole (a region of high star-to-planet contrast) with deformable mirrors is small compared to science integration time. The science camera must be used as the wavefront sensor to avoid non-common path aberrations, but this approach can be quite time intensive. Several estimation images are required to build an estimate of the starlight electric field before it can be partially corrected, and this process is repeated iteratively until high contrast is reached. Here we present simulated results of batch process and recursive wavefront estimation schemes. In particular, we test a Kalman filter and an iterative extended Kalman filter (IEKF) to reduce the total exposure time and improve the robustness of wavefront correction for the WFIRST-AFTA CGI. An IEKF or other nonlinear filter also allows recursive, real-time estimation of sources incoherent with the star, such as exoplanets and disks, and may therefore reduce detection uncertainty.
Development of regional stump-to-mill logging cost estimators
Chris B. LeDoux; John E. Baumgras
1989-01-01
Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
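For orientation, the sketch below shows the simplest version of this inflation, the standard design effect 1 + (m - 1) x ICC for a parallel cluster design with cluster size m; the paper's formulae extend this with cluster and individual autocorrelations for repeated-measures and stepped wedge designs. The effect size, SD, and nuisance values are illustrative.

```python
# Simplified sketch: inflating an individually randomised sample size by the
# standard design effect 1 + (m - 1) * ICC for a parallel cluster design.
import math
from scipy import stats

def n_individual(delta, sd, alpha=0.05, power=0.8):
    """Per-arm n for a two-sample comparison of means."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

def n_cluster_trial(delta, sd, m, icc, **kw):
    deff = 1 + (m - 1) * icc          # design effect
    n = n_individual(delta, sd, **kw) * deff
    return math.ceil(n), math.ceil(n / m)   # (participants per arm, clusters per arm)

print(n_cluster_trial(delta=0.5, sd=1.0, m=20, icc=0.05))
```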
23 CFR 1340.4 - Population, demographic, and time/day requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... population of interest is required. However, in order to assist in the evaluation of trends, it is recommended that data be collected in such a way that restraint use estimates can be reported separately for...
23 CFR 1340.4 - Population, demographic, and time/day requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... population of interest is required. However, in order to assist in the evaluation of trends, it is recommended that data be collected in such a way that restraint use estimates can be reported separately for...
Smooth time-dependent receiver operating characteristic curve estimators.
Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos
2018-03-01
The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.
Alternative nuclear technologies
NASA Astrophysics Data System (ADS)
Schubert, E.
1981-10-01
The lead times required to develop a select group of nuclear fission reactor types and fuel cycles to the point of readiness for full commercialization are compared. Along with lead times, fuel material requirements and comparative costs of producing electric power were estimated. A conservative approach and consistent criteria for all systems were used in estimates of the steps required and the times involved in developing each technology. The impact of the inevitable exhaustion of the low- or reasonable-cost uranium reserves in the United States on the desirability of completing the breeder reactor program, with its favorable long-term result on fission fuel supplies, is discussed. The long times projected to bring the most advanced alternative converter reactor technologies (the heavy water reactor and the high-temperature gas-cooled reactor) into commercial deployment, when compared to the time projected to bring the breeder reactor into equivalent status, suggest that the country's best choice is to develop the breeder. The perceived diversion-proliferation problems with the uranium-plutonium fuel cycle have workable solutions that can be developed which will enable the use of those materials at substantially reduced levels of diversion risk.
A model for the cost of doing a cost estimate
NASA Technical Reports Server (NTRS)
Remer, D. S.; Buchanan, H. R.
1992-01-01
A model for estimating the cost required to do a cost estimate for Deep Space Network (DSN) projects that range from $0.1 to $100 million is presented. The cost of the cost estimate in thousands of dollars, C(sub E), is found to be approximately given by C(sub E) = K((C(sub p))(sup 0.35)) where C(sub p) is the cost of the project being estimated in millions of dollars and K is a constant depending on the accuracy of the estimate. For an order-of-magnitude estimate, K = 24; for a budget estimate, K = 60; and for a definitive estimate, K = 115. That is, for a specific project, the cost of doing a budget estimate is about 2.5 times as much as that for an order-of-magnitude estimate, and a definitive estimate costs about twice as much as a budget estimate. Use of this model should help provide the level of resources required for doing cost estimates and, as a result, provide insights towards more accurate estimates with less potential for cost overruns.
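The quoted model is directly computable; the sketch below simply tabulates C(sub E) for a few project sizes using the K values given in the abstract.

```python
# Direct implementation of the cost model quoted above:
# C_E (in $ thousands) = K * C_p**0.35, with C_p the project cost in $ millions
# and K = 24, 60, or 115 for order-of-magnitude, budget, and definitive estimates.
K = {"order-of-magnitude": 24, "budget": 60, "definitive": 115}

def cost_of_estimate_k_dollars(project_cost_millions, kind):
    return K[kind] * project_cost_millions ** 0.35

for cp in (0.1, 1, 10, 100):
    row = {kind: round(cost_of_estimate_k_dollars(cp, kind)) for kind in K}
    print(f"C_p = ${cp:>5} M -> C_E ($k): {row}")
```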
The fossilized birth–death process for coherent calibration of divergence-time estimates
Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja
2014-01-01
Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181
The effect of atmospheric drag on the design of solar-cell power systems for low Earth orbit
NASA Technical Reports Server (NTRS)
Kyser, A. C.
1983-01-01
The feasibility of reducing the atmospheric drag of low-orbit solar-powered satellites by operating the solar-cell array in a minimum-drag attitude, rather than in the conventional Sun-pointing attitude, was determined. The weights of the solar array, the energy storage batteries, and the fuel required to overcome the drag of the solar array for a range of design lifetimes in orbit were considered. The drag of the array was estimated by free molecule flow theory, and the system weights were calculated from unit weight estimates for 1990 technology. The trailing, minimum-drag system was found to require 80% more solar array area and 30% more battery capacity; the system weights for reasonable lifetimes were dominated by the thruster fuel requirements.
Kaur, Gurpreet; English, Coralie; Hillier, Susan
2013-03-01
How accurately do physiotherapists estimate how long stroke survivors spend in physiotherapy sessions and the amount of time stroke survivors are engaged in physical activity during physiotherapy sessions? Does the mode of therapy (individual sessions or group circuit classes) affect the accuracy of therapists' estimates? Observational study embedded within a randomised trial. People who participated in the CIRCIT trial after having a stroke. 47 therapy sessions scheduled and supervised by physiotherapists (n = 8) and physiotherapy assistants (n = 4) for trial participants were video-recorded. Therapists' estimations of therapy time were compared to the video-recorded times. The agreement between therapist-estimated and video-recorded data for total therapy time and active time was excellent, with intraclass correlation coefficients (ICC) of 0.90 (95% CI 0.83 to 0.95) and 0.83 (95% CI 0.73 to 0.93) respectively. Agreement between therapist-estimated and video-recorded data for inactive time was good (ICC score 0.62, 95% CI 0.40 to 0.77). The mean (SD) difference between therapist-estimated and video-recorded total therapy time, active time, and inactive time for all sessions was 7.7 (10.5), 14.1 (10.3) and -6.9 (9.5) minutes respectively. Bland-Altman analyses revealed a systematic bias of overestimation of total therapy time and total active time, and underestimation of inactive time by therapists. Compared to individual therapy sessions, therapists estimated total circuit class therapy duration more accurately, but estimated active time within circuit classes less accurately. Therapists are inaccurate in their estimation of the amount of time stroke survivors are active during therapy sessions. When accurate therapy data are required, use of objective measures is recommended. Copyright © 2013 Australian Physiotherapy Association. Published by .. All rights reserved.
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
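The EMC approach referred to above amounts to multiplying an event mean concentration by a runoff volume, with runoff obtained from a rainfall/runoff relationship. The sketch below shows that calculation with a simple runoff-coefficient model; the catchment size, rainfall, coefficient, and EMC values are illustrative, not those of the Sydney estuary catchments.

```python
# Minimal sketch of the event-mean-concentration (EMC) loading method:
# load = EMC * runoff volume, with runoff from a simple runoff-coefficient
# rainfall-runoff relationship. All input values are illustrative.
catchment_area_m2 = 2.5e6        # highly urbanised catchment
annual_rainfall_m = 1.2
runoff_coefficient = 0.6         # fraction of rainfall converted to runoff
emc_mg_per_L = {"TSS": 150.0, "total N": 2.0, "Zn": 0.25}

runoff_volume_L = catchment_area_m2 * annual_rainfall_m * runoff_coefficient * 1000.0

for analyte, emc in emc_mg_per_L.items():
    load_kg = emc * runoff_volume_L / 1e6     # mg -> kg
    print(f"{analyte:8s}: {load_kg:10.0f} kg/yr")
```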
Real-time estimation of incident delay in dynamic and stochastic networks
DOT National Transportation Integrated Search
1997-01-01
The ability to predict the link travel times is a necessary requirement for most intelligent transportation systems (ITS) applications such as route guidance systems. In an urban traffic environment, these travel times are dynamic and stochastic and ...
The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women
NASA Astrophysics Data System (ADS)
Adekanmbi, D. B.; Bamiduro, T. A.
2006-10-01
The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used in estimating the parameters of the model has implications for the goodness of fit of the model. In this study, the parameters of the model are estimated using the Method of Moments and the Newton-Raphson estimation procedure. The goodness of fit of the model was considered, using estimates from the two methods of estimation, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby providing reasonable expectations of fecundability for the married female population in the country.
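For reference, the beta-geometric model arises from a per-couple fecundability p drawn from a Beta(α, β) distribution followed by a geometric number of cycles, giving P(T = t) = B(α + 1, β + t - 1)/B(α, β). The sketch below fits this model to simulated data by numerical maximum likelihood; it is a stand-in for, not a reproduction of, the method-of-moments and Newton-Raphson procedures used in the study.

```python
# Sketch of fitting the beta-geometric model for cycles to conception.
# pmf: P(T = t) = B(alpha + 1, beta + t - 1) / B(alpha, beta), t = 1, 2, ...
# Parameters are estimated here by numerical maximum likelihood.
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

def neg_loglik(log_params, t):
    alpha, beta = np.exp(log_params)          # keep parameters positive
    return -np.sum(betaln(alpha + 1, beta + t - 1) - betaln(alpha, beta))

# Simulated cycles-to-conception data: per-couple fecundability p ~ Beta(3, 5),
# then a geometric number of cycles until conception.
rng = np.random.default_rng(42)
p = rng.beta(3, 5, size=500)
t = rng.geometric(p)

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(t,), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"alpha ~ {alpha_hat:.2f}, beta ~ {beta_hat:.2f}")  # roughly recovers (3, 5)
```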
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
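The bootstrap idea can be sketched as follows for the relative level change in a segmented regression. This simplified version resamples OLS residuals and omits the correction for autocorrelated errors that the published method applies; all data are simulated.

```python
# Simplified sketch of a bootstrap CI for the relative level change in a
# segmented (interrupted time series) regression, ignoring autocorrelation.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 24, 24
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)
t_post = np.where(post == 1, t - n_pre + 1, 0.0)

X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
y = 100 + 0.5 * t - 15 * post - 0.3 * t_post + rng.normal(0, 3, t.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def relative_level_change(b):
    counterfactual = b[0] + b[1] * n_pre      # predicted level at intervention, no effect
    return b[2] / counterfactual              # absolute level change / counterfactual level

boot = []
for _ in range(2000):
    y_b = X @ beta + rng.choice(resid, size=resid.size, replace=True)
    b_b, *_ = np.linalg.lstsq(X, y_b, rcond=None)
    boot.append(relative_level_change(b_b))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"relative level change: {relative_level_change(beta):.3f}  (95% CI {lo:.3f}, {hi:.3f})")
```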
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
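The role of the Nelder-Mead simplex here can be illustrated with a small fit of the three-parameter model, ΔH(T) = ΔH(T0) + ΔCp(T - T0) and ΔS(T) = ΔS(T0) + ΔCp ln(T/T0) with ln k = -ΔG/(RT) - ln(phase ratio), to synthetic isothermal retention-factor data. This is only a sketch: the published method instead fits temperature-programmed retention times (which requires integrating the retention model over the temperature programme), and the phase ratio and data values below are assumptions.

```python
# Sketch: estimating dH(T0), dS(T0) and dCp by Nelder-Mead from synthetic
# isothermal retention-factor data using the standard three-parameter model.
import numpy as np
from scipy.optimize import minimize

R, T0, phase_ratio = 8.314, 363.15, 250.0   # phase ratio is an assumed value

def ln_k(T, dH0_kJ, dS0, dCp):
    # dH(T) = dH(T0) + dCp*(T - T0); dS(T) = dS(T0) + dCp*ln(T/T0)
    dH = dH0_kJ * 1000.0 + dCp * (T - T0)
    dS = dS0 + dCp * np.log(T / T0)
    return -(dH - T * dS) / (R * T) - np.log(phase_ratio)

T_data = np.array([333.15, 353.15, 373.15, 393.15, 413.15])   # isothermal runs (K)
true = (-45.0, -95.0, -60.0)                                   # kJ/mol, J/(mol K), J/(mol K)
lnk_obs = ln_k(T_data, *true) + np.random.default_rng(3).normal(0, 0.01, T_data.size)

sse = lambda p: np.sum((ln_k(T_data, *p) - lnk_obs) ** 2)
fit = minimize(sse, x0=(-40.0, -80.0, -50.0), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 20_000})
# dCp is only weakly determined over a narrow temperature range, so its
# estimate is the least precise of the three.
print("estimated dH(T0) [kJ/mol], dS(T0), dCp:", fit.x)
```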
Estimation of excitation forces for wave energy converters control using pressure measurements
NASA Astrophysics Data System (ADS)
Abdelkhalik, O.; Zou, S.; Robinett, R.; Bacelli, G.; Wilson, D.
2017-08-01
Most control algorithms of wave energy converters require prediction of wave elevation or excitation force for a short future horizon, to compute the control in an optimal sense. This paper presents an approach that requires the estimation of the excitation force and its derivatives at present time with no need for prediction. An extended Kalman filter is implemented to estimate the excitation force. The measurements in this approach are selected to be the pressures at discrete points on the buoy surface, in addition to the buoy heave position. The pressures on the buoy surface are more directly related to the excitation force on the buoy as opposed to wave elevation in front of the buoy. These pressure measurements are also more accurate and easier to obtain. A singular arc control is implemented to compute the steady-state control using the estimated excitation force. The estimated excitation force is expressed in the Laplace domain and substituted in the control, before the latter is transformed to the time domain. Numerical simulations are presented for a Bretschneider wave case study.
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
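The single-sample case mentioned above reduces to a ratio of successive age-class counts: in a stable, stationary population, S_x = n_{x+1}/n_x. The sketch below shows that calculation on made-up counts; the paper's estimators generalise it to populations that are not stable, using two or more years of data.

```python
# Minimal sketch: for a stable, stationary population a single age-structure
# sample gives age-specific survival as the ratio of successive age-class
# counts, S_x = n_{x+1} / n_x. Counts below are illustrative.
import numpy as np

counts = np.array([220, 154, 112, 85, 60, 41, 26])   # animals counted in ages 0..6
survival = counts[1:] / counts[:-1]
for age, s in enumerate(survival):
    print(f"S_{age} ~ {s:.2f}")
```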
Qian, Siyu; Yu, Ping; Hailey, David M; Wang, Ning
2016-04-01
To examine nursing time spent on administration of medications in a residential aged care (RAC) home, and to determine factors that influence the time to medicate a resident. Information on nursing time spent on medication administration is useful for planning and implementation of nursing resources. Nurses were observed over 12 morning medication rounds using a time-motion observational method and field notes, at two high-care units in an Australian RAC home. Nurses spent between 2.5 and 4.5 hours in a medication round. Administration of medication averaged 200 seconds per resident. Four factors had significant impact on medication time: number of types of medication, number of tablets taken by a resident, methods used by a nurse to prepare tablets and methods to provide tablets. Administration of medication consumed a substantial, though variable amount of time in the RAC home. Nursing managers need to consider the factors that influenced the nursing time required for the administration of medication in their estimation of nursing workload and required resources. To ensure safe medication administration for older people, managers should regularly assess the changes in the factors influencing nursing time on the administration of medication when estimating nursing workload and required resources. © 2015 John Wiley & Sons Ltd.
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal, for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
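A small sketch of the detection-and-correction rule described above. The reset threshold, the substitution of the running average for a rejected estimate, and the handling of zero (unvoiced) values are assumptions; the text above only specifies the 0.75-1.25 acceptance band and the reset after repeated corrections.

```python
def make_pitch_corrector(max_consecutive=4):
    """Return a stateful corrector implementing the 0.75-1.25 acceptance band
    around a running average of non-zero pitch-period values, with a reset
    after too many consecutive corrections (likely a speaker change)."""
    state = {"values": [], "consecutive": 0}

    def correct(p):
        nonzero = [v for v in state["values"] if v > 0]
        avg = sum(nonzero) / len(nonzero) if nonzero else None
        if p == 0 or avg is None or 0.75 * avg <= p <= 1.25 * avg:
            out = p                                    # accept the received estimate
            state["consecutive"] = 0
        else:
            out = avg                                  # substitute the running average
            state["consecutive"] += 1
            if state["consecutive"] > max_consecutive:
                state["values"].clear()                # discard the old average
                state["consecutive"] = 0
                out = p                                # restart the average from here
        state["values"].append(out)
        return out

    return correct

# Plausible values, one outlier (doubling error), then a new, lower-pitched speaker.
corrector = make_pitch_corrector()
stream = [80, 82, 81, 160, 79, 78, 40, 41, 42, 40, 41, 40]
print([round(corrector(p), 1) for p in stream])
```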
Regression Analysis of a Disease Onset Distribution Using Diagnosis Data
Young, Jessica G.; Jewell, Nicholas P.; Samuels, Steven J.
2008-01-01
Summary We consider methods for estimating the effect of a covariate on a disease onset distribution when the observed data structure consists of right-censored data on diagnosis times and current status data on onset times amongst individuals who have not yet been diagnosed. Dunson and Baird (2001, Biometrics 57, 306–403) approached this problem using maximum likelihood, under the assumption that the ratio of the diagnosis and onset distributions is monotonic nondecreasing. As an alternative, we propose a two-step estimator, an extension of the approach of van der Laan, Jewell, and Petersen (1997, Biometrika 84, 539–554) in the single sample setting, which is computationally much simpler and requires no assumptions on this ratio. A simulation study is performed comparing estimates obtained from these two approaches, as well as that from a standard current status analysis that ignores diagnosis data. Results indicate that the Dunson and Baird estimator outperforms the two-step estimator when the monotonicity assumption holds, but the reverse is true when the assumption fails. The simple current status estimator loses only a small amount of precision in comparison to the two-step procedure but requires monitoring time information for all individuals. In the data that motivated this work, a study of uterine fibroids and chemical exposure to dioxin, the monotonicity assumption is seen to fail. Here, the two-step and current status estimators both show no significant association between the level of dioxin exposure and the hazard for onset of uterine fibroids; the two-step estimator of the relative hazard associated with increasing levels of exposure has the least estimated variance amongst the three estimators considered. PMID:17680832
Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.
2016-01-01
Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is a growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock-complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance across time, data needed by researchers interested in predator-prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams in southwest Kodiak Island, Alaska in which high densities of sockeye salmon spawn. PMID:27326378
26 CFR 1.6073-1 - Time and place for filing declarations of estimated income tax by individuals.
Code of Federal Regulations, 2014 CFR
2014-04-01
... aliens who do not have wages subject to withholding under Chapter 24 of the code and are not treated as..., these aliens are not required to file a declaration of estimated tax before June 15th. (b) Farmers or..., and shrimps), sponges, seaweeds, or other aquatic forms of animal and vegetable life. The estimated...
26 CFR 1.6073-1 - Time and place for filing declarations of estimated income tax by individuals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... aliens who do not have wages subject to withholding under Chapter 24 of the code and are not treated as..., these aliens are not required to file a declaration of estimated tax before June 15th. (b) Farmers or..., and shrimps), sponges, seaweeds, or other aquatic forms of animal and vegetable life. The estimated...
26 CFR 1.6073-1 - Time and place for filing declarations of estimated income tax by individuals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... aliens who do not have wages subject to withholding under Chapter 24 of the code and are not treated as..., these aliens are not required to file a declaration of estimated tax before June 15th. (b) Farmers or..., and shrimps), sponges, seaweeds, or other aquatic forms of animal and vegetable life. The estimated...
26 CFR 1.6073-1 - Time and place for filing declarations of estimated income tax by individuals.
Code of Federal Regulations, 2011 CFR
2011-04-01
... aliens who do not have wages subject to withholding under Chapter 24 of the code and are not treated as..., these aliens are not required to file a declaration of estimated tax before June 15th. (b) Farmers or..., and shrimps), sponges, seaweeds, or other aquatic forms of animal and vegetable life. The estimated...
Impossible Certainty: Cost Risk Analysis for Air Force Systems
2006-01-01
the estimated cost of weapon systems, which typically take many years to acquire and remain in operation for a long time. To make those estimates... times, uncertain, undefined, or unknown when estimates are prepared. New system development may involve further uncertainty due to unproven or...risk (a system requiring more money to complete than was forecasted) and operational risk (a vital capability becoming unaffordable as the program
AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images
Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.
2017-01-01
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, however, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Abundance estimates for white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
NASA Astrophysics Data System (ADS)
Spaans, K.; Hooper, A. J.
2017-12-01
The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit the computing resources required to process the large quantities of data being acquired. In contrast to volcano or earthquake applications, urban monitoring requires high resolution processing in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms and need be re-evaluated only occasionally. By taking into account the scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. We show how the algorithm successfully extracts high density time series from full resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
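A minimal sketch of the per-interferogram coherence estimate described above, assuming the ensembles of "sibling" pixels (pixels with similar amplitude behaviour through time) have already been identified; the sibling-selection step, thresholds, and full-resolution point selection are not shown, and the arrays are synthetic.

```python
import numpy as np

def ensemble_coherence(slc1, slc2, ensembles):
    """Per-pixel coherence for one interferogram, averaged over each pixel's
    ensemble of sibling pixels.

    slc1, slc2 : complex 2-D arrays (two co-registered SLC images)
    ensembles  : dict mapping a pixel (row, col) to a list of sibling (row, col)
    """
    ifg = slc1 * np.conj(slc2)
    p1, p2 = np.abs(slc1) ** 2, np.abs(slc2) ** 2
    coh = {}
    for pix, sibs in ensembles.items():
        idx = tuple(np.array(sibs).T)                  # (rows, cols) of the ensemble
        den = np.sqrt(p1[idx].sum() * p2[idx].sum())
        coh[pix] = np.abs(ifg[idx].sum()) / den if den > 0 else 0.0
    return coh

# Tiny synthetic example: two correlated 4x4 SLCs, one pixel with 3 siblings.
rng = np.random.default_rng(1)
slc_a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
slc_b = slc_a * np.exp(1j * 0.3) + 0.1 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
ens = {(1, 1): [(1, 1), (1, 2), (2, 1)]}
print(ensemble_coherence(slc_a, slc_b, ens))           # close to 1 for this pair
```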
[Energy requirements in adolescents playing basketball in Russian Olympic reserve team].
Martinchik, A N; Baturin, A K; Petukhov, A B; Baeva, V S; Zemlianskaia, T A; Sokolov, A I; Peskova, E V; Tysiachnaia, E M
2003-01-01
The energy expenditure, energy requirements, and dietary intake were studied in basketball players aged 14-16 years during a 3-week training period. The subjects were 14 boys and 18 girls, members of the Russian Olympic basketball reserve team. Dietary intake was estimated from dietary records of all food consumed within 24 hours over the last 7 days of the training period. Energy expenditure was estimated by recording the time the team spent on different physical activities and multiplying by the corresponding physical activity coefficients. A decrease in body mass and body mass index was observed by the end of the training period in boys 195 cm or taller. These tall boys did not consume enough food to satisfy the estimated energy requirement. It is estimated that the energy need of tall basketball players is no less than 5000 kcal for boys and 3100 kcal for girls.
NASA Astrophysics Data System (ADS)
An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.
2017-01-01
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Liu, Y; Wickens, C D
1994-11-01
The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.
Tactical radar technology study. Volume 1: Executive summary
NASA Astrophysics Data System (ADS)
Rosien, R.; Cardone, L.; Hammers, D.; Klein, A.; Nozawa, E.
1980-03-01
This report presents results of a study to identify new technology required to provide advanced multi-threat performance capabilities in future tactical surveillance radar designs. A baseline design with optional subsystem characteristics has been synthesized to provide both functional and operational survivability in a dynamic and hostile situation postulated for the post 1985 time frame. Comparisons have been made of available technology with that required by the new baseline design to identify new technology requirements. Recommendations are presented for critical new technology programs including estimates of technical risks, costs and required development time.
2011-01-01
Background Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
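The two records above describe the same DTW-S algorithm; the sketch below shows only the classic dynamic time warping core that it builds on (optimal alignment of two series and a crude per-point time-shift readout). The interpolation of time points, the simulation-based false positive rate, and the significance testing that distinguish DTW-S are not reproduced here, and the example series are synthetic.

```python
import numpy as np

def dtw(x, y):
    """Classic dynamic time warping: return the optimal alignment cost and
    path between two 1-D series (no windowing, squared-difference cost)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]

# Two expression-like time series where y lags x by roughly two time points.
t = np.arange(10)
x = np.sin(t / 2.0)
y = np.sin((t - 2) / 2.0)
cost, path = dtw(x, y)
shifts = [j - i for i, j in path]          # difference of aligned indices
print("alignment cost:", round(cost, 3), "median shift:", int(np.median(shifts)))
```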
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps: data acquisition, head position estimation, source localization, and real-time source estimation. This work explains the technical details and validates each of these steps.
Cross-validation of resting metabolic rate prediction equations
USDA-ARS?s Scientific Manuscript database
Background: Knowledge of the resting metabolic rate (RMR) is necessary for determining individual total energy requirements. Measurement of RMR is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, the accuracy of these equations...
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
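A rough sketch of the fitting idea, assuming a generic double-exponential cooling curve, a known ambient temperature, and a nominal temperature at death of 37.2 °C; this is not the exact Marshall-Hoare parameterization or the dimension-reduction and initial-guess procedure of the paper, which also recovers the temperature at death.

```python
import numpy as np
from scipy.optimize import least_squares

T_amb = 18.0            # ambient temperature (deg C), assumed known and constant
T_death = 37.2          # fixed at a nominal value in this sketch (the paper estimates it)

def cooling(t, k1, k2):
    """Generic double-exponential cooling curve (illustrative, not the exact
    Marshall-Hoare form): T(t) = T_amb + (T_death - T_amb) *
    (k2*exp(-k1*t) - k1*exp(-k2*t)) / (k2 - k1)."""
    return T_amb + (T_death - T_amb) * (k2 * np.exp(-k1 * t) - k1 * np.exp(-k2 * t)) / (k2 - k1)

# Simulated temperature readings every 15 min, starting an unknown 5 h after death.
rng = np.random.default_rng(2)
true_k1, true_k2, true_td = 0.08, 0.4, 5.0            # 1/h, 1/h, hours since death
t_read = np.arange(0.0, 3.0, 0.25)                    # hours after the first reading
T_read = cooling(t_read + true_td, true_k1, true_k2) + rng.normal(0.0, 0.05, t_read.size)

def residuals(p):
    k1, k2, td = p
    return cooling(t_read + td, k1, k2) - T_read

fit = least_squares(residuals, x0=(0.05, 0.3, 3.0),
                    bounds=([1e-3, 1e-3, 0.0], [1.0, 2.0, 24.0]))
print("estimated k1, k2, time since death (h):", np.round(fit.x, 3))
```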
Using GIS to Estimate Lake Volume from Limited Data (Lake and Reservoir Management)
Estimates of lake volume are necessary for calculating residence time and modeling pollutants. Modern GIS methods for calculating lake volume improve upon more dated technologies (e.g. planimeters) and do not require potentially inaccurate assumptions (e.g. volume of a frustum of...
Autonomous Aerobraking: A Design, Development, and Feasibility Study
NASA Technical Reports Server (NTRS)
Prince, Jill L. H.; Powell, Richard W.; Murri, Dan
2011-01-01
Aerobraking has been used four times to decrease the apoapsis of a spacecraft in a captured orbit around a planetary body with a significant atmosphere, using atmospheric drag to decelerate the spacecraft. While aerobraking requires minimal fuel, its long duration requires both a large operations staff and substantial Deep Space Network (DSN) resources. A study to automate aerobraking has been sponsored by the NASA Engineering and Safety Center to determine the initial feasibility of equipping a spacecraft with the onboard capability for autonomous aerobraking, thus saving millions of dollars incurred by a large aerobraking operations workforce and continuous DSN coverage. This paper describes the need for autonomous aerobraking, the development of the Autonomous Aerobraking Development Software that includes an ephemeris estimator, an atmospheric density estimator, and maneuver calculation, and the plan forward for continuation of this study.
Developing a Crew Time Model for Human Exploration Missions to Mars
NASA Technical Reports Server (NTRS)
Battfeld, Bryan; Stromgren, Chel; Shyface, Hilary; Cirillo, William; Goodliff, Kandyce
2015-01-01
Candidate human missions to Mars require mission lengths that could extend beyond those that have previously been demonstrated during crewed Lunar (Apollo) and International Space Station (ISS) missions. The nature of the architectures required for deep space human exploration will likely necessitate major changes in how crews operate and maintain the spacecraft. The uncertainties associated with these shifts in mission constructs - including changes to habitation systems, transit durations, and system operations - raise concerns as to the ability of the crew to complete required overhead activities while still having time to conduct a set of robust exploration activities. This paper will present an initial assessment of crew operational requirements for human missions to the Mars surface. The presented results integrate assessments of crew habitation, system maintenance, and utilization to present a comprehensive analysis of potential crew time usage. Destination operations were assessed for a short (approx. 50 day) and long duration (approx. 500 day) surface habitation case. Crew time allocations are broken out by mission segment, and the availability of utilization opportunities was evaluated throughout the entire mission progression. To support this assessment, the integrated crew operations model (ICOM) was developed. ICOM was used to parse overhead, maintenance and system repair, and destination operations requirements within each mission segment - outbound transit, Mars surface duration, and return transit - to develop a comprehensive estimation of exploration crew time allocations. Overhead operational requirements included daily crew operations, health maintenance activities, and down time. Maintenance and repair operational allocations are derived using the Exploration Maintainability and Analysis Tool (EMAT) to develop a probabilistic estimation of crew repair time necessary to maintain systems functionality throughout the mission.
2009-12-01
events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic...answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical ...to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
Robust double gain unscented Kalman filter for small satellite attitude estimation
NASA Astrophysics Data System (ADS)
Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun
2017-08-01
Limited by the low precision of small satellite sensors, high-performance estimation theory remains a highly active research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have produced many useful results. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which imposes higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low precision sensors for small satellite attitude estimation.
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to estimation of the colorimetric characterization matrix. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimate of the XYZ data in cd/m2. The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
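A minimal sketch of the second module (the colorimetric characterization matrix), assuming linear camera RGB values and reference XYZ measurements for a set of target patches; the absolute-luminance module (exposure time and f-number handling) is only noted in comments, and all patch data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
M_true = np.array([[0.41, 0.36, 0.18],       # "ground truth" used only to simulate data
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0.05, 1.0, size=(24, 3))                # linear RGB of 24 target patches
xyz = rgb @ M_true.T + rng.normal(0.0, 1e-3, (24, 3))     # reference XYZ measurements

# Least-squares estimate of the 3x3 characterization matrix (rgb @ M.T ~ xyz).
M_lstsq, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_est = M_lstsq.T
print("estimated matrix:\n", np.round(M_est, 3))

# Using the characterized camera as a tele-colorimeter on a new pixel; for an
# absolute result the RGB values would first be scaled by the separately
# calibrated luminance factor for the exposure time and f-number in use.
print("XYZ estimate:", np.round(M_est @ np.array([0.3, 0.5, 0.2]), 3))
```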
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
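A small illustration of the time-to-detection idea underlying the growth-rate estimates above: under exponential growth, each ten-fold dilution delays the detection time by ln(10)/mu_max, so the maximum specific growth rate follows from the slope of detection time against the dilution level. The numbers are synthetic, and the Bioscreen-specific details and ANOVA formulation are not reproduced.

```python
import numpy as np

mu_true = 0.35                          # "true" maximum specific growth rate (1/h)
decades = np.arange(0, 6)               # serial ten-fold dilutions: 10^0 .. 10^-5
rng = np.random.default_rng(4)

# Detection time grows by ln(10)/mu per decade of dilution (plus noise).
t_detect = 4.0 + decades * np.log(10) / mu_true + rng.normal(0.0, 0.1, decades.size)

slope, intercept = np.polyfit(decades, t_detect, 1)      # hours per decade of dilution
print("estimated mu_max (1/h):", round(np.log(10) / slope, 3))
```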
Bruise chromophore concentrations over time
NASA Astrophysics Data System (ADS)
Duckworth, Mark G.; Caspall, Jayme J.; Mappus, Rudolph L., IV; Kong, Linghua; Yi, Dingrong; Sprigle, Stephen H.
2008-03-01
During investigations of potential child and elder abuse, clinicians and forensic practitioners are often asked to offer opinions about the age of a bruise. A commonality between existing methods of bruise aging is analysis of bruise color or estimation of chromophore concentration. Relative chromophore concentration is an underlying factor that determines bruise color. We investigate a method of chromophore concentration estimation that can be employed in a handheld imaging spectrometer with a small number of wavelengths. The method, based on absorbance properties defined by the Beer-Lambert law, allows estimation of the differential chromophore concentration between bruised and normal skin. Absorption coefficient data for each chromophore are required to make the estimation. Two different sources of these data are used in the analysis: coefficients generated using Independent Component Analysis and coefficients taken from published values. Differential concentration values over time, generated using both sources, show correlation to published models of bruise color change over time and total chromophore concentration over time.
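A small sketch of the differential Beer-Lambert inversion described above: absorbance differences between bruised and normal skin at a few wavelengths are related linearly to differential chromophore concentrations and solved by least squares. The wavelengths, absorption coefficients, path length, and chromophore set are placeholders, not the ICA-derived or published values used in the paper.

```python
import numpy as np

# Differential Beer-Lambert model: dA(lambda) = sum_i eps_i(lambda) * dc_i * L,
# where dA is the absorbance difference (bruised minus normal skin) and dc_i
# the differential concentration of chromophore i.  All values are placeholders.
wavelengths = [460, 540, 580, 620]                 # nm (illustrative)
eps = np.array([[0.30, 1.20],                      # eps[wavelength, chromophore]
                [1.10, 0.40],                      # columns: hemoglobin-like, bilirubin-like
                [0.95, 0.25],
                [0.20, 0.05]])                     # arbitrary units
L = 1.0                                            # effective optical path length (assumed)

dc_true = np.array([0.8, 1.5])                     # differential concentrations to recover
dA = eps @ dc_true * L + np.random.default_rng(5).normal(0.0, 0.01, len(wavelengths))

dc_est, *_ = np.linalg.lstsq(eps * L, dA, rcond=None)
print("estimated differential concentrations:", np.round(dc_est, 3))
```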
Huang, Lei
2015-01-01
To address the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
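A reduced sketch of the "parameters as state" idea described above: the coefficients of a pure AR(2) model are estimated recursively by a Kalman filter whose observation equation is the regression on past outputs. The MA terms and the robust, adaptive estimation of unknown time-varying observation-noise statistics that characterize the paper's method are not included; the noise levels and data-generating model are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
a_true = np.array([1.5, -0.7])                 # stable AR(2) coefficients
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true @ y[t - 2:t][::-1] + rng.normal(0.0, 0.1)

theta = np.zeros(2)                            # parameter state estimate
P = np.eye(2) * 1.0
Q = np.eye(2) * 1e-8                           # slow random walk on the parameters
R = 0.1 ** 2                                   # assumed (fixed) observation-noise variance
for t in range(2, n):
    h = y[t - 2:t][::-1]                       # regressor [y_{t-1}, y_{t-2}]
    P = P + Q                                  # predict (parameters nearly constant)
    S = h @ P @ h + R                          # innovation variance
    K = P @ h / S                              # Kalman gain
    theta = theta + K * (y[t] - h @ theta)     # update parameter estimate
    P = P - np.outer(K, h) @ P

print("estimated AR coefficients:", np.round(theta, 3))
```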
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalski, J. R.; Townsend, Richard L.; Seaburg, Adam
2013-05-01
The purpose of this compliance study was to estimate dam passage survival of subyearling Chinook salmon at Bonneville Dam during summer 2012, as required by the 2008 Federal Columbia River Power System Biological Opinion. The study also estimated smolt passage survival from the forebay 2 km upstream of the dam to the tailrace 1 km below the dam, as well as forebay residence time, tailrace egress, and spill passage efficiency, as required in the 2008 Columbia Basin Fish Accords.
15 CFR 921.13 - Management plan and environmental impact statement development.
Code of Federal Regulations, 2010 CFR
2010-01-01
... simple property interest (e.g., conservation easement), fee simple property acquisition, or a combination... simple options) to establish adequate long-term state control; an estimate of the fair market value of any property interest—which is proposed for acquisition; a schedule estimating the time required to...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-18
... import explosive materials or ammonium nitrate must, when required by the Director, furnish samples of such explosive materials or ammonium nitrate; information on chemical composition of those products... ammonium nitrate. (5) An estimate of the total number of respondents and the amount of time estimated for...
Gap filling strategies and error in estimating annual soil respiration
USDA-ARS?s Scientific Manuscript database
Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...
78 FR 54513 - Proposed Information Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
.... Estimated Time per Respondent: 1 hr. Estimated Total Annual Burden Hours: 1. Title: Indoor Tanning Services... (124 Stat. 119 (2010)) to impose an excise tax on indoor tanning services. This information is required to be maintained in order for providers to accurately calculate the tax on indoor tanning services...
75 FR 57283 - Agency Information Collection Activities: Passenger and Crew Manifest
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
... private aircraft flights. Specific data elements required for each passenger and crew member include: Full... expiration date; and alien registration number where applicable. APIS is authorized under the Aviation and.... Estimated Time per Response: 1 minute. Estimated Total Annual Burden Hours: 3,128,861. Private Aircraft...
Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-01-01
Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be quantified. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling. PMID:27096865
Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-04-18
Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be quantified. For this purpose, dielectric properties of intact clay rock samples, experimentally determined in the frequency range from 1 MHz to 10 GHz, were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provides a practical and efficient approach for water content estimation based on classical travel time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling.
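A minimal sketch of the classical travel-time analysis mentioned in the two records above: the two-way travel time along the probe rods gives an apparent permittivity, which the empirical Topp et al. (1980) calibration converts to volumetric water content. The probe length and travel time are made-up values, and for clay-rock the study favours a porosity-aware mixture model (LRM) over the Topp calibration.

```python
import numpy as np

C0 = 299_792_458.0            # speed of light in vacuum (m/s)

def apparent_permittivity(travel_time_s, probe_length_m):
    """Apparent (real) permittivity from the two-way travel time along the rods."""
    return (C0 * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_water_content(Ka):
    """Empirical Topp et al. (1980) calibration from apparent permittivity to
    volumetric water content."""
    return -5.3e-2 + 2.92e-2 * Ka - 5.5e-4 * Ka**2 + 4.3e-6 * Ka**3

# Example with made-up numbers: a 0.15 m probe and a 3.0 ns two-way travel time.
Ka = apparent_permittivity(3.0e-9, 0.15)
print("Ka =", round(Ka, 1), " theta_v =", round(topp_water_content(Ka), 3))
```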
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C using moisture levels from % RH = 0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times defined as F values at a given combination of three moisture and temperature conditions. The times required at 100 C for reductions of 99.99% of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (% RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and D values for % RH = 100 diverged, so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z_F and Z_L ranged from 32.1 to 58.3 C for % RH ≤ 0.07 and 100. A Z_D = 30.0 was obtained for data observed at % RH ≤ 0.07.
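As a small worked illustration of the D-value and Z-value bookkeeping that underlies thermal-death-time estimates like those above (the numbers below are illustrative and are not taken from the study):

```python
import numpy as np

def d_value(time_min, n0, n_survivors):
    """D-value: time at a fixed temperature for a ten-fold (1-log) reduction."""
    return time_min / np.log10(n0 / n_survivors)

# One survivor curve at 100 C: 10^6 -> 10^2 organisms in 30 min (illustrative).
D_100 = d_value(time_min=30.0, n0=1e6, n_survivors=1e2)
print("D(100 C) =", round(D_100, 1), "min;  99.99% (4-log) kill ~", round(4 * D_100, 1), "min")

# Z-value: temperature change that shifts D by a factor of 10, i.e. the negative
# reciprocal slope of log10(D) versus temperature (D-values below are illustrative).
temps = np.array([90.0, 100.0, 110.0, 125.0])
D_vals = np.array([30.0, 15.5, 8.0, 3.0])          # minutes
slope, _ = np.polyfit(temps, np.log10(D_vals), 1)
print("Z =", round(-1.0 / slope, 1), "C")
```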
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
DOT National Transportation Integrated Search
1997-01-01
The success of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) depends on the availability and dissemination of timely and accurate estimates of current and emerging traffic network conditions. Real-time Dy...
ERIC Educational Resources Information Center
Huang, Tracy; Loft, Shayne; Humphreys, Michael S.
2014-01-01
"Time-based prospective memory" (PM) refers to performing intended actions at a future time. Participants with time-based PM tasks can be slower to perform ongoing tasks (costs) than participants without PM tasks because internal control is required to maintain the PM intention or to make prospective-timing estimates. However, external…
NASA Astrophysics Data System (ADS)
Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller
2014-05-01
The real time PPP method requires the availability of real time precise orbits and satellite clock corrections. Currently, it is possible to apply the clock and orbit solutions made available by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS Real-Time Pilot Project, or to use the operational predicted IGU ephemerides. The BKG corrections are disseminated in a newly proposed RTCM 3.x format and can be applied to the broadcast orbits and clocks. The accuracy of the satellite positions available in the IGU products is sufficient for several applications requiring good quality; however, the satellite clock corrections do not provide enough accuracy (3 ns ~ 0.9 m) to accomplish real time PPP at the same level of accuracy. Therefore, for real time PPP applications it is necessary to further research and develop appropriate methodologies for estimating the satellite clock corrections in real time with better accuracy. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double difference level between satellites and epochs (Mervat and Dousa, 2007). Another possibility consists of applying a Kalman filter in the network PPP mode (Hauschild, 2010), and it is also possible to integrate both methods, using network PPP and observables at the double difference level in specific time intervals (Zhang, Li, and Guo, 2010). For this work, the methodology adopted consists of estimating the satellite clock corrections based on data adjustment in the PPP mode, but for a network of GNSS stations. The clock solution can be obtained using two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase. In the former, we estimate the receiver clock error, satellite clock correction, and troposphere, considering that the phase ambiguities are eliminated when applying differences between consecutive epochs. When using undifferenced code and phase, the ambiguities may be estimated together with the receiver clock errors, satellite clock corrections, and troposphere parameters. In both strategies it is also possible to correct the troposphere delay from a numerical weather forecast model instead of estimating it. The prediction of the satellite clock correction can be performed using a straight line or a second degree polynomial fitted to the time series of the estimated satellite clocks. To estimate the satellite clock corrections and to accomplish real time PPP, two pieces of software have been developed, respectively "RT_PPP" and "RT_SAT_CLOCK". The first (RT_PPP) is able to process GNSS code and phase data using precise ephemerides and precise satellite clock corrections together with the several corrections required for PPP. In RT_SAT_CLOCK we apply a Kalman filter algorithm to estimate the satellite clock corrections in the network PPP mode; in this case, all PPP corrections must be applied for each station. The experiments were carried out in real time and in post-processed mode (simulating real time), considering data from the Brazilian continuous GPS network and also from the IGS network in a global satellite clock solution. We used IGU ephemerides for the satellite positions and estimated the satellite clock corrections, performing updates as soon as new ephemeris files became available.
Experiments were accomplished in order to assess the accuracy of the estimated clocks when using the Brazilian Numerical Weather Forecast Model (BNWFM) from CPTEC/INPE and also using the ZTD from European Centre for Medium-Range Weather Forecasts (ECMWF) together with Vienna Mapping Function VMF or estimating troposphere with clocks and ambiguities in the Kalman Filter. The daily precision of the estimated satellite clock corrections reached the order of 0.15 nanoseconds. The clocks were applied in the Real Time PPP for Brazilian network stations and also for flight test of the Brazilian airplanes and the results show that it is possible to accomplish real time PPP in the static and kinematic modes with accuracy of the order of 10 to 20 cm, respectively.
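A minimal sketch of the clock-prediction step described above, in which a straight line or a second-degree polynomial is fitted to a recent window of estimated satellite clock corrections and extrapolated a few epochs ahead; the clock series below is synthetic, and the network-PPP Kalman filter that produces the corrections is not shown.

```python
import numpy as np

def predict_clock(epochs_s, clock_ns, degree, ahead_s):
    """Fit a degree-1 or degree-2 polynomial to a window of estimated satellite
    clock corrections (ns) and extrapolate 'ahead_s' seconds past the last epoch."""
    coeffs = np.polyfit(epochs_s, clock_ns, degree)
    return np.polyval(coeffs, epochs_s[-1] + ahead_s)

# Synthetic clock series: bias + drift + a small quadratic term + noise.
rng = np.random.default_rng(7)
t = np.arange(0.0, 300.0, 5.0)                                        # 5 s estimation interval
clk = 12.0 + 0.02 * t + 1e-5 * t**2 + rng.normal(0.0, 0.05, t.size)   # ns

truth_330 = 12.0 + 0.02 * 330.0 + 1e-5 * 330.0**2
for deg in (1, 2):
    pred = predict_clock(t, clk, deg, ahead_s=30.0)
    print(f"degree {deg}: predicted {pred:.3f} ns, truth {truth_330:.3f} ns")
```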
Pinsent, Amy; Blake, Isobel M; White, Michael T; Riley, Steven
2014-08-01
Both high and low pathogenic subtype A avian influenza remain ongoing threats to the commercial poultry industry globally. The emergence of a novel low pathogenic H7N9 lineage in China presents itself as a new concern to both human and animal health and may necessitate additional surveillance in commercial poultry operations in affected regions. Sampling data were simulated using a mechanistic model of H7N9 influenza transmission within commercial poultry barns together with a stochastic observation process. Parameters were estimated using maximum likelihood. We assessed the probability of detecting an outbreak at the time of slaughter using both real-time polymerase chain reaction (rt-PCR) and a hemagglutinin inhibition assay (HI assay), before considering more intense sampling prior to slaughter. The day of virus introduction and R0 were estimated jointly from weekly flock sampling data. For scenarios where R0 was known, we estimated the day of virus introduction into a barn under different sampling frequencies. If birds were tested at the time of slaughter, there was a higher probability of detecting evidence of an outbreak using an HI assay compared to rt-PCR, except when the virus was introduced <2 weeks before the time of slaughter. Prior to the initial detection of infection, N_sample = 50 birds (1%) were sampled on a weekly basis, but after infection was detected, N_sample = 2000 birds (40%) were sampled to estimate both parameters. We accurately estimated the day of virus introduction in isolation with weekly and 2-weekly sampling. A strong sampling effort would be required to infer both the day of virus introduction and R0. Such a sampling effort would not be required to estimate the day of virus introduction alone once R0 was known, and sampling N_sample = 50 birds in the flock on a weekly or 2-weekly basis would be sufficient.
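As a simple illustration of the sampling arithmetic behind such surveillance questions, the sketch below computes the probability that a random sample of a given size contains at least one infected bird (hypergeometric, assuming a perfectly sensitive test). This is only a back-of-the-envelope companion to the abstract; the study itself couples a mechanistic transmission model to a stochastic observation process and fits parameters by maximum likelihood.

```python
def detection_probability(flock_size, n_infected, n_sample):
    """Probability that a simple random sample of n_sample birds contains at
    least one infected bird (hypergeometric; assumes a perfectly sensitive test)."""
    p_none = 1.0
    for i in range(n_sample):
        p_none *= (flock_size - n_infected - i) / (flock_size - i)
    return 1.0 - p_none

flock = 5000
for infected in (5, 50, 500):
    for n in (50, 500, 2000):
        print(f"{infected:4d} infected, sample {n:4d}: "
              f"P(detect) = {detection_probability(flock, infected, n):.3f}")
```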
An attempt to estimate students' workload.
Pogacnik, M; Juznic, P; Kosorok-Drobnic, M; Pogacnik, A; Cestnik, V; Kogovsek, J; Pestevsek, U; Fernandes, Tito
2004-01-01
Following the recent introduction of the European Credit Transfer System (ECTS) into several European university programs, a new interest has developed in determining students' workload. ECTS credits are numerical values describing the student workload required to complete course units; ECTS has the potential to facilitate comparison and create transparency between institutional curricula. ECTS credits are frequently listed alongside institutional credits in course outlines and module summaries. Measuring student workload has been difficult; to a large extent, estimates are based only upon anecdotal and casual information. To gather more systematic information, we asked students at the Veterinary Faculty, University of Ljubljana, to estimate the actual total workload they committed to fulfill their coursework obligations for specific subjects in the veterinary degree program by reporting their attendance at defined contact hours and their estimated time for outside study, including the time required for examinations and other activities. Students also reported the final grades they received for these subjects. The results show that certain courses require much more work than others, independent of credit unit assignment. Generally, the courses with more contact hours tend also to demand more independent work; the best predictor of both actual student workload and student success is the amount of contact time in which they participate. The data failed to show any strong connection between students' total workload and grades they received; rather, they showed some evidence that regular presence at contact hours was the most positive influence on grades. Less frequent presence at lectures tended to indicate less time spent on independent study. It was also found that pre-clinical and clinical courses tended to require more work from students than other, more general subjects. While the present study does not provide conclusive evidence, it does indicate the need for further inquiry into the nature of the relationship between teaching and learning in higher education and for evaluation of the benefits (or otherwise) of more "self-directed" study.
40 CFR 98.266 - Data reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... phosphoric acid process lines. (8) Number of times missing data procedures were used to estimate phosphate... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING, Phosphoric Acid Production, § 98.266 Data reporting...
Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use
Andrews, Sally; Ellis, David A.; Shaw, Heather; Piwek, Lukasz
2015-01-01
Psychologists typically rely on self-report data when quantifying mobile phone usage, despite little evidence of its validity. In this paper we explore the accuracy of using self-reported estimates when compared with actual smartphone use. We also include source code to process and visualise these data. We compared 23 participants’ actual smartphone use over a two-week period with self-reported estimates and the Mobile Phone Problem Use Scale. Our results indicate that estimated time spent using a smartphone may be an adequate measure of use, unless a greater resolution of data is required. Estimates concerning the number of times an individual used their phone across a typical day did not correlate with actual smartphone use. Neither estimated duration nor number of uses correlated with the Mobile Phone Problem Use Scale. We conclude that estimated smartphone use should be interpreted with caution in psychological research. PMID:26509895
Strategic Methodologies in Public Health Cost Analyses.
Whittington, Melanie; Atherly, Adam; VanRaemdonck, Lisa; Lampe, Sarah
The National Research Agenda for Public Health Services and Systems Research states the need for research to determine the cost of delivering public health services in order to assist the public health system in communicating financial needs to decision makers, partners, and health reform leaders. The objective of this analysis is to compare 2 cost estimation methodologies, public health manager estimates of employee time spent and activity logs completed by public health workers, to understand to what degree manager surveys could be used in lieu of more time-consuming and burdensome activity logs. Employees recorded their time spent on communicable disease surveillance for a 2-week period using an activity log. Managers then estimated time spent by each employee on a manager survey. Robust and ordinary least squares regression was used to measure the agreement between the time estimated by the manager and the time recorded by the employee. The 2 outcomes for this study included time recorded by the employee on the activity log and time estimated by the manager on the manager survey. This study was conducted in local health departments in Colorado. Forty-one Colorado local health departments (82%) agreed to participate. Seven of the 8 models showed that managers underestimate their employees' time, especially for activities on which an employee spent little time. Manager surveys can best estimate time for time-intensive activities, such as total time spent on a core service or broad public health activity, and yet are less precise when estimating discrete activities. When Public Health Services and Systems Research researchers and health departments are conducting studies to determine the cost of public health services, there are many situations in which managers can closely approximate the time required and produce a relatively precise approximation of cost without as much time investment by practitioners.
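As a rough illustration of the comparison described above, the sketch below regresses employee-logged time on manager-estimated time using both ordinary least squares and a robust (Huber) estimator from statsmodels; the data are synthetic, and the sample size, hours, and outlier pattern are made-up assumptions rather than the study's records.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
manager_est = rng.uniform(1, 40, size=120)                  # hours, manager survey
logged = 1.2 * manager_est + rng.normal(0, 3, size=120)     # hours, activity log
logged[:5] += 30                                            # a few outlying records

X = sm.add_constant(manager_est)
ols = sm.OLS(logged, X).fit()
rlm = sm.RLM(logged, X, M=sm.robust.norms.HuberT()).fit()   # robust (Huber) fit

print("OLS slope:", round(ols.params[1], 2))
print("Robust slope:", round(rlm.params[1], 2))
```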
Ferguson, B G; Lo, K W
2000-10-01
Flight parameter estimation methods for an airborne acoustic source can be divided into two categories, depending on whether the narrow-band lines or the broadband component of the received signal spectrum is processed to estimate the flight parameters. This paper provides a common framework for the formulation and test of two flight parameter estimation methods: one narrow band, the other broadband. The performances of the two methods are evaluated by applying them to the same acoustic data set, which is recorded by a planar array of passive acoustic sensors during multiple transits of a turboprop fixed-wing aircraft and two types of rotary-wing aircraft. The narrow-band method, which is based on a kinematic model that assumes the source travels in a straight line at constant speed and altitude, requires time-frequency analysis of the acoustic signal received by a single sensor during each aircraft transit. The broadband method is based on the same kinematic model, but requires observing the temporal variation of the differential time of arrival of the acoustic signal at each pair of sensors that comprises the planar array. Generalized cross correlation of each pair of sensor outputs using a cross-spectral phase transform prefilter provides instantaneous estimates of the differential times of arrival of the signal as the acoustic wavefront traverses the array.
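The differential-time-of-arrival step of the broadband method, generalized cross correlation with a phase transform (GCC-PHAT) prefilter, can be sketched as follows; the signals, sample rate, and true delay are synthetic, and the planar-array geometry and kinematic model of the paper are not reproduced.

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimated delay (s) of y relative to x (positive if y arrives later)."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = np.conj(X) * Y
    R /= np.abs(R) + 1e-15          # phase transform: keep only phase information
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

fs = 8000.0
rng = np.random.default_rng(0)
src = rng.normal(size=int(fs))                   # broadband source, 1 s long
delay_samples = 37                               # assumed true delay
x = src
y = np.roll(src, delay_samples) + 0.1 * rng.normal(size=src.size)
print("estimated delay (ms):", 1e3 * gcc_phat(x, y, fs))
```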
When Time Makes a Difference: Addressing Ergodicity and Complexity in Education
ERIC Educational Resources Information Center
Koopmans, Matthijs
2015-01-01
The detection of complexity in behavioral outcomes often requires an estimation of their variability over a prolonged time spectrum to assess processes of stability and transformation. Conventional scholarship typically relies on time-independent measures, "snapshots", to analyze those outcomes, assuming that group means and their…
NASA Astrophysics Data System (ADS)
Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita
2017-08-01
Fecundity is a key aspect of fish reproductive biology because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the most widely used method for estimating fecundity, essentially scaling up oocyte density to the ovary weight. It is a relatively simple and precise technique, but also time-consuming because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is relatively new for estimating fecundity and represents a rapid alternative, because it requires only an estimate of mean oocyte density derived from mean oocyte diameter. Using the extensive database available from commercial fishery and design surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity using both gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models considering predictors from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global and time-invariant auto-diametric equation was evaluated using a simulation procedure based on non-parametric bootstrap. Results indicated no significant differences between fecundity estimates from the gravimetric and auto-diametric methods (p > 0.05). Simulation showed that the application of a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing the appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity for this species.
Fine tuning GPS clock estimation in the MCS
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1995-01-01
With the completion of a 24-satellite operational constellation, GPS is fast approaching the critical milestone, Full Operational Capability (FOC). Although GPS is well capable of providing the timing accuracy and stability figures required by system specifications, the GPS community will continue to strive for further improvements in performance. The GPS Master Control Station (MCS) recently demonstrated that timing improvements are still possible through fine tuning of the composite clock and, hence, Kalman filter state estimation, providing a small improvement to user accuracy.
Development of Parallel Architectures for Sensor Array Processing. Volume 1
1993-08-01
required for the DOA estimation [1-7]. The Multiple Signal Classification (MUSIC) [1] and the Estimation of Signal Parameters by Rotational... manifold and the estimated subspace. Although MUSIC is a high-resolution algorithm, it has several drawbacks, including the fact that complete knowledge of... thoroughly, the MUSIC algorithm was selected to develop special-purpose hardware for real-time computation. A summary of the MUSIC algorithm is as follows
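The report's own summary of the algorithm is truncated above. As a general illustration only (not the report's special-purpose hardware implementation), a standard MUSIC pseudospectrum for a uniform linear array can be computed as below; the array size, source angles, SNR, and snapshot count are assumed values.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
m, d, wavelength = 8, 0.5, 1.0          # sensors, spacing (in wavelengths), wavelength
true_doas = np.deg2rad([-20.0, 35.0])   # two sources (assumed)
n_snap, n_src = 200, len(true_doas)

def steering(theta):
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * d * np.arange(m)[:, None] * np.sin(theta))

A = steering(true_doas)                                              # m x n_src
S = rng.normal(size=(n_src, n_snap)) + 1j * rng.normal(size=(n_src, n_snap))
N = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = A @ S + N                                                        # array snapshots

R = X @ X.conj().T / n_snap                                          # sample covariance
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, : m - n_src]                                          # noise subspace

scan = np.deg2rad(np.linspace(-90, 90, 721))
a = steering(scan)
p_music = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)         # pseudospectrum
peaks, _ = find_peaks(p_music)
top = peaks[np.argsort(p_music[peaks])[-n_src:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top])))
```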
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
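For context, the nominal baseline that such robust filters are compared against can be sketched in discrete time as a scalar Kalman filter tracking a Wiener-process phase from past measurements only; the noise variances below are illustrative, and the paper's continuous-time quantum measurement model and guaranteed-cost filter are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, r = 500, 1e-3, 5e-2        # steps, process-noise and measurement-noise variances (assumed)

phase = np.cumsum(rng.normal(0, np.sqrt(q), n))        # true Wiener-process phase
z = phase + rng.normal(0, np.sqrt(r), n)               # noisy phase measurements

x_hat, p = 0.0, 1.0
estimates = []
for zk in z:
    p += q                                  # predict: random-walk dynamics
    k = p / (p + r)                         # Kalman gain
    x_hat += k * (zk - x_hat)               # update using only past/current data
    p *= (1 - k)
    estimates.append(x_hat)

rmse = np.sqrt(np.mean((np.array(estimates) - phase) ** 2))
print("tracking RMSE (rad):", round(rmse, 4))
```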
Excitations for Rapidly Estimating Flight-Control Parameters
NASA Technical Reports Server (NTRS)
Moes, Tim; Smith, Mark; Morelli, Gene
2006-01-01
A flight test on an F-15 airplane was performed to evaluate the utility of prescribed simultaneous independent surface excitations (PreSISE) for real-time estimation of flight-control parameters, including stability and control derivatives. The ability to extract these derivatives in nearly real time is needed to support flight demonstration of intelligent flight-control system (IFCS) concepts under development at NASA, in academia, and in industry. Traditionally, flight maneuvers have been designed and executed to obtain estimates of stability and control derivatives by use of a post-flight analysis technique. For an IFCS, it is required to be able to modify control laws in real time for an aircraft that has been damaged in flight (because of combat, weather, or a system failure). The flight test included PreSISE maneuvers, during which all desired control surfaces are excited simultaneously, but at different frequencies, resulting in aircraft motions about all coordinate axes. The objectives of the test were to obtain data for post-flight analysis and to perform the analysis to determine: 1) The accuracy of derivatives estimated by use of PreSISE, 2) The required durations of PreSISE inputs, and 3) The minimum required magnitudes of PreSISE inputs. The PreSISE inputs in the flight test consisted of stacked sine-wave excitations at various frequencies, including symmetric and differential excitations of canard and stabilator control surfaces and excitations of aileron and rudder control surfaces of a highly modified F-15 airplane. Small, medium, and large excitations were tested in 15-second maneuvers at subsonic, transonic, and supersonic speeds. Typical excitations are shown in Figure 1. Flight-test data were analyzed by use of pEst, which is an industry-standard output-error technique developed by Dryden Flight Research Center. Data were also analyzed by use of Fourier-transform regression (FTR), which was developed for onboard, real-time estimation of the derivatives.
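The idea of simultaneous but spectrally separated excitations can be illustrated with a short multisine generator; the surface names, frequencies, amplitudes, and sample rate below are placeholders, not the flight-test values.

```python
import numpy as np

fs, T = 50.0, 15.0                       # sample rate (Hz) and maneuver length (s), assumed
t = np.arange(0, T, 1 / fs)
surface_freqs = {                        # disjoint frequency sets per surface (placeholders)
    "symmetric_canard": [0.4, 1.1],
    "differential_stabilator": [0.6, 1.5],
    "aileron": [0.8, 1.9],
    "rudder": [1.0, 2.3],
}

excitations = {}
for name, freqs in surface_freqs.items():
    u = sum(np.sin(2 * np.pi * f * t + 2 * np.pi * k / len(freqs))
            for k, f in enumerate(freqs))
    excitations[name] = u / np.max(np.abs(u))    # normalize to unit amplitude

# Because each surface occupies its own frequencies, its contribution can be
# isolated in the frequency domain, which is what makes real-time
# Fourier-transform regression for the derivatives possible.
for name, u in excitations.items():
    print(name, "RMS:", round(float(np.std(u)), 3))
```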
NASA Astrophysics Data System (ADS)
Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.
2011-02-01
Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.
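A much-simplified version of the underlying kinetics problem is sketched below (not the paper's nonlinear mixed-effects model or its optimal-sampling algorithm): a two-exponential uptake/clearance curve is fit to a few synthetic tracer measurements, and the time-integrated activity coefficient and effective half-life follow analytically from the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def activity(t, c, k_up, k_clear):
    """Fraction of administered activity in the gland at time t (days)."""
    return c * (np.exp(-k_clear * t) - np.exp(-k_up * t))

true = (0.45, 2.0, 0.09)                            # illustrative values, 1/day
t_meas = np.array([0.25, 1.0, 2.0, 4.0, 7.0])       # days after tracer administration
rng = np.random.default_rng(3)
y = activity(t_meas, *true) * (1 + 0.05 * rng.normal(size=t_meas.size))

popt, _ = curve_fit(activity, t_meas, y, p0=(0.4, 1.5, 0.1), maxfev=5000)
c, k_up, k_clear = popt
tia_coeff = c * (1 / k_clear - 1 / k_up)            # integral of activity from 0 to infinity
eff_half_life = np.log(2) / k_clear
print("time-integrated activity coefficient (days):", round(tia_coeff, 2))
print("effective half-life (days):", round(eff_half_life, 1))
```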
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
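To make the computational bottleneck concrete, a brute-force version of the likelihood evaluation can be sketched: build the white plus power-law data covariance from the fractional-integration filter (assuming evenly spaced, unit-interval samples) and evaluate the Gaussian log-likelihood by direct factorization, which scales as the cube of the series length. The noise amplitudes and spectral index below are illustrative; the model closer to the generating process should typically attain the lower value.

```python
import numpy as np
from scipy.linalg import toeplitz

def powerlaw_filter(n, alpha):
    """Lower-triangular filter T such that T @ white_noise has spectral index alpha."""
    h = np.ones(n)
    d = alpha / 2.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    return np.tril(toeplitz(h))

def neg_log_likelihood(resid, sig_w, sig_pl, alpha):
    n = resid.size
    T = powerlaw_filter(n, alpha)
    C = sig_w**2 * np.eye(n) + sig_pl**2 * (T @ T.T)    # data covariance, O(n^2) memory
    _, logdet = np.linalg.slogdet(C)
    quad = resid @ np.linalg.solve(C, resid)            # O(n^3) solve: the bottleneck
    return 0.5 * (logdet + quad + n * np.log(2 * np.pi))

rng = np.random.default_rng(4)
n = 400
flicker = powerlaw_filter(n, alpha=1.0) @ rng.normal(size=n)
series = rng.normal(size=n) + 0.5 * flicker             # white (sigma=1) + flicker (0.5)

print("white-only model   :", round(neg_log_likelihood(series, 1.1, 0.0, 1.0), 1))
print("white+flicker model:", round(neg_log_likelihood(series, 1.0, 0.5, 1.0), 1))
```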
NASA Astrophysics Data System (ADS)
Deem, Eric; Cattafesta, Louis; Zhang, Hao; Rowley, Clancy
2016-11-01
Closed-loop control of flow separation requires the spatio-temporal states of the flow to be fed back through the controller in real time. Previously, static and dynamic estimation methods have been employed that provide reduced-order model estimates of the POD-coefficients of the flow velocity using surface pressure measurements. However, this requires a "learning" dataset a priori. This approach is effective as long as the dynamics during control do not stray from the learning dataset. Since only a few dynamical features are required for feedback control of flow separation, many of the details provided by full-field snapshots are superfluous. This motivates a state-observation technique that extracts key dynamical features directly from surface pressure, without requiring PIV snapshots. The results of identifying DMD modes of separated flow through an array of surface pressure sensors in real-time are presented. This is accomplished by employing streaming DMD "on the fly" to surface pressure snapshots. These modal characteristics exhibit striking similarities to those extracted from PIV data and the pressure field obtained via solving Poisson's equation. Progress towards closed-loop separation control based on the dynamic modes of surface pressure will be discussed. Supported by AFOSR Grant FA9550-14-1-0289.
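For reference, standard batch dynamic mode decomposition (DMD) of a snapshot matrix can be sketched as below; the streaming variant, the pressure sensors, and the flow data are not reproduced, and the synthetic "pressure" field, rank truncation, and frequencies are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_snap, dt = 16, 300, 0.01
t = np.arange(n_snap) * dt
x = np.linspace(0, 1, n_sensors)[:, None]            # sensor locations
t_row = t[None, :]
# Synthetic data: two decaying traveling waves (7 Hz and 18 Hz) plus noise.
data = (np.exp(-0.5 * t_row) * np.cos(2 * np.pi * (7 * t_row - 2.0 * x))
        + 0.7 * np.exp(-1.5 * t_row) * np.cos(2 * np.pi * (18 * t_row - 4.0 * x))
        + 0.01 * rng.normal(size=(n_sensors, n_snap)))

X, Y = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 6                                                 # truncation rank (assumed)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(A_tilde)
dmd_modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W    # exact DMD modes (spatial structures)
freqs_hz = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
print("identified frequencies (Hz):", np.round(np.sort(freqs_hz), 1))
```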
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... (ii) to notify distributors and retailers if the products are subject to recall. B. Estimated Burden... total annual burden of 75 hours per year. CPSC staff estimates that the hourly wage for the time... Statistics: Total compensation rates for management, professional, and related occupations in private goods...
Production rates for crews using hand tools on firelines
Lisa Haven; T. Parkin Hunter; Theodore G. Storey
1982-01-01
Reported rates at which hand crews construct firelines can vary widely because of differences in fuels, fire and measurement conditions, and fuel resistance-to-control classification schemes. Real-time fire dispatching and fire simulation planning models, however, require accurate estimates of hand crew productivity. Errors in estimating rate of fireline production...
Testing for Seed Quality in Southern Oaks
F.T. Bonner
1984-01-01
Expressions of germination rate, such as peak value (PV) or mean germination time (MGT), provide good estimates of acorn quality, but test completion requires a minimum of 3 weeks. For more rapid estimates, tetrazolium staining is recommended. Some seed test results were significantly correlated with nursery germination of cherrybark and water oaks, but not with...
Estimating allowable-cut by area-scheduling
William B. Leak
2011-01-01
Estimation of the regulated allowable-cut is an important step in placing a forest property under management and ensuring a continued supply of timber over time. Regular harvests also provide for the maintenance of needed wildlife habitat. There are two basic approaches: (1) volume, and (2) area/volume regulation, with many variations of each. Some require...
40 CFR 98.96 - Data reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., Equation I-16 of this subpart, for each fluorinated heat transfer fluid used. (s) Where missing data... § 98.95(b), the number of times missing data procedures were followed in the reporting year, the method used to estimate the missing data, and the estimates of those data. (t) A brief description of each...
40 CFR 98.96 - Data reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., Equation I-16 of this subpart, for each fluorinated heat transfer fluid used. (s) Where missing data... § 98.95(b), the number of times missing data procedures were followed in the reporting year, the method used to estimate the missing data, and the estimates of those data. (t) A brief description of each...
40 CFR 267.142 - Cost estimate for closure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... that on-site disposal capacity will exist at all times over the life of the facility. (3) The closure...) The owner or operator must keep the following at the facility during the operating life of the... PERMIT Financial Requirements § 267.142 Cost estimate for closure. (a) The owner or operator must have at...
40 CFR 267.142 - Cost estimate for closure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... that on-site disposal capacity will exist at all times over the life of the facility. (3) The closure...) The owner or operator must keep the following at the facility during the operating life of the... PERMIT Financial Requirements § 267.142 Cost estimate for closure. (a) The owner or operator must have at...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
... respondents; 409,048 responses. Estimated Time Per Response: .033 hours Frequency of Response: Recordkeeping... previous estimates. Section 90.215 requires station licensees to measure the carrier frequency, output power, and modulation of each transmitter authorized to operate with power in excess of two watts when...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-04
... Cost (Operation and Maintenance): $0. IV. Public Participation--Submission of Comments on This Notice... and costs) is minimal, collection instruments are clearly understood, and OSHA's estimate of the... information is useful; The accuracy of OSHA's estimate of the burden (time and costs) of the information...
Tracking of time-varying genomic regulatory networks with a LASSO-Kalman smoother
2014-01-01
It is widely accepted that cellular requirements and environmental conditions dictate the architecture of genetic regulatory networks. Nonetheless, the status quo in regulatory network modeling and analysis assumes an invariant network topology over time. In this paper, we refocus on a dynamic perspective of genetic networks, one that can uncover substantial topological changes in network structure during biological processes such as developmental growth. We propose a novel outlook on the inference of time-varying genetic networks, from a limited number of noisy observations, by formulating the network estimation as a target tracking problem. We overcome the limited number of observations (small n large p problem) by performing tracking in a compressed domain. Assuming linear dynamics, we derive the LASSO-Kalman smoother, which recursively computes the minimum mean-square sparse estimate of the network connectivity at each time point. The LASSO operator, motivated by the sparsity of the genetic regulatory networks, allows simultaneous signal recovery and compression, thereby reducing the amount of required observations. The smoothing improves the estimation by incorporating all observations. We track the time-varying networks during the life cycle of the Drosophila melanogaster. The recovered networks show that few genes are permanent, whereas most are transient, acting only during specific developmental phases of the organism. PMID:24517200
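The flavor of combining a Kalman-style update with L1 (LASSO) shrinkage can be conveyed with a toy scalar-observation example; this heuristic sketch is not the smoother derived in the paper, and the network size, sparsity pattern, noise levels, and threshold are arbitrary.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(6)
p, n_time, lam = 30, 150, 0.02
true_net = np.zeros(p)
true_net[[2, 7, 19]] = [0.8, -0.6, 0.5]          # sparse "network" coefficients

x_hat = np.zeros(p)
P = np.eye(p)
Q, R = 1e-4 * np.eye(p), 0.05
for k in range(n_time):
    true_net[2] = 0.8 * np.cos(0.05 * k)         # slowly time-varying edge
    h = rng.normal(size=p)                        # one noisy linear observation
    z = h @ true_net + np.sqrt(R) * rng.normal()
    P = P + Q                                     # predict (random-walk dynamics)
    S = h @ P @ h + R
    K = P @ h / S
    x_hat = x_hat + K * (z - h @ x_hat)           # Kalman update
    P = P - np.outer(K, h) @ P
    x_hat = soft_threshold(x_hat, lam)            # LASSO-style shrinkage toward sparsity

print("recovered true edges:", np.round(x_hat[[2, 7, 19]], 2))
print("largest spurious entry:", round(float(np.max(np.abs(np.delete(x_hat, [2, 7, 19])))), 2))
```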
An, Zhe; Rey, Daniel; Ye, Jingxin; ...
2017-01-16
The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
Minimum area requirements for an at-risk butterfly based on movement and demography.
Brown, Leone M; Crone, Elizabeth E
2016-02-01
Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
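For orientation only, the classical diffusion ("KISS") results give closed-form critical sizes at which growth just balances losses through a hostile edge; the sketch below evaluates them for an assumed diffusion rate and growth rate and is not the paper's movement-and-demography model or its estimates.

```python
import numpy as np
from scipy.special import jn_zeros

D = 30.0       # diffusion rate, m^2/day (assumed)
r = 0.1        # intrinsic growth rate, 1/day (assumed)

L_crit_strip = np.pi * np.sqrt(D / r)                 # 1-D strip with hostile edges
R_crit_circle = jn_zeros(0, 1)[0] * np.sqrt(D / r)    # circular patch (first zero of J0)
area_ha = np.pi * R_crit_circle**2 / 1e4              # critical circular area in hectares

print("critical strip width (m):", round(L_crit_strip, 1))
print("critical circle radius (m):", round(R_crit_circle, 1))
print("critical circular area (ha):", round(area_ha, 2))
```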
Laurence, Caroline O; Heywood, Troy; Bell, Janice; Atkinson, Kaye; Karnon, Jonathan
2018-03-27
Health workforce planning models have been developed to estimate the future health workforce requirements for the population they serve and have been used to inform policy decisions. To adapt and further develop a need-based GP workforce simulation model to incorporate current and estimated geographic distribution of patients and GPs. A need-based simulation model that estimates the supply of GPs and levels of services required in South Australia (SA) was adapted and applied to the Western Australian (WA) workforce. The main outcome measure was the difference in the number of full-time equivalent (FTE) GPs supplied and required from 2013 to 2033. The base scenario estimated a shortage of GPs in WA from 2019 onwards, reaching 493 FTE GPs in 2033, while for SA, estimates showed an oversupply over the projection period. The WA urban and rural models estimated an urban shortage of GPs over this period. A reduced international medical graduate recruitment scenario resulted in estimated shortfalls of GPs by 2033 for WA and SA. The WA-specific scenarios of lower population projections and registrar work value resulted in a reduced shortage of FTE GPs in 2033, while unfilled training places increased the shortfall of FTE GPs in 2033. The simulation model incorporates contextual differences into its structure that allow within- and cross-jurisdictional comparisons of workforce estimations. It also provides greater insights into the drivers of supply and demand and the impact of changes in workforce policy, promoting more informed decision-making.
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques were applied to improve the spectral estimates from randomly sampled data. Studies show that the usable frequency range of the spectral estimates can be extended up to about five times the mean sampling rate.
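The correlation-based slotting technique mentioned above can be sketched briefly: products of randomly timed samples are accumulated into lag "slots" to estimate the autocorrelation without interpolating to a uniform grid. The simulated signal, slot width, and sampling rate below are illustrative, not the LV data or the FORTRAN implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
mean_rate, duration = 200.0, 5.0                   # samples/s, seconds (assumed)
n = rng.poisson(mean_rate * duration)
t = np.sort(rng.uniform(0, duration, n))           # random (Poisson-like) sample times
u = np.sin(2 * np.pi * 12 * t) + 0.3 * rng.normal(size=n)   # velocity fluctuations
u -= u.mean()

slot_width, max_lag = 0.002, 0.2                   # seconds
n_slots = int(max_lag / slot_width)
sums = np.zeros(n_slots)
counts = np.zeros(n_slots)
for i in range(n):                                 # accumulate lagged products into slots
    dt = t[i + 1:] - t[i]
    keep = dt < max_lag
    slots = (dt[keep] / slot_width).astype(int)
    np.add.at(sums, slots, u[i] * u[i + 1:][keep])
    np.add.at(counts, slots, 1)

acf = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0) / np.var(u)
lags = (np.arange(n_slots) + 0.5) * slot_width
# A PSD estimate would follow from a cosine transform of this slotted autocorrelation.
print("first slots of normalized ACF:", np.round(acf[:5], 2))
```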
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression have a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a measure of reliability for non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While the RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
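The general idea of re-simulating noise to gauge the reliability of a nonlinear least-squares estimate can be sketched as follows; the creep model, noise level, and number of resimulations are assumptions, and details of the authors' RoN procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(t, a, tau):               # simple exponential rise toward a plateau
    return a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 80)
y = creep_strain(t, 1.0, 2.5) + 0.03 * rng.normal(size=t.size)

p_hat, _ = curve_fit(creep_strain, t, y, p0=(0.8, 1.0))
sigma = np.std(y - creep_strain(t, *p_hat), ddof=2)   # residual noise level

taus = []
for _ in range(200):                         # resimulate noise around the fitted curve
    y_sim = creep_strain(t, *p_hat) + sigma * rng.normal(size=t.size)
    p_sim, _ = curve_fit(creep_strain, t, y_sim, p0=p_hat)
    taus.append(p_sim[1])

print("time-constant estimate:", round(p_hat[1], 3), "+/-", round(np.std(taus), 3))
```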
Methodology for Estimating Times of Remediation Associated with Monitored Natural Attenuation
Chapelle, Francis H.; Widdowson, Mark A.; Brauner, J. Steven; Mendez, Eduardo; Casey, Clifton C.
2003-01-01
Natural attenuation processes combine to disperse, immobilize, and biologically transform anthropogenic contaminants, such as petroleum hydrocarbons and chlorinated ethenes, in ground-water systems. The time required for these processes to lower contaminant concentrations to levels protective of human health and the environment, however, varies widely between different hydrologic systems, different chemical contaminants, and varying amounts of contaminants. This report outlines a method for estimating timeframes required for natural attenuation processes, such as dispersion, sorption, and biodegradation, to lower contaminant concentrations and mass to predetermined regulatory goals in groundwater systems. The time-of-remediation (TOR) problem described in this report is formulated as three interactive components: (1) estimating the length of a contaminant plume once it has achieved a steady-state configuration from a source area of constant contaminant concentration, (2) estimating the time required for a plume to shrink to a smaller, regulatory-acceptable configuration when source-area contaminant concentrations are lowered by engineered methods, and (3) estimating the time needed for nonaqueous phase liquid (NAPL) contaminants to dissolve, disperse, and biodegrade below predetermined levels in contaminant source areas. This conceptualization was used to develop Natural Attenuation Software (NAS), an interactive computer program for estimating times of remediation in contaminated aquifers. NAS was designed as a screening tool and requires the input of detailed site information about hydrogeology, redox conditions, and the distribution of contaminants. Because NAS is based on numerous simplifications of hydrologic, microbial, and geochemical processes, the program may introduce unacceptable errors for highly heterogeneous hydrologic systems. In such cases, application of the TOR framework outlined in this report may require more detailed, site-specific digital modeling. The NAS software may be downloaded from the Web site http://www.cee.vt.edu/NAS/. Application of NAS illustrates several general characteristics shared by all TOR problems. First, the distance of stabilization of a contaminant plume is strongly dependent on the natural attenuation capacity of particular ground-water systems. The time that it takes a plume to reach a steady-state configuration, however, is independent of natural attenuation capacity. Rather, the time of stabilization is most strongly affected by the sorptive capacity of the aquifer, which is dependent on the organic matter content of the aquifer sediments, as well as the sorptive properties of individual contaminants. As a general rule, a high sorptive capacity retards a plume's growth or shrinkage, and increases the time of stabilization. Finally, the time of NAPL dissolution depends largely on NAPL mass, composition, geometry, and hydrologic factors, such as ground-water flow rates. An example TOR analysis for petroleum hydrocarbon NAPL was performed for the Laurel Bay site in South Carolina. About 500 to 1,000 pounds of gasoline leaked into the aquifer at this site in 1991, and the NAS simulations suggested that TOR would be on the order of 10 years for soluble and poorly sorbed compounds, such as benzene and methyl tertiary-butyl ether (MTBE). Conversely, TOR would be on the order of 40 years for less soluble, more strongly sorbed compounds, such as toluene, ethylbenzene, and xylenes (TEX).
These TOR estimates are roughly consistent with contaminant concentrations observed over 10 years of monitoring at this site, where benzene and MTBE concentrations were observed to decrease rapidly and are approaching regulatory maximum concentration limits, whereas toluene, ethylbenzene, and xylene concentrations decreased at a slower rate and have remained relatively high.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM-based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
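The final summary step, a cumulative dose-volume histogram, is straightforward to compute once a voxelized dose (or dose-rate) array and an organ mask are available; the sketch below uses random placeholder doses rather than QSPECT-derived estimates.

```python
import numpy as np

rng = np.random.default_rng(9)
dose = rng.gamma(shape=4.0, scale=0.5, size=(64, 64, 64))   # Gy, synthetic placeholder
organ_mask = np.zeros_like(dose, dtype=bool)
organ_mask[20:40, 20:40, 20:40] = True                      # toy "organ" volume

organ_dose = dose[organ_mask]
bins = np.linspace(0, organ_dose.max(), 200)
# Fraction of the organ volume receiving at least each dose level.
cum_dvh = np.array([(organ_dose >= b).mean() for b in bins])

d50 = bins[np.searchsorted(-cum_dvh, -0.5)]                 # dose covering 50% of the volume
print("D50 (Gy):", round(float(d50), 2))
```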
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-09
... recommendation, and official transcripts. A personal interview must also be conducted. Eligibility requirements.../erecruit/login.jsp ) and then submit paper forms via mail. An in-person interview is also required. III... of Respondents: 1,800. Estimated Time per Response: written applications, 2 hours; interviews, 5...
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
Lightweight, Miniature Inertial Measurement System
NASA Technical Reports Server (NTRS)
Tang, Liang; Crassidis, Agamemnon
2012-01-01
A miniature, lighter-weight, and highly accurate inertial navigation system (INS) is coupled with GPS receivers to provide stable and highly accurate positioning, attitude, and inertial measurements while being subjected to highly dynamic maneuvers. In contrast to conventional methods that use extensive, ground-based, real-time tracking and control units that are expensive, large, and require excessive amounts of power to operate, this method focuses on the development of an estimator that makes use of a low-cost, miniature accelerometer array fused with traditional measurement systems and GPS. Through the use of a position tracking estimation algorithm, onboard accelerometers are numerically integrated and transformed using attitude information to obtain an estimate of position in the inertial frame. Position and velocity estimates are subject to drift due to accelerometer sensor bias and high vibration over time, and so require the integration with GPS information using a Kalman filter to provide highly accurate and reliable inertial tracking estimations. The method implemented here uses the local gravitational field vector. Upon determining the location of the local gravitational field vector relative to two consecutive sensors, the orientation of the device may then be estimated, and the attitude determined. Improved attitude estimates further enhance the inertial position estimates. The device can be powered either by batteries, or by the power source onboard its target platforms. A DB9 port provides the I/O to external systems, and the device is designed to be mounted in a waterproof case for all-weather conditions.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping constant the arterial resistance. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
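A minimal version of the Monte Carlo idea described above is sketched below: with the characteristic resistance held fixed, candidate compliance and peripheral-resistance values are drawn at random, the three-element Windkessel pressure is simulated, and the lowest-error pair is kept. The flow waveform, parameter ranges, and "true" values are synthetic.

```python
import numpy as np

def windkessel_pressure(q, dt, zc, c, rp, p0=80.0):
    """Three-element Windkessel: P = Zc*Q + P_wk, with C*dP_wk/dt = Q - P_wk/Rp."""
    p_wk = np.empty_like(q)
    p_wk[0] = p0
    for i in range(1, q.size):                       # forward-Euler integration
        dp = (q[i - 1] - p_wk[i - 1] / rp) / c
        p_wk[i] = p_wk[i - 1] + dt * dp
    return zc * q + p_wk

dt = 0.005
t = np.arange(0, 0.8, dt)                            # one cardiac cycle (s)
q = np.where(t < 0.3, 400 * np.sin(np.pi * t / 0.3), 0.0)   # mL/s, systolic ejection

zc_true, c_true, rp_true = 0.05, 1.3, 1.0            # mmHg*s/mL, mL/mmHg, mmHg*s/mL (assumed)
rng = np.random.default_rng(10)
p_meas = windkessel_pressure(q, dt, zc_true, c_true, rp_true) + rng.normal(0, 1.0, t.size)

best, best_err = None, np.inf
for _ in range(5000):                                # Monte Carlo search over (C, Rp)
    c, rp = rng.uniform(0.5, 3.0), rng.uniform(0.5, 2.0)
    err = np.mean((windkessel_pressure(q, dt, zc_true, c, rp) - p_meas) ** 2)
    if err < best_err:
        best, best_err = (c, rp), err

print("estimated (C, Rp):", np.round(best, 2), "true:", (c_true, rp_true))
```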
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
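For reference, plain recursive least squares with the standard covariance update can be sketched as below; the regressors and parameters are synthetic, and the residual-autocorrelation correction that the paper introduces is not reproduced (the printed standard deviations are the conventional white-residual values the paper improves upon).

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 400, 3
theta_true = np.array([1.5, -0.8, 0.3])
X = rng.normal(size=(n, p))                      # regressors (e.g., states and controls)
y = X @ theta_true + 0.1 * rng.normal(size=n)    # measured output with white noise

theta = np.zeros(p)
P = 1e3 * np.eye(p)                              # large initial covariance
for xk, yk in zip(X, y):
    k = P @ xk / (1.0 + xk @ P @ xk)             # gain
    theta = theta + k * (yk - xk @ theta)        # recursive parameter update
    P = P - np.outer(k, xk) @ P                  # covariance update

print("estimates:", np.round(theta, 3))
# Nominal (white-residual) parameter standard deviations: sigma^2 * diag(P).
print("nominal std devs:", np.round(np.sqrt(0.1**2 * np.diag(P)), 4))
```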
Adaptive tracking of a time-varying field with a quantum sensor
NASA Astrophysics Data System (ADS)
Bonato, Cristian; Berry, Dominic W.
2017-05-01
Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
NASA Astrophysics Data System (ADS)
Tao, Laifa; Lu, Chen; Noktehdan, Azadeh
2015-10-01
Battery capacity estimation is a significant recent challenge given the complex physical and chemical processes that occur within batteries and the restrictions on the accessibility of capacity degradation data. In this study, we describe an approach called dynamic spatial time warping, which is used to determine the similarities of two arbitrary curves. Unlike classical dynamic time warping methods, this approach can maintain the invariance of curve similarity to the rotations and translations of curves, which is vital in curve similarity search. Moreover, it utilizes the online charging or discharging data that are easily collected and do not require special assumptions. The accuracy of this approach is verified using NASA battery datasets. Results suggest that the proposed approach provides a highly accurate means of estimating battery capacity at less time cost than traditional dynamic time warping methods do for different individuals and under various operating conditions.
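As a baseline for comparison, classical dynamic time warping between two charging curves can be sketched as below; the rotation- and translation-invariant "dynamic spatial time warping" of the paper is not reproduced, and the curves are synthetic.

```python
import numpy as np

def dtw_distance(a, b):
    """Classical DTW distance between two 1-D sequences (no warping window)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

t = np.linspace(0, 1, 120)
fresh = 3.0 + 1.2 * t**0.8                               # toy charging-voltage curve, new cell
aged = 3.0 + 1.2 * np.clip(t * 1.15, 0, 1)**0.8          # capacity-faded cell (stretched curve)
print("DTW distance fresh vs aged :", round(dtw_distance(fresh, aged), 3))
print("DTW distance fresh vs fresh:", round(dtw_distance(fresh, fresh), 3))
```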
Modulation of Response Timing in ADHD, Effects of Reinforcement Valence and Magnitude
ERIC Educational Resources Information Center
Luman, Marjolein; Oosterlaan, Jaap; Sergeant, Joseph A.
2008-01-01
The present study investigated the impact of reinforcement valence and magnitude on response timing in children with ADHD. Children were required to estimate a 1-s interval, and both the median response time (response tendency) and the intrasubject-variability (response stability) were investigated. In addition, heart rate and skin conductance…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ploskey, Gene R.; Weiland, Mark A.; Carlson, Thomas J.
The purpose of this study was to estimate dam passage and route-specific survival rates for subyearling Chinook salmon smolts to a primary survival-detection array located 81 km downstream of the dam, evaluate a BGS located in the B2 forebay, and evaluate effects of two spill treatments. The 2010 study also provided estimates of forebay residence time, tailrace egress time, spill passage efficiency (SPE), and spill + B2 Corner Collector (B2CC) efficiency, as required in the Columbia Basin Fish Accords. In addition, the study estimated forebay passage survival and survival of fish traveling from the forebay entrance array, through the dam and downstream through 81 km of tailwater.
Sonko, Bakary J; Miller, Leland V; Jones, Richard H; Donnelly, Joseph E; Jacobsen, Dennis J; Hill, James O; Fennessey, Paul V
2003-12-15
Reducing water to hydrogen gas by zinc or uranium metal for determining D/H ratio is both tedious and time consuming. This has forced most energy metabolism investigators to use the "two-point" technique instead of the "Multi-point" technique for estimating total energy expenditure (TEE). Recently, we purchased a new platinum (Pt)-equilibration system that significantly reduces both time and labor required for D/H ratio determination. In this study, we compared TEE obtained from nine overweight but healthy subjects, estimated using the traditional Zn-reduction method to that obtained from the new Pt-equilibration system. Rate constants, pool spaces, and CO2 production rates obtained from use of the two methodologies were not significantly different. Correlation analysis demonstrated that TEEs estimated using the two methods were significantly correlated (r=0.925, p=0.0001). Sample equilibration time was reduced by 66% compared to those of similar methods. The data demonstrated that the Zn-reduction method could be replaced by the Pt-equilibration method when TEE was estimated using the "Multi-Point" technique. Furthermore, D equilibration time was significantly reduced.
Material parameter estimation with terahertz time-domain spectroscopy.
Dorney, T D; Baraniuk, R G; Mittleman, D M
2001-07-01
Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.
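For context, a minimal sketch of the conventional fixed-thickness THz-TDS extraction that the paper improves upon (thin-slab, single-pass approximation with a known thickness d); the material values, sign convention, and synthetic transfer function are assumptions, not the paper's algorithm, which additionally recovers the thickness via a total-variation criterion.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def extract_index(freq, T, d):
    """Conventional THz-TDS extraction with a KNOWN thickness d (thin slab,
    single pass, time convention chosen so that T ~ exp(-i*(n-1)*w*d/c)).
    T is the complex ratio of sample to reference spectra."""
    w = 2 * np.pi * freq
    phi = np.unwrap(np.angle(T))
    n = 1.0 - C * phi / (w * d)                                   # real refractive index
    kappa = -(C / (w * d)) * np.log(np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))
    return n, kappa

# Synthetic check with an assumed material: n = 1.95, kappa = 0.05, d = 0.5 mm.
freq = np.linspace(0.2e12, 1.5e12, 200)
w, d, n0, k0 = 2 * np.pi * freq, 0.5e-3, 1.95, 0.05
T = (4 * n0 / (n0 + 1) ** 2) * np.exp(-k0 * w * d / C) * np.exp(-1j * (n0 - 1) * w * d / C)
n_est, k_est = extract_index(freq, T, d)
print(n_est.mean(), k_est.mean())   # ~1.95, ~0.05
```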
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
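As an illustration of the stepping-stone idea discussed above, the sketch below estimates a log marginal likelihood for a toy conjugate Gaussian model, where the exact answer is available for comparison; the beta schedule, sample sizes, and exact power-posterior sampling are simplifying assumptions that do not carry over to phylogenetic models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model so the true (log) marginal likelihood is known:
# y_i ~ N(theta, 1),  theta ~ N(0, 1).
y = rng.normal(0.5, 1.0, size=20)
N = len(y)

def log_lik(theta):
    return (-0.5 * N * np.log(2 * np.pi)
            - 0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1))

# Stepping-stone estimate: log Z = sum_k log E_{beta_{k-1}}[ L^(b_k - b_{k-1}) ],
# with samples drawn from each power posterior (exact here because the model is conjugate).
betas = np.linspace(0.0, 1.0, 33) ** 3      # beta schedule skewed toward the prior (assumption)
S = 5000
log_Z = 0.0
for k in range(1, len(betas)):
    b_prev, b_k = betas[k - 1], betas[k]
    prec = 1.0 + b_prev * N                  # power-posterior precision
    mean = b_prev * y.sum() / prec
    theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=S)
    w = (b_k - b_prev) * log_lik(theta)
    log_Z += np.log(np.mean(np.exp(w - w.max()))) + w.max()   # log-sum-exp for stability

# Analytic log marginal likelihood via one-step-ahead predictive densities.
m, v, log_Z_true = 0.0, 1.0, 0.0
for yi in y:
    log_Z_true += -0.5 * np.log(2 * np.pi * (v + 1.0)) - 0.5 * (yi - m) ** 2 / (v + 1.0)
    v_post = 1.0 / (1.0 / v + 1.0)
    m = v_post * (m / v + yi)
    v = v_post
print(log_Z, log_Z_true)   # the two values should agree closely
```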
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
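A minimal sketch of the closed-form Gaussian (linear-Gaussian) slip posterior that makes real-time updating feasible for a fixed fault geometry; the Green's-function matrix, noise levels, and prior are illustrative assumptions, and the fault-geometry search described in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear(ized) problem: GPS offsets d = G @ slip + noise, with a Gaussian
# prior on slip. For a fixed fault geometry the posterior is available in
# closed form, which is what makes real-time updating feasible. G is random
# here purely for illustration; in practice it holds elastic Green's functions.
n_obs, n_patches = 30, 10
G = rng.normal(size=(n_obs, n_patches))
slip_true = np.maximum(rng.normal(1.0, 0.5, n_patches), 0.0)
sigma_d, sigma_s = 0.05, 2.0                     # data noise and prior std (assumptions)
d = G @ slip_true + rng.normal(0.0, sigma_d, n_obs)

# Closed-form Gaussian posterior: C_post = (G^T C_d^-1 G + C_s^-1)^-1,
# m_post = C_post G^T C_d^-1 d.
C_post = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(n_patches) / sigma_s**2)
m_post = C_post @ (G.T @ d) / sigma_d**2
print(np.round(m_post, 2))      # posterior mean slip
print(np.round(slip_true, 2))   # true slip for comparison
```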
Modeling operators' emergency response time for chemical processing operations.
Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam
2014-01-01
Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency pace and the normal working pace operations to determine performance coefficients. These coefficients represent the decrease in time required for various basic motions in emergency situations and were used to model an emergency response. This approach will make hazardous operations requiring operator response, alarm management, and evacuation processes easier to design and predict. An application of this methodology is included in the article. The time required for an emergency response was roughly one-third shorter than the normal response time.
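A minimal sketch of how predetermined time standards can be scaled by emergency performance coefficients; the elemental motions, times, and coefficients are illustrative assumptions, not values from the study.

```python
# Scaling a predetermined-time-standard task breakdown with emergency
# performance coefficients. All numbers below are made up for illustration.
normal_times = {          # elemental motions and their normal-pace times, seconds
    "walk_to_valve": 6.0,
    "reach_and_grasp": 1.2,
    "turn_valve": 4.5,
    "walk_to_exit": 8.0,
}
emergency_coeff = {       # fraction of normal time observed under maximum pace
    "walk_to_valve": 0.65,
    "reach_and_grasp": 0.80,
    "turn_valve": 0.70,
    "walk_to_exit": 0.60,
}

normal_total = sum(normal_times.values())
emergency_total = sum(t * emergency_coeff[k] for k, t in normal_times.items())
print(f"normal pace: {normal_total:.1f} s, emergency pace: {emergency_total:.1f} s "
      f"({100 * (1 - emergency_total / normal_total):.0f}% faster)")
```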
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of a FTC system based on estimated fault parameter transient behavior which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.
Methodology for Time-Domain Estimation of Storm-Time Electric Fields Using the 3D Earth Impedance
NASA Astrophysics Data System (ADS)
Kelbert, A.; Balch, C. C.; Pulkkinen, A. A.; Egbert, G. D.; Love, J. J.; Rigler, E. J.; Fujii, I.
2016-12-01
Magnetic storms can induce geoelectric fields in the Earth's electrically conducting interior, interfering with the operations of the electric-power grid industry. The ability to estimate these electric fields at Earth's surface in close to real time and to provide local short-term predictions would improve the ability of the industry to protect their operations. At any given time, the electric field at the Earth's surface is a function of the time-variant magnetic activity (driven by the solar wind), and the local electrical conductivity structure of the Earth's crust and mantle. For this reason, implementation of an operational electric field estimation service requires an interdisciplinary, collaborative effort between space science, real-time space weather operations, and solid Earth geophysics. We highlight in this talk an ongoing collaboration between USGS, NOAA, NASA, Oregon State University, and the Japan Meteorological Agency, to develop algorithms that can be used for scenario analyses and which might be implemented in a real-time, operational setting. We discuss the development of a time domain algorithm that employs a discrete time domain representation of the impedance tensor for a realistic 3D Earth, known as the discrete time impulse response (DTIR), convolved with the local magnetic field time series, to estimate the local electric field disturbances. The algorithm is validated against measured storm-time electric field data collected in the United States and Japan. We also discuss our plans for operational real-time electric field estimation using 3D Earth impedances.
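A minimal sketch of the convolution step described above: the surface electric field is approximated by convolving a site-specific discrete time impulse response with the local magnetic-field time series; the impulse response and magnetic series below are synthetic assumptions.

```python
import numpy as np

# E(t_k) ~ sum_j z_j * B(t_{k-j}): convolution of a (toy) discrete time
# impulse response with a synthetic storm-time magnetic field series.
dt = 1.0                                                 # sample interval, s
t = np.arange(0, 600, dt)
b = np.sin(2 * np.pi * t / 120.0) * np.exp(-t / 400.0)   # synthetic B variation, nT
z = 0.5 ** np.arange(20)                                 # toy decaying impulse response (assumption)

e = np.convolve(b, z, mode="full")[: len(t)]             # estimated E, arbitrary units
print(e[:5])
```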
Cottenden, Jennielee; Filter, Emily R; Cottreau, Jon; Moore, David; Bullock, Martin; Huang, Weei-Yuarn; Arnason, Thomas
2018-03-01
Pathologists routinely assess Ki67 immunohistochemistry to grade gastrointestinal and pancreatic neuroendocrine tumors. Unfortunately, manual counts of the Ki67 index are very time consuming and eyeball estimation has been criticized as unreliable. Manual Ki67 counts performed by cytotechnologists could potentially save pathologist time and improve accuracy. To assess the concordance between manual Ki67 index counts performed by cytotechnologists versus eyeball estimates and manual Ki67 counts by pathologists. One Ki67 immunohistochemical stain was retrieved from each of 18 archived gastrointestinal or pancreatic neuroendocrine tumor resections. We compared pathologists' Ki67 eyeball estimates on glass slides and printed color images with manual counts performed by 3 cytotechnologists and gold standard manual Ki67 index counts by 3 pathologists. Tumor grade agreement between pathologist image eyeball estimate and gold standard pathologist manual count was fair (κ = 0.31; 95% CI, 0.030-0.60). In 9 of 20 cases (45%), the mean pathologist eyeball estimate was 1 grade higher than the mean pathologist manual count. There was almost perfect agreement in classifying tumor grade between the mean cytotechnologist manual count and the mean pathologist manual count (κ = 0.910; 95% CI, 0.697-1.00). In 20 cases, there was only 1 grade disagreement between the 2 methods. Eyeball estimation by pathologists required less than 1 minute, whereas manual counts by pathologists required a mean of 17 minutes per case. Eyeball estimation of the Ki67 index has a high rate of tumor grade misclassification compared with manual counting. Cytotechnologist manual counts are accurate and save pathologist time.
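For reference, a small sketch of the unweighted Cohen's kappa statistic used above to quantify grade agreement; the two sets of grades are hypothetical.

```python
import numpy as np

def cohens_kappa(grades_a, grades_b, categories=("G1", "G2", "G3")):
    """Unweighted Cohen's kappa for two raters assigning tumor grades."""
    a = np.asarray(grades_a)
    b = np.asarray(grades_b)
    p_o = np.mean(a == b)                                               # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)    # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical grades for 10 tumors: eyeball estimates vs. manual counts.
eyeball = ["G2", "G2", "G1", "G3", "G2", "G2", "G1", "G2", "G3", "G2"]
manual  = ["G1", "G2", "G1", "G2", "G2", "G1", "G1", "G2", "G3", "G2"]
print(round(cohens_kappa(eyeball, manual), 2))
```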
Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, simple single-board computers that control the motion of the robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that could be used in parallel with a CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of an optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. The layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. Such an approach makes the algorithm both fast and robust in low light and low texture conditions. The algorithm implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules mobile skid-steered robot equipped with a monocular camera. The evaluation was performed using a hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated using the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
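A minimal sketch of the phase-correlation principle that the Z-flow algorithm builds on, here estimating a single global integer shift between two frames; the layered segmentation, GPU implementation, and robustness machinery of the paper are not reproduced.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two frames by phase correlation:
    normalized cross-power spectrum -> inverse FFT -> peak location = shift."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross = F_a * np.conj(F_b)
    cross /= np.abs(cross) + 1e-12              # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of the correlation array to negative shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

# Synthetic check: shift a random texture by (3, -5) pixels with wrap-around.
rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(b, a))   # expect approximately (3, -5)
```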
An adaptive bit synchronization algorithm under time-varying environment.
NASA Technical Reports Server (NTRS)
Chow, L. R.; Owen, H. A., Jr.; Wang, P. P.
1973-01-01
This paper presents an adaptive estimation algorithm for bit synchronization, assuming that the parameters of the incoming data process are time-varying. Experimental results have proved that this synchronizer is workable, judged either by the amount of data required or by the speed of convergence.
Code of Federal Regulations, 2012 CFR
2012-01-01
... will be available for decommissioning costs and on a demonstration that the company passes the... total current decommissioning cost estimate (or the current amount required if certification is used... percent of total assets or at least 10 times the total current decommissioning cost estimate (or the...
NASA Technical Reports Server (NTRS)
Keisler, S. R.; Rhyne, R. H.
1976-01-01
Synthetic time histories were generated and used to assess the effects of prewhitening on the long wavelength portion of power spectra of atmospheric turbulence. Prewhitening is not recommended when using the narrow spectral windows required for determining power spectral estimates below the 'knee' frequency, that is, at very long wavelengths.
USDA-ARS?s Scientific Manuscript database
Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
GPS Imaging of Time-Variable Earthquake Hazard: The Hilton Creek Fault, Long Valley California
NASA Astrophysics Data System (ADS)
Hammond, W. C.; Blewitt, G.
2016-12-01
The Hilton Creek Fault, in Long Valley, California is a down-to-the-east normal fault that bounds the eastern edge of the Sierra Nevada/Great Valley microplate, and lies half inside and half outside the magmatically active caldera. Despite the dense coverage with GPS networks, the rapid and time-variable surface deformation attributable to sporadic magmatic inflation beneath the resurgent dome makes it difficult to use traditional geodetic methods to estimate the slip rate of the fault. While geologic studies identify cumulative offset, constrain timing of past earthquakes, and constrain a Quaternary slip rate to within 1-5 mm/yr, it is not currently possible to use geologic data to evaluate how the potential for slip correlates with transient caldera inflation. To estimate time-variable seismic hazard of the fault we estimate its instantaneous slip rate from GPS data using a new set of algorithms for robust estimation of velocity and strain rate fields and fault slip rates. From the GPS time series, we use the robust MIDAS algorithm to obtain time series of velocity that are highly insensitive to the effects of seasonality, outliers and steps in the data. We then use robust imaging of the velocity field to estimate a gridded time variable velocity field. Then we estimate fault slip rate at each time using a new technique that forms ad-hoc block representations that honor fault geometries, network complexity, connectivity, but does not require labor-intensive drawing of block boundaries. The results are compared to other slip rate estimates that have implications for hazard over different time scales. Time invariant long term seismic hazard is proportional to the long term slip rate accessible from geologic data. Contemporary time-invariant hazard, however, may differ from the long term rate, and is estimated from the geodetic velocity field that has been corrected for the effects of magmatic inflation in the caldera using a published model of a dipping ellipsoidal magma chamber. Contemporary time-variable hazard can be estimated from the time variable slip rate estimated from the evolving GPS velocity field.
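A simplified sketch of the median-of-slopes idea behind the MIDAS velocity estimator mentioned above (pairs separated by roughly one year suppress seasonal signals, and the median resists steps and outliers); the data are synthetic and the full MIDAS outlier trimming is omitted.

```python
import numpy as np

def midas_like_velocity(t, x, pair_dt=1.0):
    """Median of slopes over data pairs separated by ~pair_dt years, in the
    spirit of the MIDAS estimator (the real MIDAS also trims outliers using a
    rescaled MAD). t must be sorted and expressed in years."""
    slopes = []
    for i in range(len(t)):
        k = np.searchsorted(t, t[i] + pair_dt)   # sample roughly one pair_dt later
        if k < len(t) and (t[k] - t[i]) < 1.1 * pair_dt:
            slopes.append((x[k] - x[i]) / (t[k] - t[i]))
    return np.median(slopes)

# Synthetic daily GPS position: 3 mm/yr trend + annual seasonal term + noise + a step.
rng = np.random.default_rng(3)
t = np.arange(0, 8, 1 / 365.25)
x = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.0, len(t))
x[len(t) // 2:] += 10.0                          # an offset (e.g., equipment change)
print(round(midas_like_velocity(t, x), 2), "mm/yr (true trend: 3.0)")
```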
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
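A minimal sketch of maximum likelihood time-of-arrival estimation from Poisson photon counts, with a Gaussian pulse shape and a grid search standing in for the paper's analysis; pulse parameters and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate photon-counting detection of a single optical pulse on [0, T):
# Gaussian pulse shape (assumption) centred at tau_true plus uniform background.
T, tau_true, width = 100.0, 42.3, 2.0          # ns (illustrative units)
mean_signal, bg_rate = 30.0, 0.05              # mean signal photons, background photons/ns

n_sig = rng.poisson(mean_signal)
t_sig = rng.normal(tau_true, width, n_sig)
n_bg = rng.poisson(bg_rate * T)
t_bg = rng.uniform(0.0, T, n_bg)
arrivals = np.concatenate([t_sig, t_bg])

def log_likelihood(tau):
    # For a pulse fully contained in the window, the integral term of the
    # Poisson log-likelihood does not depend on tau and can be dropped.
    lam = (mean_signal / (width * np.sqrt(2 * np.pi))
           * np.exp(-0.5 * ((arrivals - tau) / width) ** 2) + bg_rate)
    return np.sum(np.log(lam))

taus = np.arange(0.0, T, 0.05)
tau_ml = taus[np.argmax([log_likelihood(tau) for tau in taus])]
print(tau_ml)   # should be close to 42.3
```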
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
Predicting lethal entanglements as a consequence of drag from fishing gear.
van der Hoop, Julie M; Corkeron, Peter; Henry, Allison G; Knowlton, Amy R; Moore, Michael J
2017-02-15
Large whales are frequently entangled in fishing gear and sometimes swim while carrying gear for days to years. Entangled whales are subject to additional drag forces requiring increased thrust power and energy expenditure over time. To classify entanglement cases and aid potential disentanglement efforts, it is useful to know how long an entangled whale might survive, given the unique configurations of the gear they are towing. This study establishes an approach to predict drag forces on fishing gear that entangles whales, and applies this method to ten North Atlantic right whale cases to estimate the resulting increase in energy expenditure and the critical entanglement duration that could lead to death. Estimated gear drag ranged from 11 to 275 N. Most entanglements were resolved before critical entanglement durations (mean ± SD 216 ± 260 days) were reached. These estimates can assist real-time development of disentanglement action plans and U.S. Federal Serious Injury assessments required for protected species. Copyright © 2016 Elsevier Ltd. All rights reserved.
Synchronization for Optical PPM with Inter-Symbol Guard Times
NASA Astrophysics Data System (ADS)
Rogalin, R.; Srinivasan, M.
2017-05-01
Deep space optical communications promises orders of magnitude growth in communication capacity, supporting high data rate applications such as video streaming and high-bandwidth science instruments. Pulse position modulation is the modulation format of choice for deep space applications, and by inserting inter-symbol guard times between the symbols, the signal carries the timing information needed by the demodulator. Accurately extracting this timing information is crucial to demodulating and decoding this signal. In this article, we propose a number of timing and frequency estimation schemes for this modulation format, and in particular highlight a low complexity maximum likelihood timing estimator that significantly outperforms the prior art in this domain. This method does not require an explicit synchronization sequence, freeing up channel resources for data transmission.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
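A toy sketch of the frequency-domain equation-error idea described in the two records above, applied to a one-state pitch-damping model rather than the full aircraft equations; the model, maneuver, frequency band, and noise levels are assumptions.

```python
import numpy as np

# Equation-error estimation in the frequency domain: for the toy model
# qdot = Mq*q + Md*delta, the Fourier transform gives j*w*Q(w) = Mq*Q(w) + Md*D(w),
# which is linear in the unknown derivatives [Mq, Md].
rng = np.random.default_rng(5)
dt, n = 0.02, 3000
t = np.arange(n) * dt
Mq_true, Md_true = -1.5, 4.0
delta = np.where((t % 4.0) < 2.0, 0.05, -0.05)     # doublet-like excitation (assumption)
q = np.zeros(n)
for k in range(n - 1):                             # Euler integration of qdot
    q[k + 1] = q[k] + dt * (Mq_true * q[k] + Md_true * delta[k])
q_meas = q + rng.normal(0.0, 1e-3, n)

# Discrete Fourier transforms, keeping a band of low frequencies (assumption).
freqs = np.fft.rfftfreq(n, dt)
band = (freqs > 0.05) & (freqs < 2.0)
w = 2 * np.pi * freqs[band]
Q = np.fft.rfft(q_meas)[band]
D = np.fft.rfft(delta)[band]

# Complex least squares for [Mq, Md] from j*w*Q = Mq*Q + Md*D.
A = np.column_stack([Q, D])
theta, *_ = np.linalg.lstsq(A, 1j * w * Q, rcond=None)
print(theta.real)   # approximately [-1.5, 4.0]
```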
Nondillo, Aline; Redaelli, Luiza R; Botton, Marcos; Pinent, Silvia M J; Gitz, Rogério
2008-01-01
Frankliniella occidentalis (Pergande) is one of the major strawberry pests in southern Brazil. The insect causes russeting and withering of flowers and fruits, reducing commercial value. In this work, the thermal requirements of the eggs, larvae and pupae of F. occidentalis were estimated. Thrips development was studied in folioles of strawberry plants at six constant temperatures (16, 19, 22, 25, 28 and 31 degrees C) in controlled conditions (70 +/- 10% R.H. and 12:12 L:D). The number of annual generations of F. occidentalis was estimated for six strawberry production regions of Rio Grande do Sul State based on its thermal requirements. Developmental time of each F. occidentalis stage was proportional to the temperature increase. The best development rate was obtained when insects were reared at 25°C and 28°C. The lower threshold and the thermal requirements for the egg to adult stage were 9.9°C and 211.9 degree-days, respectively. Considering the thermal requirements of F. occidentalis, 10.7, 12.6, 13.1, 13.6, 16.5 and 17.9 generations/year were estimated, respectively, for Vacaria, Caxias do Sul, Farroupilha, Pelotas, Porto Alegre and Taquari producing regions located in Rio Grande do Sul State, Brazil.
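The generation estimates above follow from simple degree-day bookkeeping with the reported threshold (9.9°C) and thermal constant (211.9 degree-days); the sketch below illustrates the calculation with a made-up annual temperature record.

```python
import numpy as np

lower_threshold = 9.9      # °C, lower developmental threshold reported above
k_egg_to_adult = 211.9     # degree-days for egg-to-adult development

# Hypothetical daily mean temperatures for one year at a producing region.
day = np.arange(365)
daily_mean_temp = 19.0 + 6.0 * np.sin(2 * np.pi * (day - 80) / 365.0)

degree_days = np.sum(np.maximum(daily_mean_temp - lower_threshold, 0.0))
generations_per_year = degree_days / k_egg_to_adult
print(round(generations_per_year, 1))
```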
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often times hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
NASA Astrophysics Data System (ADS)
Zhou, Tao; Luo, Yiqi
2008-09-01
Ecosystem carbon (C) uptake is determined largely by C residence times and increases in net primary production (NPP). Therefore, evaluation of C uptake at a regional scale requires knowledge of spatial patterns of both residence times and NPP increases. In this study, we first applied an inverse modeling method to estimate spatial patterns of C residence times in the conterminous United States. Then we combined the spatial patterns of estimated residence times with a NPP change trend to assess the spatial patterns of regional C uptake in the United States. The inverse analysis was done by using the genetic algorithm and was based on 12 observed data sets of C pools and fluxes. Residence times were estimated by minimizing the total deviation between modeled and observed values. Our results showed that the estimated C residence times were highly heterogeneous over the conterminous United States, with most of the regions having values between 15 and 65 years; and the averaged C residence time was 46 years. The estimated C uptake for the whole conterminous United States was 0.15 Pg C a-1. Large portions of the C taken up were stored in soil for grassland and cropland (47-70%) but in plant pools for forests and woodlands (73-82%). The proportion of C uptake in soil was found to be determined primarily by C residence times and to be independent of the magnitude of NPP increase. Therefore, accurate estimation of spatial patterns of C residence times is crucial for the evaluation of terrestrial ecosystem C uptake.
Bearings Only Air-to-Air Ranging
1988-07-25
directly in front of the observer when first detected, more time will be needed for a good estimate. A sound uinp them is for the observer, having...altitude angle to provide an estimate of the z component. Moving targets commonly require some 60 seconds for good estimates of target location and...fixed target case, where a good strategy for the observer can be determined a priori, highly effective maneuvers for the observer in the case of a moving
A new linear least squares method for T1 estimation from SPGR signals with multiple TRs
NASA Astrophysics Data System (ADS)
Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo
2009-02-01
The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity and the parameters is nonlinear; T1 maps can be computed from SPGR signals using nonlinear least squares regression. A widely-used linear method transforms the nonlinear model by assuming a fixed TR in SPGR images. This constraint is not desirable since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using the first order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the estimated T1 from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, allowing multiple TRs and reducing computation time significantly.
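For context, a sketch of the widely used fixed-TR linear (DESPOT1-style) fit referred to above; the proposed first-order Taylor-expansion method for multiple TRs is not reproduced, and the acquisition parameters and noise level are assumptions.

```python
import numpy as np

# Fixed-TR linear T1 fit: for SPGR, S = M0*sin(a)*(1-E1)/(1-E1*cos(a)) with
# E1 = exp(-TR/T1), and S/sin(a) = E1*(S/tan(a)) + M0*(1-E1) is linear in E1.
def spgr_signal(M0, T1, TR, alpha):
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

TR = 0.005                                   # s, single fixed TR (assumption)
alphas = np.deg2rad([3.0, 8.0, 14.0, 20.0])  # flip angles (assumed protocol)
M0_true, T1_true = 1000.0, 1.2               # arbitrary units, s
rng = np.random.default_rng(6)
S = spgr_signal(M0_true, T1_true, TR, alphas) + rng.normal(0.0, 0.2, alphas.size)

y = S / np.sin(alphas)
x = S / np.tan(alphas)
slope, intercept = np.polyfit(x, y, 1)       # ordinary least squares line
T1_est = -TR / np.log(slope)
M0_est = intercept / (1.0 - slope)
print(round(T1_est, 3), round(M0_est, 1))    # close to 1.2 and 1000
```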
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of the research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity or smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of previous synchronization algorithms proposed in wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences and its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
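A minimal sketch of symbol timing by cross-correlation with a Zadoff-Chu sequence, the principle behind the proposed estimator; the sequence length, root, channel, and noise level are illustrative assumptions, and the FPGA architecture is not modelled.

```python
import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence of odd length."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

N, root, true_delay = 63, 5, 37
zc = zadoff_chu(root, N)

rng = np.random.default_rng(7)
rx = np.zeros(4 * N, dtype=complex)
rx[true_delay:true_delay + N] += zc                 # received preamble at unknown offset
rx += rng.normal(0, 0.3, rx.size) + 1j * rng.normal(0, 0.3, rx.size)

# Sliding cross-correlation; the peak magnitude marks the symbol timing.
corr = np.array([np.abs(np.vdot(zc, rx[k:k + N])) for k in range(rx.size - N + 1)])
print(int(np.argmax(corr)))   # expect 37
```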
77 FR 23514 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
... purchaser, prior to or at the time of purchase, a signed document acknowledging the restrictions on... with. This requirement enhances the Commission's ability to monitor utilization of and compliance with... does not include in the estimate of average burden hours the time preparing registration statements and...
The detrimental influence of attention on time-to-contact perception.
Baurès, Robin; Balestra, Marianne; Rosito, Maxime; VanRullen, Rufin
2018-04-23
To what extent is attention necessary to estimate the time-to-contact (TTC) of a moving object, that is, to determine when the object will reach a specific point? While numerous studies have aimed at determining the visual cues and gaze strategy that allow this estimation, little is known about whether and how attention is involved or required in this process. To answer this question, we carried out an experiment in which the participants estimated the TTC of a moving ball, either alone (single-task condition) or concurrently with a Rapid Serial Visual Presentation task embedded within the ball (dual-task condition). The results showed that participants produced better estimates when attention was drawn away from the TTC task. This suggests that drawing attention away from the TTC estimation limits cognitive interference, intrusion of knowledge, or expectations that significantly modify the visually-based TTC estimation, and argues in favor of devoting only limited attention to the TTC task in order to estimate it correctly.
Sampling strategies for estimating acute and chronic exposures of pesticides in streams
Crawford, Charles G.
2004-01-01
The Food Quality Protection Act of 1996 requires that human exposure to pesticides through drinking water be considered when establishing pesticide tolerances in food. Several systematic and seasonally weighted systematic sampling strategies for estimating pesticide concentrations in surface water were evaluated through Monte Carlo simulation, using intensive datasets from four sites in northwestern Ohio. The number of samples for the strategies ranged from 4 to 120 per year. Sampling strategies with a minimal sampling frequency outside the growing season can be used for estimating time-weighted mean and percentile concentrations of pesticides with little loss of accuracy and precision, compared to strategies with the same sampling frequency year round. Less frequent sampling strategies can be used at large sites. A sampling frequency of 10 times monthly during the pesticide runoff period at a 90 km2 basin and four times monthly at a 16,400 km2 basin provided estimates of the time-weighted mean, 90th, 95th, and 99th percentile concentrations that fell within 50 percent of the true value virtually all of the time. By taking into account basin size and the periodic nature of pesticide runoff, costs of obtaining estimates of time-weighted mean and percentile pesticide concentrations can be minimized.
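A small sketch of the Monte Carlo evaluation idea: subsample a synthetic daily concentration record with a fixed-interval strategy and compare the recovered time-weighted mean and 95th percentile with the values from the full record; the synthetic record and strategies are assumptions, not the Ohio datasets.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic daily pesticide concentrations with a growing-season runoff pulse.
days = np.arange(365)
baseline = 0.05
runoff = 3.0 * np.exp(-0.5 * ((days - 160) / 25.0) ** 2)
conc = baseline + runoff * rng.lognormal(0.0, 0.5, 365)      # ug/L, illustrative

true_mean, true_p95 = conc.mean(), np.percentile(conc, 95)

def simulate(samples_per_year, n_trials=1000):
    """Median relative errors of the mean and 95th percentile for a
    fixed-interval strategy with a random starting day."""
    interval = 365 // samples_per_year
    err_mean, err_p95 = [], []
    for _ in range(n_trials):
        start = rng.integers(0, interval)
        sub = conc[start::interval]
        err_mean.append(abs(sub.mean() - true_mean) / true_mean)
        err_p95.append(abs(np.percentile(sub, 95) - true_p95) / true_p95)
    return np.median(err_mean), np.median(err_p95)

for n in (12, 52, 120):
    print(n, "samples/yr -> median relative errors:", simulate(n))
```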
High throughput toxicity testing (HTT) holds the promise of providing data for tens of thousands of chemicals that currently have no data due to the cost and time required for animal testing. Interpretation of these results require information linking the perturbations seen in vi...
Study of space shuttle EVA/IVA support requirements. Volume 1: Technical summary report
NASA Technical Reports Server (NTRS)
Copeland, R. J.; Wood, P. W., Jr.; Cox, R. L.
1973-01-01
Results are summarized which were obtained for equipment requirements for the space shuttle EVA/IVA pressure suit, life support system, mobility aids, vehicle support provisions, and energy support. An initial study of tasks, guidelines, and constraints and a special task on the impact of a 10 psia orbiter cabin atmosphere are included. Supporting studies not related exclusively to any one group of equipment requirements are also summarized. Representative EVA/IVA task scenarios were defined based on an evaluation of missions and payloads. Analysis of the scenarios resulted in a total of 788 EVA/IVA's in the 1979-1990 time frame, for an average of 1.3 per shuttle flight. Duration was estimated to be under 4 hours on 98% of the EVA/IVA's, and distance from the airlock was determined to be 70 feet or less 96% of the time. Payload water vapor sensitivity was estimated to be significant on 9%-17% of the flights. Further analysis of the scenarios was carried out to determine specific equipment characteristics, such as suit cycle and mobility requirements.
Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.
Quesada, Luis; León, Alejandro J
2012-10-01
Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three degrees of freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera that can be found installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, sufficiently contrasting with the background in each frame, and that it does not rotate. Our system is also able to determine the most relevant object to track in the screen. Our proposal does not impose additional constraints; therefore, it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.
NASA Technical Reports Server (NTRS)
Beckley, B. D.; Lemoine, F. G.; Zelensky, N. P.; Yang, X.; Holmes, S.; Ray, R. D.; Mitchum, G. T.; Desai, S.; Brown, S.; Haines, B.
2011-01-01
Recent developments in Precise Orbit Determination (POD), due in particular to revisions to the terrestrial reference frame realization and the time variable gravity (TVG), continue to provide improvements to the accuracy and stability of the POD, directly affecting mean sea level (MSL) estimates. Long-term credible MSL estimates require the development and continued maintenance of a stable reference frame, along with vigilant monitoring of the performance of the independent tracking systems used to calculate the orbits for altimeter spacecraft. The stringent MSL accuracy requirements of a few tenths of a mm/yr are particularly essential for mass budget closure analysis over the relatively short time period of Jason-1 & 2, GRACE, and Argo coincident measurements. In an effort to adhere to cross-mission consistency, we have generated a full time series of experimental orbits (GSFC std1110) for TOPEX/Poseidon (TP), Jason-1, and OSTM based on an improved terrestrial reference frame (TRF) realization (ITRF2008), a revised static gravity field (GGM03s), and a time variable gravity field (Eigen6s). In this presentation we assess the impact of the revised precision orbits on inter-mission bias estimates, and resultant global and regional MSL trends. Tide gauge verification results are shown to assess the current stability of the Jason-2 sea surface height time series, which suggests a possible discontinuity initiated in early 2010. Although the Jason-2 time series is relatively short (approximately 3 years), a thorough review of the entire suite of geophysical and environmental range corrections is warranted and is underway to maintain the fidelity of the record.
Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei
2015-12-01
The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo
2011-08-01
Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
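A minimal sketch of the Monte Carlo sizing of a constant-temperature process time, F = D x (log10 N0 - log10 N_target), taking the 95th percentile of the simulated distribution rather than a mean-based value; the input distributions are illustrative assumptions, not the study's C. botulinum data.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 100_000
D110 = rng.normal(1.0, 0.15, n)                 # decimal reduction time at 110 °C, min (assumed)
D110 = np.clip(D110, 0.5, None)
log_N0 = rng.normal(0.0, 0.5, n)                # log10 initial spores per container (assumed)
log_N_target = -9.0                             # 10^-9 spores/container objective

process_time = D110 * (log_N0 - log_N_target)

mean_based = D110.mean() * (log_N0.mean() - log_N_target)
p95 = np.percentile(process_time, 95)           # time meeting the target with 95% confidence
risk_of_mean = np.mean(process_time > mean_based)
print(f"mean-based time: {mean_based:.1f} min, 95% confidence time: {p95:.1f} min, "
      f"probability the mean-based time falls short: {risk_of_mean:.2f}")
```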
Estimates of the maximum time required to originate life
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.; Fogleman, Guy
1989-01-01
Fossils of the oldest microorganisms exist in 3.5 billion year old rocks and there is indirect evidence that life may have existed 3.8 billion years ago (3.8 Ga). Impacts able to destroy life or interrupt prebiotic chemistry may have occurred after 3.5 Ga. If large impactors vaporized the oceans, sterilized the planet, and interfered with the origination of life, life must have originated in the time intervals between these impacts, which increased with geologic time. Therefore, the maximum time required for the origination of life is the time that occurred between sterilizing impacts just before 3.8 Ga or 3.5 Ga, depending upon when life first appeared on earth. If life first originated 3.5 Ga, and impacts with kinetic energies between 2 x 10 to the 34th and 2 x 10 to the 35th were able to vaporize the oceans, using the most probable impact flux, it is found that the maximum time required to originate life would have been 67 to 133 million years (My). If life originated 3.8 Ga, the maximum time to originate life was 2.5 to 11 My. Using a more conservative estimate for the flux of impacting objects before 3.8 Ga, a maximum time of 25 My was found for the same range of impactor kinetic energies. The impact model suggests that it is possible that life may have originated more than once.
Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide
2018-03-13
Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 achieved by the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time was reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed that estimates the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition time, computation time, and image quality of the proposed method were improved compared to the standard k-t SENSE method.
NASA Technical Reports Server (NTRS)
Nainiger, J. J.; Burns, R. K.; Easley, A. J.
1982-01-01
A performance and operational economics analysis is presented for an integrated-gasifier, combined-cycle (IGCC) system to meet the steam and baseload electrical requirements. The effect of time variations in steam and electrical requirements is included. The amount and timing of electricity purchases from and sales to the electric utility are determined. The resulting expenses for purchased electricity and revenues from electricity sales are estimated by using an assumed utility rate structure model. Cogeneration results for a range of potential IGCC cogeneration system sizes are compared with the fuel consumption and costs of natural gas and electricity to meet requirements without cogeneration. The results indicate that an IGCC cogeneration system could save about 10 percent of the total fuel energy presently required to supply steam and electrical requirements without cogeneration. Also, for the assumed future fuel and electricity prices, an annual operating cost savings of 21 percent to 26 percent could be achieved with such a cogeneration system. An analysis of the effects of electricity price, fuel price, and system availability indicates that the IGCC cogeneration system has a good potential for economical operation over a wide range of these assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS Certification of Applications § 971... the pending unresolved issues, the efforts to resolve them, and an estimate of the time required to do...
Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.
1997-01-01
Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a topical problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere. The viewpoints are distributed according to the geosphere principle, which minimizes the size of the training image set. The gathered training images are used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on matching the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error value less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
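As a rough illustration of the learning/estimation split described above (the paper's actual contour descriptor is not specified here), the sketch below uses log-scaled Hu moments of the dominant contour as the descriptor and a nearest-neighbor search over descriptors computed from pre-rendered viewpoints; `render_view` is a hypothetical renderer of the 3D model.

```python
import numpy as np
import cv2

def contour_descriptor(image_gray):
    """Descriptor of the dominant contour (log-scaled Hu moments).
    Illustrative choice; the paper's descriptor is not specified here."""
    _, binary = cv2.threshold(image_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Learning stage: descriptors for views rendered from geosphere viewpoints.
# `render_view(orientation)` is a hypothetical renderer returning a grayscale image.
def build_database(render_view, orientations):
    return [(o, contour_descriptor(render_view(o))) for o in orientations]

# Estimation stage: the nearest training descriptor gives the orientation estimate.
def estimate_orientation(image_gray, database):
    d = contour_descriptor(image_gray)
    best_orientation, _ = min(database, key=lambda item: np.linalg.norm(item[1] - d))
    return best_orientation
```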
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to the complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired by future experiments.
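The core idea, that parameter uncertainty from an interrupted time series can be compared directly against a complete series, can be sketched with a generic curve fit. The toy arrival model, noise level, and rotating 5-minutes-in-20 schedule below are illustrative assumptions, not the mechanistic transport model used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def tracer_signal(t, delay, rate, amplitude):
    """Toy tracer arrival model: delayed exponential saturation (illustrative)."""
    return amplitude * (1.0 - np.exp(-rate * np.clip(t - delay, 0.0, None)))

rng = np.random.default_rng(1)
true = (10.0, 0.08, 1.0)                 # delay [min], rate [1/min], amplitude
t_full = np.arange(0.0, 60.0, 0.5)       # continuous 1 h acquisition
y_full = tracer_signal(t_full, *true) + rng.normal(0, 0.02, t_full.size)

# Interrupted schedule: the plant is only measured 5 min out of every 20 min,
# mimicking a rotating multi-plant scheme.
keep = (t_full % 20.0) < 5.0
t_gap, y_gap = t_full[keep], y_full[keep]

for label, (t, y) in {"complete": (t_full, y_full), "interrupted": (t_gap, y_gap)}.items():
    popt, pcov = curve_fit(tracer_signal, t, y, p0=(8.0, 0.05, 0.9))
    err = np.sqrt(np.diag(pcov))         # parameter standard errors: the information measure
    print(label, "estimates:", np.round(popt, 3), "std. errors:", np.round(err, 3))
```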
Gone to the Beach — Using GIS to infer how people value ...
Estimating the non-market value of beaches for saltwater recreation is complex. An individual’s preference for a beach depends on their perception of beach characteristics. When choosing one beach over another, an individual balances these personal preferences with any additional costs including travel time and/or fees to access the beach. This trade-off can be used to infer how people value different beach characteristics; especially when beaches are free to the public, beach value estimates rely heavily on accurate travel times. A current case study focused on public access on Cape Cod, MA will be used to demonstrate how travel costs can be used to determine the service area of different beaches, and model expected use of those beaches based on demographics. We will describe several of the transportation networks and route services available and compare a few based on their ability to meet our specific requirements of scale and seasonal travel time accuracy. We are currently developing a recreational demand model, based on visitation data and beach characteristics, that will allow decision makers to predict the benefits of different levels of water quality improvement. An important part of that model is the time required for potential recreation participants to get to different beaches. This presentation will describe different ways to estimate travel times and the advantages/disadvantages for our particular application. It will go on to outline how freely a
2011-01-01
Background While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results Our model estimates that the epidemic peak of the 2009 pandemic was delayed by 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. The peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct a population-wide serological survey as a prior requirement for estimating the time of the peak. PMID:21269441
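A numerical companion to the scenario described above, assuming an SIR model with illustrative parameters in which the transmission rate is reduced during an early control period and abruptly reverts when control is discontinued; comparing peak prevalence and peak timing with and without the early control shows the lower-and-later peak effect.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta_fn, gamma):
    s, i, r = y
    beta = beta_fn(t)
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

gamma = 1 / 3.0                  # recovery rate (1/day), illustrative
beta_free = 1.4 * gamma          # R0 = 1.4 without control, illustrative
beta_ctrl = 1.1 * gamma          # reduced transmission during early control
t_stop_control = 40.0            # control discontinued after 40 days

def with_control(t):
    return beta_ctrl if t < t_stop_control else beta_free

for label, beta_fn in [("no control", lambda t: beta_free), ("early control", with_control)]:
    sol = solve_ivp(sir, (0, 400), [0.999, 0.001, 0.0], args=(beta_fn, gamma),
                    dense_output=True, max_step=0.5)
    t = np.linspace(0, 400, 4001)
    i = sol.sol(t)[1]
    print(f"{label:>13}: peak prevalence {i.max():.3f} on day {t[i.argmax()]:.0f}")
```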
RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis
NASA Astrophysics Data System (ADS)
Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.
2015-10-01
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages - such as high spectral resolution - led to its application in other fields, such as cancer detection. However, this new field has specific requirements; for instance, it must meet strict timing specifications, since all the potential applications - like surgical guidance or in vivo tumor detection - imply real-time requirements. Meeting these time requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization. Along that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreading compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to an existing hyperspectral image analysis software package; concretely, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also show the existence of some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values concentrated toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Based on the ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema for its estimation accuracy and time efficiency; it needs data from only 3 b-values at 0, around 800, and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
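With the standard DKI signal representation ln S(b) = ln S0 − bD + b²D²K/6, a 3-b-value acquisition determines the three unknowns exactly, which is the appeal of the 3b schema. The sketch below solves that 3x3 system for a single noise-free voxel with illustrative tissue values.

```python
import numpy as np

def fit_dki_3b(b_values, signals):
    """Solve ln S(b) = ln S0 - b*D + (1/6) b^2 * (D^2 K) exactly from 3 b-values.
    Standard DKI signal representation; acquisition values are illustrative."""
    b = np.asarray(b_values, dtype=float)
    A = np.column_stack([np.ones_like(b), -b, b**2 / 6.0])  # unknowns: ln S0, D, D^2*K
    lnS0, D, D2K = np.linalg.solve(A, np.log(signals))
    return np.exp(lnS0), D, D2K / D**2                      # S0, D, K

# Illustrative voxel with D = 1.0e-3 mm^2/s and K = 1.0, noise-free for clarity
b = np.array([0.0, 1000.0, 2500.0])                         # recommended 3b schema (s/mm^2)
S0_true, D_true, K_true = 1000.0, 1.0e-3, 1.0
S = S0_true * np.exp(-b * D_true + (b**2) * (D_true**2) * K_true / 6.0)

print(fit_dki_3b(b, S))   # recovers (S0, D, K)
```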
Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing
Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.
2013-01-01
Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285
NASA Astrophysics Data System (ADS)
Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.
2016-12-01
Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical, as well as physical, conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine learning techniques (e.g., random forests and Bayesian networks) to predict the occurrence of gas, and subsequently gas hydrate, in marine sediments. The prediction (in effect, guided interpolation) of the key parameters used in this study is based on a K-nearest neighbor (KNN) technique. KNN requires only minimal pre-processing of the data and predictors, and requires minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combined the compaction results with seafloor heat flux to estimate temperature as a function of depth and geologic age, which, together with estimates of organic carbon and models of methanogenesis, yields limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine learning estimates is modular and easily updated with new algorithms or data.
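A small sketch of KNN used as guided interpolation, in the spirit described above: a KNN regressor trained on sampled locations fills in unsampled grid cells, with the spread of the neighbors' values as a simple uncertainty proxy. The predictor columns, target (heat flux), and all numbers are placeholders, not the study's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical predictor table: columns such as water depth, sedimentation rate,
# distance to shore, crustal age (the real predictor set is richer).
rng = np.random.default_rng(2)
X_known = rng.normal(size=(500, 4))                  # locations with measurements
y_known = 50 + 10 * X_known[:, 0] - 5 * X_known[:, 3] + rng.normal(0, 2, 500)  # heat flux (mW/m^2)
X_grid = rng.normal(size=(2000, 4))                  # unsampled grid cells to fill in

# KNN as guided interpolation: minimal preprocessing, prediction driven by the
# k most similar sampled locations in predictor space.
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10, weights="distance"))
knn.fit(X_known, y_known)
heat_flux_pred = knn.predict(X_grid)

# A simple data-driven uncertainty proxy: spread of the neighbors' observed values.
dist, idx = knn.named_steps["kneighborsregressor"].kneighbors(
    knn.named_steps["standardscaler"].transform(X_grid))
pred_std = y_known[idx].std(axis=1)
print(heat_flux_pred[:3], pred_std[:3])
```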
Alcalde-Rabanal, Jacqueline Elizabeth; Nigenda, Gustavo; Bärnighausen, Till; Velasco-Mondragón, Héctor Eduardo; Darney, Blair Grant
2017-08-03
The purpose of this study was to estimate the gap between the available and the ideal supply of human resources (physicians, nurses, and health promoters) to deliver the guaranteed package of prevention and health promotion services at urban and rural primary care facilities in Mexico. We conducted a cross-sectional observational study using a convenience sample. We selected 20 primary health facilities in urban and rural areas in 10 states of Mexico. We calculated the available and the ideal supply of human resources in these facilities using estimates of time available, used, and required to deliver health prevention and promotion services. We performed descriptive statistics and bivariate hypothesis testing using Wilcoxon and Friedman tests. Finally, we conducted a sensitivity analysis to test whether the non-normal distribution of our time variables biased estimation of available and ideal supply of human resources. The comparison between available and ideal supply for urban and rural primary health care facilities reveals a low supply of physicians. On average, primary health care facilities are lacking five physicians when they were estimated with time used and nine if they were estimated with time required (P < 0.05). No difference was observed between available and ideal supply of nurses in either urban or rural primary health care facilities. There is a shortage of health promoters in urban primary health facilities (P < 0.05). The available supply of physicians and health promoters is lower than the ideal supply to deliver the guaranteed package of prevention and health promotion services. Policies must address the level and distribution of human resources in primary health facilities.
Ronald E. McRoberts
2014-01-01
Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...
Using respondent uncertainty to mitigate hypothetical bias in a stated choice experiment
Richard C. Ready; Patricia A. Champ; Jennifer L. Lawton
2010-01-01
In a choice experiment study, willingness to pay for a public good estimated from hypothetical choices was three times as large as willingness to pay estimated from choices requiring actual payment. This hypothetical bias was related to the stated level of certainty of respondents. We develop protocols to measure respondent certainty in the context of a choice...
We produced a scientifically defensible methodology to assess whether a regional system is on a sustainable path. The approach required readily available data, metrics applicable to the relevant scale, and results useful to decision makers. We initiated a pilot project to test ...
USDA-ARS?s Scientific Manuscript database
Accurate estimation of energy expenditure (EE) in children and adolescents is required for a better understanding of physiological, behavioral, and environmental factors affecting energy balance. Cross-sectional time series (CSTS) models, which account for correlation structure of repeated observati...
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
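For intuition only: with white noise, the (approximate) maximum likelihood estimate of a relative clock offset reduces to finding the lag that maximizes the cross-correlation of the two station records. The sketch below demonstrates that idea on synthetic data; the sample rate, noise level, and offset are arbitrary, and the actual VLBI processing chain is far more elaborate.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(3)
fs = 1.0e6                     # sampling rate (Hz), illustrative
n = 2**16
true_offset = 37.0e-6          # clock offset between the two stations (s), illustrative

# Both stations record the same broadband source, offset by their clock error,
# plus independent receiver noise.
src = rng.normal(size=n + 1000)
d = int(round(true_offset * fs))
x1 = src[:n] + 0.5 * rng.normal(size=n)
x2 = src[d:d + n] + 0.5 * rng.normal(size=n)

# For white noise, the approximate maximum likelihood offset estimate is the lag
# that maximizes the cross-correlation of the two records.
c = correlate(x1, x2, mode="full", method="fft")
lags = correlation_lags(len(x1), len(x2), mode="full")
est_offset = lags[np.argmax(c)] / fs

print(f"true offset {true_offset*1e6:.1f} us, estimated {est_offset*1e6:.1f} us")
```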
Contrasting Causal Effects of Workplace Interventions.
Izano, Monika A; Brown, Daniel M; Neophytou, Andreas M; Garcia, Erika; Eisen, Ellen A
2018-07-01
Occupational exposure guidelines are ideally based on estimated effects of static interventions that assign constant exposure over a working lifetime. Static effects are difficult to estimate when follow-up extends beyond employment because their identifiability requires additional assumptions. Effects of dynamic interventions that assign exposure while at work, allowing subjects to leave and become unexposed thereafter, are more easily identifiable but result in different estimates. Given the practical implications of exposure limits, we explored the drivers of the differences between static and dynamic interventions in a simulation study where workers could terminate employment because of an intermediate adverse health event that functions as a time-varying confounder. The two effect estimates became more similar with increasing strength of the health event and outcome relationship and with increasing time between health event and employment termination. Estimates were most dissimilar when the intermediate health event occurred early in employment, providing an effective screening mechanism.
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
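A minimal usage sketch following the workflow the toolbox documents (load data, build a hierarchical model, sample, summarize); the CSV file name and the 'condition' column used in depends_on are placeholders for an actual data set.

```python
import hddm

# Load response-time data; HDDM expects columns such as 'rt' (in seconds) and
# 'response' (0/1), plus optional condition columns. File name is a placeholder.
data = hddm.load_csv('my_experiment.csv')

# Hierarchical drift-diffusion model in which drift rate v varies by condition
# (the 'condition' column is an assumed example).
model = hddm.HDDM(data, depends_on={'v': 'condition'})
model.find_starting_values()   # optimize a starting point for MCMC
model.sample(2000, burn=200)   # draw posterior samples

model.print_stats()            # posterior summaries for all parameters
```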
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.
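Schematically, a temporal-kernel approach treats the temperature anomaly as a convolution of the forcing history with a temperature-response kernel and inverts that relation. The sketch below uses an invented two-timescale step-response kernel and a constant forcing trend purely to show the mechanics; the study's kernel is model-derived, and an inversion of real, noisy records would need regularization.

```python
import numpy as np

n, dt = 140, 1.0                      # 140 years, annual steps (illustrative)
t = (np.arange(n) + 1) * dt

# Invented two-timescale step-response kernel G(t): temperature response (K) to a
# sustained 1 W m^-2 forcing, standing in for the model-derived kernel of the study.
G = 0.4 * (1 - np.exp(-t / 4.0)) + 0.3 * (1 - np.exp(-t / 150.0))

# Forward model: T[i] = sum_{j<=i} G[i-j] * dF[j], i.e. temperature is the
# convolution of forcing increments dF with the step-response kernel.
A = np.zeros((n, n))
for i in range(n):
    A[i, :i + 1] = G[i::-1]

dF_true = np.full(n, 0.02)            # +0.02 W m^-2 per year forcing trend (illustrative)
T_obs = A @ dF_true                   # "observed" temperature anomaly (noise-free here)

# TKM-style inversion: solve the lower-triangular system for the forcing
# increments, then accumulate to obtain the forcing time series.
dF_est = np.linalg.solve(A, T_obs)
F_est = np.cumsum(dF_est)
print(np.allclose(F_est, np.cumsum(dF_true)))   # True
```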
Preliminary design of a mobile lunar power supply
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.; Kenny, Barbara H.; Fulmer, Christopher R.
1991-01-01
A preliminary design for a Stirling isotope power system for use as a mobile lunar power supply is presented. The performance and mass of the components required for the system are estimated. These estimates are based on power requirements and the operating environment. Optimization routines are used to determine minimum-mass operating points. Shielding requirements for the isotope system are given as a function of the allowed dose, distance from the source, and the time spent near the source. The technologies used in the power conversion and radiator systems are taken from ongoing research in the Civil Space Technology Initiative (CSTI) program.
William T. Simpson
2003-01-01
Heat sterilization is often required to prevent spread of insects and pathogens in wood products in international trade. Heat sterilization requires estimating the time necessary for the center of the wood configuration to reach the temperature required to kill insects or pathogens. In these experiments on 1.0- and 1.8-in.- (25- and 46-mm-) thick slash pine, heating...
Sun, Rui; Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto
2017-11-25
The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field representing a low density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior to positioning accuracy of GNSS only, traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collision.
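A stripped-down, one-dimensional illustration of the fusion idea described above: particles are propagated with an autoregressive motion model and weighted by GNSS-like position fixes. The AR coefficient, noise levels, and resampling scheme are assumptions for the sketch, not the paper's tuned filter (which also fuses heading and road-segment data).

```python
import numpy as np

rng = np.random.default_rng(4)
n_particles, n_steps, dt = 2000, 100, 0.1

# AR(1) motion model on velocity (illustrative coefficients); position integrates velocity.
phi, sigma_v, sigma_gnss = 0.95, 0.3, 0.5   # AR coefficient, process noise (m/s), GNSS noise (m)

# Simulate a "true" trajectory and noisy GNSS position fixes.
true_v = np.zeros(n_steps); true_x = np.zeros(n_steps)
for k in range(1, n_steps):
    true_v[k] = phi * true_v[k - 1] + rng.normal(0, sigma_v)
    true_x[k] = true_x[k - 1] + true_v[k] * dt
z = true_x + rng.normal(0, sigma_gnss, n_steps)

# Particle filter: propagate with the AR motion model, weight by the GNSS
# likelihood, resample, and take the weighted mean as the state estimate.
x = np.zeros(n_particles); v = np.zeros(n_particles)
estimates = []
for k in range(n_steps):
    v = phi * v + rng.normal(0, sigma_v, n_particles)
    x = x + v * dt
    w = np.exp(-0.5 * ((z[k] - x) / sigma_gnss) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * x))
    idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
    x, v = x[idx], v[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(f"position RMSE: {rmse:.2f} m")
```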
An Assessment of Global Organic Carbon Flux Along Continental Margins
NASA Technical Reports Server (NTRS)
Thunell, Robert
2004-01-01
This project was designed to use real-time and historical SeaWiFS and AVHRR data, and real-time MODIS data in order to estimate the global vertical carbon flux along continental margins. This required construction of an empirical model relating surface ocean color and physical variables like temperature and wind to vertical settling flux at sites co-located with sediment trap observations (Santa Barbara Basin, Cariaco Basin, Gulf of California, Hawaii, and Bermuda, etc), and application of the model to imagery in order to obtain spatially-weighted estimates.
SAMICS support study. Volume 1: Cost account catalog
NASA Technical Reports Server (NTRS)
1977-01-01
The Jet Propulsion Laboratory (JPL) is examining the feasibility of a new industry to produce photovoltaic solar energy collectors similar to those used on spacecraft. To do this, a standardized costing procedure was developed. The Solar Array Manufacturing Industry Costing Standards (SAMICS) support study supplies the following information: (1) SAMICS critique; (2) Standard data base--cost account structure, expense item costs, inflation rates, indirect requirements relationships, and standard financial parameter values; (3) Facilities capital cost estimating relationships; (4) Conceptual plant designs; (5) Construction lead times; (6) Production start-up times; (7) Manufacturing price estimates.
Calibrated tree priors for relaxed phylogenetics and divergence time estimation.
Heled, Joseph; Drummond, Alexei J
2012-01-01
The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.
Dimension reduction of frequency-based direct Granger causality measures on short time series.
Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris
2017-09-01
The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), the fit of a vector autoregressive (VAR) model on the multivariate time series is typically required. For short time series of many variables, the estimation of VAR may not be stable, requiring dimension reduction that results in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of the VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy to estimate brain networks reliably within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Currie, Nancy J.; Rochlis, Jennifer
2004-01-01
International Space Station (ISS) operations will require the on-board crew to perform numerous robotic-assisted assembly, maintenance, and inspection activities. Current estimates for some robotically performed maintenance timelines are disproportionate and potentially exceed crew availability and duty times. Ground-based control of the ISS robotic manipulators, specifically the Special Purpose Dexterous Manipulator (SPDM), is being examined as one potential solution to alleviate the excessive amounts of crew time required for extravehicular robotic maintenance and inspection tasks.
Data Compression With Application to Geo-Location
2010-08-01
wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of the energy and transmission latency. Traditional MSE and perceptual based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the
Determination of the Time-Space Magnetic Correlation Functions in the Solar Wind
NASA Astrophysics Data System (ADS)
Weygand, J. M.; Matthaeus, W. H.; Kivelson, M.; Dasso, S.
2013-12-01
Magnetic field data from many different intervals and 7 different solar wind spacecraft are employed to estimate the scale-dependent time decorrelation function of the interplanetary magnetic field in both the slow and fast solar wind. This estimation requires correlations varying with both space and time lags. The two-point correlation function with no time lag is determined by correlating time series data from multiple spacecraft separated in space and, for complete coverage of length scales, relies on many intervals with different spacecraft spatial separations. In addition, we employ single-spacecraft time-lagged correlations and two-spacecraft time-lagged correlations to access different spatial and temporal correlation data. Combining these data sets gives estimates of the scale-dependent time decorrelation function, which in principle tells us how rapidly time decorrelation occurs at a given wavelength. For static fields the scale-dependent time decorrelation function is trivially unity, but in turbulence the nonlinear cascade process induces time decorrelation at a given length scale that occurs more rapidly with decreasing scale. The scale-dependent time decorrelation function is valuable input to theories as well as various applications such as scattering, transport, and the study of predictability. It is also a fundamental element of formal turbulence theory. Our results are an extension of the Eulerian correlation functions estimated in Matthaeus et al. [2010] and Weygand et al. [2012; 2013].
Quantum metrology and estimation of Unruh effect
Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng
2014-01-01
We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and correspondingly set different ultimate limits of precision in the estimation of the Unruh effect. We find that the precision of the estimation can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, the energy gap of the detector has a range that can provide better precision. Thus we may adjust those parameters and attain a higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772
Methods of adjusting the stable estimates of fertility for the effects of mortality decline.
Abou-Gamrah, H
1976-03-01
The paper shows how stable population methods, based on the age structure and the rate of increase, may be used to estimate the demographic measures of a quasi-stable population. After a discussion of known methods for adjusting the stable estimates to allow for the effects of mortality decline, two new methods are presented, the application of which requires less information. The first method does not need any supplementary information, and the second method requires an estimate of the difference between the last two five-year intercensal rates of increase, i.e. five times the annual change of the rate of increase during the last ten years. For these new methods we do not need to know the onset year of mortality decline, as in the Coale-Demeny method, or a long series of rates of increase, as in Zachariah's method.
Introduction and application of the multiscale coefficient of variation analysis.
Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh
2017-10-01
Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
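A compact sketch of the MSCV idea as described above, using non-overlapping windows and synthetic inter-onset intervals; the windowing choice and example data are assumptions, and this is not the authors' MATLAB implementation.

```python
import numpy as np

def mscv(x, window_sizes):
    """Multiscale coefficient of variation: for each window size, the mean
    absolute distance between local CV estimates and the overall CV.
    A sketch of the idea described in the abstract, not the authors' code."""
    x = np.asarray(x, dtype=float)
    overall_cv = x.std(ddof=1) / x.mean()
    out = {}
    for w in window_sizes:
        starts = range(0, x.size - w + 1, w)   # non-overlapping windows (a choice)
        local_cv = np.array([x[s:s + w].std(ddof=1) / x[s:s + w].mean() for s in starts])
        out[w] = np.mean(np.abs(local_cv - overall_cv))
    return out

# Example: short series of inter-onset intervals (~500 ms), synthetic
rng = np.random.default_rng(5)
intervals = 0.5 + 0.05 * rng.standard_normal(64)
print(mscv(intervals, window_sizes=[4, 8, 16]))
```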
29 CFR 18.7 - Prehearing statements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... to reach stipulation to the fullest extent possible; (3) Facts in dispute; (4) Witnesses, except to... location of hearing and estimated time required for presentation of the party's or parties' case; (8) Any...
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
Gilaie-Dotan, Sharon; Ashkenazi, Hamutal; Dar, Reuven
2016-01-01
One of the main characteristics of obsessive-compulsive disorder (OCD) is the persistent feeling of uncertainty, affecting many domains of actions and feelings. It was recently hypothesized that OCD uncertainty is related to attenuated access to internal states. As supra-second timing is associated with bodily and interoceptive awareness, we examined whether supra-second timing would be associated with OC tendencies. We measured supra-second (~9 s) and sub-second (~450 ms) timing along with control non-temporal perceptual tasks in a group of 60 university students. Supra-second timing was measured either with fixed criterion tasks requiring to temporally discriminate between two predefined fixed interval durations (9 vs. 9.9 s), or with an open-ended task requiring to discriminate between 9 s and longer intervals which were of varying durations that were not a priori known to the participants. The open-ended task employed an adaptive Bayesian procedure that efficiently estimated the duration difference required to discriminate 9 s from longer intervals. We also assessed symptoms of OCD, depression, and anxiety. Open-ended supra-second temporal sensitivity was correlated with OC tendencies, as predicted (even after controlling for depression and anxiety), whereas the other tasks were not. Higher OC tendencies were associated with lower timing sensitivity to 9 s intervals such that participants with higher OC tendency scores required longer interval differences to discriminate 9 s from longer intervals. While these results need to be substantiated in future research, they suggest that open-ended timing tasks, as those encountered in real-life (e.g., estimating how long it would take to complete a task), might be adversely affected in OCD. PMID:27445725
The radial speed-expansion speed relation for Earth-directed CMEs
NASA Astrophysics Data System (ADS)
Mäkelä, P.; Gopalswamy, N.; Yashiro, S.
2016-05-01
Earth-directed coronal mass ejections (CMEs) are the main drivers of major geomagnetic storms. Therefore, a good estimate of the disturbance arrival time at Earth is required for space weather predictions. The STEREO and SOHO spacecraft were viewing the Sun in near quadrature during January 2010 to September 2012, providing a unique opportunity to study the radial speed (Vrad)-expansion speed (Vexp) relationship of Earth-directed CMEs. This relationship is useful in estimating the Vrad of Earth-directed CMEs, when they are observed from Earth view only. We selected 19 Earth-directed CMEs observed by the Large Angle and Spectrometric Coronagraph (LASCO)/C3 coronagraph on SOHO and the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI)/COR2 coronagraph on STEREO during January 2010 to September 2012. We found that of the three tested geometric CME models the full ice-cream cone model of the CME describes best the Vrad-Vexp relationship, as suggested by earlier investigations. We also tested the prediction accuracy of the empirical shock arrival (ESA) model proposed by Gopalswamy et al. (2005a), while estimating the CME propagation speeds from the CME expansion speeds. If we use STEREO observations to estimate the CME width required to calculate the Vrad from the Vexp measurements, the mean absolute error (MAE) of the shock arrival times of the ESA model is 8.4 h. If the LASCO measurements are used to estimate the CME width, the MAE still remains below 17 h. Therefore, by using the simple Vrad-Vexp relationship to estimate the Vrad of the Earth-directed CMEs, the ESA model is able to predict the shock arrival times with accuracy comparable to most other more complex models.
Agrillo, Christian; Piffer, Laura; Adriano, Andrea
2013-07-01
A significant debate surrounds the nature of the cognitive mechanisms involved in non-symbolic number estimation. Several studies have suggested the existence of the same cognitive system for estimation of time, space, and number, called "a theory of magnitude" (ATOM). In addition, researchers have proposed the theory that non-symbolic number abilities might support our mathematical skills. Despite the large number of studies carried out, no firm conclusions can be drawn on either topic. In the present study, we correlated the performance of adults on non-symbolic magnitude estimations and symbolic numerical tasks. Non-symbolic magnitude abilities were assessed by asking participants to estimate which auditory tone lasted longer (time), which line was longer (space), and which group of dots was more numerous (number). To assess symbolic numerical abilities, participants were required to perform mental calculations and mathematical reasoning. We found a positive correlation between non-symbolic and symbolic numerical abilities. On the other hand, no correlation was found among non-symbolic estimations of time, space, and number. Our study supports the idea that mathematical abilities rely on rudimentary numerical skills that predate verbal language. By contrast, the lack of correlation among non-symbolic estimations of time, space, and number is incompatible with the idea that these magnitudes are entirely processed by the same cognitive system.
Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E
2016-11-01
Branch lengths, measured in character changes, are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches is typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.
NWS Operational Requirements for Ensemble-Based Hydrologic Forecasts
NASA Astrophysics Data System (ADS)
Hartman, R. K.
2008-12-01
Ensemble-based hydrologic forecasts have been developed and issued by National Weather Service (NWS) staff at River Forecast Centers (RFCs) for many years. Used principally for long-range water supply forecasts, only the uncertainty associated with weather and climate have been traditionally considered. As technology and societal expectations of resource managers increase, the use and desire for risk-based decision support tools has also increased. These tools require forecast information that includes reliable uncertainty estimates across all time and space domains. The development of reliable uncertainty estimates associated with hydrologic forecasts is being actively pursued within the United States and internationally. This presentation will describe the challenges, components, and requirements for operational hydrologic ensemble-based forecasts from the perspective of a NOAA/NWS River Forecast Center.
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞ . This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L2 / D , where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
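In outline (hedged, following the general mean-action-time framework the abstract builds on rather than its exact estimator), the transition from the initial state to steady state at a point x is treated as a distribution in time whose raw moments give the response-time estimate; z below denotes a chosen quantile multiplier.

```latex
% Transition function at a point x, treated as a CDF in time:
F(t;\mathbf{x}) = 1 - \frac{c(\mathbf{x},t) - c_\infty(\mathbf{x})}{c_0(\mathbf{x}) - c_\infty(\mathbf{x})},
\qquad f(t;\mathbf{x}) = \frac{\partial F}{\partial t}.

% Raw temporal moments (integration by parts gives the second form):
M_k(\mathbf{x}) = \int_0^\infty t^k f(t;\mathbf{x})\,\mathrm{d}t
              = k \int_0^\infty t^{k-1}\bigl[1 - F(t;\mathbf{x})\bigr]\,\mathrm{d}t.

% Mean action time (k = 1) and a two-moment response-time estimate;
% higher raw moments M_3, ..., M_k sharpen the estimate of the tail of F:
T_{\mathrm{MAT}}(\mathbf{x}) = M_1(\mathbf{x}), \quad
\sigma^2(\mathbf{x}) = M_2(\mathbf{x}) - M_1(\mathbf{x})^2, \quad
t_{\mathrm{resp}}(\mathbf{x}) \approx M_1(\mathbf{x}) + z\,\sigma(\mathbf{x}).

% For flow in a homogeneous aquifer these quantities scale as L^2/D,
% consistent with the heuristic argument cited in the abstract.
```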
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k₃, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast-enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k₃. Furthermore, these linearized models are solved with a non-negative least squares algorithm, and together they provide other advantages, including: 1) there is only one possible solution and no choice of starting parameter values is required, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) computational time is significantly reduced. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k₃ estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k₃ from noisy dynamic PET data.
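Non-negative least squares is attractive here because it returns a single deterministic solution with no starting values. As a stand-in for the (unspecified) linearized DCE-CT/PET models, the sketch below fits a Patlak-style linearization of an irreversible compartment model, C_t(t) = Ki ∫C_p du + V0 C_p(t), with SciPy's NNLS; the input function and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.integrate import cumulative_trapezoid

# Illustrative Patlak-style linearization of an irreversible compartment model:
#   C_t(t) = Ki * int_0^t C_p(u) du + V0 * C_p(t),
# which is linear in the non-negative parameters (Ki, V0). This stands in for
# the linearized DCE-CT/PET models described in the abstract.
rng = np.random.default_rng(6)
t = np.linspace(0, 60, 121)                                # min
Cp = 5.0 * t * np.exp(-t / 2.0) + 0.5 * np.exp(-t / 40.0)  # arterial input (a.u.), illustrative
Ki_true, V0_true = 0.03, 0.15

int_Cp = cumulative_trapezoid(Cp, t, initial=0.0)
Ct = Ki_true * int_Cp + V0_true * Cp
Ct_noisy = Ct + rng.normal(0, 0.02 * Ct.max(), t.size)

# Non-negative least squares: a single deterministic solution, no starting values.
A = np.column_stack([int_Cp, Cp])
params, residual = nnls(A, Ct_noisy)
print(f"Ki = {params[0]:.4f} /min, V0 = {params[1]:.3f}  (true: {Ki_true}, {V0_true})")
```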
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-07
... material and Government- furnished property are subject to serialized item management. DATES: Comments on... item management (serially-managed items). At this time, DoD is unable to estimate the number of small... collection requirements that require the approval of the Office of Management and Budget under 44 U.S.C. 3501...
ERIC Educational Resources Information Center
Vaughn, Janet L.
The pricing of household work can be based on standardized times established for component parts of the job. Techniques for determining these standardized times and the component parts were developed in a study conducted at Purdue University and supported by a federal grant. After a preliminary survey of homemaker practices in cleaning living…
2010 Army Modernization Strategy
2010-01-01
Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions...Science and Technology (S&T) Program, and shortening the time between requirement identification and solution delivery. • Continuously modernize equipment...available, as quickly as possible, so they can succeed anywhere, every time. Our Soldiers deserve nothing less. Army Strong! U.S. Soldiers engage enemy
76 FR 45799 - Agency Information Collection Activities; Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... the 2005 survey so that the results from it can be used as a baseline for a time-series analysis.... 15 minutes to complete the pretest, the same time as that needed for the actual survey. The revised estimate takes further into account the presumed added time required to respond to questions unique to the...
Basis for the power supply reliability study of the 1 MW neutron source
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, D.G.; Fathizadeh, M.
1993-07-01
The Intense Pulsed Neutron Source (IPNS) upgrade to 1 MW requires new power supply designs. This paper describes the tools and the methodology needed to assess the reliability of the power supplies. Both the design and operation of the power supplies in the synchrotron will be taken into account. To develop a reliability budget, the experiments to be conducted with this accelerator are reviewed, and data is collected on the number and duration of interruptions possible before an experiment is required to start over. Once the budget is established, several accelerators of this type will be examined. The budget is allocated to the different accelerator systems based on their operating experience. The accelerator data is usually in terms of machine availability and system down time. It takes into account mean time to failure (MTTF), time to diagnose, time to repair or replace the failed components, and time to get the machine back online. These estimated times are used as baselines for the design. Even though we are in the early stage of design, available data can be analyzed to estimate the MTTF for the power supplies.
From Air Temperature to Lake Evaporation on a Daily Time Step: A New Empirical Approach
NASA Astrophysics Data System (ADS)
Welch, C.; Holmes, T. L.; Stadnyk, T. A.
2016-12-01
Lake evaporation is a key component of the water balance in much of Canada due to the vast surface area covered by open water. Hence, incorporating this flux effectively into hydrological simulation frameworks is essential to effective water management. Inclusion has historically been limited by the intensive data required to apply the energy budget methods previously demonstrated to most effectively capture the timing and volume of the evaporative flux. Widespread, consistent, lake water temperature and net radiation data are not available across much of Canada, particularly the sparsely populated boreal shield. We present a method to estimate lake evaporation on a daily time step that consists of a series of empirical equations applicable to lakes of widely varying morphologies. Specifically, estimation methods that require the single meteorological variable of air temperature are presented for lake water temperature, net radiation, and heat flux. The methods were developed using measured data collected at two small Boreal shield lakes, Lake Winnipeg North and South basins, and Lake Superior in 2008 and 2009. The mean average error (MAE) of the lake water temperature estimates is generally 1.5°C, and the MAE of the heat flux method is 50 W m-2. The simulated values are combined to estimate daily lake evaporation using the Priestley-Taylor method. Heat storage within the lake is tracked and limits the potential heat flux from a lake. Five-day running averages compare well to measured evaporation at the two small shield lakes (Bowen Ratio Energy Balance) and adequately to Lake Superior (eddy covariance). In addition to air temperature, the method requires a mean depth for each lake. The method demonstrably improves the timing and volume of evaporative flux in comparison to existing evaporation methods that depend only on temperature. The method will be further tested in a semi-distributed hydrological model to assess the cumulative effects across a lake-dominated catchment in the Lower Nelson River basin.
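The abstract combines empirically estimated net radiation and heat flux with the Priestley-Taylor equation. A minimal sketch of that final step is given below, assuming the standard FAO-style expressions for the saturation vapour pressure slope and psychrometric constant; the empirical sub-models that derive Rn and G from air temperature alone are not reproduced, and all input values are illustrative.

```python
import math

# Minimal sketch of the Priestley-Taylor step: given daily net radiation Rn and
# lake heat flux/storage change G (both MJ m-2 d-1) and air temperature T (deg C),
# compute open-water evaporation in mm/day.  Values below are illustrative.

def priestley_taylor_evaporation(rn, g, t_air_c, alpha=1.26):
    """Daily evaporation (mm/day) from the Priestley-Taylor equation."""
    gamma = 0.066                                   # psychrometric constant, kPa/degC
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))   # sat. vapour pressure, kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2    # slope of sat. vapour curve, kPa/degC
    lam = 2.45                                      # latent heat of vaporization, MJ/kg
    latent_flux = alpha * delta / (delta + gamma) * (rn - g)       # MJ m-2 d-1
    return max(latent_flux, 0.0) / lam              # mm/day (1 kg m-2 = 1 mm)

print(priestley_taylor_evaporation(rn=14.0, g=4.0, t_air_c=18.0))  # about 3 mm/day
```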
"Flash" dance: how speed modulates percieved duration in dancers and non-dancers.
Sgouramani, Helena; Vatakis, Argiro
2014-03-01
Speed has been proposed as a modulating factor on duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise in spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience on time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was true for the production task. Dancers were significantly less variable in their time estimations as compared to non-dancers. Speed and experience, therefore, affect participants' estimates of time. Results are discussed in relation to the theoretical framework of current models by focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.
Burr, Tom; Hamada, Michael S.; Howell, John; ...
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Gary E.; Carlson, Thomas J.; Skalski, John R.
2010-12-21
The purpose of this compliance study was to estimate dam passage survival of subyearling Chinook salmon smolts at The Dalles Dam during summer 2010. Under the 2008 Federal Columbia River Power System (FCRPS) Biological Opinion (BiOp), dam passage survival should be greater than or equal to 0.93 and estimated with a standard error (SE) less than or equal to 0.015. The study also estimated smolt passage survival from the forebay 2 km upstream of the dam to the tailrace 2 km below the dam, as well as the forebay residence time, tailrace egress time, and spill passage efficiency, as required in the Columbia Basin Fish Accords; the forebay-to-tailrace survival estimate satisfies the "BRZ-to-BRZ" survival estimate called for in the Fish Accords. The estimate of dam survival for subyearling Chinook salmon at The Dalles in 2010 was 0.9404 with an associated standard error of 0.0091.
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2011 CFR
2011-01-01
... purpose of the experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a...
46 CFR 502.95 - Prehearing statements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the party or parties have communicated or conferred in a good faith effort to reach stipulation to the... location of hearing and estimated time required for presentation of the party's or parties' case. (8) Any...
77 FR 67026 - Proposed Extension of the Approval of Information Collection Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-08
.... Estimated Time per Response: 30-45 minutes. Frequency: On occasion. Total Burden Cost (capital/startup): $3996. Total Burden Costs (operation/maintenance): $54,732. Dated: October 31, 2012. Mary Ziegler...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Management Reports, the NASA form 533 series, is required on cost-type, price redetermination, and fixed... implemented in the contract based on the estimated final contract value at the time of award. [62 FR 14017...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2014 CFR
2014-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2012 CFR
2012-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
14 CFR 21.193 - Experimental certificates: general.
Code of Federal Regulations, 2013 CFR
2013-01-01
... experiment; (2) The estimated time or number of flights required for the experiment; (3) The areas over which the experiment will be conducted; and (4) Except for aircraft converted from a previously certificated...
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
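Since the protocol estimates and predicts error rates with a Gaussian process fitted to past error-correction data, a minimal sketch of that regression step follows. It uses a generic RBF kernel and synthetic error-rate observations; the kernel choice, hyperparameters, and data are assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch: Gaussian-process regression with an RBF kernel used to smooth
# and extrapolate a time series of error-rate estimates obtained from past
# error-correction rounds.  Kernel settings and data are illustrative.
def rbf(a, b, length=5.0, amp=1e-3):
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(1)
t_obs = np.arange(0.0, 50.0, 2.0)                       # rounds at which rates were estimated
true_rate = 1e-2 * (1.0 + 0.3 * np.sin(t_obs / 8.0))    # hypothetical drifting error rate
y_obs = true_rate + rng.normal(0.0, 5e-4, t_obs.size)   # noisy estimates

t_new = np.arange(0.0, 60.0, 1.0)                       # includes a short prediction horizon
K = rbf(t_obs, t_obs) + 1e-6 * np.eye(t_obs.size)       # kernel matrix + noise term
Ks = rbf(t_new, t_obs)
alpha = np.linalg.solve(K, y_obs - y_obs.mean())
mean_pred = y_obs.mean() + Ks @ alpha                   # posterior mean of the error rate
print(mean_pred[-5:])                                   # predicted rates beyond the data
```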
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
NASA Technical Reports Server (NTRS)
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's law coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could theoretically be carried out at higher pressures and elevated temperatures.
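The estimation chain described above is simple enough to show as arithmetic. The sketch below assumes illustrative Ostwald solubility coefficients and an illustrative measured oxygen partial pressure; it is not calibrated to any particular hydraulic fluid.

```python
# Minimal arithmetic sketch of the estimation chain described above.  The
# solubility coefficients and the measured oxygen partial pressure are
# illustrative placeholders, not values from the cited work.
P_ATM = 101.325            # kPa
p_o2_measured = 18.0       # kPa, from the luminescent-quenching oxygen sensor
L_o2, L_n2 = 0.31, 0.15    # hypothetical Ostwald coefficients (vol gas / vol fluid at 1 atm)

# Volume of dissolved O2 per unit fluid volume, scaled by the measured partial pressure.
v_o2 = L_o2 * (p_o2_measured / P_ATM)

# Dissolved N2 estimated by assuming the N2:O2 ratio in solution equals the ratio
# of their solubilities at atmospheric composition (approx. 79% N2, 21% O2).
v_n2 = v_o2 * (L_n2 * 0.79) / (L_o2 * 0.21)

total_air_percent = 100.0 * (v_o2 + v_n2)
print(f"estimated dissolved air content: {total_air_percent:.1f} % by volume")
```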
Influence of mobile phone traffic on base station exposure of the general public.
Joseph, Wout; Verloock, Leen
2010-11-01
The influence of mobile phone traffic on temporal radiofrequency exposure due to base stations during 7 d is compared for five different sites with Erlang data (representing average mobile phone traffic intensity during a period of time). The time periods of high exposure and high traffic during a day are compared and good agreement is obtained. The minimal required measurement periods to obtain accurate estimates for maximal and average long-period exposure (7 d) are determined. It is shown that these periods may be very long, indicating the necessity of new methodologies to estimate maximal and average exposure from short-period measurement data. Therefore, a new method to calculate the fields at a time instant from fields at another time instant using normalized Erlang values is proposed. This enables the estimation of maximal and average exposure during a week from short-period measurements using only Erlang data and avoids the necessity of long measurement times.
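The proposed rescaling of fields between time instants using normalized Erlang values can be sketched as follows, under the assumption that radiated power scales linearly with traffic so the field scales with its square root; the exact normalization used by the authors may differ.

```python
import math

# Minimal sketch (assumption, not necessarily the authors' exact formula): if
# radiated base-station power scales with normalized traffic (Erlang values),
# the E-field measured at time t1 can be rescaled to another instant t2 by the
# square root of the traffic ratio.
def field_at_other_instant(e_t1, erlang_t1, erlang_t2):
    """Estimate the field at t2 (V/m) from a short measurement at t1."""
    return e_t1 * math.sqrt(erlang_t2 / erlang_t1)

e_measured = 0.8   # V/m, short-period measurement at t1 (illustrative)
print(field_at_other_instant(e_measured, erlang_t1=0.4, erlang_t2=1.0))  # busiest-hour estimate
```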
Thinking aloud influences perceived time.
Hertzum, Morten; Holmegaard, Kristin Due
2015-02-01
We investigate whether thinking aloud influences perceived time. Thinking aloud is widely used in usability evaluation, yet it is debated whether thinking aloud influences thought and behavior. If thinking aloud is restricted to the verbalization of information to which a person is already attending, there is evidence that thinking aloud does not influence thought and behavior. In an experiment, 16 thinking-aloud participants and 16 control participants solved a code-breaking task 24 times each. Participants estimated task duration. The 24 trials involved two levels of time constraint (timed, untimed) and resulted in two levels of success (solved, unsolved). The ratio of perceived time to clock time was lower for thinking-aloud than control participants. Participants overestimated time by an average of 47% (thinking aloud) and 94% (control). The effect of thinking aloud on time perception also held separately for timed, untimed, solved, and unsolved trials. Thinking aloud (verbalization at Levels 1 and 2) influences perceived time. Possible explanations of this effect include that thinking aloud may require attention, cause a processing shift that overshadows the perception of time, or increase mental workload. For usability evaluation, this study implies that time estimates made while thinking aloud cannot be compared with time estimates made while not thinking aloud, that ratings of systems experienced while thinking aloud may be inaccurate (because the experience of time influences other experiences), and that it may therefore be considered to replace concurrent thinking aloud with retrospective thinking aloud when evaluations involve time estimation.
Application of expert systems in project management decision aiding
NASA Technical Reports Server (NTRS)
Harris, Regina; Shaffer, Steven; Stokes, James; Goldstein, David
1987-01-01
The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), which is one of the highest-resolution localization methods. When using MUSIC, a computation of the eigenvectors of the correlation matrix is required for the estimation, which often incurs a high computational cost. Especially in the situation of a moving source, this becomes a crucial drawback because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies its characteristics due to the spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) is applied for sequentially estimating the eigenvectors spanning the subspace. In the PAST, the eigen-decomposition is not required, and therefore it is possible to reduce the computational costs. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
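For readers unfamiliar with PAST, a minimal sketch of its recursion is given below (without the symmetrization step sometimes applied to the inverse-correlation matrix). The array geometry, number of sources, and snapshots are synthetic placeholders, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of the PAST (Projection Approximation Subspace Tracking)
# recursion for tracking an r-dimensional signal subspace without explicit
# eigen-decomposition.  Microphone-array specifics (steering vectors, the MUSIC
# spectrum itself) are omitted; the snapshots below are synthetic.
def past_update(W, P, x, beta=0.97):
    """One PAST iteration: returns updated subspace basis W and inverse-correlation P."""
    y = W.conj().T @ x                 # project snapshot onto current subspace
    h = P @ y
    g = h / (beta + y.conj().T @ h)    # gain vector
    P = (P - np.outer(g, h.conj())) / beta
    e = x - W @ y                      # projection error
    W = W + np.outer(e, g.conj())
    return W, P

rng = np.random.default_rng(0)
n, r = 8, 2                            # sensors, assumed number of sources
W = np.linalg.qr(rng.standard_normal((n, r)))[0].astype(complex)
P = np.eye(r, dtype=complex)
for _ in range(500):                   # synthetic snapshots from a fixed 2-D subspace + noise
    s = rng.standard_normal(r) + 1j * rng.standard_normal(r)
    basis = np.vander(np.exp(1j * np.array([0.5, 1.3])), n, increasing=True).T
    x = basis @ s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    W, P = past_update(W, P, x)
print("tracked subspace basis shape:", W.shape)
```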
Fitzgerald, Paul J
2014-07-01
It is of high clinical interest to better understand the timecourse through which psychiatric drugs produce their beneficial effects. While a rough estimate of the time lag between initiating monoaminergic antidepressant therapy and the onset of therapeutic effect in depressed subjects is two weeks, much less is known about when these drugs reach maximum effect. This paper briefly examines studies that directly address this question through long-term antidepressant administration to humans, while also putting forth a simple theoretical approach for estimating the time required for monoaminergic antidepressants to reach maximum therapeutic effect in humans. The theory invokes a comparison between the speed of antidepressant drug response in humans and in rodents, focusing on the apparently greater speed in rodents. The principal argument is one of proportions, comparing the earliest effects of these drugs in rodents and humans with their times to reach maximum effect in these organisms. If the proportionality hypothesis is even coarsely accurate, applying these values, or their approximate ranges, to the hypothesis suggests that monoaminergic antidepressants may require a number of years to reach maximum effect in humans, at least in some individuals.
NASA Astrophysics Data System (ADS)
Zapata, D.; Salazar, M.; Chaves, B.; Keller, M.; Hoogenboom, G.
2015-12-01
Thermal time models have been used to predict the development of many different species, including grapevine (Vitis vinifera L.). These models normally assume that there is a linear relationship between temperature and plant development. The goal of this study was to estimate the base temperature and duration in terms of thermal time for predicting veraison for four grapevine cultivars. Historical phenological data for four cultivars that were collected in the Pacific Northwest were used to develop the thermal time model. Base temperatures (Tb) of 0 and 10 °C and the best estimated Tb using three different methods were evaluated for predicting veraison in grapevine. Thermal time requirements for each individual cultivar were evaluated through analysis of variance, and means were compared using Fisher's test. The methods that were applied to estimate Tb for the development of wine grapes included the least standard deviation in heat units, the regression coefficient, and the development rate method. The estimated Tb varied among methods and cultivars. The development rate method provided the lowest Tb values for all cultivars. For the three methods, Chardonnay had the lowest Tb, ranging from 8.7 to 10.7 °C, while the highest Tb values were obtained for Riesling and Cabernet Sauvignon with 11.8 and 12.8 °C, respectively. Thermal time also differed among cultivars, whether the fixed or the estimated Tb was used. Predictions of the beginning of ripening with the estimated base temperature resulted in the lowest variation in real days when compared with predictions using Tb = 0 or 10 °C, regardless of the method used to estimate Tb.
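The thermal-time bookkeeping underlying such models can be sketched as a simple degree-day accumulation. In the sketch below, the base temperature, the 1250 degree-day requirement, and the synthetic season are illustrative values, not the estimates reported in the study.

```python
# Minimal sketch of the thermal-time bookkeeping assumed above: accumulate daily
# growing degree-days above a base temperature Tb and predict veraison when a
# cultivar-specific thermal-time requirement is reached.
def degree_days(tmax, tmin, tb):
    """Daily thermal time: mean temperature above the base, floored at zero."""
    return max((tmax + tmin) / 2.0 - tb, 0.0)

def predict_veraison_day(daily_tmax, daily_tmin, tb=10.0, requirement=1250.0):
    total = 0.0
    for day, (tx, tn) in enumerate(zip(daily_tmax, daily_tmin), start=1):
        total += degree_days(tx, tn, tb)
        if total >= requirement:
            return day
    return None   # requirement not met within the supplied record

# Illustrative season: warms from spring into summer.
tmax = [15 + 0.12 * d for d in range(200)]
tmin = [5 + 0.10 * d for d in range(200)]
print(predict_veraison_day(tmax, tmin))
```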
Application of troposphere model from NWP and GNSS data into real-time precise positioning
NASA Astrophysics Data System (ADS)
Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw
2016-04-01
The tropospheric delay empirical models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model into the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing mode and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and ZTDs from ground-based GNSS stations, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
Electromagnetic Characterization of Inhomogeneous Media
2012-03-22
Engineering and Management Air Force Institute of Technology Air University Air Education and Training Command In Partial Fulfillment of the Requirements...found in the laboratory data, fun is the code that contains the theoretical formulation of S11, and beta0 is the initial constitutive parameter estimate...collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources
Does Your Cohort Matter? Measuring Peer Effects in College Achievement. NBER Working Paper No. 14032
ERIC Educational Resources Information Center
Carrell, Scott E.; Fullerton, Richard L.; West, James E.
2008-01-01
To estimate peer effects in college achievement we exploit a unique dataset in which individuals have been exogenously assigned to peer groups of about 30 students with whom they are required to spend the majority of their time interacting. This feature enables us to estimate peer effects that are more comparable to changing the entire cohort of…
Ying Ouyang
2012-01-01
Understanding the dynamics of naturally occurring dissolved organic carbon (DOC) in a river is central to estimating surface water quality, aquatic carbon cycling, and global climate change. Currently, determination of the DOC in surface water is primarily accomplished by manually collecting samples for laboratory analysis, which requires at least 24 h. In other words...
Fumiaki Funahashi; Jennifer L. Parke
2017-01-01
Soil solarization has been shown to be an effective tool to manage Phytophthora spp. within surface soils, but the minimum time required to complete local eradication under variable weather conditions remains difficult to estimate. A mathematical model could help predict the effectiveness of solarization at different sites and soil depths....
Toward inventory-based estimates of soil organic carbon in forests of the United States
G.M. Domke; C.H. Perry; B.F. Walters; L.E. Nave; C.W. Woodall; C.W. Swanston
2017-01-01
Soil organic carbon (SOC) is the largest terrestrial carbon (C) sink on Earth; this pool plays a critical role in ecosystem processes and climate change. Given the cost and time required to measure SOC, and particularly changes in SOC, many signatory nations to the United Nations Framework Convention on Climate Change report estimates of SOC stocks and stock changes...
Bianca N. I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2008-01-01
Many growth and yield simulators require a stand table or tree-list to set the initial condition for projections in time. Most similar neighbour (MSN) approaches can be used for estimating stand tables from information commonly available on forest cover maps (e.g. height, volume, canopy cover, and species composition). Simulations were used to compare MSN (using an...
NASA Astrophysics Data System (ADS)
Hamidi, Mohammadreza; Shahanaghi, Kamran; Jabbarzadeh, Armin; Jahani, Ehsan; Pousti, Zahra
2017-12-01
In every production plant, it is necessary to have an estimate of the production level. Sometimes there are many parameters that affect this estimate. In this paper, we seek an appropriate estimate of the production level for an industrial factory called Barez in an uncertain environment. We consider a part of the production line that has different production times for different kinds of products, which introduces both environmental and system uncertainty. To solve the problem, we simulate the line; because of the uncertainty in the production times, fuzzy simulation is used. The required fuzzy numbers are estimated using the bootstrap technique. The results have been used in the production planning process by factory experts and have produced satisfactory outcomes; the experts' opinions on the efficiency of this methodology are also reported.
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we developed the first error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach, where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
Methodologies for Adaptive Flight Envelope Estimation and Protection
NASA Technical Reports Server (NTRS)
Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine
2009-01-01
This paper reports the latest development of several techniques for an adaptive flight envelope estimation and protection system for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft and flight envelope estimation, real-time decision support can be executed autonomously for improving damage tolerance and flight recoverability. Particularly, a bank of adaptive nonlinear fault detection and isolation estimators were developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitation of the impaired aircraft; online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated on NASA's Generic Transport Model (GTM) simulation environments with simulated actuator faults. Simulation results and remarks on future work are presented.
Duke, Lori J; Staton, April G; McCullough, Elizabeth S; Jain, Rahul; Miller, Mindi S; Lynn Stevenson, T; Fetterman, James W; Lynn Parham, R; Sheffield, Melody C; Unterwagner, Whitney L; McDuffie, Charles H
2012-04-10
To document the annual number of advanced pharmacy practice experience (APPE) placement changes for students across 5 colleges and schools of pharmacy, identify and compare initiating reasons, and estimate the associated administrative workload. Data collection occurred from finalization of the 2008-2009 APPE assignments through the last date of the APPE schedule. Internet-based customized tracking forms were used to categorize the initiating reason for the placement change and the administrative time required per change (0 to 120 minutes). APPE placement changes per institution varied from 14% to 53% of total assignments. Reasons for changes were: administrator initiated (20%), student initiated (23%), and site/preceptor initiated (57%). Total administrative time required per change varied across institutions from 3,130 to 22,750 minutes, while the average time per reassignment was 42.5 minutes. APPE placements are subject to high instability. Significant differences exist between public and private colleges and schools of pharmacy as to the number and type of APPE reassignments made and associated workload estimates.
Quintero, Ignacio; Wiens, John J
2013-08-01
A key question in predicting responses to anthropogenic climate change is: how quickly can species adapt to different climatic conditions? Here, we take a phylogenetic approach to this question. We use 17 time-calibrated phylogenies representing the major tetrapod clades (amphibians, birds, crocodilians, mammals, squamates, turtles) and climatic data from distributions of > 500 extant species. We estimate rates of change based on differences in climatic variables between sister species and estimated times of their splitting. We compare these rates to predicted rates of climate change from 2000 to 2100. Our results are striking: matching projected changes for 2100 would require rates of niche evolution that are > 10,000 times faster than rates typically observed among species, for most variables and clades. Despite many caveats, our results suggest that adaptation to projected changes in the next 100 years would require rates that are largely unprecedented based on observed rates among vertebrate species. © 2013 John Wiley & Sons Ltd/CNRS.
Garcia, Jordan A; Mistry, Bipin; Hardy, Stephen; Fracchia, Mary Shannon; Hersh, Cheryl; Wentland, Carissa; Vadakekalam, Joseph; Kaplan, Robert; Hartnick, Christopher J
2017-09-01
Providing high-value healthcare to patients is increasingly becoming an objective for providers, including those at multidisciplinary aerodigestive centers. Measuring value has two components: 1) identify relevant health outcomes and 2) determine relevant treatment costs. Via their inherent structure, multidisciplinary care units consolidate care for complex patients. However, their potential impact on decreasing healthcare costs is less clear. The goal of this study was to estimate the potential cost savings of treating patients with laryngeal clefts at multidisciplinary aerodigestive centers. Retrospective chart review. Time-driven activity-based costing was used to estimate the cost of care for patients with laryngeal cleft seen between 2008 and 2013 at the Massachusetts Eye and Ear Infirmary Pediatric Aerodigestive Center. Retrospective chart review was performed to identify clinic utilization by patients as well as patient diet outcomes after treatment. Patients were stratified into neurologically complex and neurologically noncomplex groups. The cost of care for patients requiring surgical intervention was five and three times the cost of care for patients not requiring surgery, for neurologically noncomplex and complex patients, respectively. Following treatment, 50% and 55% of complex and noncomplex patients returned to a normal diet, whereas 83% and 87% of patients experienced improved diets, respectively. Additionally, multidisciplinary team-based care for children with laryngeal clefts potentially achieves 20% to 40% cost savings. These findings demonstrate how time-driven activity-based costing can be used to estimate and compare patient costs in multidisciplinary aerodigestive centers. 2c. Laryngoscope, 127:2152-2158, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Vero, S E; Ibrahim, T G; Creamer, R E; Grant, J; Healy, M G; Henry, T; Kramers, G; Richards, K G; Fenton, O
2014-12-01
The true efficacy of a programme of agricultural mitigation measures within a catchment to improve water quality can be determined only after a certain hydrologic time lag period (subsequent to implementation) has elapsed. As the biophysical response to policy is not synchronous, accurate estimates of total time lag (unsaturated and saturated) become critical to manage the expectations of policy makers. The estimation of the vertical unsaturated zone component of time lag is vital as it indicates early trends (initial breakthrough), bulk (centre of mass) and total (Exit) travel times. Typically, estimation of time lag through the unsaturated zone is poor, due to the lack of site specific soil physical data, or by assuming saturated conditions. Numerical models (e.g. Hydrus 1D) enable estimates of time lag with varied levels of input data. The current study examines the consequences of varied soil hydraulic and meteorological complexity on unsaturated zone time lag estimates using simulated and actual soil profiles. Results indicated that: greater temporal resolution (from daily to hourly) of meteorological data was more critical as the saturated hydraulic conductivity of the soil decreased; high clay content soils failed to converge reflecting prevalence of lateral component as a contaminant pathway; elucidation of soil hydraulic properties was influenced by the complexity of soil physical data employed (textural menu, ROSETTA, full and partial soil water characteristic curves), which consequently affected time lag ranges; as the importance of the unsaturated zone increases with respect to total travel times the requirements for high complexity/resolution input data become greater. The methodology presented herein demonstrates that decisions made regarding input data and landscape position will have consequences for the estimated range of vertical travel times. Insufficiencies or inaccuracies regarding such input data can therefore mislead policy makers regarding the achievability of water quality targets. Copyright © 2014 Elsevier B.V. All rights reserved.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
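One common capacity estimator for the one-shot single-probe task is Cowan's K; whether this exact formula is the estimator used in the cited experiments is an assumption, so the sketch below is only illustrative of how such a K is computed.

```python
# Minimal sketch of a standard capacity estimator for the one-shot, single-probe
# change detection task (Cowan's K).  H is the hit rate (change trials correctly
# detected), FA the false-alarm rate, N the set size.
def cowan_k(hit_rate, false_alarm_rate, set_size):
    return set_size * (hit_rate - false_alarm_rate)

print(cowan_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=4))   # K = 2.8
```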
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
... may be submitted to: DHS, USCIS, Office of Policy and Strategy, Chief, Regulatory Coordination... estimates on the burden in terms of time and money incurred by applicants for the following aspects of this... service. The average time required and money expended to secure secondary evidence such as an affidavit...
For Mole Problems, Call Avogadro: 602-1023.
ERIC Educational Resources Information Center
Uthe, R. E.
2002-01-01
Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Smith, Mark S.; Morelli, Eugene A.
2003-01-01
Near real-time stability and control derivative extraction is required to support flight demonstration of Intelligent Flight Control System (IFCS) concepts being developed by NASA, academia, and industry. Traditionally, flight maneuvers would be designed and flown to obtain stability and control derivative estimates using a postflight analysis technique. The goal of the IFCS concept is to be able to modify the control laws in real time for an aircraft that has been damaged in flight. In some IFCS implementations, real-time parameter identification (PID) of the stability and control derivatives of the damaged aircraft is necessary for successfully reconfiguring the control system. This report investigates the usefulness of Prescribed Simultaneous Independent Surface Excitations (PreSISE) to provide data for rapidly obtaining estimates of the stability and control derivatives. Flight test data were analyzed using both equation-error and output-error PID techniques. The equation-error PID technique is known as Fourier Transform Regression (FTR) and is a frequency-domain real-time implementation. Selected results were compared with a time-domain output-error technique. The real-time equation-error technique combined with the PreSISE maneuvers provided excellent derivative estimation in the longitudinal axis. However, the PreSISE maneuvers as presently defined were not adequate for accurate estimation of the lateral-directional derivatives.
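The equation-error idea behind such real-time parameter identification can be illustrated with a much simpler time-domain regression than the frequency-domain Fourier Transform Regression used in the report: the measured state derivative is regressed directly onto states and controls, so the stability and control derivatives enter linearly. The one-degree-of-freedom pitch model and all numbers below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of equation-error parameter identification: regress the
# (numerically differentiated) pitch rate onto the state and control, so the
# derivatives appear linearly and least squares applies.  This is a simplified
# time-domain analogue of the frequency-domain FTR mentioned above.
rng = np.random.default_rng(2)
dt, n = 0.02, 2000
Mq_true, Mde_true = -1.2, -4.5            # "true" pitch damping and elevator effectiveness

q = np.zeros(n)
de = 0.05 * np.sign(np.sin(2.0 * np.pi * 0.5 * np.arange(n) * dt))   # square-wave excitation
for k in range(n - 1):                     # simulate q_dot = Mq*q + Mde*de
    q[k + 1] = q[k] + dt * (Mq_true * q[k] + Mde_true * de[k])

q_meas = q + rng.normal(0.0, 1e-3, n)      # noisy measurements
q_dot = np.gradient(q_meas, dt)            # numerically differentiated pitch rate

X = np.column_stack([q_meas, de])          # regressors
theta, *_ = np.linalg.lstsq(X, q_dot, rcond=None)
print("estimated [Mq, Mde]:", theta)       # should be close to [-1.2, -4.5]
```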
Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells
NASA Astrophysics Data System (ADS)
Zimmerman, A. H.
1987-09-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Validation of a Formula for Assigning Continuing Education Credit to Printed Home Study Courses
Hanson, Alan L.
2007-01-01
Objectives To reevaluate and validate the use of a formula for calculating the amount of continuing education credit to be awarded for printed home study courses. Methods Ten home study courses were selected for inclusion in a study to validate the formula, which is based on the number of words, number of final examination questions, and estimated difficulty level of the course. The amount of estimated credit calculated using the a priori formula was compared to the average amount of time required to complete each article based on pharmacists' self-reporting. Results A strong positive relationship between the amount of time required to complete the home study courses based on the a priori calculation and the times reported by pharmacists completing the 10 courses was found (p < 0.001). The correlation accounted for 86.2% of the total variability in the average pharmacist reported completion times (p < 0.001). Conclusions The formula offers an efficient and accurate means of determining the amount of continuing education credit that should be assigned to printed home study courses. PMID:19503705
Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells
NASA Technical Reports Server (NTRS)
Zimmerman, A. H.
1987-01-01
The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.
Time estimation by patients with frontal lesions and by Korsakoff amnesics.
Mimura, M; Kinsbourne, M; O'Connor, M
2000-07-01
We studied time estimation in patients with frontal damage (F) and alcoholic Korsakoff (K) patients in order to differentiate between the contributions of working memory and episodic memory to temporal cognition. In Experiment 1, F and K patients estimated time intervals between 10 and 120 s less accurately than matched normal and alcoholic control subjects. F patients were less accurate than K patients at short (< 1 min) time intervals whereas K patients increasingly underestimated durations as intervals grew longer. F patients overestimated short intervals in inverse proportion to their performance on the Wisconsin Card Sorting Test. As intervals grew longer, overestimation yielded to underestimation for F patients. Experiment 2 involved time estimation while counting at a subjective 1/s rate. F patients' subjective tempo, though relatively rapid, did not fully explain their overestimation of short intervals. In Experiment 3, participants produced predetermined time intervals by depressing a mouse key. K patients underproduced longer intervals. F patients produced comparably to normal participants, but were extremely variable. Findings suggest that both working memory and episodic memory play an individual role in temporal cognition. Turnover within a short-term working memory buffer provides a metric for temporal decisions. The depleted working memory that typically attends frontal dysfunction may result in quicker turnover, and this may inflate subjective duration. On the other hand, temporal estimation beyond 30 s requires episodic remembering, and this puts K patients at a disadvantage.
Koltun, G.F.
2001-01-01
This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.
Vitamin D Requirements for the Future-Lessons Learned and Charting a Path Forward.
Cashman, Kevin D
2018-04-25
Estimates of dietary requirements for vitamin D or Dietary Reference Values (DRV) are crucial from a public health perspective in providing a framework for prevention of vitamin D deficiency and optimizing vitamin D status of individuals. While these important public health policy instruments were developed with the evidence-base and data available at the time, there are some issues that need to be clarified or considered in future iterations of DRV for vitamin D. This is important as it will allow for more fine-tuned and truer estimates of the dietary requirements for vitamin D and thus provide for more population protection. The present review will overview some of the confusion that has arisen in relation to the application and/or interpretation of the definitions of the Estimated Average Requirement (EAR) and Recommended Dietary Allowance (RDA). It will also highlight some of the clarifications needed and, in particular, how utilization of a new approach in terms of using individual participant-level data (IPD), over and beyond aggregated data, from randomised controlled trials with vitamin D may have a key role in generating these more fine-tuned and truer estimates, which is of importance as we move towards the next iteration of vitamin D DRVs.
Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent
2005-01-01
Background The present commonly used five-year survival rates are not adequate to represent the statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, by using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time by which the survival data of patients dying from the specific cancer are almost fully covered, leaving less than 2.25% uncovered, which is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed the lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of patients who died of their disease from 42 of the 49 cancer sites were verified to follow different lognormal distributions. The threshold years validated for statistical cure varied for different cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites were found to match the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion The present study suggests that a certain threshold year is required before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, and highlight the need for long-term follow-up of these patients. PMID:15904508
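Since the threshold year is defined through a lognormal model of cancer-specific survival times, it can be sketched as a simple quantile calculation: leaving roughly 2.25% of those survival times uncovered corresponds to about two standard deviations above the mean on the log scale. The parameter values below are illustrative, not fitted SEER values.

```python
import math

# Minimal sketch of the "threshold year" idea: if the survival times of patients
# who die of their cancer follow a lognormal distribution with parameters mu and
# sigma (on the log scale), the follow-up time leaving roughly 2.25% of those
# survival times uncovered is about two standard deviations above the mean of
# the log survival time.
def threshold_year(mu, sigma):
    return math.exp(mu + 2.0 * sigma)      # upper ~97.7% point of the lognormal

print(f"threshold year: {threshold_year(mu=0.7, sigma=0.6):.1f} years")
```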
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
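A plain (not bias-reduced) maximum likelihood fit of a cumulative-Gaussian psychometric function, of the kind discussed above, can be sketched as follows; the stimulus levels and responses are simulated, lapse and guess rates are ignored, and an adaptive staircase would concentrate the stimulus levels near threshold rather than spreading them uniformly as done here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Minimal sketch of a maximum-likelihood fit of a cumulative-Gaussian
# psychometric function to simulated yes/no responses.
rng = np.random.default_rng(3)
mu_true, sigma_true = 2.0, 1.5
stimuli = rng.uniform(-4.0, 8.0, size=400)                  # illustrative stimulus levels
responses = rng.random(400) < norm.cdf(stimuli, mu_true, sigma_true)

def neg_log_likelihood(params):
    mu, log_sigma = params
    p = norm.cdf(stimuli, mu, np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)                        # avoid log(0)
    return -np.sum(responses * np.log(p) + (~responses) * np.log(1.0 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"threshold (mu) = {mu_hat:.2f}, spread (sigma) = {sigma_hat:.2f}")
```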
Effort Drivers Estimation for Brazilian Geographically Distributed Software Development
NASA Astrophysics Data System (ADS)
Almeida, Ana Carina M.; Souza, Renata; Aquino, Gibeon; Meira, Silvio
To meet the requirements of today’s fast paced markets, it is important to develop projects on time and with the minimum use of resources. A good estimate is the key to achieve this goal. Several companies have started to work with geographically distributed teams due to cost reduction and time-to-market. Some researchers indicate that this approach introduces new challenges, because the teams work in different time zones and have possible differences in culture and language. It is already known that the multisite development increases the software cycle time. Data from 15 DSD projects from 10 distinct companies were collected. The analysis shows drivers that impact significantly the total effort planned to develop systems using DSD approach in Brazil.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
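The data assimilation framework above is based on Sequential Monte Carlo methods; the sketch below shows a generic bootstrap particle filter assimilating a noisy sensor reading into a toy occupancy-count model. The random-walk dynamics and Gaussian sensor model are stand-ins for the paper's graph-based agent-oriented model, not its actual implementation.

```python
# Sketch of Sequential Monte Carlo (bootstrap particle filter) data assimilation
# for a toy occupancy count; the graph-based agent-oriented model of the paper
# is replaced here by a simple random-walk state model.
import numpy as np

rng = np.random.default_rng(1)

def propagate(particles):
    """Toy dynamics: occupancy count drifts by a small random amount."""
    return particles + rng.normal(0, 2.0, size=particles.shape)

def likelihood(observation, particles, sensor_sigma=5.0):
    """Gaussian sensor model: observed count ~ N(true count, sensor_sigma^2)."""
    return np.exp(-0.5 * ((observation - particles) / sensor_sigma) ** 2)

def assimilate(particles, observation):
    particles = propagate(particles)
    weights = likelihood(observation, particles)
    weights /= weights.sum()
    # Resample particles according to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(0, 100, size=1000)      # initial belief about occupancy
for obs in [42.0, 45.0, 50.0]:                  # hypothetical sensor readings
    particles = assimilate(particles, obs)
    print("estimated occupancy:", particles.mean())
```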
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
NASA Astrophysics Data System (ADS)
Vafadar, Bahareh; Bones, Philip J.
2012-10-01
There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner, and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives a data ordering R'1 from x1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the 1-norm of the estimate after ordering by R'1, resulting in a new reconstruction x2. Preliminary results are encouraging.
Optimally Distributed Kalman Filtering with Data-Driven Communication
Dormann, Katharina
2018-01-01
For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
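To illustrate the general idea of data-driven transmission described above, the sketch below runs a scalar Kalman filter at a node and transmits its estimate to the fusion center only when the normalized measurement innovation exceeds a threshold. This is a generic send-on-innovation illustration under assumed random-walk dynamics, not the specific algorithms or consistency bounds of the cited article.

```python
# Sketch of a data-driven transmission rule around a scalar Kalman filter:
# the node updates locally at every step but transmits only when the
# normalized innovation exceeds a threshold.
import numpy as np

def kalman_step(x, P, z, q=0.1, r=1.0):
    """One predict/update cycle for a random-walk state with measurement z."""
    x_pred, P_pred = x, P + q               # predict (identity dynamics, process noise q)
    innovation = z - x_pred
    S = P_pred + r                          # innovation variance
    K = P_pred / S                          # Kalman gain
    return x_pred + K * innovation, (1 - K) * P_pred, innovation, S

rng = np.random.default_rng(2)
x_est, P = 0.0, 1.0
threshold = 1.5                             # transmit when |innovation|/sqrt(S) > threshold
truth = 0.0
for t in range(20):
    truth += rng.normal(0, 0.3)             # hypothetical true state
    z = truth + rng.normal(0, 1.0)          # local sensor measurement
    x_est, P, nu, S = kalman_step(x_est, P, z)
    if abs(nu) / np.sqrt(S) > threshold:
        print(f"t={t}: transmit estimate {x_est:.2f} to fusion center")
```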
Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.
Naghiloo, M; Jordan, A N; Murch, K W
2017-11-03
Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T^{2} scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.
Aviation Environmental Design Tool (AEDT): Version 2d: Installation Guide
DOT National Transportation Integrated Search
2017-09-01
Aviation Environmental Design Tool (AEDT) is a software system that models aircraft performance in space and time to estimate fuel consumption, emissions, noise, and air quality consequences. AEDT facilitates environmental review activities required ...
NASA Astrophysics Data System (ADS)
Sawant, S. A.; Chakraborty, M.; Suradhaniwar, S.; Adinarayana, J.; Durbha, S. S.
2016-06-01
Satellite-based earth observation (EO) platforms have proven capable of spatio-temporally monitoring changes on the earth's surface. Long-term satellite missions have provided a huge repository of optical remote sensing datasets, and the United States Geological Survey (USGS) Landsat program is one of the oldest sources of optical EO datasets. This historical and near-real-time EO archive is a rich source of information for understanding seasonal changes in horticultural crops. Citrus (Mandarin / Nagpur Orange) is one of the major horticultural crops cultivated in central India. Erratic rainfall and dependency on groundwater for irrigation have a wide impact on citrus crop yield. Wide variations in temperature and relative humidity are also reported, causing early fruit onset and an increase in crop water requirement. Therefore, there is a need to study the crop growth stages and crop evapotranspiration at spatio-temporal scale for managing scarce resources. In this study, an attempt has been made to understand the citrus crop growth stages using Normalized Difference Vegetation Index (NDVI) time series data obtained from the Landsat archives (http://earthexplorer.usgs.gov/). A total of 388 Landsat 4, 5, 7 and 8 scenes (from 1990 to Aug. 2015) for Worldwide Reference System (WRS) 2, path 145 and row 45 were selected to understand seasonal variations in citrus crop growth. Considering the 30-meter spatial resolution of Landsat, orchards larger than 2 hectares were selected to obtain homogeneous crop-cover pixels. To account for changes in wavelength bandwidth (radiometric resolution) across the Landsat sensors (i.e., 4, 5, 7 and 8), NDVI was used to obtain a continuous, sensor-independent time series. The obtained crop growth stage information has been used to estimate the citrus basal crop coefficient (Kcb). Satellite-based Kcb estimates were combined with relevant weather parameters observed by a proximal agrometeorological sensing system for crop ET estimation. The results show that time-series EO-based crop growth stage estimates provide better information about geographically separated citrus orchards. Attempts are being made to estimate regional variations in citrus crop water requirement for effective irrigation planning. In the future, high-resolution Sentinel-2 observations from the European Space Agency (ESA) will be used to fill the time gaps and to gain a better understanding of citrus crop canopy parameters.
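The two core computations in the workflow above are the NDVI itself and a Kcb-based crop evapotranspiration estimate. The sketch below shows both under stated assumptions: the linear NDVI-to-Kcb mapping coefficients are hypothetical placeholders, not values from the study, and the ET step is the standard FAO-56 style basal relation ETc ≈ Kcb × ET0.

```python
# Sketch: NDVI from red/NIR reflectance, and a basal crop evapotranspiration
# estimate from Kcb and reference evapotranspiration ET0. The NDVI-to-Kcb
# coefficients below are hypothetical placeholders.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def kcb_from_ndvi(ndvi_value, a=1.36, b=-0.03):
    """Hypothetical linear mapping from NDVI to basal crop coefficient."""
    return a * ndvi_value + b

def crop_et(kcb, et0):
    """FAO-56 style basal estimate of crop evapotranspiration (mm/day)."""
    return kcb * et0

v = ndvi(nir=0.45, red=0.12)
etc = crop_et(kcb_from_ndvi(v), et0=5.2)
print("NDVI:", round(float(v), 3), " ETc:", round(float(etc), 2), "mm/day")
```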
Scientist/AMPS equipment interface study
NASA Technical Reports Server (NTRS)
Anderson, H. R.
1977-01-01
The principal objective was to determine for each experiment how the operating procedures and modes of equipment onboard shuttle can be managed in real-time or near-real-time to enhance the quality of results. As part of this determination the data and display devices that a man will need for real-time management are defined. The secondary objectives, as listed in the RFQ and technical proposal, were to: (1) determine what quantities are to be measured; (2) determine permissible background levels; (3) decide in what portions of space measurements are to be made; (4) estimate bit rates; (5) establish time-lines for operating the experiments on a mission or set of missions; and (6) determine the minimum set of hardware needed for real-time display. Experiment descriptions and requirements were written. The requirements of the various experiments are combined and a minimal set of joint requirements is defined.
Racimo, Allison R; Talathi, Nakul S; Zelenski, Nicole A; Wells, Lawrence; Shah, Apurva S
2018-05-02
Price transparency allows patients to make value-based health care decisions and is particularly important for individuals who are uninsured or enrolled in high-deductible health care plans. The availability of consumer prices for children undergoing orthopaedic surgery has not been previously investigated. We aimed to determine the availability of price estimates from hospitals in the United States for an archetypal pediatric orthopaedic surgical procedure (closed reduction and percutaneous pinning of a distal radius fracture) and identify variations in price estimates across hospitals. This prospective investigation utilized a scripted telephone call to obtain price estimates from 50 "top-ranked hospitals" for pediatric orthopaedics and 1 "non-top-ranked hospital" from each state and the District of Columbia. Price estimates were requested using a standardized script, in which an investigator posed as the mother of a child with a displaced distal radius fracture that needed closed reduction and pinning. Price estimates (complete or partial) were recorded for each hospital. The number of calls and the duration of time required to obtain the pricing information was also recorded. Variation was assessed, and hospitals were compared on the basis of ranking, teaching status, and region. Less than half (44%) of the 101 hospitals provided a complete price estimate. The mean price estimate for top-ranked hospitals ($17,813; range, $2742 to $49,063) was 50% higher than the price estimate for non-top-ranked hospitals ($11,866; range, $3623 to $22,967) (P=0.020). Differences in price estimates were attributable to differences in hospital fees (P=0.003), not surgeon fees. Top-ranked hospitals required more calls than non-top-ranked hospitals (4.4±2.9 vs. 2.8±2.3 calls, P=0.003). A longer duration of time was required to obtain price estimates from top-ranked hospitals than from non-top-ranked hospitals (8.2±9.4 vs. 4.1±5.1 d, P=0.024). Price estimates for pediatric orthopaedic procedures are difficult to obtain. Top-ranked hospitals are more expensive and less likely to provide price information than non-top-ranked hospitals, with price differences primarily caused by variation in hospital fees, not surgeon fees. Level II-economic and decision analyses.
Real-time reflectometry measurement validation in H-mode regimes for plasma position control.
Santos, J; Guimarais, L; Manso, M
2010-10-01
It has been shown that in H-mode regimes, reflectometry electron density profiles and an estimate for the density at the separatrix can be jointly used to track the separatrix within the precision required for plasma position control on ITER. We present a method to automatically remove, from the position estimation procedure, measurements performed during collapse and recovery phases of edge localized modes (ELMs). Based on the rejection mechanism, the method also produces an estimate confidence value to be fed to the position feedback controller. Preliminary results show that the method improves the real-time experimental separatrix tracking capabilities and has the potential to eliminate the need for an external online source of ELM event signaling during control feedback operation.
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
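The ADMM splitting described above alternates a likelihood update, a prior update, and a dual update that ties them together. The sketch below shows that structure on a deliberately simple MAP problem (Gaussian data fit plus an L1 sparsity prior); the paper's Kalman-smoother consensus step and non-Markov priors are not reproduced, and all variable names are illustrative.

```python
# Generic ADMM sketch for a MAP-style split: minimize 0.5*||y - x||^2 + lam*||z||_1
# subject to x = z. Illustrates only the "separate likelihood/prior updates plus
# averaging" structure of the framework described in the abstract.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_map(y, lam=0.5, rho=1.0, n_iter=100):
    x = np.zeros_like(y)
    z = np.zeros_like(y)
    u = np.zeros_like(y)                              # scaled dual variable
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)         # likelihood ("data") update
        z = soft_threshold(x + u, lam / rho)          # prior (sparsity) update
        u = u + x - z                                 # dual update enforcing x = z
    return z

y = np.array([0.1, -2.3, 0.05, 1.7, -0.2])            # hypothetical noisy observations
print(admm_map(y))
```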
An automatic calibration procedure for remote eye-gaze tracking systems.
Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe
2009-01-01
Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
Robust estimation of the proportion of treatment effect explained by surrogate marker information.
Parast, Layla; McDermott, Mary M; Tian, Lu
2016-05-10
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
Density estimation in wildlife surveys
Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time-division synchronous code-division multiple access (TD-SCDMA) systems, increasing system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel low-complexity channel estimation method, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS) or Steiner and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error showed the superiority of reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
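A minimal sketch of the truncated-SVD reduced-rank idea used above: only the largest singular values of the system matrix are kept when forming the least-squares solution, which reduces the number of effective parameters. The matrix, measurements, and chosen rank below are hypothetical placeholders, not TD-SCDMA quantities.

```python
# Sketch: reduced-rank least squares via truncated SVD. Only the top `rank`
# singular values of A are retained when forming the pseudo-inverse.
import numpy as np

def truncated_svd_ls(A, y, rank):
    """Least-squares solution of A h ~= y using only the top `rank` singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]                     # discard small singular values
    return Vt.T @ (s_inv * (U.T @ y))

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 16)) @ rng.normal(size=(16, 32))   # rank-deficient system matrix
h_true = rng.normal(size=32)
y = A @ h_true + 0.05 * rng.normal(size=64)
h_hat = truncated_svd_ls(A, y, rank=16)
print("residual norm:", np.linalg.norm(A @ h_hat - y))
```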
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, however, individual parameters as estimated by a multi-axis model were typically different by an average absolute difference of roughly 15-20%, with decreased statistical significance, than those estimated by a single-axis model. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when applied to flight data designed for each, respectively.
Timber marking costs in spruce-fir: experience on the Penobscot Experimental Forest
Paul E. Sendak
2002-01-01
In the application of partial harvests, time needs to be allocated to marking trees to be cut. On the Penobscot Experimental Forest located in Maine, eight major experimental treatments have been applied to northern conifer stands for more than 40 yr. Data recorded at the time of marking were used to estimate the time required to mark trees for harvest. A simple linear...
Planning and Estimation of Operations Support Requirements
NASA Technical Reports Server (NTRS)
Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon
2010-01-01
Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not yet launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. This paper updates the D&NF LCC study, looking at the operations (phase E) cost drivers in more detail, extending the study to include 2 additional missions, and identifying areas for increased emphasis by project management in order to improve the fidelity of operations estimates.
Divisions of geologic time-major chronostratigraphic and geochronologic units
2010-01-01
Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.
Civil Uses of Remotely Piloted Aircraft
NASA Technical Reports Server (NTRS)
Aderhold, J. R.; Gordon, G.; Scott, G. W.
1976-01-01
The economic, technical, and environmental implications of remotely piloted vehicles (RPV) are examined. The time frame is 1980-85. Representative uses are selected; detailed functional and performance requirements are derived for RPV systems; and conceptual system designs are devised. Total system cost comparisons are made with non-RPV alternatives. The potential market demand for RPV systems is estimated. Environmental and safety requirements are examined, and legal and regulatory concerns are identified. A potential demand for 2,000-11,000 RPV systems is estimated. Typical cost savings of 25 to 35% compared to non-RPV alternatives are determined. There appear to be no environmental problems, and the safety issue appears manageable.
Tudur Smith, Catrin; Nevitt, Sarah; Appelbe, Duncan; Appleton, Richard; Dixon, Pete; Harrison, Janet; Marson, Anthony; Williamson, Paula; Tremain, Elizabeth
2017-07-17
Demands are increasingly being made for clinical trialists to actively share individual participant data (IPD) collected from clinical trials using responsible methods that protect the confidentiality and privacy of clinical trial participants. Clinical trialists, particularly those receiving public funding, are often concerned about the additional time and money that data-sharing activities will require, but few published empirical data are available to help inform these decisions. We sought to evaluate the activity and resources required to prepare anonymised IPD from a clinical trial in anticipation of a future data-sharing request. Data from two UK publicly funded clinical trials were used for this exercise: 2437 participants with epilepsy recruited from 90 hospital outpatient clinics in the SANAD trial and 146 children with neuro-developmental problems recruited from 18 hospitals in the MENDS trial. We calculated the time and resources required to prepare each anonymised dataset and assemble a data pack ready for sharing. The older SANAD trial (published 2007) required 50 hours of staff time with a total estimated associated cost of £3185, whilst the more recently completed MENDS trial (published 2012) required 39.5 hours of staff time with a total estimated associated cost of £2540. Clinical trial researchers, funders and sponsors should consider appropriate resourcing and allow reasonable time for preparing IPD ready for subsequent sharing. This process would be most efficient if prospectively built into the standard operational design and conduct of a clinical trial. Further empirical examples exploring the resource requirements in other settings are recommended. SANAD: International Standard Randomised Controlled Trials Registry: ISRCTN38354748. Registered on 25 April 2003. EU Clinical Trials Register Eudract 2006-004025-28. Registered on 16 May 2007. International Standard Randomised Controlled Trials Registry: ISRCTN05534585/MREC 07/MRE08/43. Registered on 26 January 2007.
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
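One common way to implement the L1-norm estimation discussed above is iteratively reweighted least squares (IRLS), sketched below on a generic linear model with gross outliers standing in for ambiguity jumps. This is a generic robust-estimation illustration, not the c5++ implementation or its specific weighting strategies.

```python
# Sketch: L1-norm (least absolute deviations) parameter estimation via
# iteratively reweighted least squares (IRLS).
import numpy as np

def l1_irls(A, y, n_iter=50, eps=1e-6):
    """Approximate argmin_x ||A x - y||_1 by reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # L2-norm starting point
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)  # weights ~ 1/|residual|
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return x

rng = np.random.default_rng(4)
A = np.column_stack([np.ones(40), np.arange(40.0)])
y = A @ np.array([2.0, 0.5]) + rng.normal(0, 0.1, 40)
y[::7] += 8.0                                          # gross outliers (e.g., ambiguity jumps)
print("L1 estimate:", l1_irls(A, y))
```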
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
NASA Astrophysics Data System (ADS)
Pham, T. D.
2016-12-01
Recurrence plots display binary texture of time series from dynamical systems with single dots and line structures. Using fuzzy recurrence plots, recurrences of the phase-space states can be visualized as grayscale texture, which is more informative for pattern analysis. The proposed method replaces the crucial similarity threshold required by symmetrical recurrence plots with the number of cluster centers, where the estimate of the latter parameter is less critical than the estimate of the former.
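For contrast with the fuzzy variant described above, the sketch below builds a conventional (binary) recurrence plot from a time-delay embedding, with the crucial similarity threshold that the fuzzy recurrence plot is designed to avoid. The embedding dimension, delay, and threshold are illustrative choices; the fuzzy c-means membership step is only noted in the comments.

```python
# Sketch: conventional binary recurrence plot of a scalar time series. A fuzzy
# recurrence plot would replace the hard threshold below with fuzzy c-means
# membership grades (grayscale values), which is not shown here.
import numpy as np

def embed(x, dim=3, delay=1):
    """Time-delay embedding into phase-space vectors."""
    n = len(x) - (dim - 1) * delay
    return np.array([x[i:i + dim * delay:delay] for i in range(n)])

def recurrence_plot(x, threshold=0.2, dim=3, delay=1):
    """R[i, j] = 1 if phase-space states i and j are closer than `threshold`."""
    states = embed(np.asarray(x, float), dim, delay)
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d <= threshold).astype(int)

t = np.linspace(0, 8 * np.pi, 200)
R = recurrence_plot(np.sin(t), threshold=0.3)
print(R.shape, R.sum(), "recurrent pairs")
```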
A Qualitative Analysis of the Navy’s HSI Billet Structure
2008-06-01
...subspecialty code. The research results support the hypothesis that the work requirements of the July 2007 data set of 4600P-coded billets (billets
NASA Astrophysics Data System (ADS)
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to the locations from which such equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location, without the need to adapt or use empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using the extracted timing data, a polynomial function generalizes the data, inducing a polynomial water travel time estimator called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
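A minimal sketch of the final step of a PolyWaTT-style estimator: once travel times between the two gauges have been extracted (e.g., via the DDTW/PIP alignment described above), a polynomial is fit that maps an upstream condition to the observed travel time. The river-level/travel-time pairs and the quadratic degree below are hypothetical placeholders; the alignment step itself is not shown.

```python
# Sketch: fit a polynomial that maps an upstream river level to the observed
# travel time to the downstream gauge, then use it as a travel-time estimator.
import numpy as np

upstream_level_m = np.array([1.2, 1.8, 2.4, 3.1, 3.9, 4.6])   # hypothetical pairs
travel_time_h    = np.array([52., 47., 41., 37., 33., 31.])

coeffs = np.polyfit(upstream_level_m, travel_time_h, deg=2)   # quadratic estimator
poly_watt = np.poly1d(coeffs)

print("estimated travel time at 2.8 m:", round(float(poly_watt(2.8)), 1), "hours")
```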
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
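The sketch below illustrates the brute-force Monte Carlo integration idea above on a deliberately simple Gaussian-mean model: each marginal likelihood is approximated by averaging the data likelihood over draws from the prior, and the Bayes factor is the ratio of the two. The LBA likelihood, the GPU acceleration, and the priors of the paper are not reproduced; all values here are placeholders.

```python
# Generic sketch of a Bayes factor via Monte Carlo integration: the marginal
# likelihood of each model is approximated by averaging the data likelihood
# over prior draws. A Gaussian-mean model stands in for the LBA.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
data = rng.normal(0.4, 1.0, size=50)                 # hypothetical observations

def log_marginal_likelihood(data, prior_mean, prior_sd, n_draws=100_000):
    mus = rng.normal(prior_mean, prior_sd, size=n_draws)            # prior draws
    log_lik = norm.logpdf(data[:, None], loc=mus, scale=1.0).sum(axis=0)
    m = log_lik.max()                                               # log-mean-exp
    return m + np.log(np.mean(np.exp(log_lik - m)))

log_ml_effect = log_marginal_likelihood(data, prior_mean=0.5, prior_sd=1.0)  # "effect" model
log_ml_null   = norm.logpdf(data, loc=0.0, scale=1.0).sum()                  # point-null model
print("Bayes factor (effect vs null):", np.exp(log_ml_effect - log_ml_null))
```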
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
Handbook of estimating data, factors, and procedures. [for manufacturing cost studies
NASA Technical Reports Server (NTRS)
Freeman, L. M.
1977-01-01
Elements to be considered in estimating production costs are discussed in this manual. Guidelines, objectives, and methods for analyzing requirements and work structure are given. Time standards for specific operations are listed for machining; sheet metal working; electroplating and metal treating; painting; silk screening, etching, and encapsulating; coil winding; wire preparation and wiring; soldering; and the fabrication of etched circuits and terminal boards. The relation of the various elements of cost to the total cost as proposed for various programs by various contractors is compared with government estimates.
Laser network survey and orbit recovery. [altimeter evaluation in GEOS-C project
NASA Technical Reports Server (NTRS)
Berbert, J. H.
1974-01-01
Simulations were performed for the anticipated GEOS-C laser network stations at Goddard, Bermuda, and Florida to predict how well survey and orbit will be recovered. Lasers were added one at a time at Grand Turk, Antigua, and Panama to estimate the contribution from these additional sites. Time tag biases of 50 microseconds, survey uncertainties of 10 meters in each coordinate, laser range biases and noise estimates of 20 cm each, and conventional gravity uncertainties were included in the simulations. The results indicate that survey can be recovered to about 1 meter and Grand Turk can be recovered better than Antigua or Panama. Reducing the probably pessimistic assumed time tag biases and gravity field uncertainties improves the results. Using these survey recovery estimates, the short arc GEOS-C satellite heights for altimeter intercomparison orbits can be recovered within the calibration area to better than the required two meters.
Aviation Environmental Design Tool (AEDT): Version 2b: Installation Guide : [December 2015]
DOT National Transportation Integrated Search
2015-12-01
Aviation Environmental Design Tool (AEDT) is a software system that models aircraft performance in space and time to estimate fuel consumption, emissions, noise, and air quality consequences. AEDT facilitates environmental review activities required ...
Aviation Environmental Design Tool (AEDT): Version 2b: Installation Guide : [June 2016]
DOT National Transportation Integrated Search
2016-06-01
Aviation Environmental Design Tool (AEDT) is a software system that models aircraft performance in space and time to estimate fuel consumption, emissions, noise, and air quality consequences. AEDT facilitates environmental review activities required ...
Aviation Environmental Design Tool (AEDT): Version 2b: Installation Guide : [July 2015]
DOT National Transportation Integrated Search
2015-07-01
Aviation Environmental Design Tool (AEDT) is a software system that models aircraft performance in space and time to estimate fuel consumption, emissions, noise, and air quality consequences. AEDT facilitates environmental review activities required ...
Results of a State-Wide Evaluation of “Paperwork Burden” in Addiction Treatment
Carise, Deni; Love, Meghan; Zur, Julia; McLellan, A. Thomas; Kemp, Jack
2009-01-01
This article chronicles three steps taken by research, clinical, and state staff toward assessing, evaluating, and streamlining clinical and administrative paperwork at all public outpatient addiction treatment programs in one state. The first step was an accounting of all paperwork requirements at each program. The second step included developing time estimates for the paperwork requirements, synthesizing information across sites, providing a written evaluation of the need, utility, and redundancy of all forms (paperwork) collected, and suggesting ways to eliminate unused or unnecessary data collection and streamline the remaining data collection. Third, the state agency hosted a meeting with state staff, researchers, and staff from all programs and agencies with state-funded contracts, and took action. Paperwork reductions over the course of a 6-month outpatient treatment episode were estimated at 4-6 hours, with most of the time burden being eliminated from the intake process. PMID:19150201
V and V of ISHM Software for Space Exploration
NASA Technical Reports Server (NTRS)
Markosian, Lawrence; Feather, Martin, S.; Brinza, David; Figueroa, F.
2005-01-01
NASA has established a far-reaching and long-term program for robotic and manned exploration of the solar system, beginning with missions to the moon and Mars. The Crew Transportation System (CTS), a key system for space exploration, imposes four requirements that ISHM addresses. These requirements have a wide range of implications for V&V and certification of ISHM. There is a range of time-criticality for ISHM actions, from prognostication, which is often (but not always) non-time-critical, to time-critical state estimation and system management under off-nominal emergency conditions. These are externally imposed requirements on ISHM that are subject to V&V. In addition, a range of techniques is needed to implement an ISHM; the approaches to ISHM are described elsewhere. These approaches range from well-understood algorithms for low-level data analysis, validation, and reporting, to AI techniques for state estimation and planning. The range of techniques, and specifically the use of AI techniques such as reasoning under uncertainty and mission planning (and re-planning), implies that several V&V approaches may be required. Depending on the ISHM architecture, traditional testing approaches may be adequate for some ISHM functionality. The AI-based approaches to reasoning under uncertainty, model-based reasoning, and planning share characteristics typical of other complex software systems, but they also have characteristics that set them apart and challenge standard V&V techniques. The range of possible solutions to the overall ISHM problem imposes internal challenges to V&V. The V&V challenges increase when hard real-time constraints are imposed for time-critical functionality. For example, there is an external requirement that impending catastrophic failure of the Launch Vehicle (LV) at launch time be detected and life-saving action be taken within two seconds. In this paper we outline the challenges for ISHM V&V, existing approaches and analogs in other software application areas, and possible new approaches to the V&V challenges for space exploration ISHM.
Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence
Claggett, B.; Lagakos, S.W.; Wang, R.
2011-01-01
Summary Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF Estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF Estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904
Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.
Claggett, B; Lagakos, S W; Wang, R
2012-03-01
Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The International Biometric Society.
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased despite sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Estimating avian population size using Bowden's estimator
Diefenbach, D.R.
2009-01-01
Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
SNR-based queue observations at CFHT
NASA Astrophysics Data System (ADS)
Devost, Daniel; Moutou, Claire; Manset, Nadine; Mahoney, Billy; Burdullis, Todd; Cuillandre, Jean-Charles; Racine, René
2016-07-01
In an effort to make optimal use of the night time and the exquisite weather on Maunakea, CFHT has equipped its dome with vents and is now moving its Queued Scheduled Observing (QSO)-based operations toward signal-to-noise ratio (SNR) observing. In this new mode, individual exposure times for a science program are estimated using a model that takes measurements of the current weather conditions as input, and the program is considered complete when the depth set by its scientific requirements is reached. These changes allow CFHT to make better use of the excellent seeing conditions provided by Maunakea and to complete programs in less time than was allocated to them.
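A minimal version of such SNR-driven exposure scaling can be written down from standard background-limited assumptions. This is a generic sketch with hypothetical reference values, not CFHT's operational QSO/SNR model:

```python
def exposure_time(t_nom, snr_target, snr_nom, iq, iq_nom, sky, sky_nom, ext, ext_nom):
    """Scale a nominal exposure time so a background-limited point source reaches a target SNR
    under the current image quality (arcsec), sky brightness (mag/arcsec^2) and extinction (mag).
    All '_nom' reference values are hypothetical; this is not CFHT's operational model."""
    scale = (snr_target / snr_nom) ** 2            # SNR grows as sqrt(t) in the background limit
    scale *= (iq / iq_nom) ** 2                    # worse seeing -> more sky inside the aperture
    scale *= 10 ** (-0.4 * (sky - sky_nom))        # brighter sky (smaller magnitude) -> longer
    scale *= 10 ** (0.8 * (ext - ext_nom))         # extra extinction dims the source, t scales twice over
    return t_nom * scale

# Excellent 0.45" seeing instead of a nominal 0.7" shortens a 600 s exposure to roughly 250 s:
print(exposure_time(600, 100, 100, iq=0.45, iq_nom=0.7,
                    sky=20.5, sky_nom=20.5, ext=0.10, ext_nom=0.10))
```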
NASA Astrophysics Data System (ADS)
Chu, Hone-Jay; Kong, Shish-Jeng; Chang, Chih-Hua
2018-03-01
The turbidity (TB) of a water body varies in time and space. Water quality is traditionally estimated via linear regression based on satellite images. However, estimating and mapping water quality calls for a spatio-temporally nonstationary model, so TB mapping is better served by geographically and temporally weighted regression (GTWR) and geographically weighted regression (GWR) models, both of which are more precise than linear regression. Among the temporally nonstationary models for mapping water quality, GTWR offers the best option for estimating regional water quality. Compared with GWR, GTWR provides highly reliable information for water quality mapping, achieves a relatively high goodness of fit, improves the explained variance from 44% to 87%, and shows sufficient space-time explanatory power. The seasonal patterns of TB and the main spatial patterns of TB variability can be identified using the TB maps estimated with GTWR together with an empirical orthogonal function (EOF) analysis.
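The essence of GTWR is local weighted least squares, with weights that decay with both spatial distance and time lag from the prediction point. A minimal sketch follows (Gaussian kernels, hypothetical bandwidths, and synthetic data, not the study's calibration):

```python
import numpy as np

def gtwr_predict(X, y, coords, times, x0, coord0, t0, hs=5.0, ht=30.0):
    """Locally weighted (GTWR-style) prediction at one space-time point.
    Gaussian kernel over spatial distance (bandwidth hs) and time lag (bandwidth ht)."""
    ds = np.linalg.norm(coords - coord0, axis=1)             # spatial distances to target
    dt = np.abs(times - t0)                                  # time lags to target
    w = np.exp(-0.5 * ((ds / hs) ** 2 + (dt / ht) ** 2))     # space-time weights
    Xb = np.column_stack([np.ones(len(X)), X])               # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)      # weighted least squares
    return np.concatenate(([1.0], np.atleast_1d(x0))) @ beta

# Tiny synthetic demo: a turbidity proxy whose relation to one reflectance band drifts in time.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(200, 2)); times = rng.uniform(0, 365, size=200)
X = rng.uniform(0, 1, size=200)
y = 5 + (2 + 0.02 * times) * X + rng.normal(0, 0.2, 200)
print(gtwr_predict(X, y, coords, times, x0=0.5, coord0=np.array([25.0, 25.0]), t0=180.0))
```

Because the coefficient drifts with time, a purely spatial GWR fit would blur the seasonal signal that the temporal kernel preserves here.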
Fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, R.
1986-01-01
A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility, and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored. Because the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch-processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate recursively as more observations are obtained. When the algorithm is applied in real time, it has the additional advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure for the accuracy of the estimate.
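To make the recursive idea concrete: a sinusoid obeys the linear-prediction identity y[n] + y[n-2] = 2cos(w) y[n-1], so a least-squares estimate of cos(w) can be refined sample by sample without storing the data. The sketch below illustrates only that recursive principle; it is not the paper's adaptive algorithm, which also handles time-varying frequency, colored noise, and phase estimation.

```python
import numpy as np

def recursive_freq(y):
    """Recursive least-squares estimate of a sinusoid's frequency (rad/sample) from the
    identity y[n] + y[n-2] = 2*cos(w)*y[n-1]; a minimal sketch, not the paper's algorithm."""
    num = den = 0.0
    estimates = []
    for n in range(2, len(y)):
        num += y[n - 1] * (y[n] + y[n - 2])       # running cross term
        den += y[n - 1] ** 2                      # running energy of the regressor
        c = num / den                             # current estimate of 2*cos(w)
        estimates.append(np.arccos(np.clip(c / 2.0, -1.0, 1.0)))
    return np.array(estimates)

# Demo: a 0.2 rad/sample sinusoid in mild noise; the estimate refines as samples arrive.
rng = np.random.default_rng(2)
n = np.arange(400)
y = np.cos(0.2 * n + 0.7) + 0.02 * rng.standard_normal(400)
print(recursive_freq(y)[-1])     # approximately 0.2
```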
Yang, Yong-Qiang; Li, Xue-Bo; Shao, Ru-Yue; Lyu, Zhou; Li, Hong-Wei; Li, Gen-Ping; Xu, Lyu-Zi; Wan, Li-Hua
2016-09-01
The characteristic life stages of infesting blowflies (Calliphoridae) such as Chrysomya megacephala (Fabricius) are powerful evidence for estimating the death time of a corpse, but an established reference of developmental times for local blowfly species is required. We determined the developmental rates of C. megacephala from southwest China at seven constant temperatures (16-34°C). Isomegalen and isomorphen diagrams were constructed based on the larval length and time for each developmental event (first ecdysis, second ecdysis, wandering, pupariation, and eclosion), at each temperature. A thermal summation model was constructed by estimating the developmental threshold temperature D0 and the thermal summation constant K. The thermal summation model indicated that, for complete development from egg hatching to eclosion, D0 = 9.07 ± 0.54°C and K = 3991.07 ± 187.26 h °C. This reference can increase the accuracy of estimations of postmortem intervals in China by predicting the growth of C. megacephala. © 2016 American Academy of Forensic Sciences.
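The thermal summation model follows from the linear relation between development rate and temperature, 1/t = (T - D0)/K, so D0 and K can be recovered by a simple linear regression of rate on temperature. The sketch below uses made-up development times chosen only to be consistent with the reported constants; it is illustrative, not the study's data.

```python
import numpy as np

# Hypothetical (temperature degC, egg-to-eclosion time h) pairs, for illustration only.
temps = np.array([16, 19, 22, 25, 28, 31, 34], dtype=float)
dev_h = np.array([570, 400, 310, 250, 210, 182, 160], dtype=float)

rate = 1.0 / dev_h                             # development rate, 1/h
b1, b0 = np.polyfit(temps, rate, 1)            # linear fit: rate = b0 + b1 * T
D0 = -b0 / b1                                  # developmental threshold temperature (degC)
K = 1.0 / b1                                   # thermal summation constant (h * degC)
print(f"D0 = {D0:.2f} degC, K = {K:.0f} h degC")
```

With these illustrative inputs the regression returns D0 of about 9 degC and K of about 4000 h degC, the same order as the constants reported for C. megacephala above.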
C.E. McGee; F.A. Bennett
1959-01-01
Proper management of any timber species or type requires valid estimates of volume from time to time. Tables 1 and 2 were constructed to meet this need for the expanding area of slash pine plantations in the middle coastal plain of Georgia and the Carolina Sandhills.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
... request for comments. SUMMARY: As part of its continuing effort to reduce paperwork burden and as required... forms of information technology; and ways to further reduce the information burden for small business... responses. Estimated Time Per Response: 30 sec (.0084 hours). Frequency of Response: One time reporting...
Mapping wildfire and clearcut harvest disturbances in boreal forests with Landsat time series data
Todd Schroeder; Michael A. Wulder; Sean P. Healey; Gretchen G. Moisen
2011-01-01
Information regarding the extent, timing, and magnitude of forest disturbance is a key input required for accurate estimation of the terrestrial carbon balance. Equally important for studying carbon dynamics is the ability to distinguish the cause or type of forest disturbance occurring on the landscape. Wildfire and timber harvesting are common disturbances occurring in...
Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil
2018-05-29
Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. High accuracy and low computation complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
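For intuition, a basic one-dimensional matrix pencil estimator (not the authors' modified MMP with pairing and packet aggregation) recovers the poles of a sum of complex exponentials, which is the form CSI takes across subcarriers (ToF) or across antennas (AoA). A hedged sketch with hypothetical delays:

```python
import numpy as np

def matrix_pencil(y, M, L=None):
    """Estimate the M dominant poles z_k of y[n] ~ sum_k a_k * z_k**n (basic 1D matrix pencil)."""
    N = len(y)
    L = N // 3 if L is None else L                          # pencil parameter, typically N/3 .. N/2
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])    # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]                            # shifted sub-matrices
    lam = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)        # poles appear among these eigenvalues
    return lam[np.argsort(-np.abs(lam))[:M]]                # keep the M largest in magnitude

# Two hypothetical multipath components expressed as normalized delays across 64 subcarriers.
rng = np.random.default_rng(0)
n = np.arange(64)
true_delays = [0.10, 0.23]
y = sum(np.exp(-2j * np.pi * d * n) for d in true_delays)
y = y + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
poles = matrix_pencil(y, M=2)
print(np.sort(np.mod(-np.angle(poles) / (2 * np.pi), 1)))   # approximately [0.10, 0.23]
```

Running the same 1D estimator separately over the antenna and subcarrier dimensions is what replaces the 2D grid search in the approach described above.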
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
Application of square-root filtering for spacecraft attitude control
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Schmidt, S. F.; Goka, T.
1978-01-01
Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy better than 0.01 deg. To obtain the necessary precision with efficient software, a six state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
Heading Toward Launch with the Integrated Multi-Satellite Retrievals for GPM (IMERG)
NASA Technical Reports Server (NTRS)
Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.
2012-01-01
The Day-1 algorithm for computing combined precipitation estimates in GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). We plan for the period of record to encompass both the TRMM and GPM eras, and the coverage to extend to fully global as experience is gained in the difficult high-latitude environment. IMERG is being developed as a unified U.S. algorithm that takes advantage of strengths in the three groups that are contributing expertise: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA), which addresses inter-satellite calibration of precipitation estimates and monthly scale combination of satellite and gauge analyses; 2) the CPC Morphing algorithm with Kalman Filtering (KF-CMORPH), which provides quality-weighted time interpolation of precipitation patterns following cloud motion; and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS), which provides a neural-network-based scheme for generating microwave-calibrated precipitation estimates from geosynchronous infrared brightness temperatures. In this talk we summarize the major building blocks and important design issues driven by user needs and practical data issues. One concept being pioneered by the IMERG team is that the code system should produce estimates for the same time period but at different latencies to support the requirements of different groups of users. Another user requirement is that all these runs must be reprocessed as new IMERG versions are introduced. IMERG's status at meeting time will be summarized, and the processing scenario in the transition from TRMM to GPM will be laid out. Initially, IMERG will be run with TRMM-based calibration, and then a conversion to a GPM-based calibration will be employed after the GPM sensor products are validated. A complete reprocessing will be computed, which will complete the transition from TMPA.
Standardized versus custom parenteral nutrition: impact on clinical and cost-related outcomes.
Blanchette, Lisa M; Huiras, Paul; Papadopoulos, Stella
2014-01-15
Results of a study comparing clinical and cost outcomes with the use of standardized versus custom-prepared parenteral nutrition (PN) in an acute care setting are reported. In a retrospective pre-post analysis, nutritional target attainment, electrolyte abnormalities, and other outcomes were compared in patients 15 years of age or older who received custom PN (n = 49) or a standardized PN product (n = 57) for at least 72 hours at a large medical center over a 13-month period; overall, 45% of the cases were intensive care unit (ICU) admissions. A time-and-motion assessment was conducted to determine PN preparation times. There were no significant between-group differences in the percentage of patients who achieved estimated caloric requirements or in mean ICU or hospital length of stay. However, patients who received standardized PN were significantly less likely than those who received custom PN to achieve the highest protein intake goal (63% versus 92%, p = 0.003) and more likely to develop hyponatremia (37% versus 14%, p = 0.01). Pharmacy preparation times averaged 20 minutes for standardized PN and 80 minutes for custom PN; unit costs were $61.06 and $57.84, respectively. A standardized PN formulation was as effective as custom PN in achieving estimated caloric requirements, but it was relatively less effective in achieving 90% of estimated protein requirements and was associated with a higher frequency of hyponatremia. The standardized PN product may be a cost-effective formulation for institutions preparing an average of five or fewer PN orders per day.
Ojima, T; Saito, E; Kanagawa, K; Sakata, K; Yanagawa, H
1997-04-01
The purpose of this study was to estimate the manpower required for home health care and nursing services for the aged. For prefectural health care and welfare planning for the aged, data such as the proportion of the aged who need help, service demand, and required frequency of services were obtained. The means and "mean +/- 2 x standard deviations" were calculated to obtain various parameters. The calculated figures were those that can be attained with some effort. The results are as follows (middle level estimation (low level estimation-high level estimation)): 1.9 (0.61-5.7) public health nurses, 2.6 (0.63-14) visiting nurses, 0.20 (0.084-0.42) dental hygienists, 0.35 (0.17-0.66) dietitians, and 0.25 (0.014-1.27) physical and occupational therapists are required per 10,000 population. For the national total, the requirements are 23 (7.3-67) thousand public health nurses, 31 (7.5-160) thousand visiting nurses, 2.4 (1.0-5.0) thousand dental hygienists, 3.9 (2.0-7.8) thousand dietitians, and 3.0 (0.17-15) thousand physical and occupational therapists. By population size, for example, municipalities with 10-30 thousand people require 4.2 (1.7-11) public health nurses, 5.3 (1.3-27) visiting nurses, 0.4 (0.2-0.8) dental hygienists, 0.5 (0.3-0.9) dietitians, and 0.5 (0.0-2.5) physical and occupational therapists. Comparison of the present numbers with the estimated manpower needs shows that the present number of public health personnel is almost the same as the low level estimation, but the present numbers of the other types of personnel are below the low level estimation. Considering other services such as maternal and child health, it seems that municipalities with a population of 10 thousand or more should employ full-time dietitians and dental hygienists. For policy making in a municipality, the policies of other municipalities should be considered. Because it is based on means across municipalities, the results of this study should be useful for application by other municipalities.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
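A bare-bones version of markerless template tracking on 2D projection images is normalized cross-correlation of a tumor template against each incoming frame. The brute-force sketch below is for illustration only; the soft-tissue localization algorithm described above is more sophisticated and runs at 12.8 Hz.

```python
import numpy as np

def track_template(frame, template):
    """Return the (row, col) offset where `template` best matches `frame`,
    using brute-force normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-12
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw] - frame[r:r + th, c:c + tw].mean()
            score = float(np.sum(t * w)) / (t_norm * (np.linalg.norm(w) + 1e-12))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Comparing the best-match position between consecutive frames gives the frame-to-frame displacement; an FFT-based correlation or a restricted search window would be needed to approach the frame rates reported above.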
Plotz, Roan D.; Grecian, W. James; Kerley, Graham I.H.; Linklater, Wayne L.
2016-01-01
Comparisons of recent estimations of home range sizes for the critically endangered black rhinoceros in Hluhluwe-iMfolozi Park (HiP), South Africa, with historical estimates led to reports of a substantial (54%) increase, attributed to over-stocking and habitat deterioration that has far-reaching implications for rhino conservation. Other reports, however, suggest the increase is more likely an artefact caused by applying various home range estimators to non-standardised datasets. We collected 1939 locations of 25 black rhino over six years (2004–2009) to estimate annual home ranges and evaluate the hypothesis that they have increased in size. A minimum of 30 and 25 locations were required for accurate 95% MCP estimation of home range of adult rhinos during the dry and wet seasons, respectively. Forty and 55 locations were required for adult female and male annual MCP home ranges, respectively, and 30 locations were necessary for estimating 90% bivariate kernel home ranges accurately. Average annual 95% bivariate kernel home ranges were 20.4 ± 1.2 km², 53 ± 1.9% larger than 95% MCP ranges (9.8 ± 0.9 km²). When home range techniques used during the late-1960s in HiP were applied to our dataset, estimates were similar, indicating that ranges have not changed substantially in 50 years. Inaccurate, non-standardised home range estimates and their comparison have the potential to mislead black rhino population management. We recommend that more care be taken to collect adequate numbers of rhino locations within standardised time periods (i.e., season or year) and that the comparison of home ranges estimated using dissimilar procedures be avoided. Home range studies of black rhino have been data deficient and procedurally inconsistent. Standardisation of methods is required. PMID:27028728
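For reference, the 95% MCP is simply the convex hull of the location fixes after discarding the 5% of fixes farthest from the centroid. A minimal sketch follows (the kernel estimator needs additional bandwidth machinery and is not shown; coordinates are assumed to be in metres):

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_km2(locations_m, percent=95):
    """Area (km^2) of the percent% minimum convex polygon for 2D location fixes in metres."""
    xy = np.asarray(locations_m, dtype=float)
    d = np.linalg.norm(xy - xy.mean(axis=0), axis=1)        # distance of each fix from the centroid
    keep = xy[d <= np.percentile(d, percent)]               # retain the central `percent` of fixes
    return ConvexHull(keep).volume / 1e6                    # 2D hull "volume" is its area; m^2 -> km^2

# Hypothetical usage with simulated fixes clustered around a range centre:
rng = np.random.default_rng(4)
fixes = rng.normal(0.0, 1500.0, size=(40, 2))               # 40 fixes, ~1.5 km spread
print(f"{mcp_area_km2(fixes):.1f} km^2")
```

Because the estimate depends on both the number of fixes and the chosen estimator, comparisons such as the one criticised above only make sense when these choices are standardised.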
Real-time state estimation in a flight simulator using fNIRS.
Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic
2015-01-01
Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time state estimation MACD-based algorithm dedicated to identifying the pilot's instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated pilot's mental state matched significantly better than chance with the pilot's real state (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single trial working memory loads, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain computer interface development.
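The MACD idea can be sketched as the difference of two exponential moving averages of the filtered fNIRS signal, with its sign used as the instantaneous not-on-task/on-task label. The window lengths below are hypothetical and the real-time chain also includes signal conditioning; this is not the authors' exact parameterization.

```python
import numpy as np

def macd_state(signal, fast=32, slow=128):
    """Label each sample on-task (True) when the fast EMA of the fNIRS signal exceeds the slow EMA."""
    def ema(x, n):
        alpha = 2.0 / (n + 1)                    # standard EMA smoothing factor
        out = np.empty(len(x), dtype=float)
        out[0] = x[0]
        for i in range(1, len(x)):
            out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
        return out
    macd = ema(signal, fast) - ema(signal, slow)  # MACD line
    return macd > 0.0                             # True = on-task, False = not-on-task
```

The appeal of such a rule in this context, as noted above, is that it needs no per-pilot calibration, unlike the SVM-based difficulty classifier.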
An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat
Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.
2016-01-01
Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. Median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.
Eruption history of the Tharsis shield volcanoes, Mars
NASA Technical Reports Server (NTRS)
Plescia, J. B.
1993-01-01
The Tharsis Montes volcanoes and Olympus Mons are giant shield volcanoes. Although estimates of their average surface age have been made using crater counts, the length of time required to build the shields has not been considered. Crater counts for the volcanoes indicate the constructs are young; average ages are Amazonian to Hesperian. In relative terms, Arsia Mons is the oldest, Pavonis Mons intermediate, and Ascraeus Mons the youngest of the Tharsis Montes shields; Olympus Mons is the youngest of the group. Depending upon the calibration, absolute ages range from 730 Ma to 3100 Ma for Arsia Mons and 25 Ma to 100 Ma for Olympus Mons. These absolute chronologies are highly model dependent, and indicate only the time surficial volcanism ceased, not the time over which the volcano was built. The problem of estimating the time necessary to build the volcanoes can be attacked in two ways. First, eruption rates from terrestrial and extraterrestrial examples can be used to calculate the period of time required to build the shields. Second, some relation of eruptive activity among the volcanoes can be assumed, such as that they all began at a specific time or that they were active sequentially, and the eruptive rate calculated. Volumes of the shield volcanoes were derived from topographic/volume data.
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Aviation Environmental Design Tool (AEDT): Version 2c Service Pack 2: Installation Guide
DOT National Transportation Integrated Search
2017-03-01
Aviation Environmental Design Tool (AEDT) is a software system that models aircraft performance in space and time to estimate fuel consumption, emissions, noise, and air quality consequences. AEDT facilitates environmental review activities required ...
Miyauchi, Masaatsu; Hirai, Chizuko; Nakajima, Hideaki
2013-01-01
Although the importance of solar radiation for vitamin D3 synthesis in the human body is well known, the solar exposure time required to prevent vitamin D deficiency has not been determined in Japan. This study attempted to identify the time of solar exposure required for vitamin D3 synthesis in the body by season, time of day, and geographic location (Sapporo, Tsukuba, and Naha) using both numerical simulations and observations. According to the numerical simulation for Tsukuba at noon in July under a cloudless sky, 3.5 min of solar exposure are required to produce 5.5 μg vitamin D3 per 600 cm2 skin corresponding to the area of a face and the back of a pair of hands without ingestion from foods. In contrast, it took 76.4 min to produce the same quantity of vitamin D3 at Sapporo in December, at noon under a cloudless sky. The necessary exposure time varied considerably with the time of the day. For Tsukuba at noon in December, 22.4 min were required, but 106.0 min were required at 09:00 and 271.3 min were required at 15:00 for the same meteorological conditions. Naha receives high levels of ultraviolet radiation allowing vitamin D3 synthesis almost throughout the year.
Computer Directed Training System (CDTS), User’s Manual
1983-07-01
lessons, together with an estimate of the time required for completion. a. BSCOl0. This lesson in BASIC (Beginners All Purpose Symbolic Instruction Code)... Figure A2-1. Training Systems Manager and Training Monitors Responsibility Flowchart... training at the site. Therefore, the TSM must be knowledgeable in the various tasks required. Figure A2-1 illustrates the position in the flowchart. These
Estimating air-drying times of small-diameter ponderosa pine and Douglas-fir logs
William T. Simpson; Xiping Wang
2004-01-01
One potential use for small-diameter ponderosa pine and Douglas-fir timber is in log form. Many potential uses of logs require some degree of drying. Even though these small diameters may be considered small in the forestry context, their size when compared to typical lumber thickness dimensions is large. These logs, however, may require uneconomically long kiln-drying...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, Juan Carlos; Barnes, Cris William; Mocko, Michael Jeffrey
This report examines the use of neutron resonance spectroscopy (NRS) to make time-dependent and spatially resolved temperature measurements of materials in extreme conditions. Specifically, the sensitivity of the temperature estimate to neutron-beam and diagnostic parameters is examined. Based on that examination, requirements are set on a pulsed neutron source and diagnostics to make a meaningful measurement.
Evaluating abundance estimate precision and the assumptions of a count-based index for small mammals
Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.
2009-01-01
Conservation and management of small mammals requires reliable knowledge of population size. We investigated precision of mark-recapture and removal abundance estimates generated from live-trapping and snap-trapping data collected at sites on Guam (n = 7), Rota (n = 4), Saipan (n = 5), and Tinian (n = 3), in the Mariana Islands. We also evaluated a common index, captures per unit effort (CPUE), as a predictor of abundance. In addition, we evaluated cost and time associated with implementing live-trapping and snap-trapping and compared species-specific capture rates of selected live- and snap-traps. For all species, mark-recapture estimates were consistently more precise than removal estimates based on coefficients of variation and 95% confidence intervals. The predictive utility of CPUE was poor but improved with increasing sampling duration. Nonetheless, modeling of sampling data revealed that underlying assumptions critical to application of an index of abundance, such as constant capture probability across space, time, and individuals, were not met. Although snap-trapping was cheaper and faster than live-trapping, the time difference was negligible when site preparation time was considered. Rattus diardii spp. captures were greatest in Haguruma live-traps (Standard Trading Co., Honolulu, HI) and Victor snap-traps (Woodstream Corporation, Lititz, PA), whereas Suncus murinus and Mus musculus captures were greatest in Sherman live-traps (H. B. Sherman Traps, Inc., Tallahassee, FL) and Museum Special snap-traps (Woodstream Corporation). Although snap-trapping and CPUE may have utility after validation against more rigorous methods, validation should occur across the full range of study conditions. Resources required for this level of validation would likely be better allocated towards implementing rigorous and robust methods.
Efficient mental workload estimation using task-independent EEG features.
Roy, R N; Charbonnier, S; Campagne, A; Bonnet, S
2016-04-01
Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man's vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying to some real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than are conditional logistic analyses and can be fitted to larger data sets than possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
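The equivalence noted above, that a Poisson model with stratum indicators reproduces the conditional-logistic case-crossover estimate, can be checked on synthetic data with standard tools. A hedged sketch using statsmodels (simulated counts, time-stratified year-month-weekday strata, true log-rate ratio 0.02) is:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic daily counts with a true log-rate ratio of 0.02 per unit of exposure.
rng = np.random.default_rng(3)
dates = pd.Series(pd.date_range("2010-01-01", periods=3 * 365, freq="D"))
exposure = rng.normal(20.0, 5.0, len(dates))
stratum = dates.dt.strftime("%Y-%m") + "-" + dates.dt.dayofweek.astype(str)  # time-stratified design
y = rng.poisson(np.exp(0.5 + 0.02 * exposure))

df = pd.DataFrame({"y": y, "exposure": exposure, "stratum": stratum})
# Unconditional Poisson with stratum indicators; a conditional Poisson fit returns the same
# exposure coefficient while avoiding estimation of the many stratum parameters.
fit = smf.glm("y ~ exposure + C(stratum)", data=df, family=sm.families.Poisson()).fit()
print(fit.params["exposure"])   # approximately 0.02
```

The conditioning step is what makes the approach scale to very fine strata, and it also leaves room for overdispersion and autocorrelation adjustments that the case-crossover formulation cannot accommodate, as the abstract notes.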
van Stralen, Marijn; Bosch, Johan G; Voormolen, Marco M; van Burken, Gerard; Krenning, Boudewijn J; van Geuns, Robert-Jan M; Lancée, Charles T; de Jong, Nico; Reiber, Johan H C
2005-10-01
We propose a semiautomatic endocardial border detection method for three-dimensional (3D) time series of cardiac ultrasound (US) data based on pattern matching and dynamic programming, operating on two-dimensional (2D) slices of the 3D plus time data, for the estimation of full cycle left ventricular volume, with minimal user interaction. The presented method is generally applicable to 3D US data and evaluated on data acquired with the Fast Rotating Ultrasound (FRU-) Transducer, developed by Erasmus Medical Center (Rotterdam, the Netherlands), a conventional phased-array transducer, rotating at very high speed around its image axis. The detection is based on endocardial edge pattern matching using dynamic programming, which is constrained by a 3D plus time shape model. It is applied to an automatically selected subset of 2D images of the original data set, for typically 10 equidistant rotation angles and 16 cardiac phases (160 images). Initialization requires manually drawing four contours per patient. We evaluated this method on 14 patients against MRI end-diastolic (ED) and end-systolic (ES) volumes. The semiautomatic border detection approach shows good correlations with MRI ED/ES volumes (r = 0.938) and low interobserver variability (y = 1.005x - 16.7, r = 0.943) over full-cycle volume estimations. It shows a high consistency in tracking the user-defined initial borders over space and time. We show that the ease of the acquisition using the FRU-transducer and the semiautomatic endocardial border detection method together can provide a way to quickly estimate the left ventricular volume over the full cardiac cycle using little user interaction.
Efficient mental workload estimation using task-independent EEG features
NASA Astrophysics Data System (ADS)
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. Approach. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man’s vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Main results. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
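A stripped-down version of the spectral chain (band powers fed to an LDA classifier under 10-fold cross-validation, on synthetic stand-in data and without the CSP/CCA spatial-filtering step) can be sketched as follows; it illustrates only the pipeline shape, not the reported accuracies.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256
BANDS = [(1, 4), (4, 8), (8, 12), (12, 20), (20, 30)]   # five frequency bands, Hz

def band_powers(epoch):
    """Band power features for one (channels x samples) epoch via Welch PSD."""
    f, psd = welch(epoch, fs=FS, nperseg=FS)
    df = f[1] - f[0]
    return np.hstack([psd[:, (f >= lo) & (f < hi)].sum(axis=1) * df for lo, hi in BANDS])

# Synthetic stand-in data: 80 two-second epochs, 8 channels, two workload levels.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(80, 8, 2 * FS))
labels = np.repeat([0, 1], 40)
epochs[labels == 1, :, :] += 0.3 * np.sin(2 * np.pi * 10 * np.arange(2 * FS) / FS)  # extra alpha power

X = np.array([band_powers(e) for e in epochs])
clf = LinearDiscriminantAnalysis()                      # FLDA-style classifier
print(cross_val_score(clf, X, labels, cv=10).mean())    # 10-fold cross-validated accuracy
```

The ERP-based chain reported above replaces the band-power features with spatially filtered probe-locked responses but keeps the same classification and validation scaffolding.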
Modeling ARRM Xenon Tank Pressurization Using 1D Thermodynamic and Heat Transfer Equations
NASA Technical Reports Server (NTRS)
Gilligan, Patrick; Tomsik, Thomas
2016-01-01
As a first step in understanding what ground support equipment (GSE) is required to provide external cooling during the loading of 5,000 kg of xenon into 4 aluminum lined composite overwrapped pressure vessels (COPVs), a modeling analysis was performed using Microsoft Excel. The goals of the analysis were to predict xenon temperature and pressure throughout loading at the launch facility, estimate the time required to load one tank, and to get an early estimate of what provisions for cooling xenon might be needed while the tanks are being filled. The model uses the governing thermodynamic and heat transfer equations to achieve these goals. Results indicate that a single tank can be loaded in about 15 hours with reasonable external coolant requirements. The model developed in this study was successfully validated against flight and test data. The first data set is from the Dawn mission which also utilizes solar electric propulsion with xenon propellant, and the second is test data from the rapid loading of a hydrogen cylindrical COPV. The main benefit of this type of model is that the governing physical equations using bulk fluid and solid temperatures can provide a quick and accurate estimate of the state of the propellant throughout loading which is much cheaper in terms of computational time and licensing costs than a Computational Fluid Dynamics (CFD) analysis while capturing the majority of the thermodynamics and heat transfer.
Modeling Xenon Tank Pressurization using One-Dimensional Thermodynamic and Heat Transfer Equations
NASA Technical Reports Server (NTRS)
Gilligan, Ryan P.; Tomsik, Thomas M.
2017-01-01
As a first step in understanding what ground support equipment (GSE) is required to provide external cooling during the loading of 5,000 kg of xenon into 4 aluminum lined composite overwrapped pressure vessels (COPVs), a modeling analysis was performed using Microsoft Excel. The goals of the analysis were to predict xenon temperature and pressure throughout loading at the launch facility, estimate the time required to load one tank, and to get an early estimate of what provisions for cooling xenon might be needed while the tanks are being filled. The model uses the governing thermodynamic and heat transfer equations to achieve these goals. Results indicate that a single tank can be loaded in about 15 hours with reasonable external coolant requirements. The model developed in this study was successfully validated against flight and test data. The first data set is from the Dawn mission which also utilizes solar electric propulsion with xenon propellant, and the second is test data from the rapid loading of a hydrogen cylindrical COPV. The main benefit of this type of model is that the governing physical equations using bulk fluid and solid temperatures can provide a quick and accurate estimate of the state of the propellant throughout loading which is much cheaper in terms of computational time and licensing costs than a Computational Fluid Dynamics (CFD) analysis while capturing the majority of the thermodynamics and heat transfer.
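The kind of bulk energy-balance bookkeeping described above can be sketched with an explicit Euler loop. The version below treats xenon as an ideal monatomic gas, which is a strong simplification at COPV storage densities (real-gas properties are needed for quantitative work), and all tank, flow, and cooling numbers are hypothetical apart from the roughly 1,250 kg per tank implied by loading 5,000 kg into four vessels.

```python
# Rough fill model: d(m*cv*T)/dt = mdot*cp*T_in - Q_cool, with P = m*R*T/V (ideal gas).
R = 8.314 / 0.131293            # specific gas constant of xenon, J/(kg K)
cp, cv = 2.5 * R, 1.5 * R       # monatomic ideal-gas heat capacities, J/(kg K)
V = 1.0                         # tank volume, m^3 (hypothetical)
mdot = 1250.0 / (15 * 3600.0)   # ~1,250 kg loaded over ~15 h, kg/s
T_in = 293.0                    # supply gas temperature, K (hypothetical)
Q_cool = 150.0                  # heat removed by external cooling, W (hypothetical)

m, T, dt = 5.0, 293.0, 10.0     # initial gas mass (kg), temperature (K), time step (s)
for _ in range(int(15 * 3600 / dt)):
    dTdt = (mdot * (cp * T_in - cv * T) - Q_cool) / (m * cv)   # energy balance on the tank gas
    T += dTdt * dt
    m += mdot * dt
P = m * R * T / V               # ideal-gas pressure, Pa
print(f"final T = {T:.0f} K, ideal-gas P = {P / 1e5:.0f} bar")
```

Raising Q_cool or slowing mdot is the knob such a model turns to keep the predicted gas temperature within limits, which is exactly the GSE sizing question the study addresses.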
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
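The residual-tuning idea rests on the innovation covariance relation C_nu = H P- H^T + R, so a windowed sample covariance of the measurement residuals yields a correction for R that can run alongside the filter. A minimal sketch of the measurement-noise side only (not the WIRE implementation and not Jazwinski's process-noise estimator):

```python
import numpy as np

def update_measurement_noise(residuals, H, P_pred):
    """Estimate the measurement-noise covariance R from recent filter innovations,
    using sample_cov(innovations) = H @ P_pred @ H.T + R."""
    nu = np.asarray(residuals)                     # shape (k, m): k recent innovation vectors
    C_nu = (nu.T @ nu) / len(nu)                   # sample covariance (innovations assumed zero-mean)
    R_hat = C_nu - H @ P_pred @ H.T
    # Keep the estimate positive semi-definite by flooring eigenvalues at a small value.
    w, V = np.linalg.eigh(R_hat)
    return V @ np.diag(np.clip(w, 1e-9, None)) @ V.T
```

Feeding the updated R back into the filter at each window is the sequential, parallel-running structure the abstract describes.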
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.
2013-12-01
Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.
Singh, I S; Mishra, Lokpati; Yadav, J R; Nadar, M Y; Rao, D D; Pradeepkumar, K S
2015-10-01
The estimation of the Pu/(241)Am ratio in biological samples is an important input for assessing the internal dose received by workers. Radiochemical separation of Pu isotopes and (241)Am in a sample followed by alpha spectrometry is a widely used technique for determining the Pu/(241)Am ratio. However, this method is time consuming, and a quick estimate is often required. In this work, the Pu/(241)Am ratio in a biological sample was estimated with HPGe detector-based measurements using the gamma/X-rays emitted by these radionuclides. These results were compared with those obtained from alpha spectroscopy of the sample after radiochemical analysis and were found to be in good agreement. Copyright © 2015 Elsevier Ltd. All rights reserved.
In situ method for estimating cell survival in a solid tumor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfieri, A.A.; Hahn, E.W.
1978-09-01
The response of the murine Meth-A fibrosarcoma to single and fractionated doses of x-irradiation, actinomycin D chemotherapy, and/or concomitant local tumor hyperthermia was assayed with the use of an in situ method for estimating cell kill within a solid tumor. The cell survival assay was based on a standard curve plotting number of inoculated viable cells with and without radiation-inactivated homologous tumor cells versus the time required for i.m. tumors to grow to 1.0 cu cm. The time for post-treatment tumors to grow to 1.0 cu cm was cross-referenced to the standard curve, and the number of surviving cells contributing to tumor regrowth was estimated. The resulting surviving fraction curves closely resemble those obtained with in vitro systems.
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1974-01-01
A prototype of a semi-real time system for synchronizing the Deep Space Net station clocks by radio interferometry was successfully demonstrated on August 30, 1972. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time sync estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 ns rms were achieved between Deep Space Stations 11 and 12, both at Goldstone, Calif. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to baseline and source position uncertainties and atmospheric effects are reached. These limitations are under 10 ns for transcontinental baselines.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-28
...The Federal Emergency Management Agency (FEMA) has submitted the following information collection to the Office of Management and Budget (OMB) for review and clearance in accordance with the requirements of the Paperwork Reduction Act of 1995. The submission describes the nature of the information collection, the categories of respondents, the estimated burden (i.e., the time, effort and resources used by respondents to respond) and cost, and includes the actual data collection instruments FEMA will use. There has been a change in the respondents, estimated burden, and estimated total annual burden hours from previous 30 day Notice. This change is a result of including the time, effort, and resources to collect information to be used by respondents as well as the significant decline in respondents expected.
Microprocessor utilization in search and rescue missions
NASA Technical Reports Server (NTRS)
Schwartz, M.; Bashkow, T.
1978-01-01
The position of an emergency transmitter may be determined by measuring the Doppler shift of the distress signal as received by an orbiting satellite. This requires computing an initial estimate and refining it through an iterative, nonlinear least squares estimation. A version of the algorithm was implemented and tested by locating a transmitter on the premises and obtaining observations from a satellite. The computer used was an IBM 360/95. The position was determined within the desired 10 km radius accuracy. The feasibility of performing the same task in real time using microprocessor technology was then assessed. The least squares algorithm was implemented on an Intel 8080 microprocessor. The results indicate that a microprocessor can easily match the IBM implementation in accuracy and complete the computation within the time limitations set.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
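For the simplest case of a mono-exponential fit, the analytic integration and Gaussian error propagation reduce to a few lines. The sketch below uses hypothetical time-activity data; NUKFIT itself selects among sums of exponentials with the corrected Akaike criterion and supports Bayesian constraints, none of which is shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, A, lam):
    return A * np.exp(-lam * t)

# Hypothetical time-activity data: fraction of administered activity versus time (h).
t = np.array([1, 4, 24, 48, 96, 168], dtype=float)
a = np.array([0.42, 0.38, 0.25, 0.17, 0.08, 0.02])

popt, pcov = curve_fit(mono_exp, t, a, p0=[0.4, 0.02])
A, lam = popt
tiac = A / lam                                   # analytic integral of A*exp(-lam*t) from 0 to infinity
J = np.array([1.0 / lam, -A / lam ** 2])         # Jacobian of TIAC w.r.t. (A, lam)
se = np.sqrt(J @ pcov @ J)                       # Gaussian error propagation
print(f"time-integrated activity coefficient = {tiac:.1f} h +/- {se:.1f} h")
```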
Natsume, Takahiro; Ishida, Masaki; Kitagawa, Kakuya; Nagata, Motonori; Sakuma, Hajime; Ichihara, Takashi
2015-11-01
The purpose of this study was to develop a method to determine time discrepancies between input and myocardial time-signal intensity (TSI) curves for accurate estimation of myocardial perfusion with first-pass contrast-enhanced MRI. Estimation of myocardial perfusion with contrast-enhanced MRI using kinetic models requires faithful recording of contrast content in the blood and myocardium. Typically, the arterial input function (AIF) is obtained by setting a region of interest in the left ventricular cavity. However, there is a small delay between the AIF and the myocardial curves, and such time discrepancies can lead to errors in flow estimation using Patlak plot analysis. In this study, the time discrepancies between the arterial TSI curve and the myocardial tissue TSI curve were estimated based on the compartment model. In the early phase after the arrival of the contrast agent in the myocardium, the relationship between rate constant K1 and the concentrations of Gd-DTPA contrast agent in the myocardium and arterial blood (LV blood) can be described by the equation K1={dCmyo(tpeak)/dt}/Ca(tpeak), where Cmyo(t) and Ca(t) are the relative concentrations of Gd-DTPA contrast agent in the myocardium and in the LV blood, respectively, and tpeak is the time corresponding to the peak of Ca(t). In the ideal case, the time corresponding to the maximum upslope of Cmyo(t), tmax, is equal to tpeak. In practice, however, there is a small difference in the arrival times of the contrast agent into the LV and into the myocardium. This difference was estimated to correspond to the difference between tpeak and tmax. The magnitudes of such time discrepancies and the effectiveness of the correction for these time discrepancies were measured in 18 subjects who underwent myocardial perfusion MRI under rest and stress conditions. The effects of the time discrepancies could be corrected effectively in the myocardial perfusion estimates. Copyright © 2015 Elsevier Inc. All rights reserved.
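Following the tpeak/tmax reasoning above, the time discrepancy can be read directly off the sampled curves before Patlak analysis. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def time_offset(Ca, Cmyo, dt):
    """Offset (s) between the AIF peak and the maximum upslope of the myocardial curve,
    for curves sampled every dt seconds; shift Cmyo back by this amount before Patlak analysis."""
    t_peak = np.argmax(Ca) * dt                     # peak of the arterial input function
    t_max = np.argmax(np.gradient(Cmyo, dt)) * dt   # time of maximum upslope of the tissue curve
    return t_max - t_peak
```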
Recovery time and state change of terrestrial carbon cycle after disturbance
NASA Astrophysics Data System (ADS)
Fu, Zheng; Li, Dejun; Hararuk, Oleksandra; Schwalm, Christopher; Luo, Yiqi; Yan, Liming; Niu, Shuli
2017-10-01
Ecosystems usually recover from disturbance toward a stable state, during which carbon (C) accumulates to compensate for the C loss associated with disturbance events. However, it is not well understood how likely it is for an ecosystem to recover to an alternative state and how long it takes to recover toward a stable state. Here, we synthesized the results from 77 peer-reviewed case studies that examined ecosystem recovery following disturbances to quantify state change (relative changes between pre-disturbance and fully recovered states) and recovery times for various C cycle variables and disturbance types. We found that most ecosystem C pools and fluxes fully recovered to a stable state that was not significantly different from the pre-disturbance state, except for leaf area index and net primary productivity, which were 10% and 35% higher than the pre-disturbance value, respectively, in forest ecosystems. Recovery times varied widely among variables and disturbance types in forests, with the longest recovery time required for total biomass (104 ± 33 years) and the shortest for C fluxes (23 ± 5 years). Among disturbance types, the longest and shortest recovery times were for deforestation (101 ± 28 years) and drought (10 ± 1 years), respectively. Recovery time was related to disturbance severity, with more severe disturbances requiring longer recovery times. However, in the long term, recovery had a strong tendency to drive ecosystem C accumulation towards an equilibrium state. Although we assumed disturbances are static, the recovery-related estimates and relationships revealed in this study are crucial for improving the estimates of disturbance impacts and long-term C balance in terrestrial ecosystems within a disturbance-recovery cycle.
Transmitter Pulse Estimation and Measurements for Airborne TDEM Systems
NASA Astrophysics Data System (ADS)
Vetrov, A.; Mejzr, I.
2013-12-01
The processing and interpretation of airborne time-domain EM (TDEM) data require a precise description of the transmitter parameters, including the shape, amplitude and length of the transmitted pulse. There are several ways to measure the pulse shape of the transmitter loop. The transmitted pulse can be recorded by a current monitor installed on the loop, but the current monitor readings do not give an exact image because of the monitor's own time-domain characteristics. Another way is to restore the primary pulse shape from on-time receiver data, where such data are available. The receiver gives an exact image of the primary field projection combined with the ground response, which can be minimized during a high-altitude pass, usually with the transmitter more than 1500 ft above the ground. The receiver readings depend on receiver position and orientation. Modeling of the airborne TDEM transmitter pulse allows us to compare the estimated and measured shapes of the pulse and apply the required corrections. The transmitter pulse shape of an airborne TDEM system was studied by the authors while developing the P-THEM system. The data were gathered during indoor and outdoor ground tests in Canada, as well as during flight tests in Canada and in India. The P-THEM system has a three-axis receiver suspended on a tow-cable at the midpoint between the transmitter and the helicopter. The P-THEM receiver geometry does not require bucking coils to cancel the primary field. The system records full-wave data from the receiver and from the current monitor installed on the transmitter loop, including on-time and off-time data. The modeling of the transmitter pulse allowed us to quantify the difference between estimated and measured values. The higher-accuracy pulse shape can be used for better data processing and interpretation. The developed model can be applied to similar systems and configurations.
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-07-06
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
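A minimal sketch of the reduced stochastic-death variant (GUTS-RED-SD) for a time-variable exposure profile; the parameter values and the exposure pulse below are illustrative only, and the full GUTS-proper model adds explicit toxicokinetics that are omitted here:

```python
import numpy as np

def guts_red_sd_survival(t, conc, kd, z, kk, hb):
    """GUTS-RED-SD sketch: scaled damage follows dD/dt = kd*(Cw(t) - D);
    hazard h = kk*max(D - z, 0) + hb; survival S(t) = exp(-cumulative hazard)."""
    dt = t[1] - t[0]
    D = np.zeros_like(t)
    hazard = np.full_like(t, hb)
    for i in range(1, len(t)):
        D[i] = D[i - 1] + kd * (conc[i - 1] - D[i - 1]) * dt   # explicit Euler step
        hazard[i] = kk * max(D[i] - z, 0.0) + hb
    return np.exp(-np.cumsum(hazard) * dt)

t = np.linspace(0.0, 10.0, 1001)                  # days
conc = np.where((t > 1.0) & (t < 2.0), 5.0, 0.0)  # one 1-day exposure pulse (illustrative)
S = guts_red_sd_survival(t, conc, kd=0.8, z=1.0, kk=0.5, hb=0.01)
print(f"predicted survival after 10 days: {S[-1]:.2f}")
```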
Defining Tsunami Magnitude as Measure of Potential Impact
NASA Astrophysics Data System (ADS)
Titov, V. V.; Tang, L.
2016-12-01
The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of the tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, estimating tsunami energy from available measurements at coastal sea-level stations has carried significant uncertainties and has been virtually impossible in real time, before the tsunami impacts coastlines. The slow process of estimating tsunami magnitude, including the collection of vast amounts of coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. The uncertainties of the estimates also made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since real-time seismic data are available for real-time processing and ample seismic data are available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake energy and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful for tsunami warning as a quick estimate of tsunami impact and for post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present an application of the magnitude analysis to several historical events for inter-comparison with existing methods.
Automated Determination of Magnitude and Source Length of Large Earthquakes
NASA Astrophysics Data System (ADS)
Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.
2017-12-01
Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time is still a challenge. Mw is an accurate estimate for large earthquakes. However, calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at tele-seismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapid estimation of earthquake size. Besides these methods, which involve Green's functions and inversions, there are other approaches that use empirically calibrated relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we developed an approach originating from Hara [2007], estimating magnitude by considering P-wave displacement and source duration. We instead introduced a back-projection technique [Wang et al., 2016] to estimate source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. Firstly, the source duration can be accurately determined by the seismic array. Secondly, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus source duration time) depending on the epicentral distances. It may be a promising aid for disaster mitigation right after a damaging earthquake, especially when dealing with tsunami evacuation and emergency rescue.
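The Hara (2007)-type scaling combines the maximum P-wave displacement, the source duration, and the epicentral distance. A sketch of that functional form is below; the regression coefficients a, b, c, d are placeholders rather than the published values, and the example inputs are invented:

```python
import math

def magnitude_hara_style(p_disp_m, duration_s, distance_km, a, b, c, d):
    """M = a*log10(Pmax) + b*log10(duration) + c*log10(distance) + d
    (functional form after Hara [2007]; coefficients here are placeholders)."""
    return (a * math.log10(p_disp_m)
            + b * math.log10(duration_s)
            + c * math.log10(distance_km)
            + d)

# Hypothetical coefficients and measurements, for illustration only.
m_est = magnitude_hara_style(p_disp_m=0.02, duration_s=120.0, distance_km=7000.0,
                             a=0.8, b=0.8, c=0.7, d=6.0)
print(f"estimated magnitude: {m_est:.1f}")
```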
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
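A minimal sketch of fitting the extended Tofts model to a tissue signal-time curve with a (population-averaged or individual) VIF, assuming the signal has already been converted to contrast-agent concentration; the synthetic curves and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 300.0, 5.0)  # seconds
# Illustrative population-averaged VIF (biexponential decay after bolus arrival).
cp = np.where(t >= 20.0, 5.0 * (0.8 * np.exp(-(t - 20.0) / 40.0)
                                + 0.2 * np.exp(-(t - 20.0) / 300.0)), 0.0)

def extended_tofts(t, ktrans, ve, vp):
    """Ct(t) = vp*Cp(t) + Ktrans * integral Cp(tau) exp(-(Ktrans/ve)(t - tau)) dtau."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    conv = np.convolve(cp, kernel)[: len(t)] * dt   # discrete convolution
    return vp * cp + ktrans * conv

# Synthetic "measured" tumor curve with noise, then refit against the same VIF.
ct_meas = extended_tofts(t, 0.08 / 60.0, 0.3, 0.05) + np.random.normal(0, 0.01, t.size)
popt, _ = curve_fit(extended_tofts, t, ct_meas, p0=[0.001, 0.2, 0.02],
                    bounds=([1e-5, 0.01, 0.0], [0.1, 1.0, 0.2]))
print("Ktrans (1/s), ve, vp =", popt)
```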
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g., stratifying based on patch size) and determining the effort required (e.g., number of sites versus occasions).
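A minimal sketch of the single-species, single-season occupancy likelihood underlying such models, with constant occupancy (psi) and a method-specific detection probability; the detection histories below are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit

# Rows = sites, columns = 4 occasions (3 snorkel surveys, 1 electrofishing survey).
Y = np.array([[1, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])
method = np.array([0, 0, 0, 1])   # 0 = snorkel, 1 = electrofishing

def negloglik(params):
    psi = expit(params[0])
    p = expit(params[1] + params[2] * method)    # detection varies by method
    like = np.empty(Y.shape[0])
    for i, y in enumerate(Y):
        cond = np.prod(p**y * (1 - p)**(1 - y))  # detection history given occupied
        if y.any():
            like[i] = psi * cond
        else:
            like[i] = psi * cond + (1 - psi)     # never detected: occupied-but-missed or absent
    return -np.sum(np.log(like))

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print("psi =", expit(fit.x[0]), "p(snorkel) =", expit(fit.x[1]),
      "p(electrofish) =", expit(fit.x[1] + fit.x[2]))
```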
Integrating O/S models during conceptual design, part 3
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1994-01-01
Space vehicles, such as the Space Shuttle, require intensive ground support prior to, during, and after each mission. Maintenance is a significant part of that ground support. All space vehicles require scheduled maintenance to ensure operability and performance. In addition, components of any vehicle are not one-hundred percent reliable, so they exhibit random failures. Once detected, a failure initiates unscheduled maintenance on the vehicle. Maintenance decreases the number of missions which can be completed by keeping vehicles out of service, so that the time between the completion of one mission and the start of the next is increased. Maintenance also requires resources such as people, facilities, tooling, and spare parts. Assessing the mission capability and resource requirements of any new space vehicle, in addition to its performance specification, is necessary to predict the life cycle cost and success of the vehicle. Maintenance and logistics support has been modeled by computer simulation to estimate mission capability and resource requirements for evaluation of proposed space vehicles. The simulation was written with Simulation Language for Alternative Modeling II (SLAM II) for execution on a personal computer. For either one or a fleet of space vehicles, the model simulates the preflight maintenance checks, the mission and return to earth, and the post-flight maintenance in preparation to be sent back into space. The model enables prediction of the number of missions possible and vehicle turn-time (the time between completion of one mission and the start of the next) given estimated values for component reliability and maintainability. The model also facilitates study of the manpower and vehicle requirements for the proposed vehicle to meet its desired mission rate. This is the third part of a three-part technical report.
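A minimal Monte Carlo sketch of the turn-time logic described above (scheduled maintenance plus failure-driven unscheduled maintenance); the component counts, reliabilities, and repair times are invented for illustration and do not reflect the SLAM II model:

```python
import random

def simulate_turn_time(n_components=50, p_fail_per_mission=0.02,
                       scheduled_days=10.0, mean_repair_days=2.0):
    """One mission cycle: scheduled maintenance plus repairs for random failures."""
    failures = sum(random.random() < p_fail_per_mission for _ in range(n_components))
    unscheduled = sum(random.expovariate(1.0 / mean_repair_days) for _ in range(failures))
    return scheduled_days + unscheduled

random.seed(1)
turn_times = [simulate_turn_time() for _ in range(10_000)]
print(f"mean turn-time: {sum(turn_times) / len(turn_times):.1f} days")
```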
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
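A minimal sketch of the batch-frequency approach, assuming each batch provides a series of time-offset estimates whose slope is the fractional frequency; the batch data below are synthetic, including artificial day-boundary discontinuities:

```python
import numpy as np

rng = np.random.default_rng(0)
true_y = 3e-16                       # fractional frequency difference
batches, offset = [], 0.0
for _ in range(5):                   # five 1-day batches, 300 s spacing
    t = np.arange(0.0, 86400.0, 300.0)
    x = offset + true_y * t + rng.normal(0.0, 5e-12, t.size)   # time offsets (s)
    batches.append((t, x))
    offset = x[-1] + rng.normal(0.0, 2e-10)   # discontinuity at the batch boundary

# New technique: frequency per batch (slope of x vs t), then a length-weighted average.
freqs = [np.polyfit(t, x, 1)[0] for t, x in batches]
lengths = [t[-1] - t[0] for t, _ in batches]
y_batchavg = np.average(freqs, weights=lengths)
print(f"batch-averaged fractional frequency: {y_batchavg:.2e}")
```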
Initial dynamic load estimates during configuration design
NASA Technical Reports Server (NTRS)
Schiff, Daniel
1987-01-01
This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.
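One classical example of such a rough estimate, made before any FEA, is treating a cantilevered assembly as a uniform beam: its first natural frequency and the bending stress under a quasi-static shock load factor follow from handbook formulas. The geometry, material, and load factor below are illustrative:

```python
import math

# Steel cantilever of rectangular cross-section (illustrative values).
E, rho = 200e9, 7850.0          # Pa, kg/m^3
L, b, h = 0.30, 0.05, 0.01      # m
A, I = b * h, b * h**3 / 12.0   # area, second moment of area

# First bending mode of a uniform cantilever: lambda1 = 1.8751.
lam1 = 1.8751
fn = (lam1**2 / (2.0 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))

# Peak bending stress for a quasi-static shock load factor n (in g's),
# treating the beam's own weight as the load: M = n * (rho*A*g) * L^2 / 2.
n, g = 30.0, 9.81
M = n * rho * A * g * L**2 / 2.0
sigma = M * (h / 2.0) / I
print(f"first natural frequency ~ {fn:.0f} Hz, peak bending stress ~ {sigma/1e6:.1f} MPa")
```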
Nonlinear models for estimating GSFC travel requirements
NASA Technical Reports Server (NTRS)
Buffalano, C.; Hagan, F. J.
1974-01-01
A methodology is presented for estimating travel requirements for a particular period of time. Travel models were generated using nonlinear regression analysis techniques on a data base of FY-72 and FY-73 information from 79 GSFC projects. Although the subject matter relates to GSFC activities, the type of analysis used and the manner of selecting the relevant variables would be of interest to other NASA centers, government agencies, private corporations and, in general, any organization with a significant travel budget. Models were developed for each of six types of activity: flight projects (in-house and out-of-house), experiments on non-GSFC projects, international projects, ART/SRT, data analysis, advanced studies, tracking and data, and indirects.
Charlson, Fiona J; Steel, Zachary; Degenhardt, Louisa; Chey, Tien; Silove, Derrick; Marnane, Claire; Whiteford, Harvey A
2012-01-01
Mental disorders are likely to be elevated in the Libyan population during the post-conflict period. We estimated cases of severe PTSD and depression and related health service requirements using modelling from existing epidemiological data and current recommended mental health service targets in low- and middle-income countries (LMICs). Post-conflict prevalence estimates were derived from models based on a previously conducted systematic review and meta-regression analysis of mental health among populations living in conflict. Political terror ratings and intensity of exposure to traumatic events were used in predictive models. Prevalence of severe cases was applied to chosen populations along with uncertainty ranges. Six populations deemed to be affected by the conflict were chosen for modelling: Misrata (population of 444,812), Benghazi (pop. 674,094), Zintan (pop. 40,000), displaced people within Tripoli/Zlitan (pop. 49,000), displaced people within Misrata (pop. 25,000) and Ras Jdir camps (pop. 3,700). Proposed targets for service coverage, resource utilisation and full-time equivalent staffing for management of severe cases of major depression and post-traumatic stress disorder (PTSD) are based on a published model for LMICs. Severe PTSD prevalence in populations exposed to a high level of political terror and traumatic events was estimated at 12.4% (95%CI 8.5-16.7) and was 19.8% (95%CI 14.0-26.3) for severe depression. Across all six populations (total population 1,236,600), the conflict could be associated with 123,200 (71,600-182,400) cases of severe PTSD and 228,100 (134,000-344,200) cases of severe depression; 50% of PTSD cases were estimated to co-occur with severe depression. Based upon service coverage targets, approximately 154 full-time equivalent staff would be required to respond to these cases sufficiently, which is substantially below the current level of resource estimates for these regions. This is the first attempt to predict the mental health burden and consequent service response needs of such a conflict, and is crucially timed for Libya.
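The case numbers above follow from applying prevalence estimates (with their uncertainty ranges) to the affected populations; a sketch of that arithmetic using the figures quoted in the abstract, noting that applying a single prevalence to every population is a simplification of the study's terror-level-specific modelling:

```python
# Population figures and prevalence ranges are those quoted in the abstract;
# applying one prevalence to all populations is a simplification for illustration.
populations = {"Misrata": 444_812, "Benghazi": 674_094, "Zintan": 40_000,
               "Tripoli/Zlitan IDPs": 49_000, "Misrata IDPs": 25_000,
               "Ras Jdir camps": 3_700}

ptsd = (0.124, 0.085, 0.167)        # point estimate, lower, upper (severe PTSD)
depression = (0.198, 0.140, 0.263)  # severe depression

total_pop = sum(populations.values())
for label, (est, lo, hi) in [("severe PTSD", ptsd), ("severe depression", depression)]:
    print(f"{label}: {total_pop*est:,.0f} ({total_pop*lo:,.0f}-{total_pop*hi:,.0f}) cases")
```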
Integrating Aggregate Exposure Pathway (AEP) and Adverse ...
High-throughput toxicity testing (HTT) holds the promise of providing data for tens of thousands of chemicals that currently have no data due to the cost and time required for animal testing. Interpretation of these results requires information linking the perturbations seen in vitro with adverse outcomes in vivo, and requires knowledge of how estimated exposure to the chemicals compares to the in vitro concentrations that show an effect. This abstract discusses how Adverse Outcome Pathways (AOPs) can be used to link HTT with adverse outcomes of regulatory significance and how Aggregate Exposure Pathways (AEPs) can connect concentrations of environmental stressors at a source with an expected target-site concentration, designed to provide exposure estimates that are comparable to concentrations identified in HTT. Presentation at the ICCA-LRI and JRC Workshop: Fit-For-Purpose Exposure Assessment For Risk-Based Decision Making
Connolly, M K; Cooper, C E
2014-12-01
Metabolic rate and evaporative water loss are two commonly measured physiological variables. It is therefore important, especially for comparative studies, that these variables (and others) are measured under standardised conditions, of which a resting state during the inactive phase is part of the accepted criteria. Here we show how measurement duration and timing affect these criteria and impact the estimation of basal metabolic rate (oxygen consumption and carbon dioxide production) and standard evaporative water loss of a small nocturnal rodent. Oxygen consumption, carbon dioxide production and evaporative water loss all decreased over the duration of an experiment. Random assortment of hourly values indicated that this was an animal rather than a random effect for up to 11 h. Experimental start time also had a significant effect on the measurement of physiological variables. A longer time period was required to achieve minimal carbon dioxide production and evaporative water loss when experiments commenced earlier in the day; however, experiments with earlier start times had lower overall estimates of minimal oxygen consumption and carbon dioxide production. For this species, a measurement duration of at least 8 h, ideally commencing before the inactive phase between 03:00 h and 05:00 h, is required to obtain minimal standard values for physiological variables. Up to 80% of recently published studies measuring basal metabolic rate and/or evaporative water loss of small nocturnal mammals may overestimate basal values due to insufficiently long measurement duration. Copyright © 2014 Elsevier Inc. All rights reserved.
Estimation of real-time N load in surface water using dynamic data driven application system
Y. Ouyang; S.M. Luo; L.H. Cui; Q. Wang; J.E. Zhang
2011-01-01
Agricultural, industrial, and urban activities are the major sources for eutrophication of surface water ecosystems. Currently, determination of nutrients in surface water is primarily accomplished by manually collecting samples for laboratory analysis, which requires at least 24 h. In other words, little to no effort has been devoted to monitoring real-time variations...
William T. Simpson
2006-01-01
Heat sterilization is used to kill insects and fungi in wood being traded internationally. Determining the time required to reach the kill temperature is difficult considering the many variables that can affect it, such as heating temperature, target center temperature, initial wood temperature, wood configuration dimensions, specific gravity, and moisture content. In...
A MODIS direct broadcast algorithm for mapping wildfire burned area in the western United States
S. P. Urbanski; J. M. Salmon; B. L. Nordgren; W. M. Hao
2009-01-01
Improved wildland fire emission inventory methods are needed to support air quality forecasting and guide the development of air shed management strategies. Air quality forecasting requires dynamic fire emission estimates that are generated in a timely manner to support real-time operations. In the regulatory and planning realm, emission inventories are essential for...
Spatial Modelling of Soil-Transmitted Helminth Infections in Kenya: A Disease Control Planning Tool
Pullan, Rachel L.; Gething, Peter W.; Smith, Jennifer L.; Mwandawiro, Charles S.; Sturrock, Hugh J. W.; Gitonga, Caroline W.; Hay, Simon I.; Brooker, Simon
2011-01-01
Background Implementation of control of parasitic diseases requires accurate, contemporary maps that provide intervention recommendations at policy-relevant spatial scales. To guide control of soil transmitted helminths (STHs), maps are required of the combined prevalence of infection, indicating where this prevalence exceeds an intervention threshold of 20%. Here we present a new approach for mapping the observed prevalence of STHs, using the example of Kenya in 2009. Methods and Findings Observed prevalence data for hookworm, Ascaris lumbricoides and Trichuris trichiura were assembled for 106,370 individuals from 945 cross-sectional surveys undertaken between 1974 and 2009. Ecological and climatic covariates were extracted from high-resolution satellite data and matched to survey locations. Bayesian space-time geostatistical models were developed for each species, and were used to interpolate the probability that infection prevalence exceeded the 20% threshold across the country for both 1989 and 2009. Maps for each species were integrated to estimate combined STH prevalence using the law of total probability and incorporating a correction factor to adjust for associations between species. Population census data were combined with risk models and projected to estimate the population at risk and requiring treatment in 2009. In most areas for 2009, there was high certainty that endemicity was below the 20% threshold, with areas of endemicity ≥20% located around the shores of Lake Victoria and on the coast. Comparison of the predicted distributions for 1989 and 2009 show how observed STH prevalence has gradually decreased over time. The model estimated that a total of 2.8 million school-age children live in districts which warrant mass treatment. Conclusions Bayesian space-time geostatistical models can be used to reliably estimate the combined observed prevalence of STH and suggest that a quarter of Kenya's school-aged children live in areas of high prevalence and warrant mass treatment. As control is successful in reducing infection levels, updated models can be used to refine decision making in helminth control. PMID:21347451
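Under independence, the law-of-total-probability step reduces to combining the species-specific prevalences as shown below; the multiplicative correction factor used in the study to adjust for between-species associations is represented here only as a placeholder:

```python
def combined_sth_prevalence(p_hookworm, p_ascaris, p_trichuris, correction=1.0):
    """P(any STH) = 1 - (1-p1)(1-p2)(1-p3), optionally scaled by a correction
    factor for associations between species (placeholder value here)."""
    p_any = 1.0 - (1.0 - p_hookworm) * (1.0 - p_ascaris) * (1.0 - p_trichuris)
    return min(correction * p_any, 1.0)

# Illustrative prevalences for one location (not values from the paper).
p = combined_sth_prevalence(0.10, 0.08, 0.05)
print(f"combined prevalence: {p:.2%}; exceeds 20% treatment threshold: {p > 0.20}")
```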
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Wood, Eric W; Zhu, Lei
A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
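A minimal sketch of the binning idea: each road segment of a proposed trip is mapped to a (road type, traffic condition, driving profile) bin with an associated energy rate, and the rates are summed over segment distances. The bin labels and energy rates below are invented for illustration:

```python
# Energy rate (kWh/mile) per (road_type, traffic, profile) bin -- illustrative values.
energy_rate = {
    ("highway", "free_flow", "moderate"): 0.28,
    ("highway", "congested", "moderate"): 0.34,
    ("arterial", "free_flow", "moderate"): 0.31,
    ("arterial", "congested", "aggressive"): 0.42,
}

def trip_energy(segments):
    """segments: list of (miles, road_type, traffic, profile)."""
    return sum(miles * energy_rate[(road, traffic, profile)]
               for miles, road, traffic, profile in segments)

trip = [(12.0, "highway", "free_flow", "moderate"),
        (3.5, "arterial", "congested", "aggressive"),
        (1.2, "arterial", "free_flow", "moderate")]
print(f"estimated trip energy: {trip_energy(trip):.2f} kWh")
```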
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation.
Staton, April G.; McCullough, Elizabeth S.; Jain, Rahul; Miller, Mindi S.; Lynn Stevenson, T.; Fetterman, James W.; Lynn Parham, R.; Sheffield, Melody C.; Unterwagner, Whitney L.; McDuffie, Charles H.
2012-01-01
Objective. To document the annual number of advanced pharmacy practice experience (APPE) placement changes for students across 5 colleges and schools of pharmacy, identify and compare initiating reasons, and estimate the associated administrative workload. Methods. Data collection occurred from finalization of the 2008-2009 APPE assignments through the last date of the APPE schedule. Internet-based customized tracking forms were used to categorize the initiating reason for the placement change and the administrative time required per change (0 to 120 minutes). Results. APPE placement changes per institution varied from 14% to 53% of total assignments. Reasons for changes were: administrator initiated (20%), student initiated (23%), and site/preceptor initiated (57%). Total administrative time required for placement changes varied across institutions from 3,130 to 22,750 minutes, while the average time per reassignment was 42.5 minutes. Conclusion. APPE placements are subject to high instability. Significant differences exist between public and private colleges and schools of pharmacy as to the number and type of APPE reassignments made and associated workload estimates. PMID:22544966
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age–class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who-infected-who’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process. We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
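One concrete illustration of how an incidence series and a known generation-time distribution yield a maximum-likelihood estimate of the effective reproductive number: under a Poisson renewal model, I_t ~ Poisson(R_e * sum_s w_s I_{t-s}), and the MLE of a constant R_e has a closed form. The incidence series and generation-time weights below are invented:

```python
import numpy as np

incidence = np.array([2, 3, 5, 7, 11, 16, 23, 30, 41, 55])   # daily cases (synthetic)
w = np.array([0.2, 0.5, 0.2, 0.1])                            # generation-time pmf (days 1-4)

# Expected infection pressure Lambda_t = sum_s w_s * I_{t-s}.
lam = np.array([np.sum(w[:t][::-1] * incidence[max(0, t - len(w)):t])
                for t in range(1, len(incidence))])
I_t = incidence[1:]

# Poisson MLE of a constant effective reproductive number: R = sum(I_t) / sum(Lambda_t).
R_hat = I_t.sum() / lam.sum()
print(f"estimated R_e = {R_hat:.2f}")
```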
The Radial Speed-Expansion Speed Relation for Earth-Directed CMEs
NASA Technical Reports Server (NTRS)
Makela, P.; Gopalswamy, N.; Yashiro, S.
2016-01-01
Earth-directed coronal mass ejections (CMEs) are the main drivers of major geomagnetic storms. Therefore, a good estimate of the disturbance arrival time at Earth is required for space weather predictions. The STEREO and SOHO spacecraft were viewing the Sun in near quadrature during January 2010 to September 2012, providing a unique opportunity to study the radial speed (V_rad) to expansion speed (V_exp) relationship of Earth-directed CMEs. This relationship is useful in estimating the V_rad of Earth-directed CMEs when they are observed from the Earth view only. We selected 19 Earth-directed CMEs observed by the Large Angle and Spectrometric Coronagraph (LASCO)/C3 coronagraph on SOHO and the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI)/COR2 coronagraph on STEREO during January 2010 to September 2012. We found that of the three tested geometric CME models, the full ice-cream cone model best describes the V_rad to V_exp relationship, as suggested by earlier investigations. We also tested the prediction accuracy of the empirical shock arrival (ESA) model proposed by Gopalswamy et al. (2005a), while estimating the CME propagation speeds from the CME expansion speeds. If we use STEREO observations to estimate the CME width required to calculate V_rad from the V_exp measurements, the mean absolute error (MAE) of the shock arrival times of the ESA model is 8.4 hours. If the LASCO measurements are used to estimate the CME width, the MAE still remains below 17 hours. Therefore, by using the simple V_rad to V_exp relationship to estimate the V_rad of Earth-directed CMEs, the ESA model is able to predict the shock arrival times with accuracy comparable to most other, more complex models.
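For reference, a commonly quoted form of the full ice-cream cone relation is V_rad = (1/2)(1 + cot w) V_exp, where w is the CME half-width; treat the exact relation and the example numbers below as assumptions for illustration rather than as the paper's fitted result:

```python
import math

def radial_from_expansion(v_exp_kms, half_width_deg):
    """Full ice-cream cone geometry (assumed form): V_rad = 0.5*(1 + cot(w))*V_exp."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp_kms

# Illustrative Earth-directed CME: expansion speed 900 km/s, half-width 45 degrees.
print(f"V_rad ~ {radial_from_expansion(900.0, 45.0):.0f} km/s")
```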
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
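A minimal sketch of a common-gain noise reduction filter of the kind referred to above: a single real-valued spectral gain, derived from estimated noise and noisy-speech power spectra, is applied identically to the left and right channels so that interaural cues are untouched. The STFT parameters and the decision-directed smoothing constant are illustrative, the inputs are assumed to be float sample arrays, and the noise PSD is assumed to be already estimated (for example by one of the blocking architectures):

```python
import numpy as np

def common_gain_mmse(left, right, noise_psd, n_fft=512, hop=128, alpha=0.98):
    """Apply one Wiener-type gain G = xi/(1+xi) to both channels (cue-preserving)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(left) - n_fft) // hop
    out_l, out_r = np.zeros_like(left), np.zeros_like(right)
    prev_clean_psd = noise_psd.copy()
    for m in range(n_frames):
        L = np.fft.rfft(left[m * hop:m * hop + n_fft] * win)
        R = np.fft.rfft(right[m * hop:m * hop + n_fft] * win)
        noisy_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
        # Decision-directed a priori SNR estimate.
        xi = alpha * prev_clean_psd / noise_psd + (1 - alpha) * np.maximum(
            noisy_psd / noise_psd - 1.0, 0.0)
        gain = xi / (1.0 + xi)                       # common Wiener gain
        prev_clean_psd = (gain ** 2) * noisy_psd
        out_l[m * hop:m * hop + n_fft] += np.fft.irfft(gain * L) * win   # overlap-add
        out_r[m * hop:m * hop + n_fft] += np.fft.irfft(gain * R) * win
    return out_l, out_r

# Toy usage on synthetic noise-only input (illustrative, not a listening-test signal).
fs = 16000
rng = np.random.default_rng(0)
noisy_l, noisy_r = rng.normal(0, 1, 4 * fs), rng.normal(0, 1, 4 * fs)
noise_psd = np.full(257, np.sum(np.hanning(512) ** 2))   # flat PSD of unit-variance noise
enh_l, enh_r = common_gain_mmse(noisy_l, noisy_r, noise_psd)
```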
NASA Astrophysics Data System (ADS)
Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.
2015-05-01
The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable enough for control requirements, which means the contact friction characteristics between the tire and the road should be recognized before the control intervention to ensure the safety of the driver and passengers against drifting and loss of control. In addition, the estimation method should be stable and feasible for complex maneuvering operations to guarantee the control performance as well. A signal fusion method that combines the available signals to estimate the road friction is suggested in this paper, building on individual estimates for braking, driving and steering conditions. From the input characteristics and the states of the vehicle and tires obtained from sensors, the maneuvering condition may be recognized; from this, the certainty factors of the friction estimates for the three conditions mentioned above may be obtained, and then the comprehensive road friction may be calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy the control demands.
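A minimal sketch of the certainty-weighted fusion step: condition-specific friction estimates are combined using certainty factors derived from the recognized maneuvering condition. The certainty factors and friction estimates below are invented:

```python
def fuse_friction(estimates, certainties):
    """Certainty-weighted combination of braking/driving/steering friction estimates."""
    total = sum(certainties.values())
    if total == 0.0:
        raise ValueError("no condition currently provides information")
    return sum(certainties[k] * estimates[k] for k in estimates) / total

estimates   = {"braking": 0.82, "driving": 0.78, "steering": 0.70}
certainties = {"braking": 0.6, "driving": 0.1, "steering": 0.3}  # from condition recognition
print(f"fused road friction coefficient: {fuse_friction(estimates, certainties):.2f}")
```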
Tracking the time-varying cortical connectivity patterns by adaptive multivariate estimators.
Astolfi, L; Cincotti, F; Mattia, D; De Vico Fallani, F; Tocci, A; Colosimo, A; Salinari, S; Marciani, M G; Hesse, W; Witte, H; Ursino, M; Zavaglia, M; Babiloni, F
2008-03-01
The directed transfer function (DTF) and the partial directed coherence (PDC) are frequency-domain estimators that are able to describe interactions between cortical areas in terms of the concept of Granger causality. However, the classical estimation of these methods is based on multivariate autoregressive modelling (MVAR) of time series, which requires stationarity of the signals. In this way, transient pathways of information transfer remain hidden. The objective of this study is to test a time-varying multivariate method for the estimation of rapidly changing connectivity relationships between cortical areas of the human brain, based on DTF/PDC and on the use of adaptive MVAR modelling (AMVAR), and to apply it to a set of real high-resolution EEG data. This approach will allow the observation of rapidly changing influences between the cortical areas during the execution of a task. The simulation results indicated that time-varying DTF and PDC are able to estimate correctly the imposed connectivity patterns under reasonable operative conditions of signal-to-noise ratio (SNR) and number of trials. An SNR of five and a number of trials of at least 20 provide good accuracy in the estimation. After testing the method by the simulation study, we provide an application to the cortical estimates obtained from high-resolution EEG data recorded from a group of healthy subjects during a combined foot-lips movement and present the time-varying connectivity patterns resulting from the application of both DTF and PDC. Two different cortical networks were detected with the proposed methods, one constant across the task and the other evolving during the preparation of the joint movement.
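For a fitted (possibly time-varying) MVAR model with coefficient matrices A_k, the DTF at frequency f follows from the transfer matrix H(f) = (I - sum_k A_k e^{-i 2 pi f k / fs})^{-1}; a sketch of that computation for one set of coefficients is below. In the adaptive case the same formula is simply evaluated with the time-varying coefficients A_k(t). The example coefficients are arbitrary:

```python
import numpy as np

def dtf(A, freqs, fs):
    """Directed transfer function from MVAR coefficients A with shape (p, n, n)."""
    p, n, _ = A.shape
    out = np.zeros((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Af = np.eye(n, dtype=complex)
        for k in range(p):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)
        # Row-normalized squared magnitude: inflow to channel i from channel j.
        out[fi] = np.abs(H) ** 2 / np.sum(np.abs(H) ** 2, axis=1, keepdims=True)
    return out

A = np.array([[[0.5, 0.3], [0.0, 0.4]],     # order-2, 2-channel example coefficients
              [[-0.2, 0.0], [0.1, -0.1]]])
gamma2 = dtf(A, freqs=np.linspace(1, 40, 40), fs=128.0)
print(gamma2[10])   # DTF matrix at ~11 Hz
```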
Close the gap for vision: The key is to invest on coordination.
Hsueh, Ya-seng Arthur; Dunt, David; Anjou, Mitchell D; Boudville, Andrea; Taylor, Hugh
2013-12-01
The study aims to estimate the costs required for coordination and case management activities that support access to treatment for the three most common eye conditions among Indigenous Australians: cataract, refractive error and diabetic retinopathy. Coordination activities were identified using in-depth interviews, focus groups and face-to-face consultations. Data were collected at 21 sites across Australia. The estimation of costs used salary data from relevant government websites and was organised by diagnosis and type of coordination activity. Urban and remote regions of Australia. Needs-based provision of support services to facilitate access to eye care for cataract, refractive error and diabetic retinopathy for Indigenous Australians. Cost (AUD$ in 2011) of equivalent full-time (EFT) coordination staff. The annual coordination workforce required for the three eye conditions was 8.3 EFT staff per 10 000 Indigenous Australians. The annual cost of the eye care coordination workforce was estimated to be AUD$21 337 012 in 2011. This innovative, 'activity-based' model identified the workforce required to support the provision of eye care for Indigenous Australians and estimated their costs. The findings are of clear value to government funders and other decision makers. The model can potentially be used to estimate staffing and associated costs for other Indigenous and non-Indigenous health needs. © 2013 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.
NASA Astrophysics Data System (ADS)
Otsuka, Mioko; Hasegawa, Yasuhiro; Arisaka, Taichi; Shinozaki, Ryo; Morita, Hiroyuki
2017-11-01
The dimensionless figure of merit and its efficiency for the transient response of a Π-shaped thermoelectric module are estimated according to the theory of impedance spectroscopy. The effective dimensionless figure of merit is described as a function of the product of the characteristic time to reduce the temperature and the representative angular frequency of the module, which is expressed by the thermal diffusivity and the length of the elements used. The characteristic time required for achieving a higher dimensionless figure of merit and efficiency is derived quantitatively for the transient response using the properties of a commercial thermoelectric module.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
Equipment for the Transient Capture of Chaotic Microwave Signals
2017-09-14
Over-sampling by a factor of 8 is required so that the effective number of bits can be increased from the actual bit resolution. The equipment provides real-time acquisition of transient signals with analog bandwidths up to 70 GHz for one channel and 30 GHz for two channels.
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
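As a simplified illustration of the prediction step: if the norovirus load decays approximately exponentially during depuration at rate k, the minimum depuration time to bring an initial load C0 down to a management level C_target is t = ln(C0/C_target)/k. The numbers below are hypothetical, and the paper's full model additionally accounts for the distribution of initial loads across the shellfish population:

```python
import math

def min_depuration_time(c0, c_target, decay_rate_per_hour):
    """Hours needed for exponential decay from c0 to c_target (illustrative model)."""
    if c0 <= c_target:
        return 0.0
    return math.log(c0 / c_target) / decay_rate_per_hour

# Illustrative: initial load 2000 copies/g, target 200 copies/g,
# decay rate 0.02 per hour (all values hypothetical).
t = min_depuration_time(2000.0, 200.0, 0.02)
print(f"minimum depuration time: {t:.0f} h")
```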
Chong, Ka Chun; Zee, Benny Chung Ying; Wang, Maggie Haitian
2018-04-10
In an influenza pandemic, arrival times of cases are a proxy of the epidemic size and disease transmissibility. Because of intense surveillance of travelers from infected countries, detection is more rapid and complete than with local surveillance. Travel information can provide a more reliable estimation of transmission parameters. We developed an Approximate Bayesian Computation algorithm to estimate the basic reproduction number (R0) in addition to the reporting rate and unobserved epidemic start time, utilizing travel and routine surveillance data in an influenza pandemic. A simulation was conducted to assess the sampling uncertainty. The estimation approach was further applied to the 2009 influenza A/H1N1 pandemic in Mexico as a case study. In the simulations, we showed that the estimation approach was valid and reliable in different simulation settings. We also found estimates of R0 and the reporting rate to be 1.37 (95% Credible Interval [CI]: 1.26-1.42) and 4.9% (95% CI: 0.1%-18%), respectively, in the 2009 influenza pandemic in Mexico, which were robust to variations in the fixed parameters. The estimated R0 was consistent with that in the literature. This method is useful for officials to obtain reliable estimates of disease transmissibility for strategic planning. We suggest that improvements to the flow of reporting for confirmed cases among patients arriving at different countries are required. Copyright © 2018 Elsevier Ltd. All rights reserved.
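As a schematic of the estimation machinery (not the authors' model), the rejection-ABC sketch below pairs a deliberately crude daily branching-process epidemic with binomial reporting and accepts parameter draws whose simulated cumulative reported count is close to an invented 'observed' value. The priors, tolerance and summary statistic are all assumptions of the sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_reported(r0, report_rate, days=30, seed_cases=1):
        """Crude daily branching process: each case produces Poisson(r0) cases the next
        day, and each new case is reported with probability report_rate."""
        new, reported = seed_cases, 0
        for _ in range(days):
            new = rng.poisson(r0 * new)
            reported += rng.binomial(new, report_rate)
            if new == 0 or new > 1_000_000:
                break
        return reported

    observed = 350                     # invented cumulative reported count after 30 days

    # Rejection ABC: draw (R0, reporting rate) from priors, keep draws whose simulated
    # summary statistic lands within 10% of the observed one.
    accepted = []
    for _ in range(20_000):
        r0 = rng.uniform(1.0, 2.5)
        rho = rng.uniform(0.01, 0.3)
        if abs(simulate_reported(r0, rho) - observed) <= 0.1 * observed:
            accepted.append((r0, rho))

    accepted = np.array(accepted)
    if len(accepted):
        print("approximate posterior means: R0 ~", accepted[:, 0].mean(),
              ", reporting rate ~", accepted[:, 1].mean())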
Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions.
Muy, S; Kundu, A; Lacoste, D
2013-09-28
We show how to extract an estimate of the entropy production from a sufficiently long time series of stationary fluctuations of chemical reactions. This method, which is based on recent work on fluctuation theorems, is direct, non-invasive, does not require any knowledge about the underlying dynamics and is applicable even when only partial information is available. We apply it to simple stochastic models of chemical reactions involving a finite number of states, and for this case, we study how the estimate of dissipation is affected by the degree of coarse-graining present in the input data.
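A much simpler plug-in relative of this idea (not the fluctuation-theorem estimator of the paper) counts forward and reverse transitions in a coarse-grained stationary trajectory and sums the affinity-weighted net currents; under coarse-graining this yields a lower-bound estimate of the entropy production rate. The three-state transition matrix below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate a stationary 3-state Markov chain whose rates break detailed balance
    # (a hypothetical stand-in for a coarse-grained chemical time series).
    P = np.array([[0.80, 0.15, 0.05],
                  [0.05, 0.80, 0.15],
                  [0.15, 0.05, 0.80]])
    dt = 1.0                                         # sampling interval of the time series
    states = [0]
    for _ in range(200_000):
        states.append(rng.choice(3, p=P[states[-1]]))
    states = np.array(states)

    # Count observed transitions i -> j.
    counts = np.zeros((3, 3))
    np.add.at(counts, (states[:-1], states[1:]), 1.0)

    # Plug-in estimate (units of k_B per unit time):
    # sigma ~ (1/T) * sum_{i<j} (n_ij - n_ji) * ln(n_ij / n_ji).
    T_total = dt * (len(states) - 1)
    sigma = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            if counts[i, j] > 0 and counts[j, i] > 0:
                sigma += (counts[i, j] - counts[j, i]) * np.log(counts[i, j] / counts[j, i])
    print("estimated entropy production rate:", sigma / T_total, "k_B per unit time")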
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
Energetics of sows and gilts in gestation crates in the cold.
Verstegen, M W; Curtis, S E
1988-11-01
Seventy pregnant sows and gilts in gestation crates with unbedded concrete-slat floors and partitions in common (which permitted contact by neighbors) in a closed house with air temperature 10 to 12 degrees C during cold weather were studied for 3 wk. The animals' lower critical temperature and thermoregulatory heat and feed requirements were estimated from measured variables, including ME intake, body weight and its change and body surface temperature, and other calculated values and assumptions. Estimates for a 165-kg sow or gilt in such an environment were: lower critical temperature = 15 degrees C; thermoregulatory heat requirement = 126 to 161 kcal/d per 1 C degree of coldness (higher as pregnancy progresses); and thermoregulatory feed requirement = 42 to 54 g/d per 1 C degree of coldness (assuming 3 kcal ME/g of diet). The sow's lower critical temperature was affected by state of pregnancy; in late pregnancy it was 1.6 to 2.6 C degrees lower than in early pregnancy. These estimates of the pregnant sow's thermoregulatory heat and feed requirements at effective environmental temperatures below the lower critical temperature accord well with those published before. But this estimate of the pregnant sow's lower critical temperature is approximately 5 C degrees lower than several made in laboratory settings on animals held individually, with no opportunity to huddle. The fact that every sow and gilt in this experiment could make mechanical contact with at least one neighbor at all times, and sometimes two, might account for much of the difference in lower critical temperature estimates.
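The conversion from the thermoregulatory heat requirement to the feed requirement is plain arithmetic under the stated assumption of 3 kcal ME per gram of diet; the check below simply reproduces the reported 42 to 54 g/d range.

    heat_req = (126.0, 161.0)     # kcal/d per 1 C degree of coldness (reported range)
    me_density = 3.0              # kcal ME per g of diet (stated assumption)
    feed_req = tuple(h / me_density for h in heat_req)
    print(feed_req)               # -> (42.0, ~53.7) g/d per degree, i.e. the reported 42-54 g/d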
A Cost Analysis of the American Board of Internal Medicine's Maintenance-of-Certification Program.
Sandhu, Alexander T; Dudley, R Adams; Kazi, Dhruv S
2015-09-15
In 2014, the American Board of Internal Medicine (ABIM) substantially increased the requirements and fees for its maintenance-of-certification (MOC) program. Faced with mounting criticism, the ABIM suspended certain content requirements in February 2015 but retained the increased fees and number of modules. An objective appraisal of the cost of MOC would help inform upcoming consultations about MOC reform. To estimate the total cost of the 2015 version of the MOC program ("2015 MOC") and the incremental cost relative to the 2013 version ("2013 MOC"). Decision analytic model. Published literature. All ABIM-certified U.S. physicians. 10 years (2015 to 2024). Societal. 2015 MOC. Testing costs (ABIM fees) and time costs (monetary value of physician time). Internists will incur an average of $23 607 (95% CI, $5380 to $66 383) in MOC costs over 10 years, ranging from $16 725 for general internists to $40 495 for hematologists-oncologists. Time costs account for 90% of MOC costs. Cumulatively, 2015 MOC will cost $5.7 billion over 10 years, $1.2 billion more than 2013 MOC. This includes $5.1 billion in time costs (resulting from 32.7 million physician-hours spent on MOC) and $561 million in testing costs. Costs are sensitive to time spent on MOC and MOC credits obtainable from current continuing education activities. Precise estimates of time required for MOC are not available. The ABIM MOC program will generate considerable costs, predominantly due to demands on physician time. A rigorous evaluation of its effect on clinical and economic outcomes is warranted to balance potential gains in health care quality and efficiency against the high costs identified in this study. University of California, San Francisco, and the U.S. Department of Veterans Affairs.
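The reported totals imply an average monetary valuation of physician time that is not stated explicitly in the abstract; the back-of-the-envelope check below derives it from the published figures (an illustration, not a number from the study).

    time_cost_total = 5.1e9        # USD over 10 years (reported)
    physician_hours = 32.7e6       # hours over 10 years (reported)
    testing_cost = 561e6           # USD over 10 years (reported)

    implied_hourly_value = time_cost_total / physician_hours
    total = time_cost_total + testing_cost
    print(f"implied value of physician time: ~${implied_hourly_value:,.0f}/hour")   # ~ $156/hour
    print(f"time costs as a share of the total: {time_cost_total / total:.0%}")     # ~ 90%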
Palomar-Aumatell, Xavier; Subirana-Casacuberta, Mireia; Mila-Villarroel, Raimon
2017-12-01
To determine which interventions within the Nursing Interventions Classification are most often applied in intensive care units and to validate the time required for each. A three-stage e-Delphi was conducted; 21 panelists were recruited, seven manager nurses and 14 clinical nurses with higher degrees and more than five years' experience in intensive care nursing. The first round explored the most common interventions applied. Additionally, panelists were asked to propose others. In the second round, participants reflected on the interventions where no consensus was reached and estimated the time required for each intervention. In the third, panelists were queried about the time required for the interventions for which consensus regarding the time was not reached. A total of 183 interventions were included; 50% were from the "Physiological: Complex" domain. The list included 52 (90%) of the 58 "core interventions for critical care nursing" identified in the Nursing Interventions Classification. The time required for 89.1% of the interventions was the same as in the Nursing Interventions Classification seminal work recommendations. The results provide a clear picture of nursing activity in general intensive care units, allow the Nursing Interventions Classification to be tailored to the Catalonian context and confirm the findings of previous studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
A methodology for long range prediction of air transportation
NASA Technical Reports Server (NTRS)
Ayati, M. B.; English, J. M.
1980-01-01
The paper describes a methodology for long-range projection of aircraft fuel requirements. A new concept of social and economic factors for the future aviation industry, which provides an estimate of predicted fuel usage, is presented; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables which represent the entire aircraft industry on a macroscale. The model was evaluated by testing its required output variables against a model based on historical data from past decades.
Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.
2015-01-01
The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
NASA Astrophysics Data System (ADS)
Gedeon, M.; Vandersteen, K.; Rogiers, B.
2012-04-01
Radionuclide concentrations in aquifers represent an important indicator in estimating the impact of a planned surface disposal for low and medium level short-lived radioactive waste in Belgium, developed by the Belgian Agency for Radioactive Waste and Enriched Fissile Materials (ONDRAF/NIRAS), which also coordinates and leads the corresponding research. Estimating aquifer concentrations for individual radionuclides represents a computational challenge because (a) different retardation values are applied to different hydrogeologic units and (b) sequential decay reactions with radionuclides of various sorption characteristics cause long computational times until a steady-state is reached. The presented work proposes a methodology that substantially reduces the computational effort by postprocessing the results of a prior non-reactive tracer simulation. These advective transport results represent the steady-state concentration-to-source-flux ratio and the breakthrough time at each modelling cell. These two variables are further used to estimate the individual radionuclide concentrations by (a) scaling the steady-state concentrations to the source fluxes of individual radionuclides; (b) applying the radioactive decay and ingrowth in a decay chain; (c) scaling the travel time by the retardation factor and (d) applying linear sorption. While all steps except (b) require solving simple linear equations, applying ingrowth of individual radionuclides in decay chains requires solving the differential Bateman equation. This equation needs to be solved once for a unit radionuclide activity at all arrival times found in the numerical grid. The ratios between the parent nuclide activity and the progeny activities are then used in the postprocessing. Results are presented for discrete points and examples of radioactive plume maps are given. These results compare well to the results achieved using a full numerical simulation including the respective chemical reaction processes. Although the proposed method represents a fast way to estimate the radionuclide concentrations without performing time-consuming simulations, its applicability has some limits. The radionuclide source needs to be assumed constant during the period of achieving a steady-state in the model. Otherwise, the source variability of individual radionuclides needs to be modelled using a numerical simulation. However, such a situation only occurs in cases of source variability in a period until steady-state is reached and such a simulation takes a relatively short time. The proposed method enables an effective estimation of individual radionuclide concentrations in the frame of performance assessment of a radioactive waste disposal. Reducing the calculation time to a minimum enables performing sensitivity and uncertainty analyses, testing alternative models, etc., thus enhancing the overall quality of the modelling analysis.
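Step (b), ingrowth along a decay chain for a unit parent inventory, amounts to solving the Bateman equations. One compact way to do this, sketched below for a hypothetical three-member chain, is to exponentiate the chain's rate matrix; the resulting progeny-to-parent activity ratios at each breakthrough time are the factors used in the postprocessing described above.

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical three-member chain N1 -> N2 -> N3, half-lives in years.
    half_lives = np.array([1.0e4, 5.0e2, 3.0e1])
    lam = np.log(2.0) / half_lives

    # Rate matrix A of the linear system dN/dt = A N for a straight decay chain.
    A = np.diag(-lam) + np.diag(lam[:-1], k=-1)

    def activities(t, n0=(1.0, 0.0, 0.0)):
        """Nuclide activities lambda_i * N_i(t) for initial atom numbers n0."""
        return lam * (expm(A * t) @ np.asarray(n0, dtype=float))

    for t in (1.0e2, 1.0e3, 1.0e4):      # e.g. breakthrough times taken from the flow model
        a = activities(t)
        print(f"t = {t:.0e} y, progeny/parent activity ratios:", a[1:] / a[0])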
Effect of sampling rate and record length on the determination of stability and control derivatives
NASA Technical Reports Server (NTRS)
Brenner, M. J.; Iliff, K. W.; Whitman, R. K.
1978-01-01
Flight data from five aircraft were used to assess the effects of sampling rate and record length reductions on estimates of stability and control derivatives produced by a maximum likelihood estimation method. Derivatives could be extracted from flight data with the maximum likelihood estimation method even if there were considerable reductions in sampling rate and/or record length. Small amplitude pulse maneuvers showed greater degradation of the derivative estimates than large amplitude pulse maneuvers when these reductions were made. Reducing the sampling rate was found to be more desirable than reducing the record length as a method of lessening the total computation time required without greatly degrading the quality of the estimates.
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2016-01-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991–2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
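The particle-filter construction can be illustrated with a generic one-dimensional bootstrap filter that tracks a slowly decaying angular velocity from noisy 'afferent' samples. This is a schematic of the technique rather than the authors' velocity-storage model, and all noise levels are hypothetical; note that the weighted spread of the particles at each step is exactly the quantity the abstract describes using to set the filter gain.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical 'true' angular velocity (deg/s) decaying after a velocity step,
    # observed through noisy afferent-like samples.
    dt, n_steps = 0.01, 2000
    true_omega = 60.0 * np.exp(-np.arange(n_steps) * dt / 15.0)
    meas = true_omega + rng.normal(0.0, 5.0, size=n_steps)

    # Bootstrap particle filter: random-walk state model, Gaussian measurement likelihood.
    n_particles = 2000
    process_sd, meas_sd = 0.5, 5.0
    particles = rng.normal(0.0, 20.0, size=n_particles)
    estimates = np.empty(n_steps)

    for k in range(n_steps):
        particles += rng.normal(0.0, process_sd, size=n_particles)   # propagate
        w = np.exp(-0.5 * ((meas[k] - particles) / meas_sd) ** 2)    # weight by likelihood
        w /= w.sum()
        estimates[k] = np.sum(w * particles)                         # posterior-mean estimate
        # The weighted spread of the particles is the quantity one could use to set a gain.
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample

    print("final estimate vs truth:", estimates[-1], true_omega[-1])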
Robust Low-dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization
Zhang, Shaoting; Chen, Tsuhan; Sanelli, Pina C.
2016-01-01
Acute brain diseases such as acute strokes and transient ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total deaths every year. ‘Time is brain’ is a widely accepted concept in acute cerebrovascular disease treatment. An efficient and accurate computational framework for hemodynamic parameter estimation can save critical time for thrombolytic therapy. Meanwhile, the high level of accumulated radiation dosage due to continuous image acquisition in CT perfusion (CTP) has raised concerns about patient safety and public health. However, low radiation leads to increased noise and artifacts, which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out on a digital perfusion phantom as well as on in-vivo clinical subjects, in terms of sensitivity to noise levels, estimation accuracy, and contrast preservation. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-the-art algorithms, with the peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions. PMID:25706579
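The paper's tensor total-variation formulation is spatio-temporal and uses a purpose-built fast solver; the one-dimensional toy below only conveys the flavour of TV-regularized deconvolution, recovering a piecewise-constant signal from a blurred, noisy measurement by gradient descent on a smoothed TV penalty. Kernel, noise level and regularization weight are arbitrary.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 1D deconvolution: recover a piecewise-constant signal x from y = k*x + noise
    # with a (smoothed) total-variation penalty, by plain gradient descent.
    n = 200
    x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[140:160] = 0.5
    k = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); k /= k.sum()     # blur kernel
    y = np.convolve(x_true, k, mode="same") + rng.normal(0.0, 0.02, n)

    def conv(v):   return np.convolve(v, k, mode="same")
    def conv_T(v): return np.convolve(v, k[::-1], mode="same")           # adjoint (odd kernel)

    lam, eps, step = 0.01, 1e-2, 0.5
    x = np.zeros(n)
    for _ in range(2000):
        r = conv(x) - y                        # data-fit residual
        d = np.diff(x)
        s = d / np.sqrt(d ** 2 + eps)          # gradient of the smoothed TV term
        tv_grad = np.zeros(n)
        tv_grad[:-1] -= s
        tv_grad[1:] += s
        x -= step * (conv_T(r) + lam * tv_grad)

    print("relative reconstruction error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))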
Retrieving Baseflow from SWOT Mission
NASA Astrophysics Data System (ADS)
Baratelli, F.; Flipo, N.; Biancamaria, S.; Rivière, A.
2017-12-01
The quantification of aquifer contribution to river discharge is of primary importance to evaluate the impact of climatic and anthropogenic stresses on the availability of water resources. Several baseflow estimation methods require river discharge measurements, which can be difficult to obtain at high spatio-temporal resolution for large scale basins. The SWOT satellite mission will provide discharge estimations for large rivers (50 - 100 m wide) even in remote basins. The frequency of these estimations depends on the position and ranges from zero to four values in the 21-day satellite cycle. This work aims at answering the following question: can baseflow be estimated from SWOT observations during the mission lifetime? An algorithm based on hydrograph separation by Chapman's filter was developed to automatically estimate the baseflow in a river network at regional or larger scale (> 10000 km2). The algorithm was first applied using the discharge time series simulated at a daily time step by a coupled hydrological-hydrogeological model to obtain the reference baseflow estimations. The same algorithm was then forced with discharge time series sampled at the SWOT observation frequency. The methodology was applied to the Seine River basin (65000 km2, France). The results show that the average baseflow is estimated with good accuracy for all the reaches which are observed at least once per cycle (relative bias less than 4%). The time evolution of baseflow is also rather well retrieved, with a Nash coefficient above 0.7 for 94% of the network length. This work provides new potential for the SWOT mission in terms of global hydrological analysis.
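For concreteness, one common one-parameter form of the recursive filter (the Chapman and Maxwell variant, which may differ in detail from the implementation used in the study) is sketched below on a placeholder daily discharge series; re-running it on a subsampled series crudely mimics the comparison against SWOT-frequency sampling.

    import numpy as np

    rng = np.random.default_rng(5)

    # Placeholder daily discharge series (m3/s): slow seasonal baseflow plus storm pulses.
    days = np.arange(365)
    q = (40.0 + 15.0 * np.sin(2.0 * np.pi * days / 365.0)
         + rng.gamma(0.3, 60.0, size=365) * (rng.random(365) < 0.1))

    def chapman_baseflow(q, a=0.925):
        """One-parameter recursive baseflow filter (Chapman and Maxwell form), with the
        usual constraint that baseflow never exceeds total discharge."""
        b = np.empty_like(q, dtype=float)
        b[0] = q[0]
        for i in range(1, len(q)):
            b[i] = min(a / (2.0 - a) * b[i - 1] + (1.0 - a) / (2.0 - a) * q[i], q[i])
        return b

    print("mean baseflow from the daily series:", chapman_baseflow(q).mean())
    # Crude mimic of sparser, satellite-like sampling of the same discharge series.
    print("mean baseflow from a 10-day subsample:", chapman_baseflow(q[::10]).mean())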
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed-up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
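The core of the von Neumann approach is compact: shift one light curve by a trial lag, merge the two series, order the merged points in time, and compute the mean-square successive difference of the fluxes; the lag minimizing this randomness measure is the estimate. The sketch below is a bare-bones version on synthetic data, without the error weighting, flux normalization and optimizations a real implementation would include.

    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic driving light curve and a delayed, irregularly sampled, noisy echo.
    true_lag = 12.0
    t_grid = np.arange(0.0, 400.0, 0.1)
    signal = np.cumsum(rng.normal(0.0, 1.0, t_grid.size))        # random-walk variability
    def sample(times, lag=0.0):
        return np.interp(times - lag, t_grid, signal)

    t1 = np.sort(rng.uniform(50.0, 350.0, 120))
    t2 = np.sort(rng.uniform(50.0, 350.0, 120))
    f1 = sample(t1) + rng.normal(0.0, 0.5, t1.size)
    f2 = sample(t2, lag=true_lag) + rng.normal(0.0, 0.5, t2.size)

    def von_neumann(t, f):
        """Mean-square successive difference of a time-ordered series."""
        order = np.argsort(t)
        return np.mean(np.diff(f[order]) ** 2)

    # Shifting the echo back by the correct lag minimizes the 'randomness' of the
    # combined, time-ordered light curve.
    lags = np.arange(0.0, 30.0, 0.25)
    scores = [von_neumann(np.concatenate([t1, t2 - lag]), np.concatenate([f1, f2]))
              for lag in lags]
    print("estimated lag:", lags[int(np.argmin(scores))], " (true:", true_lag, ")")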
Estimating Green Net National Product for Puerto Rico: An Economic Measure of Sustainability
NASA Astrophysics Data System (ADS)
Wu, Shanshan; Heberling, Matthew T.
2016-04-01
This paper presents the data sources and methodology used to estimate Green Net National Product (GNNP), an economic metric of sustainability, for Puerto Rico. Using the change in GNNP as a one-sided test of weak sustainability (i.e., positive growth in GNNP is not enough to show the economy is sustainable), we measure the movement away from sustainability by examining the change in GNNP from 1993 to 2009. In order to calculate GNNP, we require both economic and natural capital data, but limited data for Puerto Rico require a number of simplifying assumptions. Based on the environmental challenges faced by Puerto Rico, we include damages from air emissions and solid waste, the storm protection value of mangroves and the value of extracting crushed stone as components in the depreciation of natural capital. Our estimate of GNNP also includes the value of time, which captures the effects of technological progress. The results show that GNNP had an increasing trend over the 17 years studied with two periods of negative growth (2004-2006 and 2007-2008). Our additional analysis suggests that the negative growth in 2004-2006 was possibly due to a temporary economic downturn. However, the negative growth in 2007-2008 was likely from the decline in the value of time, suggesting the island of Puerto Rico was moving away from sustainability during this time.
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, a task hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-Chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
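A heavily simplified schematic of the local-in-global idea is given below: an outer random-walk Metropolis search over a single nonlinear in-stream parameter, with an inner linear non-negative least-squares fit standing in for the nested Gauss-Newton estimation of the shape-free travel-time weights (profiling rather than full hierarchical sampling, and a toy kernel model in place of the actual transport equations).

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)

    # Toy breakthrough-curve model: c(t) = sum_j g_j * k(t - tau_j; sigma), i.e. a
    # shape-free weight g_j on each fixed travel-time bin tau_j, combined with an
    # in-stream kernel whose width sigma is the nonlinear parameter.
    t = np.linspace(0.0, 50.0, 200)
    tau = np.arange(2.0, 40.0, 2.0)                    # travel-time bins
    def design(sigma):
        return np.exp(-0.5 * ((t[:, None] - tau[None, :]) / sigma) ** 2)

    g_true = np.exp(-((tau - 12.0) ** 2) / 40.0)       # hypothetical travel-time distribution
    noise_sd = 0.02
    c_obs = design(2.0) @ g_true + rng.normal(0.0, noise_sd, t.size)

    def inner_fit(sigma):
        """Local step: best non-negative shape-free weights for a given sigma."""
        return nnls(design(sigma), c_obs)              # returns (g, residual norm)

    # Outer global search: random-walk Metropolis on sigma, profiling out g each step.
    sigma = 3.0
    _, rnorm = inner_fit(sigma)
    logpost = -0.5 * (rnorm / noise_sd) ** 2
    chain = []
    for _ in range(2000):
        prop = abs(sigma + rng.normal(0.0, 0.2))
        _, rn = inner_fit(prop)
        lp = -0.5 * (rn / noise_sd) ** 2
        if np.log(rng.random()) < lp - logpost:        # Metropolis acceptance
            sigma, logpost = prop, lp
        chain.append(sigma)
    print("posterior mean of sigma:", np.mean(chain[500:]), "(true value 2.0)")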
Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.
Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan
2013-06-01
The aim of this work is to reduce the cost of required sampling for the estimation of the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). The limited sampling strategy (LSS) models were established and validated by multiple regression models using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of jackknife validation showed that 10 (25.0 %) of the 40 LSS based on the regression analysis were not within an APE of 15 % using one concentration-time point. 90.2, 91.5 and 92.4 % of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical applications according to requirements.
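Operationally, a limited sampling model of this kind is a multiple regression of the full AUC on a few selected concentrations. The sketch below fits and checks such a model on simulated one-compartment profiles; the kinetic parameters, the 2 h and 12 h sampling points (chosen to echo the key times above) and the resulting coefficients are illustrative, not the published gliclazide model.

    import numpy as np

    rng = np.random.default_rng(8)

    # Simulated one-compartment oral profiles with between-subject variability
    # (hypothetical kinetics, not a published gliclazide model), dose 80 mg.
    t_dense = np.linspace(0.0, 60.0, 241)
    n = 200
    ka = rng.lognormal(np.log(1.0), 0.3, n)       # 1/h
    ke = rng.lognormal(np.log(0.08), 0.3, n)      # 1/h
    V = rng.lognormal(np.log(20.0), 0.2, n)       # L
    C = (80.0 * ka / (V * (ka - ke)))[:, None] * \
        (np.exp(-np.outer(ke, t_dense)) - np.exp(-np.outer(ka, t_dense)))
    auc_true = ((C[:, 1:] + C[:, :-1]) / 2.0 * np.diff(t_dense)).sum(axis=1)   # trapezoidal AUC0-60

    # Limited sampling model: AUC ~ b0 + b1*C(2 h) + b2*C(12 h).
    idx2, idx12 = t_dense.searchsorted(2.0), t_dense.searchsorted(12.0)
    X = np.column_stack([np.ones(n), C[:, idx2], C[:, idx12]])
    train, test = slice(0, 150), slice(150, None)
    beta, *_ = np.linalg.lstsq(X[train], auc_true[train], rcond=None)
    pred = X[test] @ beta

    ape = np.abs(pred - auc_true[test]) / auc_true[test] * 100.0
    rmse = np.sqrt(np.mean((pred - auc_true[test]) ** 2))
    print("median APE %:", np.median(ape), " RMSE:", rmse, " coefficients:", beta)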
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2016-10-15
The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
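The regression route to EVPPI can be shown in a few lines: regress each option's simulated net benefit on the parameter(s) of interest, then compare the mean of the per-sample maximum fitted value with the maximum of the mean net benefits. The two-option decision model below is a toy with hypothetical parameters, and plain polynomial least squares stands in for the GP or INLA regression discussed above.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 10_000

    # Probabilistic sensitivity analysis samples for a toy two-option decision model.
    theta = rng.normal(0.35, 0.3, n)       # parameter of interest (e.g. treatment effect)
    psi = rng.normal(1000.0, 300.0, n)     # other uncertain parameter (e.g. cost offset)
    wtp = 20_000.0
    nb = np.column_stack([
        np.zeros(n),                                 # option 0: status quo
        wtp * 0.05 * theta - 800.0 + 0.5 * psi,      # option 1: hypothetical new treatment
    ])

    max_of_means = nb.mean(axis=0).max()             # value of the current best decision

    # EVPPI for theta via regression: fit E[NB_d | theta], then E[max_d fitted] - max_d E[NB_d].
    design = np.column_stack([theta ** p for p in range(4)])   # cubic polynomial basis
    fitted = np.column_stack([
        design @ np.linalg.lstsq(design, nb[:, d], rcond=None)[0] for d in range(2)
    ])
    evppi = fitted.max(axis=1).mean() - max_of_means
    print("EVPPI for theta (same monetary units as net benefit):", evppi)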
1984-05-23
Because the cost accounting reports provide the historical cost information for the cost estimating reports, we also tested the reasonableness of... accounting and cost estimating reports must be based on timely and accurate information. The reports, therefore, require the continual attention of... accounting system reported less than half the value of site direct charges (labor, materials, equipment usage, and other costs) that should have been
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and recorded in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Flood frequency analysis - the challenge of using historical data
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn
2015-04-01
Estimates of high flood quantiles are needed for many applications; e.g., dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. The flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since the record length of data is limited compared to the desired flood quantile, the estimated flood magnitudes are based on a high degree of extrapolation. For example, the longest time series available in Norway are around 120 years, and as a result any estimation of a 1000-year flood will require extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods before the streamflow was systematically gauged. Such information could be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of using historical flood data for at-site flood frequency estimation. The historical floods were included in two ways by assuming: (1) the size of (all) floods above a high threshold within a time interval is known; and (2) the number of floods above a high threshold for a time interval is known. We used a Bayesian model formulation, with MCMC used for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e. both sampling and parameter uncertainty are accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway. In 2014 the largest flood in the systematic record was observed. From written documentation and flood marks we had information about three severe floods in the 18th century, which were likely to have exceeded the 2014 flood. We evaluated the added value in two ways. First we used the 123-year-long streamflow time series and investigated the effect of having several shorter series which could be supplemented with a limited number of known large flood events. Then we used the three historical floods from the 18th century combined with the whole and subsets of the 123 years of systematic observations. In the latter case several challenges were identified: (i) the possibility of converting water levels to river streamflows, given man-made changes in the river profile, and (ii) the stationarity of the data might be questioned, since the three largest historical floods occurred during the "little ice age", with climatic conditions different from today's.
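A maximum-likelihood sketch of the second way of using history (only the number of threshold exceedances in the historical period is known) is given below: the systematic annual maxima contribute GEV density terms and the historical period contributes a binomial term for the count of exceedances of the perception threshold. Synthetic data and scipy's GEV parameterization are used, and the Bayesian MCMC treatment of the study is replaced by a point estimate for brevity.

    import numpy as np
    from scipy.stats import genextreme, binom
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    # Synthetic systematic record (123 annual maxima) plus a historical period of h years
    # in which only the number k of exceedances of a perception threshold is known.
    c_true, loc_true, scale_true = -0.1, 100.0, 30.0
    sys_max = genextreme.rvs(c_true, loc_true, scale_true, size=123, random_state=rng)
    h, threshold = 150, 250.0
    k = int((genextreme.rvs(c_true, loc_true, scale_true, size=h, random_state=rng) > threshold).sum())

    def neg_log_lik(params):
        c, loc, log_scale = params
        scale = np.exp(log_scale)
        ll = genextreme.logpdf(sys_max, c, loc, scale).sum()        # systematic data
        p_exc = genextreme.sf(threshold, c, loc, scale)             # annual exceedance probability
        ll += binom.logpmf(k, h, p_exc)                             # censored historical information
        return -ll

    fit = minimize(neg_log_lik, x0=[-0.1, np.median(sys_max), np.log(sys_max.std())],
                   method="Nelder-Mead")
    c_hat, loc_hat, scale_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
    print("estimated 1000-year flood:", genextreme.isf(1e-3, c_hat, loc_hat, scale_hat))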
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
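Richardson-Lucy deconvolution itself is a short multiplicative iteration. The one-dimensional sketch below sharpens a toy pulse blurred by a known single-photoelectron-like response (both synthetic, noise omitted), in the spirit of the waveform processing described above rather than as a reproduction of it; the printed 10-90% rise comparison shows the faster rising edge that improves timing pickoff.

    import numpy as np

    # Toy scintillation-like pulse blurred by a known single-photoelectron response (SPR);
    # noise is omitted to keep the sketch short.
    t = np.arange(400)
    pulse = np.where(t >= 100, np.exp(-(t - 100) / 40.0), 0.0)
    spr = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2); spr /= spr.sum()
    waveform = np.convolve(pulse, spr, mode="same") + 1e-3      # measured, strictly positive

    def richardson_lucy(d, psf, n_iter=100):
        """Classic multiplicative RL iteration: u <- u * (psf_flipped * (d / (psf * u)))."""
        u = np.full_like(d, d.mean())
        psf_flip = psf[::-1]
        for _ in range(n_iter):
            est = np.convolve(u, psf, mode="same")
            u *= np.convolve(d / np.maximum(est, 1e-12), psf_flip, mode="same")
        return u

    restored = richardson_lucy(waveform, spr)

    def rise_samples(w):
        """Number of samples between the 10% and 90% crossings on the rising edge."""
        peak = int(np.argmax(w))
        rising = w[:peak + 1]
        return int(np.argmax(rising > 0.9 * w.max())) - int(np.argmax(rising > 0.1 * w.max()))

    print("10-90% rise (samples), measured vs deconvolved:",
          rise_samples(waveform), rise_samples(restored))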
Evaluation of AUC(0-4) predictive methods for cyclosporine in kidney transplant patients.
Aoyama, Takahiko; Matsumoto, Yoshiaki; Shimizu, Makiko; Fukuoka, Masamichi; Kimura, Toshimi; Kokubun, Hideya; Yoshida, Kazunari; Yago, Kazuo
2005-05-01
Cyclosporine (CyA) is the most commonly used immunosuppressive agent in patients who undergo kidney transplantation. Dosage adjustment of CyA is usually based on trough levels. Recently, monitoring based on trough levels has increasingly been replaced by the area under the concentration-time curve during the first 4 h after CyA administration (AUC(0-4)). The aim of this study was to compare the predictive values obtained using three different methods of AUC(0-4) monitoring. AUC(0-4) was calculated from 0 to 4 h in early and stable renal transplant patients using the trapezoidal rule. The predicted AUC(0-4) was calculated using three different methods: the multiple regression equation reported by Uchida et al.; Bayesian estimation for modified population pharmacokinetic parameters reported by Yoshida et al.; and modified population pharmacokinetic parameters reported by Cremers et al. The predicted AUC(0-4) was assessed on the basis of predictive bias, precision, and correlation coefficient. The predicted AUC(0-4) values obtained using the three methods with measurement of three blood samples showed small differences in predictive bias, precision, and correlation coefficient. In the prediction of AUC(0-4) from measurement of one blood sample from stable renal transplant patients, the performance of the regression equation reported by Uchida depended on sampling time. On the other hand, the performance of Bayesian estimation with modified pharmacokinetic parameters reported by Yoshida through measurement of one blood sample, which is not dependent on sampling time, showed a small difference in the correlation coefficient. The prediction of AUC(0-4) using a regression equation required accurate sampling time. In this study, the prediction of AUC(0-4) using Bayesian estimation did not require accurate sampling time in the AUC(0-4) monitoring of CyA. Thus, Bayesian estimation is considered to be clinically useful in the dosage adjustment of CyA.
DOT National Transportation Integrated Search
1998-01-01
The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...
ESTIMATION OF GIARDIA CT VALUES AT HIGH PH FOR THE SURFACE WATER TREATMENT RULE
The U.S. Environmental Protection Agency currently recommends Ct (disinfectant concentration multiplied by the exposure time) values to achieve required levels of inactivation of Giardia lamblia cysts by different disinfectants including free chlorine. Current guidance covers ina...
Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...
Surviving an Information Systems Conversion.
ERIC Educational Resources Information Center
Neel, Don
1999-01-01
Prompted by the "millennium bug," many school districts are in the process of replacing non-Y2K-compliant information systems. Planners should establish a committee to develop performance criteria and select the winning proposal, estimate time requirements, and schedule retraining during low-activity periods. (MLH)
Cross-validation of recent and longstanding resting metabolic rate prediction equations
USDA-ARS?s Scientific Manuscript database
Resting metabolic rate (RMR) measurement is time consuming and requires specialized equipment. Prediction equations provide an easy method to estimate RMR; however, their accuracy likely varies across individuals. Understanding the factors that influence predicted RMR accuracy at the individual lev...
Thompson, A J; Weary, D M; von Keyserlingk, M A G
2017-05-01
The electronic equipment used on farms can be creatively co-opted to collect data for which it was not originally designed. In the current study, we describe 2 novel algorithms that harvest data from electronic feeding equipment and data loggers used to record standing and lying behavior, to estimate the time that dairy cows spend away from their pen to be milked. Our 2 objectives were to (1) measure the ability of the first algorithm to estimate the time cows spend away from the pen as a group and (2) determine the capability of a second algorithm to estimate the time it takes for individual cows to return to their pen after being milked. To achieve these objectives, we conducted 2 separate experiments: first, to estimate group time away, the feeding behavior of 1 pen of 20 Holstein cows was monitored electronically for 1 mo; second, to measure individual latency to return to the pen, feeding and lying behavior of 12 healthy Holstein cows was monitored electronically from parturition to 21 d in milk. For both experiments, we monitored the time each individual cow exited the pen before each milking and when she returned to the pen after milking using video recordings. Estimates generated by our algorithms were then compared with the times captured from the video recordings. Our first algorithm provided reliable pen-based estimates for the minimum time cows spent away from the pen to be milked in the morning [coefficient of determination (R2) = 0.92] and afternoon (R2 = 0.96). The second algorithm was able to estimate the time it took for individual cows to return to the pen after being milked in the morning (R2 = 0.98), but less so in the afternoon (R2 = 0.67). This study illustrates how data from electronic systems used to assess feeding and lying behavior can be mined to estimate novel measures. New work is now required to improve the estimates of our algorithm for individuals, for example by adding data from other electronic monitoring systems on the farm. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
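The two algorithms themselves are specific to the commercial feeders and loggers used, but the underlying idea, harvesting 'absence gaps' from behavioural event logs around scheduled milkings, can be sketched generically as below; the event log, milking times and search window are invented.

    from datetime import datetime, timedelta

    # Invented event log (feeder visits and lying-bout ends) and milking start times.
    fmt = "%H:%M"
    events = sorted(datetime.strptime(s, fmt) for s in
                    ["04:50", "05:02", "05:10", "07:25", "07:40", "12:05", "14:55", "17:35", "17:50"])
    milkings = [datetime.strptime(s, fmt) for s in ["05:30", "15:30"]]

    def time_away(events, milking, window=timedelta(hours=4)):
        """Gap between the last logged event before a milking and the first event after it."""
        before = [e for e in events if e <= milking and milking - e <= window]
        after = [e for e in events if e > milking and e - milking <= window]
        if not before or not after:
            return None                      # not enough data around this milking
        return after[0] - before[-1]

    for m in milkings:
        print(m.strftime(fmt), "estimated time away from pen:", time_away(events, m))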
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
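The tractability claim is easy to verify in one dimension: taking offsets from the first observation removes the unknown starting location, and the remaining vector is zero-mean Gaussian with covariance sigma^2 * min(dt_i, dt_j) + tau^2 * (delta_ij + 1), so the exact likelihood is a single multivariate-normal evaluation. The sketch below recovers the diffusion and measurement-error variances of a simulated track by maximizing that likelihood (a simplified 1D stand-in for the BBMM machinery, with invented parameters).

    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.optimize import minimize

    rng = np.random.default_rng(10)

    # Simulate a 1D Brownian track observed at irregular times with Gaussian measurement error.
    sigma2_true, tau2_true = 4.0, 1.0
    t = np.sort(rng.uniform(0.0, 100.0, 80))
    x = np.cumsum(np.concatenate([[0.0], rng.normal(0.0, np.sqrt(sigma2_true * np.diff(t)))]))
    z = x + rng.normal(0.0, np.sqrt(tau2_true), t.size)

    # Offsets from the first observation are zero-mean Gaussian with a Brownian covariance
    # plus a measurement-error term (delta_ij + 1, because the first fix's error is shared).
    dt = t[1:] - t[0]
    z_rel = z[1:] - z[0]

    def neg_log_lik(log_params):
        sigma2, tau2 = np.exp(log_params)
        cov = sigma2 * np.minimum.outer(dt, dt) + tau2 * (np.eye(dt.size) + 1.0)
        return -multivariate_normal.logpdf(z_rel, mean=np.zeros(dt.size), cov=cov)

    fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    print("estimated (sigma^2, tau^2):", np.exp(fit.x), " true:", (sigma2_true, tau2_true))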
Biogeographic Dating of Speciation Times Using Paleogeographically Informed Processes
Landis, Michael J.
2017-01-01
Standard models of molecular evolution cannot estimate absolute speciation times alone, and require external calibrations to do so, such as fossils. Because fossil calibration methods rely on the incomplete fossil record, a great number of nodes in the tree of life cannot be dated precisely. However, many major paleogeographical events are dated, and since biogeographic processes depend on paleogeographical conditions, biogeographic dating may be used as an alternative or complementary method to fossil dating. I demonstrate how a time-stratified biogeographic stochastic process may be used to estimate absolute divergence times by conditioning on dated paleogeographical events. Informed by the current paleogeographical literature, I construct an empirical dispersal graph using 25 areas and 26 epochs for the past 540 Ma of Earth’s history. Simulations indicate biogeographic dating performs well so long as paleogeography imposes constraint on biogeographic character evolution. To gauge whether biogeographic dating may be of practical use, I analyzed the well-studied turtle clade (Testudines) to assess how well biogeographic dating fares when compared to fossil-calibrated dating estimates reported in the literature. Fossil-free biogeographic dating estimated the age of the most recent common ancestor of extant turtles to be from the Late Triassic, which is consistent with fossil-based estimates. Dating precision improves further when including a root node fossil calibration. The described model, paleogeographical dispersal graph, and analysis scripts are available for use with RevBayes. PMID:27155009
Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.
2008-01-01
In this study we used hydrologic proxies to develop a daily sediment load time-series, which agrees with decadal sediment load estimates, when integrated. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers, to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data to the Delta are available after 1930 and 1959, respectively, but bathymetric change simulations for San Francisco Bay prior to this require a method to generate daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time-series of daily flows, to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates, over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time-series represents the hydraulic mining period with sustained periods of increased sediment loads, and a dramatic decrease after 1910, corresponding to a reduction in available mining debris. The analogue selection procedure also permits exploration of the morphological hydrograph concept, where a limited set of hydrographs is used to simulate the same bathymetric change as the actual set of hydrographs. The final daily sediment load time-series and morphological hydrograph concept will be applied as landward boundary conditions for hindcasting simulations of bathymetric change in San Francisco Bay.
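Two of the key steps, least-squares selection of a modern analogue year from monthly flow shapes and conversion of the scaled daily hydrograph to sediment load through a power-law rating curve, are sketched below with entirely synthetic flows and placeholder coefficients; the calendar handling and the calibration against the decadal load estimates are omitted.

    import numpy as np

    rng = np.random.default_rng(14)

    # Placeholder data: reconstructed monthly unimpaired flows for one historical year
    # and a small library of modern analogue years with known daily hydrographs.
    months_hist = np.array([3.0, 4.5, 6.0, 9.0, 7.0, 3.5, 1.5, 1.0, 1.0, 1.5, 2.0, 2.5])
    modern_daily = {
        yr: np.maximum(rng.gamma(2.0, 50.0, 365) *
                       (1.0 + np.sin(2.0 * np.pi * (np.arange(365) - 60 + 10 * yr) / 365.0)), 1.0)
        for yr in range(1967, 1972)
    }

    def monthly_means(daily):                  # approximate months (12 equal blocks)
        return np.array([m.mean() for m in np.array_split(daily, 12)])

    def shape(v):                              # normalize to compare hydrograph shapes
        return v / v.sum()

    # Step 1: least-squares selection of the modern analogue year.
    scores = {yr: np.sum((shape(monthly_means(d)) - shape(months_hist)) ** 2)
              for yr, d in modern_daily.items()}
    analogue = min(scores, key=scores.get)

    # Step 2: scale the analogue's daily hydrograph to the reconstructed annual volume,
    # then convert flow to sediment load with a power-law rating curve.
    annual_volume = 1.8e10                     # m3/yr, placeholder dendrochronology estimate
    daily = modern_daily[analogue]
    daily_m3s = daily * annual_volume / (daily.sum() * 86400.0)
    a, b = 1.0e-4, 1.6                         # placeholder rating-curve coefficients
    sediment_load = a * daily_m3s ** b         # e.g. tonnes/day
    print("analogue year:", analogue, " annual sediment load:", sediment_load.sum())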
Work measurement for estimating food preparation time of a bioregenerative diet
NASA Technical Reports Server (NTRS)
Olabi, Ammar; Hunter, Jean; Jackson, Peter; Segal, Michele; Spies, Rupert; Wang, Carolyn; Lau, Christina; Ong, Christopher; Alexander, Conor; Raskob, Evan;
2003-01-01
During space missions, such as the prospective Mars mission, crew labor time is a strictly limited resource. The diet for such a mission (based on crops grown in a bioregenerative life support system) will require astronauts to prepare their meals essentially from raw ingredients. Time spent on food processing and preparation is time lost for other purposes. Recipe design and diet planning for a space mission should therefore incorporate the time required to prepare the recipes as a critical factor. In this study, videotape analysis of an experienced chef was used to develop a database of recipe preparation time. The measurements were highly consistent among different measurement teams. Data analysis revealed a wide variation between the active times of different recipes, underscoring the need for optimization of diet planning. Potential uses of the database developed in this study are discussed and illustrated in this work.
Dasgupta, Nilanjan; Carin, Lawrence
2005-04-01
Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch; hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters is then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here within a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these features applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code in conjunction with a channel propagation model, and the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI is presented, along with feature extraction and target classification via the RVM.
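A compact, illustrative sketch of two steps in this kind of pipeline: a crude MAP estimate taken from Gibbs-sampler draws, and a back-propagation (matched-correlation) image formed with a forward model evaluated at those MAP parameters. The function `greens_fn` stands in for the normal-mode forward model, and none of the names correspond to the authors' implementation:

```python
import numpy as np

def map_from_gibbs(samples, bins=50):
    """Crude per-parameter MAP estimate from Gibbs-sampler draws:
    take the mode of a histogram of each marginal posterior."""
    samples = np.atleast_2d(samples)              # shape: (n_draws, n_params)
    map_est = []
    for col in samples.T:
        counts, edges = np.histogram(col, bins=bins)
        k = np.argmax(counts)
        map_est.append(0.5 * (edges[k] + edges[k + 1]))
    return np.array(map_est)

def time_reversal_image(received_spectrum, greens_fn, grid, freqs, theta_map):
    """Back-propagate the received spectrum through the forward model evaluated
    at the MAP channel parameters; large |image| marks a likely source location."""
    image = np.zeros(len(grid))
    for i, r in enumerate(grid):
        g = np.array([greens_fn(r, f, theta_map) for f in freqs])  # forward model
        image[i] = np.abs(np.vdot(g, received_spectrum)) ** 2      # matched correlation
    return image
```

Features for the RVM classifier would then be extracted from the resulting images; the RVM itself is omitted from this sketch.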
Net migration estimation in an extended, multiregional gravity model.
Foot, D K; Milne, W J
1984-02-01
A multi-regional framework is developed to analyze net migration over time to all 10 Canadian provinces within an integrated system of equations. "An extended gravity model is the basis for the equation specification and the use of constrained econometric estimation techniques allows for the provincial interdependence of the migration decision while at the same time ensuring that an important system-wide requirement is respected." The model is estimated using official Canadian data for the 1960s and 1970s. "The results suggest the predominance of the push factor for interprovincial migration for most provinces, although net migration to the Atlantic provinces is also shown to be subject to pull forces from the rest of the country." The effects of wage rate variables, unemployment, and political disturbances in Quebec on interprovincial migration are noted. (excerpt)
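As an illustration of the kind of constrained econometric estimation mentioned above, the sketch below implements generic restricted least squares (minimize ||y - Xb||² subject to Rb = q), one standard way to impose a system-wide adding-up restriction on stacked provincial equations; it is not the authors' specification:

```python
import numpy as np

def restricted_ls(X, y, R, q):
    """Restricted least squares: minimize ||y - X b||^2 subject to R b = q.
    b_r = b_ols - (X'X)^-1 R' [R (X'X)^-1 R']^-1 (R b_ols - q)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    A = R @ XtX_inv @ R.T
    correction = XtX_inv @ R.T @ np.linalg.solve(A, R @ b_ols - q)
    return b_ols - correction
```

For a net migration system, R could encode the requirement that fitted net flows sum to zero across the ten provinces in each year.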
More realistic power estimation for new user, active comparator studies: an empirical example.
Gokhale, Mugdha; Buse, John B; Pate, Virginia; Marquis, M Alison; Stürmer, Til
2016-04-01
Pharmacoepidemiologic studies are often expected to be sufficiently powered to study rare outcomes, but power is lost sequentially as study design options that minimize bias are implemented. We illustrate this using a study comparing pancreatic cancer incidence after initiating dipeptidyl-peptidase-4 inhibitors (DPP-4i) versus thiazolidinediones or sulfonylureas. We identified Medicare beneficiaries with at least one claim for DPP-4i or a comparator during 2007-2009 and then applied the following steps: (i) exclude prevalent users, (ii) require a second prescription of the same drug, (iii) exclude prevalent cancers, (iv) exclude patients aged <66 years and (v) censor for treatment changes during follow-up. Power to detect hazard ratios ≥ 2.0 (an effect measure strongly driven by the number of events), estimated after step 5, was compared with the naïve power estimated prior to step 1. There were 19,388 DPP-4i and 28,846 thiazolidinedione initiators during 2007-2009. The number of drug initiators dropped most after requiring a second prescription, the number of outcomes dropped most after excluding patients with prevalent cancer, and person-time dropped most after requiring a second prescription and as-treated censoring. The naïve power (>99%) was considerably higher than the power obtained after the final step (~75%). In designing new-user, active-comparator studies, one should be mindful of how steps that minimize bias affect sample size, the number of outcomes and person-time. While actual numbers will depend on the specific setting, applying generic percentage losses will improve power estimates compared with the naïve approach, which largely ignores the steps taken to increase validity. Copyright © 2015 John Wiley & Sons, Ltd.
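For intuition about why power tracks the number of events, here is a small sketch using Schoenfeld's approximation for the power of a two-group Cox model; the event count and exposure proportion in the example are hypothetical, and this formula is not necessarily the one used by the authors:

```python
import numpy as np
from scipy.stats import norm

def cox_power(n_events, hazard_ratio, p_exposed, alpha=0.05):
    """Approximate power to detect `hazard_ratio` in a two-group Cox model,
    given the total number of events, via Schoenfeld's formula:
    power = Phi(|ln HR| * sqrt(d * p * (1 - p)) - z_{1-alpha/2})."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z = abs(np.log(hazard_ratio)) * np.sqrt(n_events * p_exposed * (1 - p_exposed)) - z_alpha
    return norm.cdf(z)

# Hypothetical example: power to detect HR >= 2.0 with 60 events,
# 40% of person-time in the exposed group
# print(cox_power(60, 2.0, 0.40))
```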
Heberling, Matthew T; Templeton, Joshua J; Wu, Shanshan
2012-11-30
This paper presents the data sources and methodology used to estimate Green Net Regional Product (GNRP), a green accounting approach, for the San Luis Basin (SLB). We measured movement away from sustainability by examining the change in GNRP over time. Any attempt at green accounting requires both economic and natural capital data; however, the limited data available for the Basin required a number of simplifying assumptions and the transformation of economic data at the national, state, and county levels to the level of the SLB. Given the contribution of agribusiness to the SLB, we included the depletion of both groundwater and soil as components of the depreciation of natural capital. We also captured the effect of energy consumption on climate change for future generations through carbon dioxide (CO₂) emissions. To estimate the depreciation of natural capital, the shadow price of water for agriculture, the economic damages from wind-driven soil erosion, and the social cost of carbon emissions were obtained from the literature and applied to the SLB using benefit transfer. We used Colorado's total factor productivity for agriculture to estimate the value of time (i.e., to include the effects of exogenous technological progress). We aggregated the economic data and the depreciation of natural capital for the SLB from 1980 to 2005. The results suggest that GNRP had a slight upward trend through most of this period, despite temporary negative trends, the longest of which occurred from 1985-86 to 1987-88. However, given the upward trend in GNRP and the possibility that business cycles caused the temporary declines, there is no definitive evidence of movement away from sustainability. Published by Elsevier Ltd.
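A minimal sketch of the green-accounting identity implied above (conventional net product, minus the depreciation of natural capital, plus the value of time); the argument names and the simple additive form are assumptions for illustration, not the paper's exact accounting:

```python
def green_net_regional_product(net_regional_product,
                               groundwater_depletion_value,
                               soil_erosion_damage,
                               co2_emissions_tonnes,
                               social_cost_of_carbon,
                               value_of_time):
    """Illustrative identity: GNRP = conventional net regional product
    - depreciation of natural capital (groundwater, soil, CO2 damages)
    + value of time (exogenous technological progress)."""
    natural_capital_depreciation = (groundwater_depletion_value
                                    + soil_erosion_damage
                                    + co2_emissions_tonnes * social_cost_of_carbon)
    return net_regional_product - natural_capital_depreciation + value_of_time
```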