Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
Kaur, Gurpreet; English, Coralie; Hillier, Susan
2013-03-01
How accurately do physiotherapists estimate how long stroke survivors spend in physiotherapy sessions and the amount of time stroke survivors are engaged in physical activity during physiotherapy sessions? Does the mode of therapy (individual sessions or group circuit classes) affect the accuracy of therapists' estimates? Observational study embedded within a randomised trial. People who participated in the CIRCIT trial after having a stroke. 47 therapy sessions scheduled and supervised by physiotherapists (n = 8) and physiotherapy assistants (n = 4) for trial participants were video-recorded. Therapists' estimations of therapy time were compared to the video-recorded times. The agreement between therapist-estimated and video-recorded data for total therapy time and active time was excellent, with intraclass correlation coefficients (ICC) of 0.90 (95% CI 0.83 to 0.95) and 0.83 (95% CI 0.73 to 0.93) respectively. Agreement between therapist-estimated and video-recorded data for inactive time was good (ICC score 0.62, 95% CI 0.40 to 0.77). The mean (SD) difference between therapist-estimated and video-recorded total therapy time, active time, and inactive time for all sessions was 7.7 (10.5), 14.1 (10.3) and -6.9 (9.5) minutes respectively. Bland-Altman analyses revealed a systematic bias of overestimation of total therapy time and total active time, and underestimation of inactive time by therapists. Compared to individual therapy sessions, therapists estimated total circuit class therapy duration more accurately, but estimated active time within circuit classes less accurately. Therapists are inaccurate in their estimation of the amount of time stroke survivors are active during therapy sessions. When accurate therapy data are required, use of objective measures is recommended. Copyright © 2013 Australian Physiotherapy Association. Published by .. All rights reserved.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w(sub i) (vertical bar)/x), 1 less than or equal to i isless than or equal to m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w(sub i) perpendicular to x), 1 less than or equal to i is less than or equal to m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
ERIC Educational Resources Information Center
Naylor, Robin; Smith, Jeremy; Telhaj, Shqiponja
2015-01-01
We investigate the extent to which graduate returns vary according to the class of degree achieved by UK university students and examine changes over time in estimated degree class premia. Using a variety of complementary datasets for individuals born in Britain around 1970 and aged between 30 and 40, we estimate an hourly wage premium for a…
CrowdWater - Can people observe what models need?
NASA Astrophysics Data System (ADS)
van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.
2017-12-01
CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifer often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates if the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion. That is, a candidate label is first estimated by the base classifier, and the true membership of the example to the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial-knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained though the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
...: Class I Motor Carriers of Passengers. Estimated Number of Respondents: 2 (per year). Estimated Time per Response: 18 minutes per response. Expiration Date: September 30, 2012. Frequency of Response: Annually and..., passenger carriers are classified into two groups: (1) Class I carriers are those having average annual...
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Abstract Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
van der Meer, Jolanda M J; Hartman, Catharina A; Thissen, Andrieke J A M; Oerlemans, Anoek M; Luman, Marjolein; Buitelaar, Jan K; Rommelse, Nanda N J
2016-04-01
Children with attention-deficit/hyperactivity disorder (ADHD) have motor timing difficulties. This study examined whether affected motor timing accuracy and variability are specific for ADHD, or that comorbidity with autism spectrum disorders (ASD) contributes to these motor timing difficulties. An 80-trial motor timing task measuring accuracy (μ), variability (σ) and infrequent long response times (τ) in estimating a 1-s interval was administered to 283 children and adolescents (8-17 years) from both a clinic and population based sample. They were divided into four latent classes based on the SCQ and L data. These classes were: without behavioral problems 'Normal-class' (n = 154), with only ADHD symptoms 'ADHD-class' (n = 49), and two classes with both ASD and ADHD symptoms; ADHD(+ASD)-class (n = 39) and ASD(+ADHD)-class (n = 41). The pure ADHD-class did not deviate from the Normal class on any of the motor timing measures (mean RTs 916 and 925 ms, respectively). The comorbid ADHD(+ASD) and ASD(+ADHD) classes were significantly less accurate (more time underestimations) compared to the Normal class (mean RTs 847 and 870 ms, respectively). Variability in motor timing was reduced in the younger children in the ADHD(+ASD) class, which may reflect a tendency to rush the tedious task. Only patients with more severe behavioral symptoms show motor timing deficiencies. This cannot merely be explained by high ADHD severity with ASD playing no role, as ADHD symptom severity in the pure ADHD-class and the ASD(+ADHD) class was highly similar, with the former class showing no motor timing deficits.
Latent class instrumental variables: A clinical and biostatistical perspective
Baker, Stuart G.; Kramer, Barnett S.; Lindeman, Karen S.
2015-01-01
In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to this next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. PMID:26239275
NASA Astrophysics Data System (ADS)
Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.
2017-08-01
Markov chain and 3-dimensional log-linear models were attempted to model drought class transitions derived from the newly developed drought index the Standardized Precipitation Evapotranspiration Index (SPEI) at a 12 month time scale for six major drought prone areas of India. Log-linear modelling approach has been used to investigate differences relative to drought class transitions using SPEI-12 time series derived form 48 yeas monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1, 2 or 3 months ahead prediction of average transition time between drought classes and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains which could be used to explain the effect of the potential retreat of drought. Quasi-association and Quasi-symmetry log-linear models have been fitted to the drought class transitions derived from SPEI-12 time series. The estimates of odds along with their confidence intervals were obtained to explain the progression of drought and estimation of drought class transition probabilities. For initial months as the drought severity increases the calculated odds shows lower value and the odds decreases for the succeeding months. This indicates that the ratio of expected frequencies of occurrence of transition from drought class to the non-drought class decreases as compared to transition to any drought class when the drought severity of the present class increases. From 3-dimensional log-linear model it is clear that during the last 24 years the drought probability has increased for almost all the six regions. The findings from the present study will immensely help to assess the impact of drought on the gross primary production and to develop future contingent planning in similar regions worldwide.
New Methodology for Estimating Fuel Economy by Vehicle Class
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Dabbs, Kathryn; Hwang, Ho-Ling
2011-01-01
Office of Highway Policy Information to develop a new methodology to generate annual estimates of average fuel efficiency and number of motor vehicles registered by vehicle class for Table VM-1 of the Highway Statistics annual publication. This paper describes the new methodology developed under this effort and compares the results of the existing manual method and the new systematic approach. The methodology developed under this study takes a two-step approach. First, the preliminary fuel efficiency rates are estimated based on vehicle stock models for different classes of vehicles. Then, a reconciliation model is used to adjust the initial fuel consumptionmore » rates from the vehicle stock models and match the VMT information for each vehicle class and the reported total fuel consumption. This reconciliation model utilizes a systematic approach that produces documentable and reproducible results. The basic framework utilizes a mathematical programming formulation to minimize the deviations between the fuel economy estimates published in the previous year s Highway Statistics and the results from the vehicle stock models, subject to the constraint that fuel consumptions for different vehicle classes must sum to the total fuel consumption estimate published in Table MF-21 of the current year Highway Statistics. The results generated from this new approach provide a smoother time series for the fuel economies by vehicle class. It also utilizes the most up-to-date and best available data with sound econometric models to generate MPG estimates by vehicle class.« less
Variability and predictability of finals times of elite rowers.
Smith, Tiaki Brett; Hopkins, Will G
2011-11-01
Little is known about the competitive performance characteristics of elite rowers. We report here analyses of performance times for finalists in world-class regattas from 1999 to 2009. The data were official race times for the 10 men's and 7 women's single and crewed boat classes, each with ∼ 200-300 different boats competing in 1-33 of the 46 regattas at 18 venues. A linear mixed model of race times for each boat class provided estimates of variability as coefficients of variation after adjustment for means of calendar year, level of competition (Olympics, world championship, World Cup), venue, and level of final (A, B, C, …). Mean performance was substantially slower between consecutive levels of competition (1.5%, 2.7%) and consecutive levels of finals (∼ 1%-2%). Differences in the effects of venue and of environmental conditions, estimated as variability in mean race time between venues and finals, were extremely large (∼ 3.0%). Within-boat race-to-race variability for A finalists was 1.1% for single sculls and 0.9% for crewed boats, with little difference between men and women and only a small increase in lower-level finalists. Predictability of performance, expressed as intraclass correlation coefficients, showed considerable differences between boat classes, but the mean was high (∼ 0.63), with little difference between crewed and single boats, between men and women, and between within and between years. The race-to-race variability of boat times of ∼ 1.0% is similar to that in comparable endurance sports performed against water or air resistance. Estimates of the smallest important performance enhancement (∼ 0.3%) and the effects of level of competition, level of final, venue, environment, and boat class will help inform investigations of factors affecting elite competitive rowing performance.
An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.
ERIC Educational Resources Information Center
Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi
2002-01-01
Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…
Source localization of non-stationary acoustic data using time-frequency analysis
NASA Astrophysics Data System (ADS)
Stoughton, Jack; Edmonson, William
2005-04-01
An improvement in temporal locality of the generalized cross-correlation (GCC) for angle of arrival (AOA) estimation can be achieved by employing 2-D cross-correlation of infrasonic sensor data transformed to its time-frequency (TF) representation. Intermediate to the AOA evaluation is the time delay between pairs of sensors. The signal class of interest includes far field sources which are partially coherent across the array, nonstationary, and wideband. In addition, signals can occur as multiple short bursts, for which TF representations may be more appropriate for time delay estimation. The GCC tends to smooth out such temporal energy bursts. Simulation and experimental results will demonstrate the improvement in using a TF-based GCC, using the Cohen class, over the classic GCC method. Comparative demonstration of the methods will be performed on data captured on an infrasonic sensor array located at NASA Langley Research Center (LaRC). The infrasonic data sources include Delta IV and Space Shuttle launches from Kennedy Space Center which belong to the stated signal class. Of interest is to apply this method to the AOA estimation of atmospheric turbulence. [Work supported by NASA LaRC Creativity and Innovation project: Infrasonic Detection of Clear Air Turbulence and Severe Storms.
Latent class instrumental variables: a clinical and biostatistical perspective.
Baker, Stuart G; Kramer, Barnett S; Lindeman, Karen S
2016-01-15
In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to this next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research. Copyright © 2015 John Wiley & Sons, Ltd.
A new class of finite-time nonlinear consensus protocols for multi-agent systems
NASA Astrophysics Data System (ADS)
Zuo, Zongyu; Tie, Lin
2014-02-01
This paper is devoted to investigating the finite-time consensus problem for a multi-agent system in networks with undirected topology. A new class of global continuous time-invariant consensus protocols is constructed for each single-integrator agent dynamics with the aid of Lyapunov functions. In particular, it is shown that the settling time of the proposed new class of finite-time consensus protocols is upper bounded for arbitrary initial conditions. This makes it possible for network consensus problems that the convergence time is designed and estimated offline for a given undirected information flow and a group volume of agents. Finally, a numerical simulation example is presented as a proof of concept.
Large-scale changes in bloater growth and condition in Lake Huron
Prichard, Carson G.; Roseman, Edward F.; Keeler, Kevin M.; O'Brien, Timothy P.; Riley, Stephen C.
2016-01-01
Native Bloaters Coregonus hoyi have exhibited multiple strong year-classes since 2005 and now are the most abundant benthopelagic offshore prey fish in Lake Huron, following the crash of nonnative AlewivesAlosa pseudoharengus and substantial declines in nonnative Rainbow Smelt Osmerus mordax. Despite recent recoveries in Bloater abundance, marketable-size (>229 mm) Bloaters remain scarce. We used annual survey data to assess temporal and spatial dynamics of Bloater body condition and lengths at age in the main basin of Lake Huron from 1973 to 2014. Basinwide lengths at age were modeled by cohort for the 1973–2003 year-classes using a von Bertalanffy growth model with time-varying Brody growth coefficient (k) and asymptotic length () parameters. Median Bloater weights at selected lengths were estimated to assess changes in condition by modeling weight–length relations with an allometric growth model that allowed growth parameters to vary spatially and temporally. Estimated Bloater lengths at age declined 14–24% among ages 4–8 for all year-classes between 1973 and 2004. Estimates of declined from a peak of 394 mm (1973 year-class) to a minimum of 238 mm (1998 year-class). Observed mean lengths at age in 2014 were at all-time lows, suggesting that year-classes comprising the current Bloater population would have to follow growth trajectories unlike those characterizing the 1973–2003 year-classes to attain marketable size. Furthermore, estimated weights of 250-mm Bloaters (i.e., a large, commercially valuable size-class) declined 17% among all regions from 1976 to 2007. Decreases in body condition of large Bloaters are associated with lower lipid content and may be linked to marked declines in abundance of the amphipodsDiporeia spp. in Lake Huron. We hypothesize that since at least 1976, large Bloaters have become more negatively buoyant and may have incurred an increasingly greater metabolic cost performing diel vertical migrations to prey upon the opossum shrimp Mysis diluviana and zooplankton.
Grade of Membership Response Time Model for Detecting Guessing Behaviors
ERIC Educational Resources Information Center
Pokropek, Artur
2016-01-01
A response model that is able to detect guessing behaviors and produce unbiased estimates in low-stake conditions using timing information is proposed. The model is a special case of the grade of membership model in which responses are modeled as partial members of a class that is affected by motivation and a class that responds only according to…
Modeling US Adult Obesity Trends: A System Dynamics Model for Estimating Energy Imbalance Gap
Rahmandad, Hazhir; Huang, Terry T.-K.; Bures, Regina M.; Glass, Thomas A.
2014-01-01
Objectives. We present a system dynamics model that quantifies the energy imbalance gap responsible for the US adult obesity epidemic among gender and racial subpopulations. Methods. We divided the adult population into gender–race/ethnicity subpopulations and body mass index (BMI) classes. We defined transition rates between classes as a function of metabolic dynamics of individuals within each class. We estimated energy intake in each BMI class within the past 4 decades as a multiplication of the equilibrium energy intake of individuals in that class. Through calibration, we estimated the energy gap multiplier for each gender–race–BMI group by matching simulated BMI distributions for each subpopulation against national data with maximum likelihood estimation. Results. No subpopulation showed a negative or zero energy gap, suggesting that the obesity epidemic continues to worsen, albeit at a slower rate. In the past decade the epidemic has slowed for non-Hispanic Whites, is starting to slow for non-Hispanic Blacks, but continues to accelerate among Mexican Americans. Conclusions. The differential energy balance gap across subpopulations and over time suggests that interventions should be tailored to subpopulations’ needs. PMID:24832405
Airport take-off noise assessment aimed at identify responsible aircraft classes.
Sanchez-Perez, Luis A; Sanchez-Fernandez, Luis P; Shaout, Adnan; Suarez-Guerra, Sergio
2016-01-15
Assessment of aircraft noise is an important task of nowadays airports in order to fight environmental noise pollution given the recent discoveries on the exposure negative effects on human health. Noise monitoring and estimation around airports mostly use aircraft noise signals only for computing statistical indicators and depends on additional data sources so as to determine required inputs such as the aircraft class responsible for noise pollution. In this sense, the noise monitoring and estimation systems have been tried to improve by creating methods for obtaining more information from aircraft noise signals, especially real-time aircraft class recognition. Consequently, this paper proposes a multilayer neural-fuzzy model for aircraft class recognition based on take-off noise signal segmentation. It uses a fuzzy inference system to build a final response for each class p based on the aggregation of K parallel neural networks outputs Op(k) with respect to Linear Predictive Coding (LPC) features extracted from K adjacent signal segments. Based on extensive experiments over two databases with real-time take-off noise measurements, the proposed model performs better than other methods in literature, particularly when aircraft classes are strongly correlated to each other. A new strictly cross-checked database is introduced including more complex classes and real-time take-off noise measurements from modern aircrafts. The new model is at least 5% more accurate with respect to previous database and successfully classifies 87% of measurements in the new database. Copyright © 2015 Elsevier B.V. All rights reserved.
Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.
Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel
2016-02-20
Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects models; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
Profiling Physical Activity, Diet, Screen and Sleep Habits in Portuguese Children
Pereira, Sara; Katzmarzyk, Peter T.; Gomes, Thayse Natacha; Borges, Alessandra; Santos, Daniel; Souza, Michele; dos Santos, Fernanda K.; Chaves, Raquel N.; Champagne, Catherine M.; Barreira, Tiago V.; Maia, José A.R.
2015-01-01
Obesity in children is partly due to unhealthy lifestyle behaviours, e.g., sedentary activity and poor dietary choices. This trend has been seen globally. To determine the extent of these behaviours in a Portuguese population of children, 686 children 9.5 to 10.5 years of age were studied. Our aims were to: (1) describe profiles of children’s lifestyle behaviours; (2) identify behaviour pattern classes; and (3) estimate combined effects of individual/socio-demographic characteristics in predicting class membership. Physical activity and sleep time were estimated by 24-h accelerometry. Nutritional habits, screen time and socio-demographics were obtained. Latent Class Analysis was used to determine unhealthy lifestyle behaviours. Logistic regression analysis predicted class membership. About 78% of children had three or more unhealthy lifestyle behaviours, while 0.2% presented no risk. Two classes were identified: Class 1-Sedentary, poorer diet quality; and Class 2-Insufficiently active, better diet quality, 35% and 65% of the population, respectively. More mature children (Odds Ratio (OR) = 6.75; 95%CI = 4.74–10.41), and boys (OR = 3.06; 95% CI = 1.98–4.72) were more likely to be overweight/obese. However, those belonging to Class 2 were less likely to be overweight/obese (OR = 0.60; 95% CI = 0.43–0.84). Maternal education level and household income did not significantly predict weight status (p ≥ 0.05). PMID:26043034
Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Martin, Corless
2004-01-01
We consider the problem of state estimation from nonlinear time-varying system whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error exponentially converges to zero.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach, is proposed for the design of approximate, fixed order, discrete time realizations of stochastic processes from the output covariance over a finite time interval, was proposed. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high dimension realization. Classes of fixed order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater or equal than the process covariance; (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Yu, Dapao; Wang, Xiaoyu; Yin, You; Zhan, Jinyu; Lewis, Bernard J.; Tian, Jie; Bao, Ye; Zhou, Wangming; Zhou, Li; Dai, Limin
2014-01-01
Accurate estimates of forest carbon storage and changes in storage capacity are critical for scientific assessment of the effects of forest management on the role of forests as carbon sinks. Up to now, several studies reported forest biomass carbon (FBC) in Liaoning Province based on data from China's Continuous Forest Inventory, however, their accuracy were still not known. This study compared estimates of FBC in Liaoning Province derived from different methods. We found substantial variation in estimates of FBC storage for young and middle-age forests. For provincial forests with high proportions in these age classes, the continuous biomass expansion factor method (CBM) by forest type with age class is more accurate and therefore more appropriate for estimating forest biomass. Based on the above approach designed for this study, forests in Liaoning Province were found to be a carbon sink, with carbon stocks increasing from 63.0 TgC in 1980 to 120.9 TgC in 2010, reflecting an annual increase of 1.9 TgC. The average carbon density of forest biomass in the province has increased from 26.2 Mg ha−1 in 1980 to 31.0 Mg ha−1 in 2010. While the largest FBC occurred in middle-age forests, the average carbon density decreased in this age class during these three decades. The increase in forest carbon density resulted primarily from the increased area and carbon storage of mature forests. The relatively long age interval in each age class for slow-growing forest types increased the uncertainty of FBC estimates by CBM-forest type with age class, and further studies should devote more attention to the time span of age classes in establishing biomass expansion factors for use in CBM calculations. PMID:24586881
Yu, Dapao; Wang, Xiaoyu; Yin, You; Zhan, Jinyu; Lewis, Bernard J; Tian, Jie; Bao, Ye; Zhou, Wangming; Zhou, Li; Dai, Limin
2014-01-01
Accurate estimates of forest carbon storage and changes in storage capacity are critical for scientific assessment of the effects of forest management on the role of forests as carbon sinks. Up to now, several studies reported forest biomass carbon (FBC) in Liaoning Province based on data from China's Continuous Forest Inventory, however, their accuracy were still not known. This study compared estimates of FBC in Liaoning Province derived from different methods. We found substantial variation in estimates of FBC storage for young and middle-age forests. For provincial forests with high proportions in these age classes, the continuous biomass expansion factor method (CBM) by forest type with age class is more accurate and therefore more appropriate for estimating forest biomass. Based on the above approach designed for this study, forests in Liaoning Province were found to be a carbon sink, with carbon stocks increasing from 63.0 TgC in 1980 to 120.9 TgC in 2010, reflecting an annual increase of 1.9 TgC. The average carbon density of forest biomass in the province has increased from 26.2 Mg ha(-1) in 1980 to 31.0 Mg ha(-1) in 2010. While the largest FBC occurred in middle-age forests, the average carbon density decreased in this age class during these three decades. The increase in forest carbon density resulted primarily from the increased area and carbon storage of mature forests. The relatively long age interval in each age class for slow-growing forest types increased the uncertainty of FBC estimates by CBM-forest type with age class, and further studies should devote more attention to the time span of age classes in establishing biomass expansion factors for use in CBM calculations.
A new device to estimate abundance of moist-soil plant seeds
Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.
2006-01-01
Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, mean time to process a sample was 30.3 ? 2.5 min (SE, n = 417). Our estimate of seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.
Vďačný, Peter
2015-08-01
The class Litostomatea comprises a diverse assemblage of free-living and endosymbiotic ciliates. To understand diversification dynamic of litostomateans, divergence times of their main groups were estimated with the Bayesian molecular dating, a technique allowing relaxation of molecular clock and incorporation of flexible calibration points. The class Litostomatea very likely emerged during the Cryogenian around 680 Mya. The origin of the subclass Rhynchostomatia is dated to about 415 Mya, while that of the subclass Haptoria to about 654 Mya. The order Pleurostomatida, emerging about 556 Mya, was recognized as the oldest group within the subclass Haptoria. The order Spathidiida appeared in the Paleozoic about 442 Mya. The three remaining haptorian orders evolved in the Paleozoic/Mesozoic periods: Didiniida about 419 Mya, Lacrymariida about 269 Mya, and Haptorida about 194 Mya. The subclass Trichostomatia originated from a spathidiid ancestor in the Mesozoic about 260 Mya. A further goal of this study was to investigate the impact of various settings on posterior divergence time estimates. The root placement and tree topology as well as the priors of the rate-drift model, birth-death process and nucleotide substitution rate, had no significant effect on calculation of posterior divergence time estimates. However, removal of calibration points could significantly change time estimates at some nodes. Copyright © 2015 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed ? performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Latent trajectory studies: the basics, how to interpret the results, and what to report.
van de Schoot, Rens
2015-01-01
In statistics, tools have been developed to estimate individual change over time. Also, the existence of latent trajectories, where individuals are captured by trajectories that are unobserved (latent), can be evaluated (Muthén & Muthén, 2000). The method used to evaluate such trajectories is called Latent Growth Mixture Modeling (LGMM) or Latent Class Growth Modeling (LCGA). The difference between the two models is whether variance within latent classes is allowed for (Jung & Wickrama, 2008). The default approach most often used when estimating such models begins with estimating a single cluster model, where only a single underlying group is presumed. Next, several additional models are estimated with an increasing number of clusters (latent groups or classes). For each of these models, the software is allowed to estimate all parameters without any restrictions. A final model is chosen based on model comparison tools, for example, using the BIC, the bootstrapped chi-square test, or the Lo-Mendell-Rubin test. To ease the use of LGMM/LCGA step by step in this symposium (Van de Schoot, 2015) guidelines are presented which can be used for researchers applying the methods to longitudinal data, for example, the development of posttraumatic stress disorder (PTSD) after trauma (Depaoli, van de Schoot, van Loey, & Sijbrandij, 2015; Galatzer-Levy, 2015). The guidelines include how to use the software Mplus (Muthén & Muthén, 1998-2012) to run the set of models needed to answer the research question: how many latent classes exist in the data? The next step described in the guidelines is how to add covariates/predictors to predict class membership using the three-step approach (Vermunt, 2010). Lastly, it described what essentials to report in the paper. When applying LGMM/LCGA models for the first time, the guidelines presented can be used to guide what models to run and what to report.
Validity and Generalizability of Measuring Student Engaged Time in Physical Education.
ERIC Educational Resources Information Center
Silverman, Stephen; Zotos, Connee
The validity of interval and time sampling methods of measuring student engaged time was investigated in a study estimating the actual time students spent engaged in relevant motor performance in physical education classes. Two versions of the interval Academic Learning Time in Physical Education (ALT-PE) instrument and an equivalent time sampling…
Design of partially supervised classifiers for multispectral image data
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David
1993-01-01
A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
NASA Astrophysics Data System (ADS)
Uitz, Julia; Stramski, Dariusz; Gentili, Bernard; D'Ortenzio, Fabrizio; Claustre, Hervé
2012-06-01
An approach that combines a recently developed procedure for improved estimation of surface chlorophyll a concentration (Chlsurf) from ocean color and a phytoplankton class-specific bio-optical model was used to examine primary production in the Mediterranean Sea. Specifically, this approach was applied to the 10 year time series of satellite Chlsurfdata from the Sea-viewing Wide Field-of-view Sensor. We estimated the primary production associated with three major phytoplankton classes (micro, nano, and picophytoplankton), which also yielded new estimates of the total primary production (Ptot). These estimates of Ptot (e.g., 68 g C m-2 yr-1for the entire Mediterranean basin) are lower by a factor of ˜2 and show a different seasonal cycle when compared with results from conventional approaches based on standard ocean color chlorophyll algorithm and a non-class-specific primary production model. Nanophytoplankton are found to be dominant contributors to Ptot (43-50%) throughout the year and entire basin. Micro and picophytoplankton exhibit variable contributions to Ptot depending on the season and ecological regime. In the most oligotrophic regime, these contributions are relatively stable all year long with picophytoplankton (˜32%) playing a larger role than microphytoplankton (˜22%). In the blooming regime, picophytoplankton dominate over microphytoplankton most of the year, except during the spring bloom when microphytoplankton (27-38%) are considerably more important than picophytoplankton (20-27%).
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
Econometric analysis of the factors influencing forest acreage trends in the southeast.
Ralph J. Alig
1986-01-01
Econometric models of changes in land use acreages in the Southeast by physiographic region have been developed by pooling cross-section and time series data. Separate acreage equations have been estimated for the three major private forestland owner classes and the three major classes of nonforest land use. Observations were drawn at three or four different points in...
Forest inventory using multistage sampling with probability proportional to size. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.
1984-01-01
A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in the Southeastern Brazil. The LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third state (ground level) the time volume of each class is determined. The total time volume of each class is expanded through a statistical procedure taking into account all the three stages of sampling. This procedure results in an accurate time volume estimate with a smaller number of aerial photographs and reduced time in field work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists on a data set when samples from di erent classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed to homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating nite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates ofmore » nite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantages of using such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating substantial amount of partially labeled data can be highly signi cant. We also showed model selection based on AIC values fairly works to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML to not only the other proposed EM algorithms but also conventional supervised, unsupervised and semi-supervised learning algorithms.« less
Kim, Jae-Hyun; Yoo, Ki-Bong; Park, Eun-Cheol; Lee, Sang Gyu; Kim, Tae Hyun
2015-11-02
To examine the combined effects of education level and perceived social class on self-rated health and life satisfaction in South Korea. We used data drawn from the 8 to 15th wave of the Korean Labor and Income Panel Study (KLIPS). Using wave 8 at baseline, data included 11,175 individuals. We performed a longitudinal analysis at baseline estimating the prevalence of self-rated health and life satisfaction among individuals by education level (high, middle, and low education level) and perceived social class (high, middle, and low social class). For self-rated health, odds ratio (OR) of individuals with low education and low perceived social class was 0.604 times lower (95% CI: 0.555-0.656) and the OR of individuals with low education and middle perceived social class was 0.853 time lower (95% CI: 0.790-0.922) when compared to individuals with high education and high perceived social class. For life satisfaction, OR of individuals with low education and low perceived social class was 0.068 times lower (95% CI: 0.063-0.074) and the OR of individuals with middle education and middle perceived social class was 0.235 time lower (95% CI: 0.221-0.251) compared to individuals with high education and high perceived social class. This study shows that the combined effects of education level and perceived social class associated with self-rated health and life satisfaction. Our study suggests increasing education level and perceived social class. Additionally, it will be important to develop multi-dimensional measurement tools including education level and subjective social class.
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.
Wolf, Elizabeth Skubak; Anderson, David F
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy-to-change and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted randomization context. In this paper, we propose classes of split-plot response designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
A Repeated Trajectory Class Model for Intensive Longitudinal Categorical Outcome
Lin, Haiqun; Han, Ling; Peduzzi, Peter N.; Murphy, Terrence E.; Gill, Thomas M.; Allore, Heather G.
2014-01-01
This paper presents a novel repeated latent class model for a longitudinal response that is frequently measured as in our prospective study of older adults with monthly data on activities of daily living (ADL) for more than ten years. The proposed method is especially useful when the longitudinal response is measured much more frequently than other relevant covariates. The repeated trajectory classes represent distinct temporal patterns of the longitudinal response wherein an individual’s membership in the trajectory classes may renew or change over time. Within a trajectory class, the longitudinal response is modeled by a class-specific generalized linear mixed model. Effectively, an individual may remain in a trajectory class or switch to another as the class membership predictors are updated periodically over time. The identification of a common set of trajectory classes allows changes among the temporal patterns to be distinguished from local fluctuations in the response. An informative event such as death is jointly modeled by class-specific probability of the event through shared random effects. We do not impose the conditional independence assumption given the classes. The method is illustrated by analyzing the change over time in ADL trajectory class among 754 older adults with 70500 person-months of follow-up in the Precipitating Events Project. We also investigate the impact of jointly modeling the class-specific probability of the event on the parameter estimates in a simulation study. The primary contribution of our paper is the periodic updating of trajectory classes for a longitudinal categorical response without assuming conditional independence. PMID:24519416
NASA Astrophysics Data System (ADS)
Uitz, Julia; Claustre, Hervé; Gentili, Bernard; Stramski, Dariusz
2010-09-01
We apply an innovative approach to time series data of surface chlorophyll from satellite observations with SeaWiFS (Sea-viewing Wide Field-of-view Sensor) to estimate the primary production associated with three major phytoplankton classes (micro-, nano-, and picophytoplankton) within the world's oceans. Statistical relationships, determined from an extensive in situ database of phytoplankton pigments, are used to infer class-specific vertical profiles of chlorophyll a concentration from satellite-derived surface chlorophyll a. This information is combined with a primary production model and class-specific photophysiological parameters to compute global seasonal fields of class-specific primary production over a 10-year period from January 1998 through December 2007. Microphytoplankton (mostly diatoms) appear as a major contributor to total primary production in coastal upwelling systems (70%) and temperate and subpolar regions (50%) during the spring-summer season. The contribution of picophytoplankton (e.g., prokaryotes) reaches maximum values (45%) in subtropical oligotrophic gyres. Nanophytoplankton (e.g., prymnesiophytes) provide a ubiquitous, substantial contribution (30-60%). Annual global estimates of class-specific primary production amount to 15 Gt C yr-1 (32% of total), 20 Gt C yr-1 (44%) and 11 Gt C yr-1 (24%) for micro-, nano-, and picophytoplankton, respectively. The analysis of interannual variations revealed large anomalies in class-specific primary production as compared to the 10-year mean cycle in both the productive North Atlantic basin and the more stable equatorial Pacific upwelling. Microphytoplankton show the largest range of variability of the three phytoplankton classes on seasonal and interannual time scales. Our results contribute to an understanding and quantification of carbon cycle in the ocean.
Albrecht, Sandra S; Mayer-Davis, Elizabeth; Popkin, Barry M
2017-07-01
For the same body mass index (BMI) level, waist circumference (WC) is higher in more recent years. How this impacts diabetes and prediabetes prevalence in the United States and for different race/ethnic groups is unknown. We examined prevalence differences in diabetes and prediabetes by BMI over time, investigated whether estimates were attenuated after adjusting for waist circumference, and evaluated implications of these patterns on race/ethnic disparities in glycemic outcomes. Data came from 12 614 participants aged 20 to 74 years from the National Health and Nutrition Examination Surveys (1988-1994 and 2007-2012). We estimated prevalence differences in diabetes and prediabetes by BMI over time in multivariable models. Relevant interactions evaluated race/ethnic differences. Among normal, overweight, and class I obese individuals, there were no significant differences in diabetes prevalence over time. However, among individuals with class II/III obesity, diabetes prevalence rose 7.6 percentage points in 2007-2012 vs 1988-1994. This estimate was partly attenuated after adjustment for mean waist circumference but not mean BMI. For prediabetes, prevalence was 10 to 13 percentage points higher over time at lower BMI values, with minimal attenuation after adjustment for WC. All patterns held within race/ethnic groups. Diabetes disparities among blacks and Mexican Americans relative to whites remained in both periods, regardless of BMI, and persisted after adjustment for WC. Diabetes prevalence rose over time among individuals with class II/III obesity and may be partly due to increasing waist circumference. Anthropometric measures did not appear to account for temporal increases in prediabetes, nor did they attenuate race/ethnic disparities in diabetes. Reasons underlying these trends require further investigation. Copyright © 2017 John Wiley & Sons, Ltd.
Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study
NASA Astrophysics Data System (ADS)
Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom
2018-02-01
This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 {m}) and velocity (<0.15 {m/s}) errors as the spacecraft approaches periapse.
NASA Astrophysics Data System (ADS)
Loredo, Thomas; Budavari, Tamas; Scargle, Jeffrey D.
2018-01-01
This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of $10^6$ objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of times series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for analysis of irregularly-sampled time series.
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to identically independent distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Regions of absolute ultimate boundedness for discrete-time systems.
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Weissenberger, S.
1972-01-01
This paper considers discrete-time systems of the Lur'e-Postnikov class where the linear part is not asymptotically stable and the nonlinear characteristic satisfies only partially the usual sector condition. Estimates of the resulting finite regions of absolute ultimate boundedness are calculated by means of a quadratic Liapunov function.
Price elasticity and medication use: cost sharing across multiple clinical conditions.
Gatwood, Justin; Gibson, Teresa B; Chernew, Michael E; Farr, Amanda M; Vogtmann, Emily; Fendrick, A Mark
2014-11-01
To address the impact that out-of-pocket prices may have on medication use, it is vital to understand how the demand for medications may be affected when patients are faced with changes in the price to acquire treatment and how price responsiveness differs across medication classes. To examine the impact of cost-sharing changes on the demand for 8 classes of prescription medications. This was a retrospective database analysis of 11,550,363 commercially insured enrollees within the 2005-2009 MarketScan Database. Patient cost sharing, expressed as a price index for each medication class, was the main explanatory variable to examine the price elasticity of demand. Negative binomial fixed effect models were estimated to examine medication fills. The elasticity estimates reflect how use changes over time as a function of changes in copayments. Model estimates revealed that price elasticity of demand ranged from -0.015 to -0.157 within the 8 categories of medications (P less than 0.01 for 7 of 8 categories). The price elasticity of demand for smoking deterrents was largest (-0.157, P less than 0.0001), while demand for antiplatelet agents was not responsive to price (P greater than 0.05). The price elasticity of demand varied considerably by medication class, suggesting that the influence of cost sharing on medication use may be related to characteristics inherent to each medication class or underlying condition.
ERIC Educational Resources Information Center
Chung, Hwan; Anthony, James C.
2013-01-01
This article presents a multiple-group latent class-profile analysis (LCPA) by taking a Bayesian approach in which a Markov chain Monte Carlo simulation is employed to achieve more robust estimates for latent growth patterns. This article describes and addresses a label-switching problem that involves the LCPA likelihood function, which has…
Demographic analysis from summaries of an age-structured population
Link, William A.; Royle, J. Andrew; Hatfield, Jeff S.
2003-01-01
Demographic analyses of age-structured populations typically rely on life history data for individuals, or when individual animals are not identified, on information about the numbers of individuals in each age class through time. While it is usually difficult to determine the age class of a randomly encountered individual, it is often the case that the individual can be readily and reliably assigned to one of a set of age classes. For example, it is often possible to distinguish first-year from older birds. In such cases, the population age structure can be regarded as a latent variable governed by a process prior, and the data as summaries of this latent structure. In this article, we consider the problem of uncovering the latent structure and estimating process parameters from summaries of age class information. We present a demographic analysis for the critically endangered migratory population of whooping cranes (Grus americana), based only on counts of first-year birds and of older birds. We estimate age and year-specific survival rates. We address the controversial issue of whether management action on the breeding grounds has influenced recruitment, relating recruitment rates to the number of seventh-year and older birds, and examining the pattern of variation through time in this rate.
Evaluation of bursal depth as an indicator of age class of harlequin ducks
Mather, D.D.; Esler, Daniel N.
1999-01-01
We contrasted the estimated age class of recaptured Harlequin Ducks (Histrionicus histrionicus) (n = 255) based on bursal depth with expected age class based on bursal depth at first capture and time since first capture. Although neither estimated nor expected ages can be assumed to be correct, rates of discrepancies between the two for within-year recaptures indicate sampling error, while between-year recaptures test assumptions about rates of bursal involution. Within-year, between-year, and overall discrepancy rates were 10%, 24%, and 18%, respectively. Most (86%) between-year discrepancies occurred for birds expected to be after-third-year (ATY) but estimated to be third-year (TY). Of these ATY-TY discrepancies, 22 of 25 (88%) birds had bursal depths of 2 or 3 mm. Further, five of six between-year recaptures that were known to be ATY but estimated to be TY had 2 mm bursas. Reclassifying birds with 2 or 3 mm bursas as ATY resulted in reduction in between-year (24% to 10%) and overall (18% to 11%) discrepancy rates. We conclude that age determination of Harlequin Ducks based on bursal depth, particularly using our modified criteria, is a relatively consistent and reliable technique.
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H ∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v) -dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H ∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Estimation of modal parameters using bilinear joint time frequency distributions
NASA Astrophysics Data System (ADS)
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. Smoothed Pseudo Wigner-Ville distribution which is a member of the Cohen's class distributions is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces cross-terms which are troublesome in Wigner-Ville distribution and retains the resolution as well. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.
Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.
2012-01-01
Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bound richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data was utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed to be known. Continuous-time system transfer function matrix parameters were estimated in real-time by the least-squares method. Simulation results of experimentally determined system transfer function matrix compare very well with the experimental results. For comparison and as an alternative to the proposed real-time estimation method, we also implemented anmore » offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.« less
O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.
2010-01-01
Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.
NASA Astrophysics Data System (ADS)
Petrillo, Marta; Cherubini, Paolo; Fravolini, Giulia; Marchetti, Marco; Ascher-Jenull, Judith; Schärer, Michael; Synal, Hans-Arno; Bertoldi, Daniela; Camin, Federica; Larcher, Roberto; Egli, Markus
2016-03-01
Due to the large size (e.g. sections of tree trunks) and highly heterogeneous spatial distribution of deadwood, the timescales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the chronosequence approach and the five-decay class system that is based on a macromorphological assessment. For the decay classes 1-3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) radiocarbon dating was used. In addition, density, cellulose, and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model, a regression approach, and the stage-based matrix model. In the decay classes 1-3, the ages of the CWD were similar and varied between 1 and 54 years for spruce and 3 and 40 years for larch, with no significant differences between the classes; classes 1-3 are therefore not indicative of deadwood age. This seems to be due to a time lag between the death of a standing tree and its contact with the soil. We found distinct tree-species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD rate constants were estimated to be in the range 0.018 to 0.022 y-1 for spruce and to about 0.012 y-1 for larch. Snapshot sampling (chronosequences) may overestimate the age and mean residence time of CWD. No sampling bias was, however, detectable using the stage-based matrix model. Cellulose and lignin time trends could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 years for spruce and 50 years for larch. The half-life of lignin is considerably higher and may be more than 100 years in larch CWD. Consequently, the decay of Picea abies and Larix decidua is very low. Several uncertainties, however, remain: 14C dating of CWD from decay classes 4 and 5 and having a pre-bomb age is often difficult (large age range due to methodological constraints) and fall rates of both European larch and Norway spruce are missing.
A Class of Factor Analysis Estimation Procedures with Common Asymptotic Sampling Properties
ERIC Educational Resources Information Center
Swain, A. J.
1975-01-01
Considers a class of estimation procedures for the factor model. The procedures are shown to yield estimates possessing the same asymptotic sampling properties as those from estimation by maximum likelihood or generalized last squares, both special members of the class. General expressions for the derivatives needed for Newton-Raphson…
Estimating Stability Class in the Field
Leonidas G. Lavdas
1997-01-01
A simple and easily remembered method is described for estimating cloud ceiling height in the field. Estimating ceiling height provides the means to estimate stability class, a parameter used to help determine Dispersion Index and Low Visibility Occurrence Risk Index, indices used as smoke management aids. Stability class is also used as an input to VSMOKE, an...
Age, year‐class strength variability, and partial age validation of Kiyis from Lake Superior
Lepak, Taylor A.; Ogle, Derek H.; Vinson, Mark
2017-01-01
ge estimates of Lake Superior Kiyis Coregonus kiyi from scales and otoliths were compared and 12 years (2003–2014) of length frequency data were examined to assess year‐class strength and validate age estimates. Ages estimated from otoliths were precise and were consistently older than ages estimated from scales. Maximum otolith‐derived ages were 20 years for females and 12 years for males. Age estimates showed high numbers of fish of ages 5, 6, and 11 in 2014, corresponding to the 2009, 2008, and 2003 year‐classes, respectively. Strong 2003 and 2009 year‐classes, along with the 2005 year‐class, were also evident based on distinct modes of age‐1 fish (<110 mm) in the length frequency distributions from 2004, 2010, and 2006, respectively. Modes from these year‐classes were present as progressively larger fish in subsequent years. Few to no age‐1 fish (<110 mm) were present in all other years. Ages estimated from otoliths were generally within 1 year of the ages corresponding to strong year‐classes, at least for age‐5 and older fish, suggesting that Kiyi age may be reliably estimated to within 1 year by careful examination of thin‐sectioned otoliths.
Arbitrary-order corrections for finite-time drift and diffusion coefficients
NASA Astrophysics Data System (ADS)
Anteneodo, C.; Riera, R.
2009-09-01
We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to dynamic equations can be directly drawn from data time series. However, real data are constrained to finite sampling rates and therefore it is crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow to reconstruct the real hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
A class of semiparametric cure models with current status data.
Diao, Guoqing; Yuan, Ao
2018-02-08
Current status data occur in many biomedical studies where we only know whether the event of interest occurs before or after a particular time point. In practice, some subjects may never experience the event of interest, i.e., a certain fraction of the population is cured or is not susceptible to the event of interest. We consider a class of semiparametric transformation cure models for current status data with a survival fraction. This class includes both the proportional hazards and the proportional odds cure models as two special cases. We develop efficient likelihood-based estimation and inference procedures. We show that the maximum likelihood estimators for the regression coefficients are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in finite samples. For illustration, we provide an application of the models to a study on the calcification of the hydrogel intraocular lenses.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx super K (l-x) super m and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased formore » a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.« less
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H ∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H ∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and some stochastic technique, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H ∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modeling urban expansion in Yangon, Myanmar using Landsat time-series and stereo GeoEye Images
NASA Astrophysics Data System (ADS)
Sritarapipat, Tanakorn; Takeuchi, Wataru
2016-06-01
This research proposed a methodology to model the urban expansion based dynamic statistical model using Landsat and GeoEye Images. Landsat Time-Series from 1978 to 2010 have been applied to extract land covers from the past to the present. Stereo GeoEye Images have been employed to obtain the height of the building. The class translation was obtained by observing land cover from the past to the present. The height of the building can be used to detect the center of the urban area (mainly commercial area). It was assumed that the class translation and the distance of multi-centers of the urban area also the distance of the roads affect the urban growth. The urban expansion model based on the dynamic statistical model was defined to refer to three factors; (1) the class translation, (2) the distance of the multicenters of the urban areas, and (3) the distance from the roads. Estimation and prediction of urban expansion by using our model were formulated and expressed in this research. The experimental area was set up in Yangon, Myanmar. Since it is the major of country's economic with more than five million population and the urban areas have rapidly increased. The experimental results indicated that our model of urban expansion estimated urban growth in both estimation and prediction steps in efficiency.
Conley, Marguerite M; Gastin, Paul B; Brown, Helen; Shaw, Christine
2011-03-01
Physical activity recommendations for children in several countries advise that all young people should accumulate at least 60 min of moderate to vigorous physical activity every day. Perceiving physical activity intensity, however, can be a difficult task for children and it is not clear whether children can identify their levels of moderate to vigorous physical activity in accordance with the recommended guidelines. This study aimed to (1) explore whether children can identify time spent in moderate to vigorous physical activity; and (2) investigate whether heart rate biofeedback would improve children's ability to estimate time spent in moderate to vigorous physical activity. Thirty seven children (15 boys and 22 girls, mean age 12.6 years) wore data recording Polar E600 heart rate monitors during eight physical education lessons. At the end of each lesson children's estimated time in zone was compared to their actual time in zone. During a six lesson Intervention phase, one class was assigned to a biofeedback group whilst the other class acted as the control group and received no heart rate biofeedback. Post-Intervention, students in the biofeedback group were no better than the control group at estimating time spent in zone (mean relative error of estimation biofeedback group: Pre-Intervention 41±32% to Post-Intervention 28±26%; control group: Pre-Intervention 40±39% to Post-Intervention 31±37%). Thus it seems that identifying time spent in moderate to vigorous physical activity remains a complex task for children aged 11-13 even with the help of heart rate biofeedback. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
NASA Astrophysics Data System (ADS)
Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.
2013-09-01
This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.
Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F
2009-03-01
Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
ERIC Educational Resources Information Center
Goldhaber, Dan; Hansen, Michael
2010-01-01
Economic theory commonly models unobserved worker quality as a given parameter that is fixed over time, but empirical evidence supporting this assumption is sparse. In this paper we report on work estimating the stability of value-added estimates of teacher effects, an important area of investigation given that new workforce policies implicitly…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or throughmore » a parametric class of prior distributions.« less
Modern Methods for Modeling Change in Obesity Research in Nursing.
Sereika, Susan M; Zheng, Yaguang; Hu, Lu; Burke, Lora E
2017-08-01
Persons receiving treatment for weight loss often demonstrate heterogeneity in lifestyle behaviors and health outcomes over time. Traditional repeated measures approaches focus on the estimation and testing of an average temporal pattern, ignoring the interindividual variability about the trajectory. An alternate person-centered approach, group-based trajectory modeling, can be used to identify distinct latent classes of individuals following similar trajectories of behavior or outcome change as a function of age or time and can be expanded to include time-invariant and time-dependent covariates and outcomes. Another latent class method, growth mixture modeling, builds on group-based trajectory modeling to investigate heterogeneity within the distinct trajectory classes. In this applied methodologic study, group-based trajectory modeling for analyzing changes in behaviors or outcomes is described and contrasted with growth mixture modeling. An illustration of group-based trajectory modeling is provided using calorie intake data from a single-group, single-center prospective study for weight loss in adults who are either overweight or obese.
Borghese, Michael M; Janssen, Ian
2018-03-22
Children participate in four main types of physical activity: organized sport, active travel, outdoor active play, and curriculum-based physical activity. The objective of this study was to develop a valid approach that can be used to concurrently measure time spent in each of these types of physical activity. Two samples (sample 1: n = 50; sample 2: n = 83) of children aged 10-13 wore an accelerometer and a GPS watch continuously over 7 days. They also completed a log where they recorded the start and end times of organized sport sessions. Sample 1 also completed an outdoor time log where they recorded the times they went outdoors and a description of the outdoor activity. Sample 2 also completed a curriculum log where they recorded times they participated in physical activity (e.g., physical education) during class time. We describe the development of a measurement approach that can be used to concurrently assess the time children spend participating in specific types of physical activity. The approach uses a combination of data from accelerometers, GPS, and activity logs and relies on merging and then processing these data using several manual (e.g., data checks and cleaning) and automated (e.g., algorithms) procedures. In the new measurement approach time spent in organized sport is estimated using the activity log. Time spent in active travel is estimated using an existing algorithm that uses GPS data. Time spent in outdoor active play is estimated using an algorithm (with a sensitivity and specificity of 85%) that was developed using data collected in sample 1 and which uses all of the data sources. Time spent in curriculum-based physical activity is estimated using an algorithm (with a sensitivity of 78% and specificity of 92%) that was developed using data collected in sample 2 and which uses accelerometer data collected during class time. There was evidence of excellent intra- and inter-rater reliability of the estimates for all of these types of physical activity when the manual steps were duplicated. This novel measurement approach can be used to estimate the time that children participate in different types of physical activity.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356
Estimating transmission probability in schools for the 2009 H1N1 influenza pandemic in Italy.
Clamer, Valentina; Dorigatti, Ilaria; Fumanelli, Laura; Rizzo, Caterina; Pugliese, Andrea
2016-10-12
Epidemic models are being extensively used to understand the main pathways of spread of infectious diseases, and thus to assess control methods. Schools are well known to represent hot spots for epidemic spread; hence, understanding typical patterns of infection transmission within schools is crucial for designing adequate control strategies. The attention that was given to the 2009 A/H1N1pdm09 flu pandemic has made it possible to collect detailed data on the occurrence of influenza-like illness (ILI) symptoms in two primary schools of Trento, Italy. The data collected in the two schools were used to calibrate a discrete-time SIR model, which was designed to estimate the probabilities of influenza transmission within the classes, grades and schools using Markov Chain Monte Carlo (MCMC) methods. We found that the virus was mainly transmitted within class, with lower levels of transmission between students in the same grade and even lower, though not significantly so, among different grades within the schools. We estimated median values of R 0 from the epidemic curves in the two schools of 1.16 and 1.40; on the other hand, we estimated the average number of students infected by the first school case to be 0.85 and 1.09 in the two schools. The discrepancy between the values of R 0 estimated from the epidemic curve or from the within-school transmission probabilities suggests that household and community transmission played an important role in sustaining the school epidemics. The high probability of infection between students in the same class confirms that targeting within-class transmission is key to controlling the spread of influenza in school settings and, as a consequence, in the general population.
Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li
2018-05-01
This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access to the topology/coupling information. Also, the event-based communication, signal quantization, and the random packet dropout problems are studied due to the limited communication resource. With the help of switched system theory and by resorting to some stochastic system analysis method, a sufficient condition is proposed to guarantee the exponential stability of estimation error system in the mean-square sense and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.
McGowan, C.P.; Millspaugh, J.J.; Ryan, M.R.; Kruse, C.D.; Pavelka, G.
2009-01-01
Estimating reproductive success for birds with precocial young can be difficult because chicks leave nests soon after hatching and individuals or broods can be difficult to track. Researchers often turn to estimating survival during the prefledging period and, though effective, mark-recapture based approaches are not always feasible due to cost, time, and animal welfare concerns. Using a threatened population of Piping Plovers (Charadrius melodus) that breeds along the Missouri River, we present an approach for estimating chick survival during the prefledging period using long-term (1993-2005), count-based, age-class data. We used a modified catch-curve analysis, and data collected during three 5-day sampling periods near the middle of the breeding season. The approach has several ecological and statistical assumptions and our analyses were designed to minimize the probability of violating those assumptions. For example, limiting the sampling periods to only 5 days gave reasonable assurance that population size was stable during the sampling period. Annual daily survival estimates ranged from 0.825 (SD = 0.03) to 0.931 (0.02) depending on year and sampling period, with these estimates assuming constant survival during the prefledging period and no change in the age structure of the population. The average probability of survival to fledging ranged from 0.126 to 0.188. Our results are similar to other published estimates for this species in similar habitats. This method of estimating chick survival may be useful for a variety of precocial bird species when mark-recapture methods are not feasible and only count-based age class data are available. ?? 2009 Association of Field Ornithologists.
NASA Astrophysics Data System (ADS)
Wang, Shibin; Chen, Xuefeng; Selesnick, Ivan W.; Guo, Yanjie; Tong, Chaowei; Zhang, Xingwu
2018-02-01
Synchrosqueezing transform (SST) can effectively improve the readability of the time-frequency (TF) representation (TFR) of nonstationary signals composed of multiple components with slow varying instantaneous frequency (IF). However, for signals composed of multiple components with fast varying IF, SST still suffers from TF blurs. In this paper, we introduce a time-frequency analysis (TFA) method called matching synchrosqueezing transform (MSST) that achieves a highly concentrated TF representation comparable to the standard TF reassignment methods (STFRM), even for signals with fast varying IF, and furthermore, MSST retains the reconstruction benefit of SST. MSST captures the philosophy of STFRM to simultaneously consider time and frequency variables, and incorporates three estimators (i.e., the IF estimator, the group delay estimator, and a chirp-rate estimator) into a comprehensive and accurate IF estimator. In this paper, we first introduce the motivation of MSST with three heuristic examples. Then we introduce a precise mathematical definition of a class of chirp-like intrinsic-mode-type functions that locally can be viewed as a sum of a reasonably small number of approximate chirp signals, and we prove that MSST does indeed succeed in estimating chirp-rate and IF of arbitrary functions in this class and succeed in decomposing these functions. Furthermore, we describe an efficient numerical algorithm for the practical implementation of the MSST, and we provide an adaptive IF extraction method for MSST reconstruction. Finally, we verify the effectiveness of the MSST in practical applications for machine fault diagnosis, including gearbox fault diagnosis for a wind turbine in variable speed conditions and rotor rub-impact fault diagnosis for a dual-rotor turbofan engine.
NASA Astrophysics Data System (ADS)
Arevalo, P. A.; Olofsson, P.; Woodcock, C. E.
2017-12-01
Unbiased estimation of the areas of conversion between land categories ("activity data") and their uncertainty is crucial for providing more robust calculations of carbon emissions to the atmosphere, as well as their removals. This is particularly important for the REDD+ mechanism of UNFCCC where an economic compensation is tied to the magnitude and direction of such fluxes. Dense time series of Landsat data and statistical protocols are becoming an integral part of forest monitoring efforts, but there are relatively few studies in the tropics focused on using these methods to advance operational MRV systems (Monitoring, Reporting and Verification). We present the results of a prototype methodology for continuous monitoring and unbiased estimation of activity data that is compliant with the IPCC Approach 3 for representation of land. We used a break detection algorithm (Continuous Change Detection and Classification, CCDC) to fit pixel-level temporal segments to time series of Landsat data in the Colombian Amazon. The segments were classified using a Random Forest classifier to obtain annual maps of land categories between 2001 and 2016. Using these maps, a biannual stratified sampling approach was implemented and unbiased stratified estimators constructed to calculate area estimates with confidence intervals for each of the stable and change classes. Our results provide evidence of a decrease in primary forest as a result of conversion to pastures, as well as increase in secondary forest as pastures are abandoned and the forest allowed to regenerate. Estimating areas of other land transitions proved challenging because of their very small mapped areas compared to stable classes like forest, which corresponds to almost 90% of the study area. Implications on remote sensing data processing, sample allocation and uncertainty reduction are also discussed.
Parameter Estimation for a Model of Space-Time Rainfall
NASA Astrophysics Data System (ADS)
Smith, James A.; Karr, Alan F.
1985-08-01
In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi2 (621 km2) catchment in the Potomac River basin.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band- recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band- recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
NASA Astrophysics Data System (ADS)
Maginnis, P. A.; West, M.; Dullerud, G. E.
2016-10-01
We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a ;black-box;, i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
On the error in crop acreage estimation using satellite (LANDSAT) data
NASA Technical Reports Server (NTRS)
Chhikara, R. (Principal Investigator)
1983-01-01
The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two class case are extended to include a third class of measurement units, where these units are mixed on ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.
NDVI, C3 and C4 production, and distributions in Great Plains grassland land cover classes
Tieszen, L.L.; Reed, Bradley C.; Bliss, Norman B.; Wylie, Bruce K.; DeJong, Benjamin D.
1997-01-01
The distributions of C3 and C4 grasses were used to interpret the distribution, seasonal performance, and potential production of grasslands in the Great Plains of North America. Thirteen major grassland seasonal land cover classes were studied with data from three distinct sources. Normalized Difference Vegetation Index (NDVI) data derived from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) sensor were collected for each pixel over a 5-yr period (1989–1993), analyzed for quantitative attributes and seasonal relationships, and then aggregated by land cover class. Data from the State Soil Geographic (STATSGO) database were used to identify dominant plant species contributing to the potential production in each map unit. These species were identified as C3 or C4, and contributions to production were aggregated to provide estimates of the percentage of C3 and C4 production for each intersection of the STATSGO map units and the seasonal land cover classes. Carbon isotope values were obtained at specific sites from the soil organic matter of the upper horizon of soil cores and were related to STATSGO estimates of potential production.The grassland classes were distributed with broad northwest-to-southeast orientations. Some classes had large variations in C3 and C4 composition with high proportions of C4species in the south and low proportions in the north. This diversity of photosynthetic types within land cover classes that cross regions of different temperature and precipitation results in similar seasonal patterns and magnitudes of NDVI. The easternmost class, 65, containing tallgrass prairie components, bluestem, Indiangrass, and switchgrass, possessed the highest maximum NDVI and time-integrated NDVI values each year. Grassland classes varied over 5 yr from a high integrated NDVI mean of 4.9 in class 65 in the east to a low of 1.2 in class 76 (sand sage, blue grama, wheatgrass, and buffalograss) in the southwest. Although environmental conditions varied widely during the 5 yr, the rankings of class performance were consistent across years for these NDVI metrics. Land cover classes were less consistent in time of onset, which was often earlier in areas in the north dominated by C3 grasses than in areas to the south dominated by C4grasses. At the level of seasonal land cover classes, no significant relationship was found between the proportions of C3 and C4 species and estimates of potential production derived from the STATSGO database or inferred from the seasonal patterns of NDVI. The isotopic data from specific sites and the potential production data from STATSGO suggest similar patterns of high proportional production by C4 species throughout the south and a decline in proportional production north of the central Great Plains. The land cover classes integrate ecosystem units that encompass a wide diversity of species and C3 and C4 proportions and provide a classification that consistently captures significant ecosystem parameters for the Great Plains.
Sheingold, Steven; Nguyen, Nguyen Xuan
2014-01-01
This study estimates the effects of generic competition, increased cost-sharing, and benefit practices on utilization and spending for prescription drugs. We examined changes in Medicare price and utilization from 2007 to 2009 of all drugs in 28 therapeutic classes. The classes accounted for 80% of Medicare Part D spending in 2009 and included the 6 protected classes and 6 classes with practically no generic competition. All variables were constructed to measure each drug relative to its class at a specific plan sponsor. We estimated that the shift toward generic utilization had cut in half the rate of increase in the price of a prescription during 2007-2009. Specifically, the results showed that (1) rapid generic penetration had significantly held down costs per prescription, (2) copayment and other benefit practices shifted utilization to generics and favored brands, and (3) price increases were generally greater in less competitive classes of drugs. In many ways, Part D was implemented at a fortuitous time; since 2006, there have been relatively few new blockbuster drugs introduced, and many existing high-volume drugs used by beneficiaries were in therapeutic classes with multiple brands and generic alternatives. Under these conditions, our paper showed that plan sponsors have been able to contain costs by encouraging use of generics or drugs offering greater value within therapeutic classes. It is less clear what will happen to future Part D costs if a number of new and effective drugs for beneficiaries enter the market with no real competitors.
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M
2017-12-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.
Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares
NASA Technical Reports Server (NTRS)
Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel
2014-01-01
The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.
Detection and Parameter Estimation of Chirped Radar Signals.
2000-01-10
Wigner - Ville distribution ( WVD ): The WVD belongs to the Cohen’s class of energy distributions ...length. 28 6. Pseudo Wigner - Ville distribution (PWVD): The PWVD introduces a time-window to the WVD definition, thereby reducing the interferences...Frequency normalized to sampling frequency 26 Figure V.2: Wigner - Ville distribution ; time normalized to the pulse length 28 Figure V.3:
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
An Illustration of Generalised Arma (garma) Time Series Modeling of Forest Area in Malaysia
NASA Astrophysics Data System (ADS)
Pillai, Thulasyammal Ramiah; Shitan, Mahendran
Forestry is the art and science of managing forests, tree plantations, and related natural resources. The main goal of forestry is to create and implement systems that allow forests to continue a sustainable provision of environmental supplies and services. Forest area is land under natural or planted stands of trees, whether productive or not. Forest area of Malaysia has been observed over the years and it can be modeled using time series models. A new class of GARMA models have been introduced in the time series literature to reveal some hidden features in time series data. For these models to be used widely in practice, we illustrate the fitting of GARMA (1, 1; 1, δ) model to the Annual Forest Area data of Malaysia which has been observed from 1987 to 2008. The estimation of the model was done using Hannan-Rissanen Algorithm, Whittle's Estimation and Maximum Likelihood Estimation.
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulatingmore » earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.« less
Richter, Jacob T.; Sloss, Brian L.; Isermann, Daniel A.
2016-01-01
Previous research has generally ignored the potential effects of spawning habitat availability and quality on recruitment of Walleye Sander vitreus, largely because information on spawning habitat is lacking for many lakes. Furthermore, traditional transect-based methods used to describe habitat are time and labor intensive. Our objectives were to determine if side-scan sonar could be used to accurately classify Walleye spawning habitat in the nearshore littoral zone and provide lakewide estimates of spawning habitat availability similar to estimates obtained from a transect–quadrat-based method. Based on assessments completed on 16 northern Wisconsin lakes, interpretation of side-scan sonar images resulted in correct identification of substrate size-class for 93% (177 of 191) of selected locations and all incorrect classifications were within ± 1 class of the correct substrate size-class. Gravel, cobble, and rubble substrates were incorrectly identified from side-scan images in only two instances (1% misclassification), suggesting that side-scan sonar can be used to accurately identify preferred Walleye spawning substrates. Additionally, we detected no significant differences in estimates of lakewide littoral zone substrate compositions estimated using side-scan sonar and a traditional transect–quadrat-based method. Our results indicate that side-scan sonar offers a practical, accurate, and efficient technique for assessing substrate composition and quantifying potential Walleye spawning habitat in the nearshore littoral zone of north temperate lakes.
Koval'chuk, V K
2004-01-01
The article presents medicoecological estimation of quantitative relations between monsoon climate and urolithiasis primary morbidity in the Primorsky Territory. Quantitative estimation of the climate was performed by V. I. Rusanov (1973) who calculated daily meteorological data for 1 p.m. throughout 1991-1999. Primary urolithiasis morbidity for this period of time was provided by regional health department. The data were processed by methods of medical mapping and paired correlation analysis. In the Territory, mapping revealed the same location of the zones with high frequency of discomfortable weather of class V and VI causing chilblain in positive air temperatures and zones with elevated primary urolithiasis morbidity in children and adults. Correlation analysis confirmed mapping results and determined significant negative correlations between frequency of relatively comfortable moment weather classes II-IV and morbidity of children and adults, positive correlation between frequency of discomfortable class VI and adult morbidity. Thus, high frequency of days per year with discomfortable classes of moment weather in low positive air temperatures may be one of the factors of urolithiasis risk in population of the Primorsky Territory. Climatic factors should be taken into consideration in planning primary prophylaxis of this disease in the Primorsky Territory.
Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W
2007-07-01
Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k , from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
Shabbir, Javid
2018-01-01
In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systemsmore » with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.« less
Timing of seed dispersal and seed dormancy in Brazilian savanna: two solutions to face seasonality.
Escobar, Diego F E; Silveira, Fernando A O; Morellato, Leonor Patricia C
2018-05-11
The relationship between fruiting phenology and seed dispersal syndrome is widely recognized; however, the interaction of dormancy classes and plant life-history traits in relation to fruiting phenology and seed dispersal is understudied. Here we examined the relationship between fruiting season and seed dormancy and how this relationship is modulated by dormancy classes, dispersal syndromes, seed mass and seed moisture content in a Brazilian savanna (cerrado). Dormancy classes (non-dormancy and physical, morphological, morphophysiological, physiological and physiophysical dormancy) of 34 cerrado species were experimentally determined. Their seed dispersal syndrome (autochory, anemochory, zoochory), dispersal season (rainy, dry, rainy-to-dry and dry-to-rainy transitions), seed mass and moisture contents, and the estimated germination date were also determined. Log-linear models were used to evaluate how dormancy and dormancy classes are related to dispersal season and syndrome. The proportions of dormant and non-dormant species were similar in cerrado. The community-estimated germination date was seasonal, occurring at the onset of rainy season. Overall, anemochorous non-dormant species released seeds during the dry-to-rainy transition; autochorous physically dormant species dispersed seeds during the dry season and rainy-to-dry transition; zoochorous species dispersed non-dormant seeds during the dry and rainy seasons, while species with morphological, morphophysiological or physiological dormancy dispersed seeds in the transitional seasons. Seed mass differed among dispersal seasons and dormancy classes, but seed moisture content did not vary with dispersal syndrome, season or dormancy class. The beginning of the rainy season was the most favourable period for seed germination in cerrado, and the germination phenology was controlled by both the timing of seed dispersal and seed dormancy. Dormancy class was influenced by dispersal syndrome and season. Moreover, dormancy avoided seed germination during the rainy-to-dry transition, independently of dispersal syndrome. The variability of dormancy classes with dispersal syndrome allowed animal-dispersed species to fruit all year round, but seeds germinated only during the rainy season. Conversely, seasonally restricted wind-dispersal species dispersed and germinated their non-dormant seeds only in the rainy season.
ERIC Educational Resources Information Center
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Bayes estimation on parameters of the single-class classifier. [for remotely sensed crop data
NASA Technical Reports Server (NTRS)
Lin, G. C.; Minter, T. C.
1976-01-01
Normal procedures used for designing a Bayes classifier to classify wheat as the major crop of interest require not only training samples of wheat but also those of nonwheat. Therefore, ground truth must be available for the class of interest plus all confusion classes. The single-class Bayes classifier classifies data into the class of interest or the class 'other' but requires training samples only from the class of interest. This paper will present a procedure for Bayes estimation on the mean vector, covariance matrix, and a priori probability of the single-class classifier using labeled samples from the class of interest and unlabeled samples drawn from the mixture density function.
The NASA Meter Class Autonomous Telescope: Ascension Island
2013-09-01
understand the debris environment by providing high fidelity data in a timely manner to protect satellites and spacecraft in orbit around the Earth...gigabytes of image data nightly. With fainter detection limits, precision detection, acquisition and tracking of targets, multi-color photometry ...ApprovedOMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unifies earlier results in the literature; and extend it to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater.
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data-set a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, were modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions In conclusion the LCA method was effective at finding relevant subgroups within a heterogenous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
NASA Technical Reports Server (NTRS)
Park, Han G. (Inventor); Zak, Michail (Inventor); James, Mark L. (Inventor); Mackey, Ryan M. E. (Inventor)
2003-01-01
A general method of anomaly detection from time-correlated sensor data is disclosed. Multiple time-correlated signals are received. Their cross-signal behavior is compared against a fixed library of invariants. The library is constructed during a training process, which is itself data-driven using the same time-correlated signals. The method is applicable to a broad class of problems and is designed to respond to any departure from normal operation, including faults or events that lie outside the training envelope.
Mbizah, Moreangels M; Steenkamp, Gerhard; Groom, Rosemary J
2016-01-01
African wild dogs (Lycaon pictus) are endangered and their population continues to decline throughout their range. Given their conservation status, more research focused on their population dynamics, population growth and age specific mortality is needed and this requires reliable estimates of age and age of mortality. Various age determination methods from teeth and skull measurements have been applied in numerous studies and it is fundamental to test the validity of these methods and their applicability to different species. In this study we assessed the accuracy of estimating chronological age and age class of African wild dogs, from dental age measured by (i) counting cementum annuli (ii) pulp cavity/tooth width ratio, (iii) tooth wear (measured by tooth crown height) (iv) tooth wear (measured by tooth crown width/crown height ratio) (v) tooth weight and (vi) skull measurements (length, width and height). A sample of 29 African wild dog skulls, from opportunistically located carcasses was analysed. Linear and ordinal regression analysis was done to investigate the performance of each of the six age determination methods in predicting wild dog chronological age and age class. Counting cementum annuli was the most accurate method for estimating chronological age of wild dogs with a 79% predictive capacity, while pulp cavity/tooth width ratio was also a reliable method with a 68% predictive capacity. Counting cementum annuli and pulp cavity/tooth width ratio were again the most accurate methods for separating wild dogs into three age classes (6-24 months; 25-60 months and > 60 months), with a McFadden's Pseudo-R2 of 0.705 and 0.412 respectively. The use of the cementum annuli method is recommended when estimating age of wild dogs since it is the most reliable method. However, its use is limited as it requires tooth extraction and shipping, is time consuming and expensive, and is not applicable to living individuals. Pulp cavity/tooth width ratio is a moderately reliable method for estimating both chronological age and age class. This method gives a balance between accuracy, cost and practicability, therefore it is recommended when precise age estimations are not paramount.
Steenkamp, Gerhard; Groom, Rosemary J.
2016-01-01
African wild dogs (Lycaon pictus) are endangered and their population continues to decline throughout their range. Given their conservation status, more research focused on their population dynamics, population growth and age specific mortality is needed and this requires reliable estimates of age and age of mortality. Various age determination methods from teeth and skull measurements have been applied in numerous studies and it is fundamental to test the validity of these methods and their applicability to different species. In this study we assessed the accuracy of estimating chronological age and age class of African wild dogs, from dental age measured by (i) counting cementum annuli (ii) pulp cavity/tooth width ratio, (iii) tooth wear (measured by tooth crown height) (iv) tooth wear (measured by tooth crown width/crown height ratio) (v) tooth weight and (vi) skull measurements (length, width and height). A sample of 29 African wild dog skulls, from opportunistically located carcasses was analysed. Linear and ordinal regression analysis was done to investigate the performance of each of the six age determination methods in predicting wild dog chronological age and age class. Counting cementum annuli was the most accurate method for estimating chronological age of wild dogs with a 79% predictive capacity, while pulp cavity/tooth width ratio was also a reliable method with a 68% predictive capacity. Counting cementum annuli and pulp cavity/tooth width ratio were again the most accurate methods for separating wild dogs into three age classes (6–24 months; 25–60 months and > 60 months), with a McFadden’s Pseudo-R2 of 0.705 and 0.412 respectively. The use of the cementum annuli method is recommended when estimating age of wild dogs since it is the most reliable method. However, its use is limited as it requires tooth extraction and shipping, is time consuming and expensive, and is not applicable to living individuals. Pulp cavity/tooth width ratio is a moderately reliable method for estimating both chronological age and age class. This method gives a balance between accuracy, cost and practicability, therefore it is recommended when precise age estimations are not paramount. PMID:27732663
Dereymaeker, Anneleen; Pillay, Kirubin; Vervisch, Jan; Van Huffel, Sabine; Naulaers, Gunnar; Jansen, Katrien; De Vos, Maarten
2017-09-01
Sleep state development in preterm neonates can provide crucial information regarding functional brain maturation and give insight into neurological well being. However, visual labeling of sleep stages from EEG requires expertise and is very time consuming, prompting the need for an automated procedure. We present a robust method for automated detection of preterm sleep from EEG, over a wide postmenstrual age ([Formula: see text] age) range, focusing first on Quiet Sleep (QS) as an initial marker for sleep assessment. Our algorithm, CLuster-based Adaptive Sleep Staging (CLASS), detects QS if it remains relatively more discontinuous than non-QS over PMA. CLASS was optimized on a training set of 34 recordings aged 27-42 weeks PMA, and performance then assessed on a distinct test set of 55 recordings of the same age range. Results were compared to visual QS labeling from two independent raters (with inter-rater agreement [Formula: see text]), using Sensitivity, Specificity, Detection Factor ([Formula: see text] of visual QS periods correctly detected by CLASS) and Misclassification Factor ([Formula: see text] of CLASS-detected QS periods that are misclassified). CLASS performance proved optimal across recordings at 31-38 weeks (median [Formula: see text], median MF 0-0.25, median Sensitivity 0.93-1.0, and median Specificity 0.80-0.91 across this age range), with minimal misclassifications at 35-36 weeks (median [Formula: see text]). To illustrate the potential of CLASS in facilitating clinical research, normal maturational trends over PMA were derived from CLASS-estimated QS periods, visual QS estimates, and nonstate specific periods (containing QS and non-QS) in the EEG recording. CLASS QS trends agreed with those from visual QS, with both showing stronger correlations than nonstate specific trends. This highlights the benefit of automated QS detection for exploring brain maturation.
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model building process residual variances are often disregarded and simplifying assumptions made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
Mapped Plot Patch Size Estimates
Paul C. Van Deusen
2005-01-01
This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...
Time-varying higher order spectra
NASA Astrophysics Data System (ADS)
Boashash, Boualem; O'Shea, Peter
1991-12-01
A general solution for the problem of time-frequency signal representation of nonlinear FM signals is provided, based on a generalization of the Wigner-Ville distribution. The Wigner- Ville distribution (WVD) is a second order time-frequency representation. That is, it is able to give ideal energy concentration for quadratic phase signals and its ensemble average is a second order time-varying spectrum. The same holds for Cohen's class of time-frequency distributions, which are smoothed versions of the WVD. The WVD may be extended so as to achieve ideal energy concentration for higher order phase laws, and such that the expectation is a time-varying higher order spectrum. The usefulness of these generalized Wigner-Ville distributions (GWVD) is twofold. Firstly, because they achieve ideal energy concentration for polynomial phase signals, they may be used for optimal instantaneous frequency estimation. Second, they are useful for discriminating between nonstationary processes of differing higher order moments. In the same way that the WVD is generalized, we generalize Cohen's class of TFDs by defining a class of generalized time-frequency distributions (GTFDs) obtained by a two dimensional smoothing of the GWVD. Another results derived from this approach is a method based on higher order spectra which allows the separation of cross-terms and auto- terms in the WVD.
Issues in the deregulation of the electric industry
NASA Astrophysics Data System (ADS)
Tyler, Cleve Brent
The electric industry is undergoing a major restructuring which allows competition in the generation portion of the industry. This dissertation explores several pricing issues relevant to this restructuring. First, an extensive overview examines the industry's history, discusses major regulation theories, and relays the major issues of deregulation. Second, a literature review recounts major works in the economics literature on price discrimination, pricing efficiency, and cost estimation. Then, customer specific generation, transmission, distribution, and general and administration costs are estimated for each company. The customer classes are residential, general service, large general service, and large industrial, representing a finer division of customer classes than found in previous studies. Average prices are compiled and marginal prices are determined from a set of utility schedules. Average and marginal price/cost ratios are computed for each customer class. These ratios show that larger use customers face relative price discrimination but operate under more efficient price structures than small use consumers. Finally, issues in peak load pricing are discussed using a model which predicts inefficient capital choice by regulated utilities. Efficiency losses are estimated to be $620 million dollars a year from the lack of peak load prices under regulation. This result is based on the time-of-use pricing predictions from the Department of Energy.
2013-03-01
members of Class 0703–0704 (the Potato -Heads), of which I have many memories from our time spent in Shepherdstown, your status will always remain...have been issued during this time , no metrics are available to measure its effectiveness . The federal government, state and local governments, private...Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction
NASA Astrophysics Data System (ADS)
Petrillo, M.; Cherubini, P.; Fravolini, G.; Ascher, J.; Schärer, M.; Synal, H.-A.; Bertoldi, D.; Camin, F.; Larcher, R.; Egli, M.
2015-09-01
Due to the large size and highly heterogeneous spatial distribution of deadwood, the time scales involved in the coarse woody debris (CWD) decay of Picea abies (L.) Karst. and Larix decidua Mill. in Alpine forests have been poorly investigated and are largely unknown. We investigated the CWD decay dynamics in an Alpine valley in Italy using the five-decay class system commonly employed for forest surveys, based on a macromorphological and visual assessment. For the decay classes 1 to 3, most of the dendrochronological samples were cross-dated to assess the time that had elapsed since tree death, but for decay classes 4 and 5 (poorly preserved tree rings) and some others not having enough tree rings, radiocarbon dating was used. In addition, density, cellulose and lignin data were measured for the dated CWD. The decay rate constants for spruce and larch were estimated on the basis of the density loss using a single negative exponential model. In the decay classes 1 to 3, the ages of the CWD were similar varying between 1 and 54 years for spruce and 3 and 40 years for larch with no significant differences between the classes; classes 1-3 are therefore not indicative for deadwood age. We found, however, distinct tree species-specific differences in decay classes 4 and 5, with larch CWD reaching an average age of 210 years in class 5 and spruce only 77 years. The mean CWD rate constants were 0.012 to 0.018 yr-1 for spruce and 0.005 to 0.012 yr-1 for larch. Cellulose and lignin time trends half-lives (using a multiple-exponential model) could be derived on the basis of the ages of the CWD. The half-lives for cellulose were 21 yr for spruce and 50 yr for larch. The half-life of lignin is considerably higher and may be more than 100 years in larch CWD.
Busanello, Marcos; de Freitas, Larissa Nazareth; Winckler, João Pedro Pereira; Farias, Hiron Pereira; Dos Santos Dias, Carlos Tadeu; Cassoli, Laerte Dagher; Machado, Paulo Fernando
2017-01-01
Payment programs based on milk quality (PPBMQ) are used in several countries around the world as an incentive to improve milk quality. One of the principal milk parameters used in such programs is the bulk tank somatic cell count (BTSCC). In this study, using data from an average of 37,000 farms per month in Brazil where milk was analyzed, BTSCC data were divided into different payment classes based on milk quality. Then, descriptive and graphical analyses were performed. The probability of a change to a worse payment class was calculated, future BTSCC values were predicted using time series models, and financial losses due to the failure to reach the maximum bonus for the payment based on milk quality were simulated. In Brazil, the mean BTSCC has remained high in recent years, without a tendency to improve. The probability of changing to a worse payment class was strongly affected by both the BTSCC average and BTSCC standard deviation for classes 1 and 2 (1000-200,000 and 201,000-400,000 cells/mL, respectively) and only by the BTSCC average for classes 3 and 4 (401,000-500,000 and 501,000-800,000 cells/mL, respectively). The time series models indicated that at some point in the year, farms would not remain in their current class and would accrue financial losses due to payments based on milk quality. The BTSCC for Brazilian dairy farms has not recently improved. The probability of a class change to a worse class is a metric that can aid in decision-making and stimulate farmers to improve milk quality. A time series model can be used to predict the future value of the BTSCC, making it possible to estimate financial losses and to show, moreover, that financial losses occur in all classes of the PPBMQ because the farmers do not remain in the best payment class in all months.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids. The convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
Classification of daily solar irradiation by fractional analysis of 10-min-means of solar irradiance
NASA Astrophysics Data System (ADS)
Harrouni, S.; Guessoum, A.; Maafi, A.
2005-02-01
This paper deals with fractal analysis of daily solar irradiances measured with a time step of 10 minutes at Golden and Boulder located in Colorado. The aim is to estimate the fractal dimensions in order to perform classification of daily solar irradiances. The estimated fractal dimension hat{D} and the clearness index KT are used as classification criteria. The results show that these criteria lead to three classes: clear sky, partially covered sky and overcast sky. The results also show that the evaluation of the fractal dimension of the irradiance signal based on a data set with 10 minutes time step is possible.
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. ?? 2009 Elsevier Inc.
Adaptive Bayes classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.
1975-01-01
An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in details to further demonstrate the proposed methodology. PMID:25772374
A Latent Class Approach to Estimating Test-Score Reliability
ERIC Educational Resources Information Center
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
Estimation of a Frontier Production Function for the South Carolina Educational Process.
ERIC Educational Resources Information Center
Cooper, Samuel T.; Cohn, Elchanan
1997-01-01
Estimates frontier production functions for South Carolina's educational process, using data from 541 classes. Classes taught by teachers who received merit awards show greater mathematics and reading achievement gain scores, as do classes with fewer free-lunch students. There was a positive relationship between achievement and (larger) class…
Zhu, Guang-Hui; Jia, Zheng-Jun; Yu, Xiao-Jun; Wu, Ku-Sheng; Chen, Lu-Shi; Lv, Jun-Yao; Eric Benbow, M
2017-05-01
Preadult development of necrophagous flies is commonly recognized as an accurate method for estimating the minimum postmortem interval (PMImin). However, once the PMImin exceeds the duration of preadult development, the method is less accurate. Recently, fly puparial hydrocarbons were found to significantly change with weathering time in the field, indicating their potential use for PMImin estimates. However, additional studies are required to demonstrate how the weathering varies among species. In this study, the puparia of Chrysomya rufifacies were placed in the field to experience natural weathering to characterize hydrocarbon composition change over time. We found that weathering of the puparial hydrocarbons was regular and highly predictable in the field. For most of the hydrocarbons, the abundance decreased significantly and could be modeled using a modified exponent function. In addition, the weathering rate was significantly correlated with the hydrocarbon classes. The weathering rate of 2-methyl alkanes was significantly lower than that of alkenes and internal methyl alkanes, and alkenes were higher than the other two classes. For mono-methyl alkanes, the rate was significantly and positively associated with carbon chain length and branch position. These results indicate that puparial hydrocarbon weathering is highly predictable and can be used for estimating long-term PMImin.
Bécares, Laia; Zhang, Nan
2018-01-01
Abstract Experiencing discrimination is associated with poor mental health, but how cumulative experiences of perceived interpersonal discrimination across attributes, domains, and time are associated with mental disorders is still unknown. Using data from the Study of Women’s Health Across the Nation (1996–2008), we applied latent class analysis and generalized linear models to estimate the association between cumulative exposure to perceived interpersonal discrimination and older women’s mental health. We found 4 classes of perceived interpersonal discrimination, ranging from cumulative exposure to discrimination over attributes, domains, and time to none or minimal reports of discrimination. Women who experienced cumulative perceived interpersonal discrimination over time and across attributes and domains had the highest risk of depression (Center for Epidemiologic Studies Depression Scale score ≥16) compared with women in all other classes. This was true for all women regardless of race/ethnicity, although the type and severity of perceived discrimination differed across racial/ethnic groups. Cumulative exposure to perceived interpersonal discrimination across attributes, domains, and time has an incremental negative long-term association with mental health. Studies that examine exposure to perceived discrimination due to a single attribute in 1 domain or at 1 point in time underestimate the magnitude and complexity of discrimination and its association with health. PMID:29036550
Chernobyl accident: reconstruction of thyroid dose for inhabitants of the Republic of Belarus.
Gavrilin, Y I; Khrouch, V T; Shinkarev, S M; Krysenko, N A; Skryabin, A M; Bouville, A; Anspaugh, L R
1999-02-01
The Chernobyl accident in April 1986 resulted in widespread contamination of the environment with radioactive materials, including (131)I and other radioiodines. This environmental contamination led to substantial radiation doses in the thyroids of many inhabitants of the Republic of Belarus. The reconstruction of thyroid doses received by Belarussians is based primarily on exposure rates measured against the neck of more than 200,000 people in the more contaminated territories; these measurements were carried out within a few weeks after the accident and before the decay of (131)I to negligible levels. Preliminary estimates of thyroid dose have been divided into 3 classes: Class 1 ("measured" doses), Class 2 (doses "derived by affinity"), and Class 3 ("empirically-derived" doses). Class 1 doses are estimated directly from the measured thyroidal (131)I content of the person considered, plus information on lifestyle and dietary habits. Such estimates are available for about 130,000 individuals from the contaminated areas of the Gomel and Mogilev Oblasts and from the city of Minsk. Maximum individual doses are estimated to range up to about 60 Gy. For every village with a sufficient number of residents with Class 1 doses, individual thyroid dose distributions are determined for several age groups and levels of milk consumption. These data are used to derive Class 2 thyroid dose estimates for unmeasured inhabitants of these villages. For any village where the number of residents with Class 1 thyroid doses is small or equal to zero, individual thyroid doses of Class 3 are derived from the relationship obtained between the mean adult thyroid dose and the deposition density of (131)I or 137Cs in villages with Class 2 thyroid doses presenting characteristics similar to those of the village considered. In order to improve the reliability of the Class 3 thyroid doses, an extensive program of measurement of (129)I in soils is envisaged.
Finite-time output feedback control of uncertain switched systems via sliding mode design
NASA Astrophysics Data System (ADS)
Zhao, Haijuan; Niu, Yugang; Song, Jun
2018-04-01
The problem of sliding mode control (SMC) is investigated for a class of uncertain switched systems subject to unmeasurable state and assigned finite (possible short) time constraint. A key issue is how to ensure the finite-time boundedness (FTB) of system state during reaching phase and sliding motion phase. To this end, a state observer is constructed to estimate the unmeasured states. And then, a state estimate-based SMC law is designed such that the state trajectories can be driven onto the specified integral sliding surface during the assigned finite time interval. By means of partitioning strategy, the corresponding FTB over reaching phase and sliding motion phase are guaranteed and the sufficient conditions are derived via average dwell time technique. Finally, an illustrative example is given to illustrate the proposed method.
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee
2016-06-01
Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H.
2016-01-01
Atrial fibrillation (AF) is an abnormal heart rhythm characterized by rapid and irregular heart beat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for diagnosis of AF. Since the AF often arrives as recurrent episodes of varying frequency and duration and only the episodes that occur at the time of ECG can be detected, the AF is often underdiagnosed when a limited number of repeated ECGs are used. In studies evaluating the efficacy of AF ablation surgery, each patient undergo multiple ECGs and the AF status at the time of ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient’s probability of having no AF, paroxysmal AF, and permanent AF is modeled by auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the EM algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. PMID:27983754
Li, Liang; Mao, Huzhang; Ishwaran, Hemant; Rajeswaran, Jeevanantham; Ehrlinger, John; Blackstone, Eugene H
2017-03-01
Atrial fibrillation (AF) is an abnormal heart rhythm characterized by rapid and irregular heartbeat, with or without perceivable symptoms. In clinical practice, the electrocardiogram (ECG) is often used for diagnosis of AF. Since the AF often arrives as recurrent episodes of varying frequency and duration and only the episodes that occur at the time of ECG can be detected, the AF is often underdiagnosed when a limited number of repeated ECGs are used. In studies evaluating the efficacy of AF ablation surgery, each patient undergoes multiple ECGs and the AF status at the time of ECG is recorded. The objective of this paper is to estimate the marginal proportions of patients with or without AF in a population, which are important measures of the efficacy of the treatment. The underdiagnosis problem is addressed by a three-class mixture regression model in which a patient's probability of having no AF, paroxysmal AF, and permanent AF is modeled by auxiliary baseline covariates in a nested logistic regression. A binomial regression model is specified conditional on a subject being in the paroxysmal AF group. The model parameters are estimated by the Expectation-Maximization (EM) algorithm. These parameters are themselves nuisance parameters for the purpose of this research, but the estimators of the marginal proportions of interest can be expressed as functions of the data and these nuisance parameters and their variances can be estimated by the sandwich method. We examine the performance of the proposed methodology in simulations and two real data applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
1982-10-01
thermal noise and radioastronomy is probably the application Shirman had in mind for that work. Kuriksha considers a wide class of two-dimensional...this point has been discussed In terms of EM wave propagation, signal detection, and parameter estimation in such fields as radar and radioastronomy
10 CFR 50.33 - Contents of applications; general information.
Code of Federal Regulations, 2010 CFR
2010-01-01
.... (e) The class of license applied for, the use to which the facility will be put, the period of time... applicable, the following should be provided: (1) If the application is for a construction permit, the... assurance of obtaining the funds necessary to cover estimated construction costs and related fuel cycle...
Daniel C. Dey
1995-01-01
Manipulation of stand stocking through thinning can increase the amount of oak in the upper crown classes and enhance individual tree characteristics that promote good acorn production. Identification of good acorn producers before thinning or shelterwood harvests can be used to retain them in a stand. Stocking charts can be used to time thinnings and to estimate acorn...
Education System Benefits of U.S. Metric Conversion.
ERIC Educational Resources Information Center
Phelps, Richard P.
1996-01-01
U.S. metric conversion efforts are reviewed as they have affected education. Education system benefits and costs are estimated for three possible system conversion plans. The soft-conversion-to-metric plan, which drops all inch-pound instruction, appears to provide the largest net benefits. The primary benefit is in class time saved. (SLD)
MECHANISTIC INFORMATION ON DISINFECTION BY-PRODUCTS FOR RISK ASSESSMENT
Colon cancer is the second most common cancer in people from developed countries, and populations exposed t o 50?g/L or more of trihalomethanes for at 1east 35 years have been estimated to be 1.5 times more likely to develop colon cancer. Trihalomethanes are one of the classes ...
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
7 CFR 1744.66 - The financial requirement statement (FRS).
Code of Federal Regulations, 2010 CFR
2010-01-01
... amount, exclusive of the amount for class B stock, of each loan advance, at the time of such advance. (5) Operating expenses—(i) Working capital—new system. Based on the borrower's itemized estimate. (ii) Current... part 1753. (iv) Real estate. Upon request by the borrower after submission of evidence of a valid title...
Typology of club drug use among young adults recruited using time-space sampling
Ramo, Danielle E.; Grov, Christian; Delucchi, Kevin; Kelly, Brian C.; Parsons, Jeffrey T.
2009-01-01
The present study examined patterns of recent club drug use among 400 young adults (18–29) recruited using time-space sampling in NYC. Subjects had used at least one of six club drugs (MDMA, Ketamine, GHB, Cocaine, Methamphetamine, and LSD) within the prior 3 months. We used latent class analysis (LCA) to estimate latent groups based on patterns of recent club drug use and examined differences in demographic and psychological variables by class. A 3-class model fit the data best. Patterns were: Primary cocaine users (42% of sample), Mainstream users (44% of sample), and Wide-range users (14% of sample). Those most likely to be Primary cocaine users were significantly less likely to be heterosexual males and had higher educational attainment than the other two classes. Those most likely to be Wide-range users were less likely to be heterosexual females, more likely to be gay/bisexual males, dependent on club drugs, had significantly greater drug and sexual sensation-seeking, and were more likely to use when experiencing physical discomfort or pleasant times with others compared to the other two groups. Findings highlight the utility of using person-centered approaches to understand patterns of substance use, as well as highlight several patterns of club drug use among young adults. PMID:19939585
Yang, Xiong; Liu, Derong; Wang, Ding; Wei, Qinglai
2014-07-01
In this paper, a reinforcement-learning-based direct adaptive control is developed to deliver a desired tracking performance for a class of discrete-time (DT) nonlinear systems with unknown bounded disturbances. We investigate multi-input-multi-output unknown nonaffine nonlinear DT systems and employ two neural networks (NNs). By using Implicit Function Theorem, an action NN is used to generate the control signal and it is also designed to cancel the nonlinearity of unknown DT systems, for purpose of utilizing feedback linearization methods. On the other hand, a critic NN is applied to estimate the cost function, which satisfies the recursive equations derived from heuristic dynamic programming. The weights of both the action NN and the critic NN are directly updated online instead of offline training. By utilizing Lyapunov's direct method, the closed-loop tracking errors and the NN estimated weights are demonstrated to be uniformly ultimately bounded. Two numerical examples are provided to show the effectiveness of the present approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
Land-cover change in the conterminous United States from 1973 to 2000
Sleeter, Benjamin M.; Sohl, Terry L.; Loveland, Thomas R.; Auch, Roger F.; Acevedo, William; Drummond, Mark A.; Sayler, Kristi L.; Stehman, Stephen V.
2013-01-01
Land-cover change in the conterminous United States was quantified by interpreting change from satellite imagery for a sample stratified by 84 ecoregions. Gross and net changes between 11 land-cover classes were estimated for 5 dates of Landsat imagery (1973, 1980, 1986, 1992, and 2000). An estimated 673,000 km2(8.6%) of the United States’ land area experienced a change in land cover at least one time during the study period. Forest cover experienced the largest net decline of any class with 97,000 km2 lost between 1973 and 2000. The large decline in forest cover was prominent in the two regions with the highest percent of overall change, the Marine West Coast Forests (24.5% of the region experienced a change in at least one time period) and the Eastern Temperate Forests (11.4% of the region with at least one change). Agriculture declined by approximately 90,000 km2 with the largest annual net loss of 12,000 km2 yr−1 occurring between 1986 and 1992. Developed area increased by 33% and with the rate of conversion to developed accelerating rate over time. The time interval with the highest annual rate of change of 47,000 km2 yr−1 (0.6% per year) was 1986–1992. This national synthesis documents a spatially and temporally dynamic era of land change between 1973 and 2000. These results quantify land change based on a nationally consistent monitoring protocol and contribute fundamental estimates critical to developing understanding of the causes and consequences of land change in the conterminous United States.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
NASA Astrophysics Data System (ADS)
Sardina, V.
2017-12-01
The Pacific Tsunami Warning Center's round the clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it turns operationally advantageous to have the ability of assessing how quickly the PTWC operational system can reasonably detect and locate and earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-waves detection time in at least a minimum number of stations, and maximum allowed azimuth gap into a DetectionTimePoint class. Then we apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new TravelTimeLine objects and assigning the calculation of the corresponding P- and S-waves' detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options. Initial testing in an eight core system shows that generation of a global 2D grid at 1 degree resolution setting detection on at least 5 stations and no azimuth gap restriction takes under 25 seconds. Under the same initial conditions, generation of a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. This preliminary results show a significant gain in grid generation speed when compared to other implementation via either scripts, or previous versions of the C++ code that did not implement multithreading.
New Brown Dwarf Discs in Upper Scorpius Observed with WISE
NASA Technical Reports Server (NTRS)
Dawson, P.; Scholz, A.; Ray, T. P.; Natta, A.; Marsh, K. A.; Padgett, D.; Ressler, M. E.
2013-01-01
We present a census of the disc population for UKIDSS selected brown dwarfs in the 5-10 Myr old Upper Scorpius OB association. For 116 objects originally identified in UKIDSS, the majority of them not studied in previous publications, we obtain photometry from the Wide-Field Infrared Survey Explorer data base. The resulting colour magnitude and colour colour plots clearly show two separate populations of objects, interpreted as brown dwarfs with discs (class II) and without discs (class III). We identify 27 class II brown dwarfs, 14 of them not previously known. This disc fraction (27 out of 116, or 23%) among brown dwarfs was found to be similar to results for K/M stars in Upper Scorpius, suggesting that the lifetimes of discs are independent of the mass of the central object for low-mass stars and brown dwarfs. 5 out of 27 discs (19 per cent) lack excess at 3.4 and 4.6 microns and are potential transition discs (i.e. are in transition from class II to class III). The transition disc fraction is comparable to low-mass stars.We estimate that the time-scale for a typical transition from class II to class III is less than 0.4 Myr for brown dwarfs. These results suggest that the evolution of brown dwarf discs mirrors the behaviour of discs around low-mass stars, with disc lifetimes of the order of 5 10 Myr and a disc clearing time-scale significantly shorter than 1 Myr.
Temporal patterns of apparent leg band retention in North American geese
Zimmerman, Guthrie S.; Kendall, William L.; Moser, Timothy J.; White, Gary C.; Doherty, Paul F.
2009-01-01
An important assumption of mark?recapture studies is that individuals retain their marks, which has not been assessed for goose reward bands. We estimated aluminum leg band retention probabilities and modeled how band retention varied with band type (standard vs. reward band), band age (1-40 months), and goose characteristics (species and size class) for Canada (Branta canadensis), cackling (Branta hutchinsii), snow (Chen caerulescens), and Ross?s (Chen rossii) geese that field coordinators double-leg banded during a North American goose reward band study (N = 40,999 individuals from 15 populations). We conditioned all models in this analysis on geese that were encountered with >1 leg band still attached (n = 5,747 dead recoveries and live recaptures). Retention probabilities for standard aluminum leg bands were high (estimate of 0.9995, SE = 0.001) and constant over 1-40 months. In contrast, apparent retention probabilities for reward bands demonstrated an interactive relationship between 5 size and species classes (small cackling, medium Canada, large Canada, snow, and Ross?s geese). In addition, apparent retention probabilities for each of the 5 classes varied quadratically with time, being lower immediately after banding and at older age classes. The differential retention probabilities among band type (reward vs. standard) that we observed suggests that 1) models estimating reporting probability should incorporate differential band loss if it is nontrivial, 2) goose managers should consider the costs and benefits of double-banding geese on an operational basis, and 3) the United States Geological Survey Bird Banding Lab should modify protocols for receiving recovery data.
Attitude Estimation or Quaternion Estimation?
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2003-01-01
The attitude of spacecraft is represented by a 3x3 orthogonal matrix with unity determinant, which belongs to the three-dimensional special orthogonal group SO(3). The fact that all three-parameter representations of SO(3) are singular or discontinuous for certain attitudes has led to the use of higher-dimensional nonsingular parameterizations, especially the four-component quaternion. In attitude estimation, we are faced with the alternatives of using an attitude representation that is either singular or redundant. Estimation procedures fall into three broad classes. The first estimates a three-dimensional representation of attitude deviations from a reference attitude parameterized by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. The second class, which estimates a higher-dimensional representation subject to enough constraints to leave only three degrees of freedom, is difficult to formulate and apply consistently. The third class estimates a representation of SO(3) with more than three dimensions, treating the parameters as independent. We refer to the most common member of this class as quaternion estimation, to contrast it with attitude estimation. We analyze the first and third of these approaches in the context of an extended Kalman filter with simplified kinematics and measurement models.
Complex-time singularity and locality estimates for quantum lattice systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouch, Gabriel
2015-12-15
We present and prove a well-known locality bound for the complex-time dynamics of a general class of one-dimensional quantum spin systems. Then we discuss how one might hope to extend this same procedure to higher dimensions using ideas related to the Eden growth process and lattice trees. Finally, we demonstrate with a specific family of lattice trees in the plane why this approach breaks down in dimensions greater than one and prove that there exist interactions for which the complex-time dynamics blows-up in finite imaginary time. .
Saquib, Juliann; Saquib, Nazmus; Stefanick, Marcia L; Khanam, Masuma Akter; Anand, Shuchi; Rahman, Mahbubur; Chertow, Glenn M; Barry, Michele; Ahmed, Tahmeed; Cullen, Mark R
2016-07-01
The sustained economic growth in Bangladesh during the previous decade has created a substantial middle-class population, who have adequate income to spend on food, clothing, and lifestyle management. Along with the improvements in living standards, has also come negative impact on health for the middle class. The study objective was to assess sex differences in obesity prevalence, diet, and physical activity among urban middle-class Bangladeshi. In this cross-sectional study, conducted in 2012, we randomly selected 402 adults from Mohammedpur, Dhaka. The sampling technique was multi-stage random sampling. We used standardized questionnaires for data collection and measured height, weight, and waist circumference. Mean age (standard deviation) was 49.4 (12.7) years. The prevalence of both generalized (79% vs. 53%) and central obesity (85% vs. 42%) were significantly higher in women than men. Women reported spending more time watching TV and spending less time walking than men (p<.05); however, men reported a higher intake of unhealthy foods such as fast food and soft drinks. We conclude that the prevalence of obesity is significantly higher in urban middle-class Bangladeshis than previous urban estimates, and the burden of obesity disproportionately affects women. Future research and public health efforts are needed to address this severe obesity problem and to promote active lifestyles.
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSETC ) and error propagation through the soil moisture retrieval models (RMSEEP ). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSETC and fRMSEEP show similar spatial patterns of relatively highlow errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSETC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
Time-frequency analysis of backscattered signals from diffuse radar targets
NASA Astrophysics Data System (ADS)
Kenny, O. P.; Boashash, B.
1993-06-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time of flight positron emission tomography on time-frequency images is effective for estimating the scattering function of the target.
Francis, Jasmine H; Iyer, Saipriya; Gobin, Y Pierre; Brodie, Scott E; Abramson, David H
2017-10-01
To compare the efficacy and toxicity of treating class 3 retinoblastoma vitreous seeds with ophthalmic artery chemosurgery (OAC) alone versus OAC with intravitreous chemotherapy. Retrospective cohort study. Forty eyes containing clouds (class 3 vitreous seeds) of 40 retinoblastoma patients (19 treated with OAC alone and 21 treated with OAC plus intravitreous and periocular chemotherapy). Ocular survival, disease-free survival and time to regression of seeds were estimated with Kaplan-Meier estimates. Ocular toxicity was evaluated by clinical findings and electroretinography: 30-Hz flicker responses were compared at baseline and last follow-up visit. Continuous variables were compared with Student t test, and categorical variables were compared with the Fisher exact test. Ocular survival, disease-free survival, and time to regression of seeds. There were no disease- or treatment-related deaths and no patient demonstrated externalization of tumor or metastatic disease. There was no significant difference in the age, laterality, disease, or disease status (treatment naïve vs. previously treated) between the 2 groups. The time to regression of seeds was significantly shorter for eyes treated with OAC plus intravitreous chemotherapy (5.7 months) compared with eyes treated with OAC alone (14.6 months; P < 0.001). The 18-month Kaplan-Meier estimates of disease-free survival were significantly worse for the OAC alone group: 67.1% (95% confidence interval, 40.9%-83.6%) versus 94.1% (95% confidence interval, 65%-99.1%) for the OAC plus intravitreous chemotherapy group (P = 0.05). The 36-month Kaplan-Meier estimates of ocular survival were 83.3% (95% confidence interval, 56.7%-94.3%) for the OAC alone group and 100% for the OAC plus intravitreous chemotherapy group (P = 0.16). The mean change in electroretinography responses was not significantly different between groups, decreasing by 11 μV for the OAC alone group and 22 μV for the OAC plus intravitreous chemotherapy group (P = 0.4). Treating vitreous seed clouds with OAC and intravitreous and periocular chemotherapy, compared with OAC alone, resulted in a shorter time to regression and was associated with fewer recurrences requiring additional treatment and fewer enucleations. The toxicity to the retina does not seem to be significantly worse in the OAC plus intravitreous chemotherapy group. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Nucleoprotein Changes in Plant Tumor Growth
Rasch, Ellen; Swift, Hewson; Klein, Richard M.
1959-01-01
Tumor cell transformation and growth were studied in a plant neoplasm, crown gall of bean, induced by Agrobacterium rubi. Ribose nucleic acid (RNA), deoxyribose nucleic acid (DNA), histone, and total protein were estimated by microphotometry of nuclei, nucleoli, and cytoplasm in stained tissue sections. Transformation of normal cells to tumor cells was accompanied by marked increases in ribonucleoprotein content of affected tissues, reaching a maximum 2 to 3 days after inoculation with virulent bacteria. Increased DNA levels were in part associated with increased mitotic frequency, but also with progressive accumulation of nuclei in the higher DNA classes, formed by repeated DNA doubling without intervening reduction by mitosis. Some normal nuclei of the higher DNA classes (with 2, 4, or 8 times the DNA content of diploid nuclei) were reduced to diploid levels by successive cell divisions without intervening DNA synthesis. The normal relation between DNA synthesis and mitosis was thus disrupted in tumor tissue. Nevertheless, clearly defined DNA classes, as found in homologous normal tissues, were maintained in the tumor at all times. PMID:13673042
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors ( kNN ...estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant...the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
Reliability and agreement in student ratings of the class environment.
Nelson, Peter M; Christ, Theodore J
2016-09-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Digital image classification approach for estimating forest clearing and regrowth rates and trends
NASA Technical Reports Server (NTRS)
Sader, Steven A.
1987-01-01
A technique is presented to monitor vegetation changes for a selected study area in Costa Rica. A normalized difference vegetation index was computed for three dates of Landsat satellite data and a modified parallelipiped classifier was employed to generate a multitemporal greenness image representing all three dates. A second-generation image was created by partitioning the intensity levels at each date into high, medium, and low and thereby reducing the number of classes to 21. A sampling technique was applied to describe forest and other land cover change occurring between time periods based on interpretation of aerial photography that closely matched the dates of satellite acquisition. Comparison of the Landsat-derived classes with the photo-interpreted sample areas can provide a basis for evaluating the satellite monitoring technique and the accuracy of estimating forest clearing and regrowth rates and trends.
Traces of business cycles in credit-rating migrations
Boreiko, Dmitri; Kaniovski, Serguei; Pflug, Georg
2017-01-01
Using migration data of a rating agency, this paper attempts to quantify the impact of macroeconomic conditions on credit-rating migrations. The migrations are modeled as a coupled Markov chain, where the macroeconomic factors are represented by unobserved tendency variables. In the simplest case, these binary random variables are static and credit-class-specific. A generalization treats tendency variables evolving as a time-homogeneous Markov chain. A more detailed analysis assumes a tendency variable for every combination of a credit class and an industry. The models are tested on a Standard and Poor’s (S&P’s) dataset. Parameters are estimated by the maximum likelihood method. According to the estimates, the investment-grade financial institutions evolve independently of the rest of the economy represented by the data. This might be an evidence of implicit too-big-to-fail bail-out guarantee policies of the regulatory authorities. PMID:28426758
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.
Traces of business cycles in credit-rating migrations.
Boreiko, Dmitri; Kaniovski, Serguei; Kaniovski, Yuri; Pflug, Georg
2017-01-01
Using migration data of a rating agency, this paper attempts to quantify the impact of macroeconomic conditions on credit-rating migrations. The migrations are modeled as a coupled Markov chain, where the macroeconomic factors are represented by unobserved tendency variables. In the simplest case, these binary random variables are static and credit-class-specific. A generalization treats tendency variables evolving as a time-homogeneous Markov chain. A more detailed analysis assumes a tendency variable for every combination of a credit class and an industry. The models are tested on a Standard and Poor's (S&P's) dataset. Parameters are estimated by the maximum likelihood method. According to the estimates, the investment-grade financial institutions evolve independently of the rest of the economy represented by the data. This might be an evidence of implicit too-big-to-fail bail-out guarantee policies of the regulatory authorities.
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic specrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
NASA Astrophysics Data System (ADS)
Baillargeon, Jacques Guy
Though well-documented among numerous cohorts of male workers, little is known about how the healthy worker effect (HWE) and the internal HWE is expressed among cohorts of female workers. This investigation examines characteristics of the HWE and the internal HWE in a cohort of 12,668 female nuclear workers. The HWE, which was estimated by assessing SMRs for all causes of death combined, was found to be modified by race, occupational class and length of follow-up. Smaller variations in the HWE were observed for age at hire, occupational class, length of employment, monitored status, and interruption of monitoring. Examination of SMRs for all cancers combined revealed that the HWE was modified by race, occupational class, monitored status, interruption of monitoring, and length of follow-up. Smaller variations were observed for age at hire and length of employment. Investigators often try to circumvent the HWE by employing internal comparisons; that is, by directly comparing the mortality of subgroups within a defined occupational cohort with one another. However, internal comparisons are not necessarily free from certain biases related to the HWE. If employees are selected on the basis of health into subgroups which serve as the basis for internal comparisons, then a form of internal comparison bias, called the internal healthy worker effect (Stewart et al, 1991; Wilkinson, 1992) may occur. In this investigation, the expression of the internal HWE was examined by estimating the extent to which survival time was modified by the variables under study. Using the Cox PH model, time to death from all causes was found to be modified by occupational class and length of employment but not by race, age at hire, monitored status, or interruption of monitoring. Time to death from all cancers was found to be modified by race and interruption of monitoring but not by age at hire, occupational class, length of employment, or monitored status. These results are important because they may provide leads for other investigators to determine whether the exposure-disease relationships are confounded by characteristics of the employed female populations under study.
Holmøy, Ingrid H; Toft, Nils; Jørgensen, Hannah J; Mørk, Tormod; Sølverød, Liv; Nødtvedt, Ane
2018-06-01
Streptococcus agalactiae (S. agalactiae) has re-emerged as a mastitis pathogen among Norwegian dairy cows. The Norwegian cattle health services recommend that infected herds implement measures to eradicate S. agalactiae, this includes a screening of milk samples from all lactating cows. The performance of the qPCR-test currently in use for this purpose has not been evaluated under field conditions. The objective of this study was to estimate the sensitivity and specificity of the real-time qPCR assay in use in Norway (Mastitis 4 qPCR, DNA Diagnostics A/S, Risskov, Denmark) and compare it to conventional bacteriological culturing for detection of S. agalactiae in milk samples. Because none of these tests are considered a perfect reference test, the evaluation was performed using latent class models in a Bayesian analysis. Aseptically collected cow-composite milk samples from 578 cows belonging to 6 herds were cultured and tested by qPCR. While 37 (6.4%) samples were positive for S. agalactiae by bacteriological culture, 66 (11.4%) samples were positive by qPCR. The within-herd prevalence in the six herds, as estimated by the latent class models ranged from 7.7 to 50.8%. At the recommended cut-off (cycle threshold 37), the sensitivity of the qPCR was significantly higher at 95.3 (95% posterior probability interval [PPI] [84.2; 99.6]) than that of bacteriological culture at 58.2 (95% PPI [43.8; 74.4]). However, bacterial culture had a higher specificity of 99.7 (95% PPI [98.5; 100.0]) compared to the qPCR at 98.5 (95% PPI [94.6; 99.9]). The median estimated negative predictive values of qPCR was consistently higher than those of the BC at all estimated prevalences, and the superiority of the qPCR increased with increasing within-herd prevalence. The median positive predictive values of BC was in general higher than the estimates for the qPCR, however, at the highest prevalence the predictive ability of both tests were similar. Copyright © 2018 Elsevier B.V. All rights reserved.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
A general class of multinomial mixture models for anuran calling survey data
Royle, J. Andrew; Link, W.A.
2005-01-01
We propose a general framework for modeling anuran abundance using data collected from commonly used calling surveys. The data generated from calling surveys are indices of calling intensity (vocalization of males) that do not have a precise link to actual population size and are sensitive to factors that influence anuran behavior. We formulate a model for calling-index data in terms of the maximum potential calling index that could be observed at a site (the 'latent abundance class'), given its underlying breeding population, and we focus attention on estimating the distribution of this latent abundance class. A critical consideration in estimating the latent structure is imperfect detection, which causes the observed abundance index to be less than or equal to the latent abundance class. We specify a multinomial sampling model for the observed abundance index that is conditional on the latent abundance class. Estimation of the latent abundance class distribution is based on the marginal likelihood of the index data, having integrated over the latent class distribution. We apply the proposed modeling framework to data collected as part of the North American Amphibian Monitoring Program (NAAMP).
Radar Imaging Using The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.
1989-12-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical archi-tecture is then discussed for the real-time implementation of the analysis method based on the WVD.
7 CFR 1005.73 - Payments to producers and to cooperative associations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the hundredweight of producer skim milk received times the uniform skim milk price for the month; (ii... operator's estimated use value of the milk using the most recent class prices available for skim milk and... first 15 days of the month at not less than 90 percent of the preceding month's uniform price, adjusted...
7 CFR 1007.73 - Payments to producers and to cooperative associations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the hundredweight of producer skim milk received times the uniform skim milk price for the month; (ii... operator's estimated use value of the milk using the most recent class prices available for skim milk and... first 15 days of the month at not less than 90 percent of the preceding month's uniform price, adjusted...
Alternative method to validate the seasonal land cover regions of the conterminous United States
Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan
1996-01-01
An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...
Flavonoid intake from food and beverages: What We Eat in America, NHANES 2007-2008, Tables 1-4
USDA-ARS?s Scientific Manuscript database
The Food Surveys Research Group of the Beltsville Human Nutrition Research Center has released 4 flavonoid intake data tables that make available, for the first time, nationally representative estimates of the intake of 29 individual flavonoids in six classes (as well as the sum of those flavonoids)...
YouTube Fridays: Student Led Development of Engineering Estimate Problems
ERIC Educational Resources Information Center
Liberatore, Matthew W.; Vestal, Charles R.; Herring, Andrew M.
2012-01-01
YouTube Fridays devotes a small fraction of class time to student-selected videos related to the course topic, e.g., thermodynamics. The students then write and solve a homework-like problem based on the events in the video. Three recent pilots involving over 300 students have developed a database of videos and questions that reinforce important…
Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors
NASA Astrophysics Data System (ADS)
Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose
2018-03-01
In this paper we prove that a class of skew products maps with non uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated to a contracting Lorenz attractor at all the points having a well defined local dimension, and a quantitative recurrence estimation.
What's in a Name? Exploring the Impact of Naming Assignments
ERIC Educational Resources Information Center
Landrum, Brittany; Garza, Gilbert
2016-01-01
Past research has examined how various elements and style of a syllabus influence students' perceptions of the class. Furthermore, students' learning and grade orientations have been shown to impact academic performance and effort. We sought to add to this literature by exploring how an assignment's name might impact estimates of time to be spent…
Learning Time-Varying Coverage Functions
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624
Learning Time-Varying Coverage Functions.
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2014-12-08
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data.
Making historic loss data comparable over time and place
NASA Astrophysics Data System (ADS)
Eichner, Jan; Steuer, Markus; Löw, Petra
2017-04-01
When utilizing historic loss data for present day risk assessment, it is necessary to make the data comparable over time and place. To achieve this, the assessment of costs from natural hazard events requires consistent and homogeneous methodologies for loss estimation as well as a robust treatment of loss data to estimate and/or reduce distorting effects due to a temporal bias in the reporting of small-scale loss events. Here we introduce Munich Re's NatCatSERVICE loss database and present a novel methodology of peril-specific normalization of the historic losses (to account for socio-economic growth of assets over time), and we introduce a metric of severity classification (called CatClass) that allows for a global comparison of impact severity across countries of different stages of economic development.
Wurzer, Birgit; Waters, Debra Lynn; Hale, Leigh Anne
2016-01-01
To investigate reported injuries and circumstances and to estimate the costs related to falls experienced by older adults participating in Steady As You Go (SAYGO) peer-led fall prevention exercise classes. A 12-month prospective cohort study of 207 participants attending community-based SAYGO classes in Dunedin, New Zealand. Types and costs of medical treatment for injuries and circumstances of falls were obtained via standardized fall event questionnaires and phone-administered questionnaires. Eighty-four percent completed the study (160 females, 14 males, mean age = 77.5 [standard deviation = 6.5] years). More than a third of the total falls (55/148 total falls, 37%) did not result in any injuries. Most injuries (45%, n = 67) were sprains, grazes, and bruises. Medical attention was sought 26 times (18%), out of which 6 participants (4%) reported fractures (none femoral). The majority of falls occurred while walking. More falls and injuries occurred outdoors (n = 55). The number of times medical treatment was sought correlated with the number of falls in the previous year (r = 0.50, P = .02). The total number of years attending SAYGO was a significant predictor of lower total number of injuries (stepwise regression β = -0.157, t = -1.99, P = .048). The total cost of medical treatment across all reported injurious falls was estimated at NZ$6946 (US$5415). Older adults participating in SAYGO appear to sustain less severe injuries following a fall than previously reported. More falls and injuries occurred outdoors, suggesting better overall health of these participants. The role of long-term participation in fall prevention exercise classes on injurious falls warrants further investigation.
A unified framework for approximation in inverse problems for distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
NASA Astrophysics Data System (ADS)
Saini, K. K.; Sehgal, R. K.; Sethi, B. L.
2008-10-01
In this paper major reliability estimators are analyzed and there comparatively result are discussed. There strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. Since reliability estimates are often used in statistical analyses of quasi-experimental designs.
Matthew B. Russell; Christopher W. Woodall; Shawn Fraver; Anthony W. D' Amato
2013-01-01
Large-scale inventories of downed woody debris (DWD; downed dead wood of a minimum size) often record decay status by assigning pieces to classes of decay according to their visual/structural attributes (e.g., presence of branches, log shape, and texture and color of wood). DWD decay classes are not only essential for estimating current DWD biomass and carbon stocks,...
An information measure for class discrimination. [in remote sensing of crop observation
NASA Technical Reports Server (NTRS)
Shen, S. S.; Badhwar, G. D.
1986-01-01
This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
A Framework for Inferring Taxonomic Class of Asteroids.
NASA Technical Reports Server (NTRS)
Dotson, J. L.; Mathias, D. L.
2017-01-01
Introduction: Taxonomic classification of asteroids based on their visible / near-infrared spectra or multi band photometry has proven to be a useful tool to infer other properties about asteroids. Meteorite analogs have been identified for several taxonomic classes, permitting detailed inference about asteroid composition. Trends have been identified between taxonomy and measured asteroid density. Thanks to NEOWise (Near-Earth-Object Wide-field Infrared Survey Explorer) and Spitzer (Spitzer Space Telescope), approximately twice as many asteroids have measured albedos than the number with taxonomic classifications. (If one only considers spectroscopically determined classifications, the ratio is greater than 40.) We present a Bayesian framework that provides probabilistic estimates of the taxonomic class of an asteroid based on its albedo. Although probabilistic estimates of taxonomic classes are not a replacement for spectroscopic or photometric determinations, they can be a useful tool for identifying objects for further study or for asteroid threat assessment models. Inputs and Framework: The framework relies upon two inputs: the expected fraction of each taxonomic class in the population and the albedo distribution of each class. Luckily, numerous authors have addressed both of these questions. For example, the taxonomic distribution by number, surface area and mass of the main belt has been estimated and a diameter limited estimate of fractional abundances of the near earth asteroid population was made. Similarly, the albedo distributions for taxonomic classes have been estimated for the combined main belt and NEA (Near Earth Asteroid) populations in different taxonomic systems and for the NEA population specifically. The framework utilizes a Bayesian inference appropriate for categorical data. The population fractions provide the prior while the albedo distributions allow calculation of the likelihood an albedo measurement is consistent with a given taxonomic class. These inputs allows calculation of the probability an asteroid with a specified albedo belongs to any given taxonomic class.
Testing for handling bias in survival estimation for black brant
Sedinger, J.S.; Lindberg, M.S.; Rexstad, E.A.; Chelgren, N.D.; Ward, D.H.
1997-01-01
We used an ultrastructure approach in program SURVIV to test for, and remove, bias in survival estimates for the year following mass banding of female black brant (Branta bernicla nigricans). We used relative banding-drive size as the independent variable to control for handling effects in our ultrastructure models, which took the form: S = S0(1 - ??D), where ?? was handling effect and D was the ratio of banding-drive size to the largest banding drive. Brant were divided into 3 classes: goslings, initial captures, and recaptures, based on their state at the time of banding, because we anticipated the potential for heterogeneity in model parameters among classes of brant. Among models examined, for which ?? was not constrained, a model with ?? constant across classes of brant and years, constant survival rates among years for initially captured brant but year-specific survival rates for goslings and recaptures, and year- and class-specific detection probabilities had the lowest Akaike Information Criterion (AIC). Handling effect, ??, was -0.47 ?? 0.13 SE, -0.14 ?? 0.057, and -0.12 ?? 0.049 for goslings, initially released adults, and recaptured adults. Gosling annual survival in the first year ranged from 0.738 ?? 0.072 for the 1986 cohort to 0.260 ?? 0.025 for the 1991 cohort. Inclusion of winter observations increased estimates of first-year survival rates by an average of 30%, suggesting that permanent emigration had an important influence on apparent survival, especially for later cohorts. We estimated annual survival for initially captured brant as 0.782 ?? 0.013, while that for recaptures varied from 0.726 ?? 0.034 to 0.900 ?? 0.062. Our analyses failed to detect a negative effect of handling on survival of brant, which is consistent with an hypothesis of substantial inherent heterogeneity in post-fledging survival rates, such that individuals most likely to die as a result of handling also have lower inherent survival probabilities.
Estimates of Social Contact in a Middle School Based on Self-Report and Wireless Sensor Data.
Leecaster, Molly; Toth, Damon J A; Pettey, Warren B P; Rainey, Jeanette J; Gao, Hongjiang; Uzicanin, Amra; Samore, Matthew
2016-01-01
Estimates of contact among children, used for infectious disease transmission models and understanding social patterns, historically rely on self-report logs. Recently, wireless sensor technology has enabled objective measurement of proximal contact and comparison of data from the two methods. These are mostly small-scale studies, and knowledge gaps remain in understanding contact and mixing patterns and also in the advantages and disadvantages of data collection methods. We collected contact data from a middle school, with 7th and 8th grades, for one day using self-report contact logs and wireless sensors. The data were linked for students with unique initials, gender, and grade within the school. This paper presents the results of a comparison of two approaches to characterize school contact networks, wireless proximity sensors and self-report logs. Accounting for incomplete capture and lack of participation, we estimate that "sensor-detectable", proximal contacts longer than 20 seconds during lunch and class-time occurred at 2 fold higher frequency than "self-reportable" talk/touch contacts. Overall, 55% of estimated talk-touch contacts were also sensor-detectable whereas only 15% of estimated sensor-detectable contacts were also talk-touch. Contacts detected by sensors and also in self-report logs had longer mean duration than contacts detected only by sensors (6.3 vs 2.4 minutes). During both lunch and class-time, sensor-detectable contacts demonstrated substantially less gender and grade assortativity than talk-touch contacts. Hallway contacts, which were ascertainable only by proximity sensors, were characterized by extremely high degree and short duration. We conclude that the use of wireless sensors and self-report logs provide complementary insight on in-school mixing patterns and contact frequency.
Estimates of Social Contact in a Middle School Based on Self-Report and Wireless Sensor Data
Leecaster, Molly; Toth, Damon J. A.; Pettey, Warren B. P.; Rainey, Jeanette J.; Gao, Hongjiang; Uzicanin, Amra; Samore, Matthew
2016-01-01
Estimates of contact among children, used for infectious disease transmission models and understanding social patterns, historically rely on self-report logs. Recently, wireless sensor technology has enabled objective measurement of proximal contact and comparison of data from the two methods. These are mostly small-scale studies, and knowledge gaps remain in understanding contact and mixing patterns and also in the advantages and disadvantages of data collection methods. We collected contact data from a middle school, with 7th and 8th grades, for one day using self-report contact logs and wireless sensors. The data were linked for students with unique initials, gender, and grade within the school. This paper presents the results of a comparison of two approaches to characterize school contact networks, wireless proximity sensors and self-report logs. Accounting for incomplete capture and lack of participation, we estimate that “sensor-detectable”, proximal contacts longer than 20 seconds during lunch and class-time occurred at 2 fold higher frequency than “self-reportable” talk/touch contacts. Overall, 55% of estimated talk-touch contacts were also sensor-detectable whereas only 15% of estimated sensor-detectable contacts were also talk-touch. Contacts detected by sensors and also in self-report logs had longer mean duration than contacts detected only by sensors (6.3 vs 2.4 minutes). During both lunch and class-time, sensor-detectable contacts demonstrated substantially less gender and grade assortativity than talk-touch contacts. Hallway contacts, which were ascertainable only by proximity sensors, were characterized by extremely high degree and short duration. We conclude that the use of wireless sensors and self-report logs provide complementary insight on in-school mixing patterns and contact frequency. PMID:27100090
An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat
Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.
2016-01-01
Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10 percent of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20 % of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. Median time required to complete perimeter-based methods was less than 7 percent of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.
Headgear Accessories Classification Using an Overhead Depth Sensor
Luna, Carlos A.; Marron-Romera, Marta; Mazo, Manuel; Luengo-Sanchez, Sara; Macho-Pedroso, Roberto
2017-01-01
In this paper, we address the generation of semantic labels describing the headgear accessories carried out by people in a scene under surveillance, only using depth information obtained from a Time-of-Flight (ToF) camera placed in an overhead position. We propose a new method for headgear accessories classification based on the design of a robust processing strategy that includes the estimation of a meaningful feature vector that provides the relevant information about the people’s head and shoulder areas. This paper includes a detailed description of the proposed algorithmic approach, and the results obtained in tests with persons with and without headgear accessories, and with different types of hats and caps. In order to evaluate the proposal, a wide experimental validation has been carried out on a fully labeled database (that has been made available to the scientific community), including a broad variety of people and headgear accessories. For the validation, three different levels of detail have been defined, considering a different number of classes: the first level only includes two classes (hat/cap, and no hat/cap), the second one considers three classes (hat, cap and no hat/cap), and the last one includes the full class set with the five classes (no hat/cap, cap, small size hat, medium size hat, and large size hat). The achieved performance is satisfactory in every case: the average classification rates for the first level reaches 95.25%, for the second one is 92.34%, and for the full class set equals 84.60%. In addition, the online stage processing time is 5.75 ms per frame in a standard PC, thus allowing for real-time operation. PMID:28796177
NASA Astrophysics Data System (ADS)
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data, is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young and mature and young (combined)) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
Status of Pelagic Prey Fishes in Lake Michigan, 2014
Warner, David M.; Farha, Steven A.; Claramunt, Randall M.; Hanson, Dale; O'Brien, Timothy P.
2015-01-01
Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2014 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys as well as target strength provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. The 2014 survey consisted of 27 acoustic transects (603 km total) and 31 midwater trawl tows. Four additional transects were sampled in Green Bay but were not included in lakewide estimates. Mean prey fish biomass was 6.5 kg/ha [31.7 kilotonnes (kt = 1,000 metric tons)], equivalent to 69.9 million pounds, which was similar to the estimate in 2013 (29.6 kt) and 25% of the long-term (19 years) mean. The numeric density of the 2014 alewife year-class was 3% of the time series average and was the lowest observed in the 19 years of sampling. This year-class contributed <1% of total alewife biomass (4.6 kg/ha). Alewife ≥age-1 comprised 99.5% of alewife biomass. Numeric density of alewife in Green Bay was more than three times that of the main lake. In 2014, alewife comprised 71% of total prey fish biomass, while rainbow smelt and bloater were 1% and 28% of total biomass, respectively. Rainbow smelt biomass in 2014 (0.08 kg/ha) was 66% lower than in 2013, 2% of the long-term mean, and lower than in any previous year. Bloater biomass in 2014 was 1.8 kg/ha, nearly three times more than the 2013 biomass, and 20% of the long-term mean. Mean density of small bloater in 2014 (122 fish/ha) was lower than peak values observed in 2007-2009 but was similar to the time series mean (124 fish/ha). In 2014, pelagic prey fish biomass in Lake Michigan was 71% of that in Lake Huron (all basins), where the community is dominated by bloater.
On the use of variability time-scales as an early classifier of radio transients and variables
NASA Astrophysics Data System (ADS)
Pietka, M.; Staley, T. D.; Pretorius, M. L.; Fender, R. P.
2017-11-01
We have shown previously that a broad correlation between the peak radio luminosity and the variability time-scales, approximately L ∝ τ5, exists for variable synchrotron emitting sources and that different classes of astrophysical sources occupy different regions of luminosity and time-scale space. Based on those results, we investigate whether the most basic information available for a newly discovered radio variable or transient - their rise and/or decline rate - can be used to set initial constraints on the class of events from which they originate. We have analysed a sample of ≈800 synchrotron flares, selected from light curves of ≈90 sources observed at 5-8 GHz, representing a wide range of astrophysical phenomena, from flare stars to supermassive black holes. Selection of outbursts from the noisy radio light curves has been done automatically in order to ensure reproducibility of results. The distribution of rise/decline rates for the selected flares is modelled as a Gaussian probability distribution for each class of object, and further convolved with estimated areal density of that class in order to correct for the strong bias in our sample. We show in this way that comparing the measured variability time-scale of a radio transient/variable of unknown origin can provide an early, albeit approximate, classification of the object, and could form part of a suite of measurements used to provide early categorization of such events. Finally, we also discuss the effect scintillating sources will have on our ability to classify events based on their variability time-scales.
NASA Astrophysics Data System (ADS)
Beckx, Carolien; Int Panis, Luc; Uljee, Inge; Arentze, Theo; Janssens, Davy; Wets, Geert
Traditional exposure studies that link concentrations with population data do not always take into account the temporal and spatial variations in both concentrations and population density. In this paper we present an integrated model chain for the determination of nation-wide exposure estimates that incorporates temporally and spatially resolved information about people's location and activities (obtained from an activity-based transport model) and about ambient pollutant concentrations (obtained from a dispersion model). To the best of our knowledge, it is the first time that such an integrated exercise was successfully carried out in a fully operational modus for all models under consideration. The evaluation of population level exposure in The Netherlands to NO 2 at different time-periods, locations, for different subpopulations (gender, socio-economic status) and during different activities (residential, work, transport, shopping) is chosen as a case-study to point out the new features of this methodology. Results demonstrate that, by neglecting people's travel behaviour, total average exposure to NO 2 will be underestimated by 4% and hourly exposure results can be underestimated by more than 30%. A more detailed exposure analysis reveals the intra-day variations in exposure estimates and the presence of large exposure differences between different activities (traffic > work > shopping > home) and between subpopulations (men > women, low socio-economic class > high socio-economic class). This kind of exposure analysis, disaggregated by activities or by subpopulations, per time of day, provides useful insight and information for scientific and policy purposes. It demonstrates that policy measures, aimed at reducing the overall (average) exposure concentration of the population may impact in a different way depending on the time of day or the subgroup considered. From a scientific point of view, this new approach can be used to reduce exposure misclassification.
Estimating aspen volume and weight for individual trees, diameter classes, or entire stands.
Bryce E. Schlaegel
1975-01-01
Presents allometric volume and weight equations for Minnesota quaking aspen. Volume, green weight, and dry weight estimates can be made for wood, bark, and limbs on the basis of individual trees, diameter classes, or entire stands.
Józwicki, Wojciech; Gołda, Ryszard; Domaniewska, Jolanta; Skok, Zdzisław; Jarzemski, Piotr; Przybylski, Grzegorz; Domaniewski, Jan
2009-01-01
The aim of the study was connected with smoking health behaviour estimation among public (SZP) and nonpublic (SZN) grammar school students. The analysis of 156 anonymous questionnaires was made. Questionnaires contained questions of parents' education, material situation of family, physical education, social relations with family and peers and positive or negative perception of smoking. In total trial we observed a strong positive correlation between style of smoking or number of smoked cigarettes and positive perception of smoking (r = 0.62 or r = 0.36 respectively). The latter correlated significantly with family presence of smoking (r = 0.18). Percentages of smoking students of SZP and SZN differed and amounted 22% and 18% respectively. Within I/II SZP classes the smoking depended on material position of family (r = 0.28) and positive perception of smoking (r = 0.68). Among students of III SZP classes the dependence on material situation was stronger (r = 0.49), while students of III SZN classes became to perceive smoking more positive (r = 0.82). Social relations of students of I/II SZN classes were inversely proportional to prevalence of smoking in their families. Smoking students of III SZN classes worked out much more variously in comparison with pupils of SZP. The main motivation of smoking within school students was the positive perception of smoking. The differences of smoking prevalence within both types of school probably formed in the families and observed in I/II classes pupils, vanished during the time of III class of studying. Elitism of school do not protect the student from smoking: during the time of III SZN class the smoking receives clearly positive appearance and became established. Probably existing antinicotinic school programs should much more decidedly deliver the negative appearance of health effects of smoking.
NASA Technical Reports Server (NTRS)
Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene
2007-01-01
A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. The typical AMR values have been observed to range anywhere from 1's to 10's of m(sup 2)/kg, and hence, higher than average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the nature of the debris orientation-dependent dynamics also results time-varying solar radiation forces about the average which complicate the short-term orbit determination processing. The orbit determination results are presented for several of these debris objects, and highlight their unique and varied dynamic attributes. Estimation or the solar pressure dynamics over time scales suitable for resolving the shorter term dynamics improves the orbit estimation, and hence, the orbit predictions needed to conduct follow-up observations.
Sun, Min; Wong, David; Kronenfeld, Barry
2016-01-01
Despite conceptual and technology advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are statistically indifferent may be assigned to different classes on maps or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistical different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach to consider not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process to evaluate the trade-off between relevant criteria and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
NASA Technical Reports Server (NTRS)
Peters, C.; Kampe, F. (Principal Investigator)
1980-01-01
The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
Model Year 2014 Fuel Economy Guide: EPA Fuel Economy Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been dividedmore » into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.« less
Model Year 2015 Fuel Economy Guide: EPA Fuel Economy Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been dividedmore » into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.« less
Model Year 2016 Fuel Economy Guide: EPA Fuel Economy Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles. The vehicles listed have been dividedmore » into three classes of cars, three classes of light duty trucks, and three classes of special purpose vehicles.« less
Saquib, Juliann; Saquib, Nazmus; Stefanick, Marcia L.; Khanam, Masuma Akter; Anand, Shuchi; Rahman, Mahbubur; Chertow, Glenn M.; Barry, Michele; Ahmed, Tahmeed; Cullen, Mark R.
2016-01-01
Background The sustained economic growth in Bangladesh during the previous decade has created a substantial middle-class population, who have adequate income to spend on food, clothing, and lifestyle management. Along with the improvements in living standards, has also come negative impact on health for the middle class. The study objective was to assess sex differences in obesity prevalence, diet, and physical activity among urban middle-class Bangladeshi. Methods In this cross-sectional study, conducted in 2012, we randomly selected 402 adults from Mohammedpur, Dhaka. The sampling technique was multi-stage random sampling. We used standardized questionnaires for data collection and measured height, weight, and waist circumference. Results Mean age (standard deviation) was 49.4 (12.7) years. The prevalence of both generalized (79% vs. 53%) and central obesity (85% vs. 42%) were significantly higher in women than men. Women reported spending more time watching TV and spending less time walking than men (p<.05); however, men reported a higher intake of unhealthy foods such as fast food and soft drinks. Conclusions We conclude that the prevalence of obesity is significantly higher in urban middle-class Bangladeshis than previous urban estimates, and the burden of obesity disproportionately affects women. Future research and public health efforts are needed to address this severe obesity problem and to promote active lifestyles. PMID:27610059
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Filliben tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures are examined and example data sets analyzed.
Estimating short-run and long-run interaction mechanisms in interictal state.
Ozkaya, Ata; Korürek, Mehmet
2010-04-01
We address the issue of analyzing electroencephalogram (EEG) from seizure patients in order to test, model and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal) by introducing a new class of time series analysis methods. In the present study: firstly, we employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; secondly, for such intervals that are deemed non-stationary we suggest the concept of Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. We finally address the queries of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis and also estimate such interaction in long time intervals by employing Cointegration analysis, both analysis methods are well-known in econometrics. Here we find: first, that the causal relationship between neuronal assemblies can be identified according to the duration and the direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals yields that the neuronal ensembles positively affect each other, in long time intervals neither of them is affected (increasing amplitudes) from this relationship. Moreover, Cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to ictal state.
Out of School, but Not out of Class
ERIC Educational Resources Information Center
Fleming, David S.
2010-01-01
Over the course of their K-12 studies, it has been found that students may have replacement teachers for an estimated 5-10% of their instructional time (Billman, 1994; Nidds & McGerald, 1994). A well-conceived plan can help a teacher ensure an effective learning experience instead of just a "roll-out-the-ball" day whether his/her absence was…
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error inmore » a quantity-of-interest.« less
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error inmore » a quantity-of-interest.« less
A recursive solution for a fading memory filter derived from Kalman filter theory
NASA Technical Reports Server (NTRS)
Statman, J. I.
1986-01-01
A simple recursive solution for a class of fading memory tracking filters is presented. A fading memory filter provides estimates of filter states based on past measurements, similar to a traditional Kalman filter. Unlike a Kalman filter, an exponentially decaying weight is applied to older measurements, discounting their effect on present state estimates. It is shown that Kalman filters and fading memory filters are closely related solutions to a general least squares estimator problem. Closed form filter transfer functions are derived for a time invariant, steady state, fading memory filter. These can be applied in loop filter implementation of the Deep Space Network (DSN) Advanced Receiver carrier phase locked loop (PLL).
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discretemore » correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.« less
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates H ∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural networks model with event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broad-casted and transmitted to the quantizer, which can save the limited communication resource. Second, a novel communication framework is employed by the logarithmic quantizer that quantifies and reduces the data transmission rate in the network, which apparently improves the communication efficiency of networks. Third, a stabilization criterion is derived based on the sufficient condition which guarantees a prescribed H ∞ performance level in the estimation error system in terms of the linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D
2011-12-01
Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
The two-sample problem with induced dependent censorship.
Huang, Y
1999-12-01
Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.
Reach and effectiveness of DVD and in-person diabetes self-management education.
Glasgow, Russell E; Edwards, Linda L; Whitesides, Holly; Carroll, Nikki; Sanders, Tristan J; McCray, Barbara L
2009-12-01
To evaluate the reach and effectiveness of a diabetes self-management DVD compared to classroom-based instruction. A hybrid preference/randomized design was used with participants assigned to Choice v. Randomized and DVD v. Class conditions. One hundred and eighty-nine adults with type 2 diabetes participated. Key outcomes included self-management behaviours, process measures including DVD implementation and hypothesized mediators and clinical risk factors. In the Choice condition, four times as many participants chose the mailed DVD as selected Class-based instruction (38.8 v. 9.4%, p<0.001). At the 6-month follow-up, the DVD produced results generally not significantly different than classroom-based instruction, but a combined Class plus DVD condition did not improve outcomes beyond those produced by the classes alone. The DVD appears to have merit as an efficient and appealing alternative to brief classroom-based diabetes education, and the hybrid design is recommended to provide estimates of programme reach.
The cluster-cluster correlation function. [of galaxies
NASA Technical Reports Server (NTRS)
Postman, M.; Geller, M. J.; Huchra, J. P.
1986-01-01
The clustering properties of the Abell and Zwicky cluster catalogs are studied using the two-point angular and spatial correlation functions. The catalogs are divided into eight subsamples to determine the dependence of the correlation function on distance, richness, and the method of cluster identification. It is found that the Corona Borealis supercluster contributes significant power to the spatial correlation function to the Abell cluster sample with distance class of four or less. The distance-limited catalog of 152 Abell clusters, which is not greatly affected by a single system, has a spatial correlation function consistent with the power law Xi(r) = 300r exp -1.8. In both the distance class four or less and distance-limited samples the signal in the spatial correlation function is a power law detectable out to 60/h Mpc. The amplitude of Xi(r) for clusters of richness class two is about three times that for richness class one clusters. The two-point spatial correlation function is sensitive to the use of estimated redshifts.
Apparent multifractality of self-similar Lévy processes
NASA Astrophysics Data System (ADS)
Zamparo, Marco
2017-07-01
Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for the Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality characterised by a piecewise-linear scaling function with two different regimes, which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation which is able to cure empirical moments, without hiding their dependence on time, when moments they aim at estimating do not exist.
Complex Population Dynamics and the Coalescent Under Neutrality
Volz, Erik M.
2012-01-01
Estimates of the coalescent effective population size Ne can be poorly correlated with the true population size. The relationship between Ne and the population size is sensitive to the way in which birth and death rates vary over time. The problem of inference is exacerbated when the mechanisms underlying population dynamics are complex and depend on many parameters. In instances where nonparametric estimators of Ne such as the skyline struggle to reproduce the correct demographic history, model-based estimators that can draw on prior information about population size and growth rates may be more efficient. A coalescent model is developed for a large class of populations such that the demographic history is described by a deterministic nonlinear dynamical system of arbitrary dimension. This class of demographic model differs from those typically used in population genetics. Birth and death rates are not fixed, and no assumptions are made regarding the fraction of the population sampled. Furthermore, the population may be structured in such a way that gene copies reproduce both within and across demes. For this large class of models, it is shown how to derive the rate of coalescence, as well as the likelihood of a gene genealogy with heterochronous sampling and labeled taxa, and how to simulate a coalescent tree conditional on a complex demographic history. This theoretical framework encapsulates many of the models used by ecologists and epidemiologists and should facilitate the integration of population genetics with the study of mathematical population dynamics. PMID:22042576
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
Summary The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either functions, we consider a wide class calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either functions, we consider a wide class calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Marcus, S. I.
1975-01-01
The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estmates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
Saturated hydraulic conductivity of US soils grouped according to textural class and bulk density
USDA-ARS?s Scientific Manuscript database
Importance of the saturated hydraulic conductivity as soil hydraulic property led to the development of multiple pedotransfer functions for estimating it. One approach to estimating Ksat was using textural classes rather than specific textural fraction contents as pedotransfer inputs. The objective...
Saturated hydraulic conductivity of US soils grouped according textural class and bulk density
USDA-ARS?s Scientific Manuscript database
Importance of the saturated hydraulic conductivity as soil hydraulic property led to the development of multiple pedotransfer functions for estimating it. One approach to estimating Ksat was using textural classes rather than specific textural fraction contents as pedotransfer inputs. The objective...
NASA Astrophysics Data System (ADS)
Takiguchi, M.; Asano, K.; Iwata, T.
2010-12-01
Two M7 class subduction zone earthquakes have occurred in the Ibaraki-ken-oki region, northeast Japan, at 23:23 on July 23, 1982 JST (Mw7.0; 1982MS) and at 01:45 on May 8, 2008 JST (Mw6.8; 2008MS). It has been reported that, from the results of the teleseismic waveform inversion, the rupture of the asperity repeated (HERP, 2010). We estimated the source processes of these earthquakes in detail by analyzing the strong motion records and discussed how much the source characteristics of the two earthquakes repeated. First, we estimated the source model of 2008MS following the method of Miyake et al. (2003). The best-fit set of the model parameters was determined by a grid search using forward modeling of broad-band ground motions. A single 12.6 km × 12.6 km rectangular Strong Motion Generation Area (SMGA, Miyake et al., 2003) was estimated. The rupture of the SMGA of 2008MS (2008SMGA) started from the hypocenter and propagated mainly to northeast. Next, we estimated the source model of 1982MS. We compared the waveforms of 1982MS and 2008MS recorded at the same stations and found the initial rupture phase before the main rupture phase on the waveforms of 1982MS. The travel time analysis showed that the main rupture of the 1982MS started approximately 33 km west of the hypocenter at about 11s after the origin time. The main rupture starting point was located inside 2008SMGA, suggesting that the two SMGAs overlapped in part. The seismic moment ratio of 1982MS to 2008MS was approximately 1.6, and we also found the observed acceleration amplitude spectra of 1982MS were 1.5 times higher than those of 2008MS in the available frequency range. We performed the waveform modeling for 1982MS with a constraint of these ratios. A single rectangular SMGA (1982SMGA) was estimated for the main rupture, which had the same size and the same rupture propagation direction as those of 2008SMGA. However, the estimated stress drop or average slip amount of 1982SMGA was 1.5 times larger than those of 2008SMGA.
Nasari, Masoud M; Szyszkowicz, Mieczysław; Chen, Hong; Crouse, Daniel; Turner, Michelle C; Jerrett, Michael; Pope, C Arden; Hubbell, Bryan; Fann, Neal; Cohen, Aaron; Gapstur, Susan M; Diver, W Ryan; Stieb, David; Forouzanfar, Mohammad H; Kim, Sun-Young; Olives, Casey; Krewski, Daniel; Burnett, Richard T
2016-01-01
The effectiveness of regulatory actions designed to improve air quality is often assessed by predicting changes in public health resulting from their implementation. Risk of premature mortality from long-term exposure to ambient air pollution is the single most important contributor to such assessments and is estimated from observational studies generally assuming a log-linear, no-threshold association between ambient concentrations and death. There has been only limited assessment of this assumption in part because of a lack of methods to estimate the shape of the exposure-response function in very large study populations. In this paper, we propose a new class of variable coefficient risk functions capable of capturing a variety of potentially non-linear associations which are suitable for health impact assessment. We construct the class by defining transformations of concentration as the product of either a linear or log-linear function of concentration multiplied by a logistic weighting function. These risk functions can be estimated using hazard regression survival models with currently available computer software and can accommodate large population-based cohorts which are increasingly being used for this purpose. We illustrate our modeling approach with two large cohort studies of long-term concentrations of ambient air pollution and mortality: the American Cancer Society Cancer Prevention Study II (CPS II) cohort and the Canadian Census Health and Environment Cohort (CanCHEC). We then estimate the number of deaths attributable to changes in fine particulate matter concentrations over the 2000 to 2010 time period in both Canada and the USA using both linear and non-linear hazard function models.
A Flexible Latent Class Approach to Estimating Test-Score Reliability
ERIC Educational Resources Information Center
van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…
Neil I. Lamson; Neil I. Lamson
1987-01-01
Northern red oak site-index (SI) class is estimated using height and diameter of dominant and codominant trees for five Appalachian hardwood species. Methods for predicting total height as a function of diameter are presented. Because total height of 4- and 6-inch trees varies less than 5 feet for the three northern red oak SI classes, use trees that are at least 8...
Effects off system factors on the economics of and demand for small solar thermal power systems
NASA Technical Reports Server (NTRS)
1981-01-01
Market penetration as a function time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the data base developed, and projected energy price and consumption data to perform sensitivity analyses and estimate potential market for SPS.
Effects off system factors on the economics of and demand for small solar thermal power systems
NASA Astrophysics Data System (ADS)
1981-09-01
Market penetration as a function time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the data base developed, and projected energy price and consumption data to perform sensitivity analyses and estimate potential market for SPS.
The 'robust' capture-recapture design allows components of recruitment to be estimated
Pollock, K.H.; Kendall, W.L.; Nichols, J.D.; Lebreton, J.-D.; North, P.M.
1993-01-01
The 'robust' capture-recapture design (Pollock 1982) allows analyses which combine features of closed population model analyses (Otis et aI., 1978, White et aI., 1982) and open population model analyses (Pollock et aI., 1990). Estimators obtained under these analyses are more robust to unequal catch ability than traditional Jolly-Seber estimators (Pollock, 1982; Pollock et al., 1990; Kendall, 1992). The robust design also allows estimation of parameters for population size, survival rate and recruitment numbers for all periods of the study unlike under Jolly-Seber type models. The major advantage of this design that we emphasize in this short review paper is that it allows separate estimation of immigration and in situ recruitment numbers for a two or more age class model (Nichols and Pollock, 1990). This is contrasted with the age-dependent Jolly-Seber model (Pollock, 1981; Stokes, 1984; Pollock et L, 1990) which provides separate estimates for immigration and in situ recruitment for all but the first two age classes where there is at least a three age class model. The ability to achieve this separation of recruitment components can be very important to population modelers and wildlife managers as many species can only be separated into two easily identified age classes in the field.
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Television viewing through ages 2-5 years and bullying involvement in early elementary school.
Verlinden, Marina; Tiemeier, Henning; Veenstra, René; Mieloo, Cathelijne L; Jansen, Wilma; Jaddoe, Vincent W V; Raat, Hein; Hofman, Albert; Verhulst, Frank C; Jansen, Pauline W
2014-02-12
High television exposure time at young age has been described as a potential risk factor for developing behavioral problems. However, less is known about the effects of preschool television on subsequent bullying involvement. We examined the association between television viewing time through ages 2-5 and bullying involvement in the first grades of elementary school. We hypothesized that high television exposure increases the risk of bullying involvement. TV viewing time was assessed repeatedly in early childhood using parental report. To combine these repeated assessments we used latent class analysis. Four exposure classes were identified and labeled "low", "mid-low", "mid-high" and "high". Bullying involvement was assessed by teacher questionnaire (n=3423, mean age 6.8 years). Additionally, peer/self-report of bullying involvement was obtained using a peer nomination procedure (n=1176, mean age 7.6 years). We examined child risk of being a bully, victim or a bully-victim (compared to being uninvolved in bullying). High television exposure class was associated with elevated risks of bullying and victimization. Also, in both teacher- and child-reported data, children in the high television exposure class were more likely to be a bully-victim (OR=2.11, 95% CI: 1.42-3.13 and OR=3.68, 95% CI: 1.75-7.74 respectively). However, all univariate effect estimates attenuated and were no longer statistically significant once adjusted for maternal and child covariates. The association between television viewing time through ages 2-5 and bullying involvement in early elementary school is confounded by maternal and child socio-demographic characteristics.
Joint deconvolution and classification with applications to passive acoustic underwater multipath.
Anderson, Hyrum S; Gupta, Maya R
2008-11-01
This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods over a range of signal-to-noise ratios.
Paksarian, Diana; Cui, Lihong; Angst, Jules; Ajdacic-Gross, Vladeta; Rössler, Wulf; Merikangas, Kathleen R
2016-10-01
Epidemiologic evidence indicates that most of the general population will experience a mental health disorder at some point in their lives. However, few prospective population-based studies have estimated trajectories of risk for mental disorders from young through middle adulthood to estimate the proportion of individuals who experience persistent mental disorder across this age period. To describe the proportion of the population who experience persistent mental disorder across adulthood and to estimate latent trajectories of disorder risk across this age period. A population-based, prospective cohort study was conducted between 1979 and 2008 in the canton of Zurich, Switzerland. A stratified random sample of 591 Swiss citizens was enrolled in 1978 at ages 19 years (men) and 20 years (women); 7 interviews were performed during a 29-year period. Men were sampled from military enrollment records and women from electoral records. From those initially enrolled, participants with high levels of psychiatric symptoms were oversampled for follow-up. Data analysis was performed from July 28, 2015, to June 8, 2016. Latent trajectories, estimated using growth mixture modeling, of past-year mood/anxiety disorder (ie, major depressive episode, phobias, panic, generalized anxiety disorder, and obsessive-compulsive disorder), substance use disorder (ie, drug abuse or dependence and alcohol abuse or dependence), and any mental disorder (ie, any of the above) assessed during in-person semistructured interviews at each wave. Diagnoses were based on DSM-III, DSM-III-R, and DSM-IV criteria. Of the 591 participants at baseline, 299 (50.6%) were female. Persistent mental health disorder across multiple study waves was rare. Among 252 individuals (42.6%) who participated in all 7 study waves, only 1.2% met criteria for disorder every time. Growth mixture modeling identified 3 classes of risk for any disorder across adulthood: low (estimated prevalence, 40.0%; 95% CI, -8.7% to 88.9%), increasing-decreasing (estimated prevalence, 15.3%; 95% CI, 1.0% to 29.6%), and increasing (estimated prevalence, 44.7%; 95% CI, -0.9% to 90.1%). Although no classes were characterized by persistently high disorder risk, for those in the increasing-decreasing class, risk was high from the late 20s to early 40s. Sex-specific models indicated 4 trajectory classes for women but only 3 for men. Persistently high mental health disorder risk across 3 decades of adulthood was rare in this population-based sample. Identifying early determinants of sex-specific risk trajectories would benefit prevention efforts.
Geometry with Coordinates, Teacher's Commentary, Part II, Unit 50. Revised Edition.
ERIC Educational Resources Information Center
Allen, Frank B.; And Others
This is part two of a two-part manual for teachers using SMSG high school text materials. The commentary is organized into four parts. The first part contains an introduction and a short section on estimates of class time needed to cover each chapter. The second or main part consists of a chapter-by-chapter commentary on the text. The third part…
Estimating age of sea otters with cementum layers in the first premolar
Bodkin, James L.; Ames, J.A.; Jameson, R.J.; Johnson, A.M.; Matson, G.M.
1997-01-01
We assessed sources of variation in the use of tooth cementum layers to determine age by comparing counts in premolar tooth sections to known ages of 20 sea otters (Enhydra lutris). Three readers examined each sample 3 times, and the 3 readings of each sample were averaged by reader to provide the mean estimated age. The mean (SE) of known age sample was 5.2 years (1.0) and the 3 mean estimated ages were 7.0 (1.0), 5.9 (1.1) and, 4.4 (0.8). The proportion of estimates accurate to within +/- 1 year were 0.25, 0.55, and 0.65 and to within +/- 2 years 0.65, 0.80, and 0.70, by reader. The proportions of samples estimated with >3 years error were 0.20, 0.10, and 0.05. Errors as large as 7, 6, and 5 years were made among readers. In few instances did all readers uniformly provide either accurate (error 1 yr) counts. In most cases (0.85), 1 or 2 of the readers provided accurate counts. Coefficients of determination (R2) between known ages and mean estimated ages were 0.81, 0.87, and 0.87, by reader. The results of this study suggest that cementum layers within sea otter premolar teeth likely are deposited annually and can be used for age estimation. However, criteria used in interpreting layers apparently varied by reader, occasionally resulting in large errors, which were not consistent among readers. While large errors were evident for some individual otters, there were no differences between the known and estimated age-class distribution generated by each reader. Until accuracy can be improved, application of this ageing technique should be limited to sample sizes of at least 6-7 individuals within age classes of >/=1 year.
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity hides in the parametric uncertainties, which is difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and bound of unknown parameter simultaneously. Lyapunov-based analysis is given to demonstrate that a finite number of switchings can guarantee the asymptotic tracking for the nonlinear parametric systems. The simulation result also shows the efficacy of the proposed switching periodic adaptive control approach.
State Estimation for Tensegrity Robots
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Oliveira, Thiara Castro de; Silva, Antônio Augusto Moura da; Santos, Cristiane de Jesus Nunes dos; Silva, Josenilde Sousa e; Conceição, Sueli Ismael Oliveira da
2010-12-01
To analyze factors associated with physical activity and the mean time spent in some sedentary activities among school-aged children. A cross-sectional study was carried out in a random sample of 592 schoolchildren aged nine to 16 years in 2005, in São Luís, Northern Brazil. Data were collected by means of a 24-Hour Physical Activity Recall Questionnaire, concerning demographic and socioeconomic variables, physical activities practiced and time spent in certain sedentary activities. Physical activities were classified according to their metabolic equivalents (MET), and a physical activity index was estimated for each child. Sedentary lifestyle was estimated based on time spent watching television, playing videogames and on the computer/internet. Chi square test was used to compare proportions. Linear regression analysis was used to establish associations. Estimates were adjusted for the effect of the sampling design. The mean of the physical activity index was 605.73 MET-min/day (SD = 509.45). School children that were male (coefficient=134.57; 95%CI 50.77; 218.37), from public schools (coefficient.= 94.08; 95%CI 12.54; 175.62 and in the 5th to 7th grade (coefficient.=95.01; 95%CI 8.10;181.92 presented higher indices than females, children from private schools and in the 8th to the 9th grade (p<0.05). On average, students spent 2.66 hours/day in sedentary activities. Time spent in sedentary activities was significantly lower for children aged nine to 11 years (coefficient.= -0.49 hr/day; 95%CI -0.88; -0.10) and in lower socioeconomic classes (coefficient.=-0.87; 95%CI -1.45;-0.30). Domestic chores (59.43%) and walking to school (58.43%) were the most common physical activities. Being female, in private schools and in the 8th to 9th grade were factors associated with lower levels of physical activity. Younger schoolchildren and those from low economic classes spent less time engaged in sedentary activities.
Timing of testing and treatment for asymptomatic diseases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kırkızlar, Eser; Faissol, Daniel M.; Griffin, Paul M.
2010-07-01
Many papers in the medical literature analyze the cost-effectiveness of screening for diseases by comparing a limited number of a priori testing policies under estimated problem parameters. However, this may be insufficient to determine the best timing of the tests or incorporate changes over time. In this paper, we develop and solve a Markov Decision Process (MDP) model for a simple class of asymptomatic diseases in order to provide the building blocks for analysis of a more general class of diseases. We provide a computationally efficient method for determining a cost-effective dynamic intervention strategy that takes into account (i) themore » results of the previous test for each individual and (ii) the change in the individual’s behavior based on awareness of the disease. We demonstrate the usefulness of the approach by applying the results to screening decisions for Hepatitis C (HCV) using medical data, and compare our findings to current HCV screening recommendations.« less
Peron, Guillaume; Hines, James E.
2014-01-01
Many industrial and agricultural activities involve wildlife fatalities by collision, poisoning or other involuntary harvest: wind turbines, highway network, utility network, tall structures, pesticides, etc. Impacted wildlife may benefit from official protection, including the requirement to monitor the impact. Carcass counts can often be conducted to quantify the number of fatalities, but they need to be corrected for carcass persistence time (removal by scavengers and decay) and detection probability (searcher efficiency). In this article we introduce a new piece of software that fits a superpopulation capture-recapture model to raw count data. It uses trial data to estimate detection and daily persistence probabilities. A recurrent issue is that fatalities of rare, protected species are infrequent, in which case the software offers the option to switch to an ‘evidence of absence’ mode, i.e., estimate the number of carcasses that may have been missed by field crews. The software allows distinguishing between different turbine types (e.g. different vegetation cover under turbines, or different technical properties), as well between two carcass age-classes or states, with transition between those classes (e.g, fresh and dry). There is a data simulation capacity that may be used at the planning stage to optimize sampling design. Resulting mortality estimates can be used 1) to quantify the required amount of compensation, 2) inform mortality projections for proposed development sites, and 3) inform decisions about management of existing sites.
Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality
Hirsch, Robert M.
1988-01-01
This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
Improving the Manager’s Ability to Identify Alternative Technologies.
1980-03-01
TABLES 1, Estimated Sources of Funds for R&D by Broad---------14 Industry Classes, 1980 2. Characteristics of the Prospector ---------------- -59 7 4II...thirds of that total outlay. Table I depicts estimated sources of funds for R&D by broad industry classes in 1980. 13 .17 TABLE 1 ESTIMATED SOURCES ...information about research in progress. The Exchange is the major national source $ for unclassified information on current and recently completed research in
Dirty two-band superconductivity with interband pairing order
NASA Astrophysics Data System (ADS)
Asano, Yasuhiro; Sasaki, Akihiro; Golubov, Alexander A.
2018-04-01
We study theoretically the effects of random nonmagnetic impurities on the superconducting transition temperature T c in a two-band superconductor characterized by an equal-time s-wave interband pairing order parameter. Because of the two-band degree of freedom, it is possible to define a spin-triplet s-wave pairing order parameter as well as a spin-singlet s-wave order parameter. The former belongs to odd-band-parity symmetry class, whereas the latter belongs to even-band-parity symmetry class. In a spin-singlet superconductor, T c is insensitive to the impurity concentration when we estimate the self-energy due to the random impurity potential within the Born approximation. On the other hand in a spin-triplet superconductor, T c decreases with the increase of the impurity concentration. We conclude that Cooper pairs belonging to odd-band-parity symmetry class are fragile under the random impurity potential even though they have s-wave pairing symmetry.
NASA Astrophysics Data System (ADS)
Dragozi, E.; Gitas, Ioannis Z.; Stavrakoudis, Dimitris G.; Minakou, C.
2015-06-01
Forest fires greatly influence the stability and functions of the forest ecosystems. The ever increasing need for accurate and detailed information regarding post-fire effects (burn severity) has led to several studies on the matter. In this study the combined use of Very High Resolution (VHR) satellite data (GeoEye), Objectbased image analysis (OBIA) and Composite Burn Index (CBI) measurements in estimating burn severity, at two different time points (2011 and 2012) is assessed. The accuracy of the produced maps was assessed and changes in burn severity between the two dates were detected using the post classification comparison approach. It was found that the produced burn severity map for 2011 was approximately 10% more accurate than that of 2012. This was mainly attributed to the increased heterogeneity of the study area in the second year, which led to an increased number of mixed class objects and consequently made it more difficult to spectrally discriminate between the severity classes. Following the post-classification analysis, the severity class changes were mainly attributed to the trees' ability to survive severe fire damage and sprout new leaves. Moreover, the results of the study suggest that when classifying CBI-based burn severity using VHR imagery it would be preferable to use images captured soon after the fire.
The stratification of military service and combat exposure, 1934–1994*
MacLean, Alair
2010-01-01
Previous research has suggested that men who were exposed to combat during wartime differed from those who were not. Yet little is known about how selection into combat has changed over time. This paper estimates sequential logistic models using data from the Panel Study of Income Dynamics to examine the stratification of military service and combat exposure in the US during the last six decades of the twentieth century. It tests potentially overlapping hypotheses drawn from two competing theories, class bias and dual selection. It also tests a hypothesis, drawn from the life course perspective, that the processes by which people came to see combat have changed historically. The findings show that human capital, institutional screening, and class bias all determined who saw combat. They also show that, net of historical change in the odds of service and combat, the impact of only one background characteristic, race, changed over time. PMID:21113325
A Note on Cluster Effects in Latent Class Analysis
ERIC Educational Resources Information Center
Kaplan, David; Keller, Bryan
2011-01-01
This article examines the effects of clustering in latent class analysis. A comprehensive simulation study is conducted, which begins by specifying a true multilevel latent class model with varying within- and between-cluster sample sizes, varying latent class proportions, and varying intraclass correlations. These models are then estimated under…
Biomass Estimation for some Shrubs from Northeastern Minnesota
David F. Grigal; Lewis F. Ohmann
1977-01-01
Biomass prediction equations were developed for 23 northeastern Minnesota shrub species. The allowmetric function was used to predict leaf, current annual woody twig, stem, and total woody biomass (dry grass), using stem diameter class estimated to the nearest 0.25 cm class at 15 cm above ground level as the independent variable.
On measuring bird habitat: influence of observer variability and sample size
William M. Block; Kimberly A. With; Michael L. Morrison
1987-01-01
We studied the effects of observer variability when estimating vegetation characteristics at 75 0.04-ha bird plots. Observer estimates were significantly different for 31 of 49 variables. Multivariate analyses showed significant interobserver differences for five of the seven classes of variables studied. Variable classes included the height, number, and diameter of...
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Buonaccorsi, V P; McDowell, J R; Graves, J E
2001-05-01
Different classes of molecular markers occasionally yield discordant views of population structure within a species. Here, we examine the distribution of molecular variance from 14 polymorphic loci comprising four classes of molecular markers within approximately 400 blue marlin individuals (Makaira nigricans). Samples were collected from the Atlantic and Pacific Oceans over 5 years. Data from five hypervariable tetranucleotide microsatellite loci and restriction fragment length polymorphism (RFLP) analysis of whole molecule mitochondrial DNA (mtDNA) were reported and compared with previous analyses of allozyme and single-copy nuclear DNA (scnDNA) loci. Temporal variance in allele frequencies was nonsignificant in nearly all cases. Mitochondrial and microsatellite loci revealed striking phylogeographic partitioning among Atlantic and Pacific Ocean samples. A large cluster of alleles was present almost exclusively in Atlantic individuals at one microsatellite locus and for mtDNA, suggesting that, if gene flow occurs, it is likely to be unidirectional from Pacific to Atlantic oceans. Mitochondrial DNA inter-ocean divergence (FST) was almost four times greater than microsatellite or combined nuclear divergences including allozyme and scnDNA markers. Estimates of Neu varied by five orders of magnitude among marker classes. Using mathematical and computer simulation approaches, we show that substantially different distributions of FST are expected from marker classes that differ in mode of inheritance and rate of mutation, without influence of natural selection or sex-biased dispersal. Furthermore, divergent FST values can be reconciled by quantifying the balance between genetic drift, mutation and migration. These results illustrate the usefulness of a mitochondrial analysis of population history, and relative precision of nuclear estimates of gene flow based on a mean of several loci.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvidge, Christopher D.; Sutton, Paul S.; Ghosh, Tilottama
A global poverty map has been produced at 30 arc sec resolution using a poverty index calculated by dividing population count (LandScan2004) by the brightness of satellite observed lighting (DMSP nighttimelights). Inputs to the LandScan product include satellite-derived landcover and topography, plus human settlement outlines derived from high-resolution imagery. The poverty estimates have been calibrated using national level poverty data from the World Development Indicators (WDI) 2006 edition. The total estimate of the numbers of individuals living in poverty is 2.2billion, slightly under the WDI estimate of 2.6 billion. We have demonstrated a new class of poverty map that shouldmore » improve over time through the inclusion of new reference data for calibration of poverty estimates and as improvements are made in the satellite observation of human activities related to economic activity and technology access.« less
The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators
NASA Astrophysics Data System (ADS)
Ahmedov, Anvarjon
2018-03-01
In the present research we investigate the problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the classes of Liouville. The sufficient conditions for the almost everywhere convergence problems, which are most difficult problems in Harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over elliptic curves are applied to obtain suitable estimations for the maximal operator of the spectral decompositions. Obtaining of such estimations involves very complicated calculations which depends on the functional structure of the classes of functions. The main idea on the proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is estimation of the maximal operator of the partial sums in the boundary classes and application of the interpolation Theorem of the family of linear operators. In the present work the maximal operator of the elliptic partial sums are estimated in the interpolation classes of Liouville and the almost everywhere convergence of the multiple Fourier series by elliptic summation methods are established. The considering multiple Fourier series as an eigenfunction expansions of the differential operators helps to translate the functional properties (for example smoothness) of the Liouville classes into Fourier coefficients of the functions which being expanded into such expansions. The sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of the smoothness and dimensions. Such results are highly effective in solving the boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigations of multiple Fourier series in modern methods of harmonic analysis incorporates the wide use of methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. New method for the best approximation of the square-integrable function by multiple Fourier series summed over the elliptic levels are established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated. The latter is applied to obtain an estimation for the maximal operator in the classes of Liouville.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to insure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
NASA Astrophysics Data System (ADS)
Badawy, Bakr; Polavarapu, Saroja; Jones, Dylan B. A.; Deng, Feng; Neish, Michael; Melton, Joe R.; Nassar, Ray; Arora, Vivek K.
2018-02-01
The Canadian Land Surface Scheme and the Canadian Terrestrial Ecosystem Model (CLASS-CTEM) together form the land surface component in the family of Canadian Earth system models (CanESMs). Here, CLASS-CTEM is coupled to Environment and Climate Change Canada (ECCC)'s weather and greenhouse gas forecast model (GEM-MACH-GHG) to consistently model atmosphere-land exchange of CO2. The coupling between the land and the atmospheric transport model ensures consistency between meteorological forcing of CO2 fluxes and CO2 transport. The procedure used to spin up carbon pools for CLASS-CTEM for multi-decadal simulations needed to be significantly altered to deal with the limited availability of consistent meteorological information from a constantly changing operational environment in the GEM-MACH-GHG model. Despite the limitations in the spin-up procedure, the simulated fluxes obtained by driving the CLASS-CTEM model with meteorological forcing from GEM-MACH-GHG were comparable to those obtained from CLASS-CTEM when it is driven with standard meteorological forcing from the Climate Research Unit (CRU) combined with reanalysis fields from the National Centers for Environmental Prediction (NCEP) to form CRU-NCEP dataset. This is due to the similarity of the two meteorological datasets in terms of temperature and radiation. However, notable discrepancies in the seasonal variation and spatial patterns of precipitation estimates, especially in the tropics, were reflected in the estimated carbon fluxes, as they significantly affected the magnitude of the vegetation productivity and, to a lesser extent, the seasonal variations in carbon fluxes. Nevertheless, the simulated fluxes based on the meteorological forcing from the GEM-MACH-GHG model are consistent to some extent with other estimates from bottom-up or top-down approaches. Indeed, when simulated fluxes obtained by driving the CLASS-CTEM model with meteorological data from the GEM-MACH-GHG model are used as prior estimates for an atmospheric CO2 inversion analysis using the adjoint of the GEOS-Chem model, the retrieved CO2 flux estimates are comparable to those obtained from other systems in terms of the global budget and the total flux estimates for the northern extratropical regions, which have good observational coverage. In data-poor regions, as expected, differences in the retrieved fluxes due to the prior fluxes become apparent. Coupling CLASS-CTEM into the Environment Canada Carbon Assimilation System (EC-CAS) is considered an important step toward understanding how meteorological uncertainties affect both CO2 flux estimates and modeled atmospheric transport. Ultimately, such an approach will provide more direct feedback to the CLASS-CTEM developers and thus help to improve the performance of CLASS-CTEM by identifying the model limitations based on atmospheric constraints.
The Effect of Mixed-Age Classes in Sweden
ERIC Educational Resources Information Center
Lindstrom, Elly-Ann; Lindahl, Erica
2011-01-01
Mixed-aged (MA) classes are a common phenomenon around the world. In Sweden, these types of classes increased rapidly during the 1980s and 1990s, despite the fact that existing empirical support for MA classes is weak. In this paper, the effect of attending an MA class during grades 4-6 on students' cognitive skills is estimated. Using a unique…
Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements
NASA Astrophysics Data System (ADS)
Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.
2013-12-01
An eulerian high frequency sampling of particle size distribution (PSD) is performed during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel at 1 Hz. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), recording volume concentrations of particles having diameters ranging from 2.5 to 500 mu in 32 size classes in logarithmic scale. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of PSD is hyperbolic. We can thus estimate PSD slope time series. Power spectral analysis shows that the mean diameter of the suspended particles is scaling at high frequencies (from 1s to 1000s). The scaling properties of particle sizes is studied by computing the moment function, from the pdf of the size distribution. Moment functions at many different time scales (from 1s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and is related to other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also consider on the same time scales, using Empirical Mode Decomposition.
Global solutions and finite time blow-up for fourth order nonlinear damped wave equation
NASA Astrophysics Data System (ADS)
Xu, Runzhang; Wang, Xingchang; Yang, Yanbing; Chen, Shaohua
2018-06-01
In this paper, we study the initial boundary value problem and global well-posedness for a class of fourth order wave equations with a nonlinear damping term and a nonlinear source term, which was introduced to describe the dynamics of a suspension bridge. The global existence, decay estimate, and blow-up of solution at both subcritical (E(0) < d) and critical (E(0) = d) initial energy levels are obtained. Moreover, we prove the blow-up in finite time of solution at the supercritical initial energy level (E(0) > 0).
Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam
2012-01-01
Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.
Classifying with confidence from incomplete information.
Parrish, Nathan; Anderson, Hyrum S.; Gupta, Maya R.; ...
2013-12-01
For this paper, we consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth is needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize thismore » goal through the notion of reliability—the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data, and we propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify the time-series as early as possible while still guaranteeing that the reliability threshold is met.« less
Adaptive sequential Bayesian classification using Page's test
NASA Astrophysics Data System (ADS)
Lynch, Robert S., Jr.; Willett, Peter K.
2002-03-01
In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test with that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detect a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating th is rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes; unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, a recombined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensem ble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
A latent transition model of the effects of a teen dating violence prevention initiative.
Williams, Jason; Miller, Shari; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah
2015-02-01
Patterns of physical and psychological teen dating violence (TDV) perpetration, victimization, and related behaviors were examined with data from the evaluation of the Start Strong: Building Healthy Teen Relationships initiative, a dating violence primary prevention program targeting middle school students. Latent class and latent transition models were used to estimate distinct patterns of TDV and related behaviors of bullying and sexual harassment in seventh grade students at baseline and to estimate transition probabilities from one pattern of behavior to another at the 1-year follow-up. Intervention effects were estimated by conditioning transitions on exposure to Start Strong. Latent class analyses suggested four classes best captured patterns of these interrelated behaviors. Classes were characterized by elevated perpetration and victimization on most behaviors (the multiproblem class), bullying perpetration/victimization and sexual harassment victimization (the bully-harassment victimization class), bullying perpetration/victimization and psychological TDV victimization (bully-psychological victimization), and experience of bully victimization (bully victimization). Latent transition models indicated greater stability of class membership in the comparison group. Intervention students were less likely to transition to the most problematic pattern and more likely to transition to the least problem class. Although Start Strong has not been found to significantly change TDV, alternative evaluation models may find important differences. Latent transition analysis models suggest positive intervention impact, especially for the transitions at the most and the least positive end of the spectrum. Copyright © 2015. Published by Elsevier Inc.
Breakthroughs in Low-Profile Leaky-Wave HPM Antennas
2016-09-21
information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and... traveling , fast-wave, leaky-wave class. 1.1. Overview of Previous Activities (1st thru 11th Quarter) During the first quarter, we prepared and...theory to guide the design of high-gain configurations (again, limited to 2D, H-plane representations) for linear, forward traveling -wave, leaky
The Decision to Not Invade Baghdad (Persian Gulf War)
2007-04-12
LAWRENCE K. MONTGOMERY, JR. United States Army National Guard Se ni or Se rv ic e Co lle ge DISTRIBUTION STATEMENT A: Approved for Public Release...Distribution is Unlimited. USAWC CLASS OF 2007 REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of...information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and
Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar
2004-07-01
In this paper a global vision scheme for estimation of positions and orientations of mobile robots is presented. It is applied to robot soccer application which is a fast dynamic game and therefore needs an efficient and robust vision system implemented. General applicability of the vision system can be found in other robot applications such as mobile transport robots in production, warehouses, attendant robots, fast vision tracking of targets of interest and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes. At the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined. Selection of the ones that are a part of the observed object is made by means of simple logic procedures. The novelty is focused on optimization of the processing time needed to finish the estimation of possible object positions. Better results of the vision system are achieved by implementing camera calibration and shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
Matthews, Sarah H; Sun, Rongjun
2006-03-01
This article estimates the percentage of lineages that include four or more generations for a sample of the U.S. population and explores how social status and race are related to lineage depth. We assembled data from Waves 1 and 2 of the National Survey of Families and Households in order to estimate the proportion of adults in four or more generations for the Wave 2 sample (1992-1994). When necessary, we used various decision rules to overcome an absence of information about specific generations. We examine relationships between lineage depth and sociodemographic variables by using logistic regressions. The data show that 32% of the respondents were in lineages comprising four or more generations. Blacks and individuals of lower social class were more likely to be in four-generation lineages, especially shorter-gapped lineages. Whites and individuals of higher social class were not more likely to be in longer-gapped, four-generation lineages. The majority of the adult population in the early 1990s was in three-generation lineages. The verdict is still out on whether population aging results in the wholesale verticalization of lineages. Social differentials in four-generation lineages in the early 1990s were mainly due to differences in the timing of fertility, rather than mortality.
2018-01-01
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publically available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which equates favorably with the state-of-the-art methods. PMID:29304512
Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2015-01-01
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart. PMID:25909047
Sreenivas, K; Sekhar, N Seshadri; Saxena, Manoj; Paliwal, R; Pathak, S; Porwal, M C; Fyzee, M A; Rao, S V C Kameswara; Wadodkar, M; Anasuya, T; Murthy, M S R; Ravisankar, T; Dadhwal, V K
2015-09-15
The present study aims at analysis of spatial and temporal variability in agricultural land cover during 2005-6 and 2011-12 from an ongoing program of annual land use mapping using multidate Advanced Wide Field Sensor (AWiFS) data aboard Resourcesat-1 and 2. About 640-690 multi-temporal AWiFS quadrant data products per year (depending on cloud cover) were co-registered and radiometrically normalized to prepare state (administrative unit) mosaics. An 18-fold classification was adopted in this project. Rule-based techniques along with maximum-likelihood algorithm were employed to deriving land cover information as well as changes within agricultural land cover classes. The agricultural land cover classes include - kharif (June-October), rabi (November-April), zaid (April-June), area sown more than once, fallow lands and plantation crops. Mean kappa accuracy of these estimates varied from 0.87 to 0.96 for various classes. Standard error of estimate has been computed for each class annually and the area estimates were corrected using standard error of estimate. The corrected estimates range between 99 and 116 Mha for kharif and 77-91 Mha for rabi. The kharif, rabi and net sown area were aggregated at 10 km × 10 km grid on annual basis for entire India and CV was computed at each grid cell using temporal spatially-aggregated area as input. This spatial variability of agricultural land cover classes was analyzed across meteorological zones, irrigated command areas and administrative boundaries. The results indicate that out of various states/meteorological zones, Punjab was consistently cropped during kharif as well as rabi seasons. Out of all irrigated commands, Tawa irrigated command was consistently cropped during rabi season. Copyright © 2014 Elsevier Ltd. All rights reserved.
Terry-McElrath, Yvonne M; O'Malley, Patrick M
2015-07-01
To measure changes over time in cigarette smoking uptake prevalence and timing during young adulthood (ages 19-26 years), and associations between time-invariant/-varying characteristics and uptake prevalence/timing. Discrete-time survival modeling of data collected from United States high school seniors (modal age 17/18) enrolled in successive graduating classes from 1976 to 2005 and participating in four follow-up surveys (to modal age 25/26). The longitudinal component of the Monitoring the Future study. A total of 10 758 individuals reporting no life-time smoking when first surveyed as high school seniors. Smoking uptake (any, experimental, occasional and regular); socio-demographic variables; marital, college and work status; time spent socializing. The percentage of young adults moving from non-smoker to experimental smoking [slope estimate 0.11, standard error (SE) = 0.04, P = 0.005] or occasional smoking (slope estimate 0.17, SE = 0.03, P < 0.001) increased significantly across graduating classes; the percentage moving from non-smoker to regular smoker remained stable. All forms of smoking uptake were most likely to occur at age 19/20, but uptake prevalence at older ages increased over time [e.g. cohort year predicting occasional uptake at modal age 25/26 adjusted hazard odds ratio (AHOR) = 1.05, P = 0.002]. Time-invariant/-varying characteristics had unique associations with the timing of various forms of smoking uptake (e.g. at modal age 21/22, currently attending college increased occasional uptake risk (AHOR = 2.11, P < 0.001) but decreased regular uptake risk (AHOR = 0.69, P = 0.026). Young adult occasional and experimental smoking uptake increased in the United States for non-smoking high school seniors graduating from 1976 to 2005. Smoking uptake for these cohorts remained most likely to occur at age 19/20, but prevalence of uptake at older ages increased. © 2015 Society for the Study of Addiction.
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; ...
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that amore » majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.« less
Classification with asymmetric label noise: Consistency and maximal denoising
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, Gilles; Flaska, Marek; Handy, Gregory
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that amore » majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.« less
Modified centroid for estimating sand, silt, and clay from soil texture class
USDA-ARS?s Scientific Manuscript database
Models that require inputs of soil particle size commonly use soil texture class for input; however, texture classes do not represent the continuum of soil size fractions. Soil texture class and clay percentage are collected as a standard practice for many land management agencies (e.g., NRCS, BLM, ...
Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.
ERIC Educational Resources Information Center
Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.
2009-01-01
Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…
ERIC Educational Resources Information Center
Bilir, Mustafa Kuzey
2009-01-01
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Gallifuoco, Alberto; Cantarella, Maria; Marucci, Mariagrazia
2007-01-01
A stirred tank membrane reactor is used to study the kinetics of polygalacturonic acid (PGA) enzymatic hydrolysis. The reactor operates in semicontinuous configuration: the native biopolymer is loaded at the initial time and the system is continuously fed with the buffer. The effect of retention time (from 101 to 142 min) and membrane molecular weight cutoff (from 1 to 30 kDa) on the rate of permeable oligomers production is investigated. Reaction products are clustered in two different classes, those sized below the membrane cutoff and those above. The reducing power measured in the permeate is used as an estimate of total product concentration. The characteristic breakdown times range from 40 to 100 min. The overall kinetics obeys a first-order law with a characteristic time estimated to 24 min. New mathematical data handling are developed and illustrated using the experimental data obtained. Finally, the body of the experimental results suggests useful indications (reactor productivity, breakdown induction period) for implementing the bioprocess at the industrial scale.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
ERIC Educational Resources Information Center
Cho, Hyunkuk; Glewwe, Paul; Whitler, Melissa
2012-01-01
Many U.S. states and cities spend substantial funds to reduce class size, especially in elementary (primary) school. Estimating the impact of class size on learning is complicated, since children in small and large classes differ in many observed and unobserved ways. This paper uses a method of Hoxby (2000) to assess the impact of class size on…
The multicategory case of the sequential Bayesian pixel selection and estimation procedure
NASA Technical Reports Server (NTRS)
Pore, M. D.; Dennis, T. B. (Principal Investigator)
1980-01-01
A Bayesian technique for stratified proportion estimation and a sampling based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data using the beta density function to model the prior distribution in the two-class case. An extention of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case which allows the procedure to be extended.
Markov switching multinomial logit model: An application to accident-injury severities.
Malyshkina, Nataliya V; Mannering, Fred L
2009-07-01
In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities, uncertainties. A weighting mean value is given as an integral function of the square root PDF along space direction, which leads a function only about time and can be used to construct residual signal. Thus,more » the classical nonlinear filter approach can be used to detect and diagnose the fault in system. A feasible detection criterion is obtained at first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. Simulation example is given to demonstrate the effectiveness of the proposed approaches.« less
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, a H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system fault using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities, uncertainties. A weighting mean value is given as an integral function of the square root PDF along space direction, which leads a function only about time and can be used to construct residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in system. A feasible detection criterion is obtained at first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. Simulation example is given to demonstrate the effectiveness of the proposed approaches.
Sildenafil improves renal function in patients with pulmonary arterial hypertension
Webb, David J; Vachiery, Jean-Luc; Hwang, Lie-Ju; Maurey, Julie O
2015-01-01
Aim Elevated serum creatinine (sCr) and low estimated glomerular filtration rate (eGFR) are associated with poor outcomes in patients with pulmonary arterial hypertension (PAH) whereas sildenafil treatment improves PAH outcomes. This post hoc analysis assessed the effect of sildenafil on kidney function and links with clinical outcomes including 6-min walk distance, functional class, time to clinical worsening and survival. Methods Patients with PAH received placebo or sildenafil 20, 40 or 80 mg three times daily in the SUPER-1 study and open-label sildenafil titrated to 80 mg three times daily (as tolerated) in the extension study. Results Baseline characteristics were similar among groups (n = 277). PAH was mostly idiopathic (63%) and functional class II (39%) or III (58%). From baseline to week 12, kidney function improved (increased eGFR, decreased sCr) with sildenafil and worsened with placebo. In univariate logistic regression, improved kidney function was associated with significantly improved exercise and functional class (odds ratios 1.17 [95% CI 1.01, 1.36] and 1.21 [95% CI 1.03, 1.41], respectively, for sCr and 0.97 [95% CI 0.94, 0.99] and 0.97 [95% CI 0.94, 0.99] for eGFR, all P < 0.05). In patients who maintained or improved kidney function, time to worsening was significantly delayed (P < 0.02 for both kidney parameters). Observed trends towards improved survival were not significant. Patients with eGFR <60 (vs. ≥60) ml min–1 1.73 m–2 appeared to have worse survival. Conclusions Sildenafil treatment was associated with improved kidney function in patients with PAH, which was in turn associated with improved exercise capacity and functional class, a reduced risk of clinical worsening, and a trend towards reduced mortality. PMID:25727860
Currie, L A
2001-07-01
Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally, skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
Television viewing through ages 2-5 years and bullying involvement in early elementary school
2014-01-01
Background High television exposure time at young age has been described as a potential risk factor for developing behavioral problems. However, less is known about the effects of preschool television on subsequent bullying involvement. We examined the association between television viewing time through ages 2-5 and bullying involvement in the first grades of elementary school. We hypothesized that high television exposure increases the risk of bullying involvement. Method TV viewing time was assessed repeatedly in early childhood using parental report. To combine these repeated assessments we used latent class analysis. Four exposure classes were identified and labeled “low”, “mid-low”, “mid-high” and “high”. Bullying involvement was assessed by teacher questionnaire (n = 3423, mean age 6.8 years). Additionally, peer/self-report of bullying involvement was obtained using a peer nomination procedure (n = 1176, mean age 7.6 years). We examined child risk of being a bully, victim or a bully-victim (compared to being uninvolved in bullying). Results High television exposure class was associated with elevated risks of bullying and victimization. Also, in both teacher- and child-reported data, children in the high television exposure class were more likely to be a bully-victim (OR = 2.11, 95% CI: 1.42-3.13 and OR = 3.68, 95% CI: 1.75-7.74 respectively). However, all univariate effect estimates attenuated and were no longer statistically significant once adjusted for maternal and child covariates. Conclusions The association between television viewing time through ages 2-5 and bullying involvement in early elementary school is confounded by maternal and child socio-demographic characteristics. PMID:24520886
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... comparable to the obligations proposed in this filing: Market-Makers % Time % Series Classes CBOE (current........ 60 All classes collectively. PMMs % Time % Series Classes CBOE (current rule) 99% of the time... % Time % Series Classes CBOE (current rule) 99% of the time required to 90% * Class-by-class. provide...
CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.
Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J
2017-04-01
Case-control studies are designed towards studying associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
Status of pelagic prey fishes in Lake Michigan, 2012
Warner, David M.; O'Brien, Timothy P.; Farha, Steve A.; Claramunt, Randall M.; Hanson, Dale
2012-01-01
Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2012 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys as well as target strength provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. The 2012 survey consisted of 26 acoustic transects (576 km total) and 31 midwater tows. Mean total prey fish biomass was 6.4 kg/ha (relative standard error, RSE = 15%) or 31 kilotonnes (kt = 1,000 metric tons), which was 1.5 times the estimate for 2011 and 22% of the long-term mean. The increase from 2011 resulted from increased biomass of age-0 alewife, age-1 or older alewife, and large bloater. The abundance of the 2012 alewife year class was similar to the average, and this year-class contributed 35% of total alewife biomass (4.9 kg/ha, RSE = 17%), while the 2010 alewife year-class contributed 58%. The 2010 year class made up 89% of age-1 or older alewife biomass. In 2012, alewife comprised 77% of total prey fish biomass, while rainbow smelt and bloater were 4 and 19% of total biomass, respectively. Rainbow smelt biomass in 2012 (0.25 kg/ha, RSE = 17%) was 40% of the rainbow smelt biomass in 2011 and 5% of the long term mean. Bloater biomass was much lower (1.2 kg/ha, RSE = 12%) than in the 1990s, and mean density of small bloater in 2012 (191 fish/ha, RSE = 24%) was lower than peak values observed in 2007-2009. In 2012, pelagic prey fish biomass in Lake Michigan was similar to Lake Superior and Lake Huron. Prey fish biomass remained well below the Fish Community Objectives target of 500-800 kt, and key native species remain absent or rare.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
Using partially labeled data for normal mixture identification with application to class definition
NASA Technical Reports Server (NTRS)
Shahshahani, Behzad M.; Landgrebe, David A.
1992-01-01
The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples as considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biomet 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three- and to the general k-class case which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.
A novel method of language modeling for automatic captioning in TC video teleconferencing.
Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura
2007-05-01
We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Due to insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For class LMs, semantic categories are used for class definition on medical terms, names, and digits. The interpolation weights of a mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, as well as the weight-estimation algorithm of FWA are effective on the TC-VTC task. As compared with using mixtures of word LMs with weights estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction of perplexity on test sets of five doctors, which translated into improvements of captioning accuracy.
Dornburg, Alex; Brandley, Matthew C; McGowen, Michael R; Near, Thomas J
2012-02-01
Various nucleotide substitution models have been developed to accommodate among lineage rate heterogeneity, thereby relaxing the assumptions of the strict molecular clock. Recently developed "uncorrelated relaxed clock" and "random local clock" (RLC) models allow decoupling of nucleotide substitution rates between descendant lineages and are thus predicted to perform better in the presence of lineage-specific rate heterogeneity. However, it is uncertain how these models perform in the presence of punctuated shifts in substitution rate, especially between closely related clades. Using cetaceans (whales and dolphins) as a case study, we test the performance of these two substitution models in estimating both molecular rates and divergence times in the presence of substantial lineage-specific rate heterogeneity. Our RLC analyses of whole mitochondrial genome alignments find evidence for up to ten clade-specific nucleotide substitution rate shifts in cetaceans. We provide evidence that in the uncorrelated relaxed clock framework, a punctuated shift in the rate of molecular evolution within a subclade results in posterior rate estimates that are either misled or intermediate between the disparate rate classes present in baleen and toothed whales. Using simulations, we demonstrate abrupt changes in rate isolated to one or a few lineages in the phylogeny can mislead rate and age estimation, even when the node of interest is calibrated. We further demonstrate how increasing prior age uncertainty can bias rate and age estimates, even while the 95% highest posterior density around age estimates decreases; in other words, increased precision for an inaccurate estimate. We interpret the use of external calibrations in divergence time studies in light of these results, suggesting that rate shifts at deep time scales may mislead inferences of absolute molecular rates and ages.
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Lovell, Brian; White, Langford
1988-01-01
Time-Frequency analysis based on the Wigner-Ville Distribution (WVD) is shown to be optimal for a class of signals where the variation of instantaneous frequency is the dominant characteristic. Spectral resolution and instantaneous frequency tracking is substantially improved by using a Modified WVD (MWVD) based on an Autoregressive spectral estimator. Enhanced signal-to-noise ratio may be achieved by using 2D windowing in the Time-Frequency domain. The WVD provides a tool for deriving descriptors of signals which highlight their FM characteristics. These descriptors may be used for pattern recognition and data clustering using the methods presented in this paper.
Identifying Patterns in the Weather of Europe for Source Term Estimation
NASA Astrophysics Data System (ADS)
Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis
2017-04-01
During emergencies that involve the release of hazardous substances into the atmosphere the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances. Examples of radioactive release events include the Chernobyl accident in 1986, as well as the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances in the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the spatial resolution desired, traditional inverse modelling can be computationally time-consuming. This is especially true for cases where complex topography and weather conditions are involved and can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform1, our approach moves the bulk of the computation before any such event taking place, therefore allowing for rapid initial, albeit rougher, estimations regarding the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field. Our case is differentiated by the fact that it focuses on plume dispersion patterns and these meteorological variables that affect dispersion the most. For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal parameters, using the Java flavour of the Euratom-supported funded RODOS (Real-time On-line DecisiOn Support) system2 for off-site emergency management after nuclear accidents. Once dispersions have been pre-computed, and immediately after a detected release, the currently observed weather can be matched to the derived weather classes. Since each weather class corresponds to a different plume dispersion pattern, the closest classes to an unseen weather sample, say the current weather, are the most likely to lead us to the release origin. In addressing the above problem, we make use of multiple years of weather reanalysis data from NCAR's version3 of ECMWF's ERA-Interim4. To derive useful weather classes, we evaluate several algorithms, ranging from straightforward unsupervised clustering to more complex methods, including relevant neural-network algorithms, on multiple variables. Variables and feature sets, clustering algorithms and evaluation approaches are all dealt with and presented experimentally. The Big Data Europe platform allows for the implementation and execution of the above tasks in the cloud, in a scalable, robust and efficient way.
ERIC Educational Resources Information Center
Hoijtink, Herbert; Molenaar, Ivo W.
1997-01-01
This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
Reboussin, Beth A.; Ialongo, Nicholas S.
2011-01-01
Summary Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM model parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates and simplifying assumptions about the correlation structure at each stage reduces the computational complexity. PMID:21461139
Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane
2011-12-01
The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.
Youth Aggressive/Disruptive Behavior Trajectories and subsequent Gambling among Urban Male Youth
Martins, Silvia S.; Liu, Weiwei; Hedden, Sarra L.; Goldweber, Asha; Storr, Carla L.; Derevensky, Jeffrey L.; Stinchfield, Randy; Ialongo, Nicholas S.; Petras, Hanno
2013-01-01
Objective This study examines the association between aggressive/disruptive behavior development in two distinct developmental periods, childhood (i.e., grades 1–3) and early adolescence (i.e., grades 6–10) and subsequent gambling behavior in late adolescence up to age 20. Method The sample consists of 310 urban males of predominately minority and low socioeconomic status followed from first grade to late adolescence. Separate general growth mixture models (GGMM) were estimated to explore the heterogeneity in aggressive/disruptive behavior development in the above-mentioned two time periods. Results Three distinct behavior trajectories were identified for each time period: a chronic high, a moderate increasing and a low increasing class for childhood, and a chronic high, a moderate increasing followed by decreasing and a low stable class for early adolescence. There was no association between childhood behavior trajectories and gambling involvement. Males with a moderate behavior trajectory in adolescence where two times more likely to gamble compared to those in the low stable class (OR=1.89, 95% CI=1.11, 3.24). Those with chronic high trajectories during either childhood or early adolescence (OR=2.60, 95% CI=1.06, 6.38; OR=3.19, 95% CI=1.18, 8.64, respectively) were more likely to be at-risk/problem gamblers than those in the low class. Conclusions Aggressive/disruptive behavior development in childhood and early adolescence is associated with gambling and gambling problems in late adolescence among urban male youth. Preventing childhood and youth aggressive/disruptive behavior may be effective to prevent youth problem gambling. PMID:23410188
A proposed periodic national inventory of land use land cover change
Hans T. Schreuder; Paul W. Snook; Raymond L. Czaplewski; Glenn P. Catts
1986-01-01
Three alternatives using digital thematic mapper (TM), analog TM, and a combination of either digital or analog TM data with low altitude photography are discussed for level I and level II land use/land cover classes for a proposed national inventory. Digital TM data should prove satisfactory for estimating acreage in level I classes, although estimates of precision...
Fractional Brownian motion time-changed by gamma and inverse gamma process
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.
2017-02-01
Many real time-series exhibit behavior adequate to long range dependent data. Additionally very often these time-series have constant time periods and also have characteristics similar to Gaussian processes although they are not Gaussian. Therefore there is need to consider new classes of systems to model these kinds of empirical behavior. Motivated by this fact in this paper we analyze two processes which exhibit long range dependence property and have additional interesting characteristics which may be observed in real phenomena. Both of them are constructed as the superposition of fractional Brownian motion (FBM) and other process. In the first case the internal process, which plays role of the time, is the gamma process while in the second case the internal process is its inverse. We present in detail their main properties paying main attention to the long range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose to use a novel method based on rescaled modified cumulative distribution function for estimation of parameters of the second considered process. This method is very useful in description of rounded data, like waiting times of subordinated processes delayed by inverse subordinators. By using the Monte Carlo method we show the effectiveness of proposed estimation procedures. Finally, we present the applications of proposed models to real time series.
Kalman Filtering for Genetic Regulatory Networks with Missing Values
Liu, Qiuhua; Lai, Tianyue; Wang, Wu
2017-01-01
The filter problem with missing value for genetic regulation networks (GRNs) is addressed, in which the noises exist in both the state dynamics and measurement equations; furthermore, the correlation between process noise and measurement noise is also taken into consideration. In order to deal with the filter problem, a class of discrete-time GRNs with missing value, noise correlation, and time delays is established. Then a new observation model is proposed to decrease the adverse effect caused by the missing value and to decouple the correlation between process noise and measurement noise in theory. Finally, a Kalman filtering is used to estimate the states of GRNs. Meanwhile, a typical example is provided to verify the effectiveness of the proposed method, and it turns out to be the case that the concentrations of mRNA and protein could be estimated accurately. PMID:28814967
On the estimation of brain signal entropy from sparse neuroimaging data
Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus
2016-01-01
Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of the MSE to some classes of neural signals is MSE’s apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, despite segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961
NASA Technical Reports Server (NTRS)
Matthews, Bryan L.; Srivastava, Ashok N.
2010-01-01
Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.
2017-01-01
AimAdult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits.LocationSabah, Malaysian Borneo.Time period2008–2016.Major taxonPasseriformes.MethodsWe used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches.ResultsEstimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species.Main conclusionsInclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.
McDowell, Ronald; Bennett, Kathleen; Moriarty, Frank; Clarke, Sarah; Barry, Michael; Fahey, Tom
2018-04-20
To examine the impact of the Preferred Drugs Initiative (PDI), an Irish health policy aimed at enhancing evidence-based cost-effective prescribing, on prescribing trends and the cost of prescription medicines across seven medication classes. Retrospective repeated cross-sectional study spanning the years 2011 - 2016. Health Service Executive Primary Care Reimbursement Service pharmacy claims data for General Medical Services (GMS) patients, approximately 40% of the Irish population. Adults aged ≥18 years between 2011 and 2016 are eligible for the GMS scheme. The percentage of PDI medications within each drug class per calendar quarter. Linear regression was used to model prescribing of the preferred drug within each medication group and to assess the impact of PDI guidelines and other relevant changes in prescribing practice. Savings in drug expenditure were estimated. Between 2011 and 2016, around a quarter (23.59%) of all medications were for single-agent drugs licensed in the seven drug classes. There was a small increase in the percentage of PDI drugs, increasing from 4.64% of all medications in 2011 to 4.76% in 2016 (P<0.001). The percentage of preferred drugs within each drug class was significantly higher immediately following publication of the guidelines for all classes except urology, with the largest increases noted for lansoprazole (1.21%, 95% CI: 0.84% to 1.57%, P<0.001) and venlafaxine (0.71%, 95% CI: 0.15% to 1.27%, P=0.02). Trends in prescribing of the preferred drugs between PDI guidelines and the end of 2016 varied between drug classes. Total cost savings between 2013 and 2016 were estimated to be €2.7 million. There has been a small increase in prescribing of PDI drugs in response to prescribing guidelines, with inconsistent changes observed across therapeutic classes. These findings are relevant where health services are seeking to develop more active prescribing interventions aimed at changing prescribing practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
2015-09-30
analysis of trends and shifts in characteristics of specific sources contributing to the soundscape over time. The primary sources of interest are baleen... soundscape . Many of the target acoustic signal categories have been well characterized allowing for development of automated spectrogram correlation...to determine the extent and range over which each class of sources contributes to the regional soundscape . Estimates of signal detection range will
Monitoring of reforestation on burnt areas in Western Russia using Landsat time series
NASA Astrophysics Data System (ADS)
Vorobev, Oleg; Kurbanov, Eldar
2017-04-01
Forest fires are main disturbance factor for the natural ecosystems, especially in boreal forests. Monitoring for the dynamic of forest cover regeneration in the post-fire period of ecosystem recovery is crucial to both estimation of forest stands and forest management. In this study, on the example of burnt areas of 2010 wildfires in Republic Mari El of Russian Federation we estimated post-fire dynamic of different classes of vegetation cover between 2011-2016 years with the use of time series Landsat satellite images. To validate the newly obtained thematic maps we used 80 test sites with independent field data, as well Canopus-B images of high spatial resolution. For the analysis of the satellite images we referred to Normalized Differenced Vegetation Index (NDVI) and Tasseled Cap transformation. The research revealed that at the post-fire period the area of thematic classes "Reforestation of the middle and low density" has maximum cover (44%) on the investigated burnt area. On the burnt areas of 2010 there is ongoing active process of grass overgrowing (up to 20%), also there are thematic classes of deadwood (15%) and open spaces (10%). The results indicate that there is mostly natural regeneration of tree species pattern corresponding to the pre-fire condition. Forest plantations cover only 2% of the overall burnt area. By the 2016 year the NDVI parameters of young vegetation cover were recovered to the pre-fire level as well. The overall unsupervised classification accuracy of more than 70% shows high degree of agreement between the thematic map and the ground truth data. The research results can be applied for the long term succession monitoring and management plan development for the reforestation activities on the lands disturbed by fire.
Milosevic, A; Burnside, G
2016-01-01
Survival of directly placed composite to restore worn teeth has been reported in studies with small sample sizes, short observation periods and different materials. This study aimed to estimate survival for a hybrid composite placed by one clinician up to 8-years follow-up. All patients were referred and recruited for a prospective observational cohort study. One composite was used: Spectrum(®) (DentsplyDeTrey). Most restorations were placed on the maxillary anterior teeth using a Dahl approach. A total of 1010 direct composites were placed in 164 patients. Mean follow-up time was 33.8 months (s.d. 27.7). 71 of 1010 restorations failed during follow-up. The estimated failure rate in the first year was 5.4% (95% CI 3.7-7.0%). Time to failure was significantly greater in older subjects (p=0.005) and when a lack of posterior support was present (p=0.003). Bruxism and an increase in the occlusal vertical dimension were not associated with failure. The proportion of failures was greater in patients with a Class 3 or edge-to-edge incisal relationship than in Class 1 and Class 2 cases but this was not statistically significant. More failures occurred in the lower arch (9.6%) compared to the upper arch (6%) with the largest number of composites having been placed on the maxillary incisors (n=519). The worn dentition presents a restorative challenge but composite is an appropriate restorative material. This study shows that posterior occlusal support is necessary to optimise survival. Copyright © 2015 Elsevier Ltd. All rights reserved.
An empirical method for estimating travel times for wet volcanic mass flows
Pierson, Thomas C.
1998-01-01
Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
Reliability and Agreement in Student Ratings of the Class Environment
ERIC Educational Resources Information Center
Nelson, Peter M.; Christ, Theodore J.
2016-01-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were…
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1985-01-01
An high performance liquid chromatography (HPLC) method to estimate four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentration of the two major aromatic classes were not over 10 percent. Absolute errors of the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction which can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Michael; Schellenberg, Josh; Blundell, Marshall
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can bemore » generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.« less
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large aerial extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period between 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5S and 20S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie
2018-05-18
Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for such purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov Chain Monte Carlo EM (mcm-cEM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Finite-time synchronization for memristor-based neural networks with time-varying delays.
Abdurahman, Abdujelil; Jiang, Haijun; Teng, Zhidong
2015-09-01
Memristive network exhibits state-dependent switching behaviors due to the physical properties of memristor, which is an ideal tool to mimic the functionalities of the human brain. In this paper, finite-time synchronization is considered for a class of memristor-based neural networks with time-varying delays. Based on the theory of differential equations with discontinuous right-hand side, several new sufficient conditions ensuring the finite-time synchronization of memristor-based chaotic neural networks are obtained by using analysis technique, finite time stability theorem and adding a suitable feedback controller. Besides, the upper bounds of the settling time of synchronization are estimated. Finally, a numerical example is given to show the effectiveness and feasibility of the obtained results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
ERIC Educational Resources Information Center
Dobkin, Carlos; Gil, Ricard; Marion, Justin
2010-01-01
In this paper we estimate the effect of class attendance on exam performance by implementing a policy in three large economics classes that required students scoring below the median on the midterm exam to attend class. This policy generated a large discontinuity in the rate of post-midterm attendance at the median of the midterm score. We…
ERIC Educational Resources Information Center
van der Linden, Wim J.
Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…
Length and sequence heterogeneity in 5S rDNA of Populus deltoides.
Negi, Madan S; Rajagopal, Jyothi; Chauhan, Neeti; Cronn, Richard; Lakshmikumaran, Malathi
2002-12-01
The 5S rRNA genes and their associated non-transcribed spacer (NTS) regions are present as repeat units arranged in tandem arrays in plant genomes. Length heterogeneity in 5S rDNA repeats was previously identified in Populus deltoides and was also observed in the present study. Primers were designed to amplify the 5S rDNA NTS variants from the P. deltoides genome. The PCR-amplified products from the two accessions of P. deltoides (G3 and G48) suggested the presence of length heterogeneity of 5S rDNA units within and among accessions, and the size of the spacers ranged from 385 to 434 bp. Sequence analysis of the non-transcribed spacer (NTS) revealed two distinct classes of 5S rDNA within both accessions: class 1, which contained GAA trinucleotide microsatellite repeats, and class 2, which lacked the repeats. The class 1 spacer shows length variation owing to the microsatellite, with two clones exhibiting 10 GAA repeat units and one clone exhibiting 16 such repeat units. However, distance analysis shows that class 1 spacer sequences are highly similar inter se, yielding nucleotide diversity (pi) estimates that are less than 0.15% of those obtained for class 2 spacers (pi = 0.0183 vs. 0.1433, respectively). The presence of microsatellite in the NTS region leading to variation in spacer length is reported and discussed for the first time in P. deltoides.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Estimating live fuels for shrubs and herbs with BIOPAK.
Joseph E. Means; Olga N Krankina; Hao Jiang; Hongyan Li
1996-01-01
This paper describes use of BIOPAK to calculate size classes of live fuels for shrubs and herbs. A library of equations to estimate such fuels in the Pacific Northwest and northern Rocky Mountains is presented and used in an example. These methods can be used in other regions if the user first enters fuel size-class equations for a given region into a new library by...
Improving Estimates of Acceptable Growiing Stock in Young Upland Oak Forests in the Missouri Ozarks
Daniel C. Dey; Paul S. Johnson; H.E. Garrett
1998-01-01
Estimates of regeneration or growing stock in young oak forests may be too high unless criteria are established that define explicitly acceptable growing stock. In young hardwood stands, crown class can be used to identify acceptable growing stock because it is related to the future growth and survival of reproduction. A method is presented for assigning crown class...
Fosgate, G T; Motimele, B; Ganswindt, A; Irons, P C
2017-09-15
Accurate diagnosis of pregnancy is an essential component of an effective reproductive management plan for dairy cattle. Indirect methods of pregnancy detection can be performed soon after breeding and offer an advantage over traditional direct methods in not requiring an experienced veterinarian and having potential for automation. The objective of this study was to estimate the sensitivity and specificity of pregnancy-associated glycoprotein (PAG) detection ELISA and transrectal ultrasound (TRUS) in dairy cows of South Africa using a Bayesian latent class approach. Commercial dairy cattle from the five important dairy regions in South Africa were enrolled in a short-term prospective cohort study. Cattle were examined at 28-35days after artificial insemination (AI) and then followed up 14days later. At both sampling times, TRUS was performed to detect pregnancy and commercially available PAG detection ELISAs were performed on collected serum and milk. A total of 1236 cows were sampled and 1006 had complete test information for use in the Bayesian latent class model. The estimated sensitivity (95% probability interval) and specificity for PAG detection serum ELISA were 99.4% (98.5, 99.9) and 97.4% (94.7, 99.2), respectively. The estimated sensitivity and specificity for PAG detection milk ELISA were 99.2% (98.2, 99.8) and 93.4% (89.7, 96.1), respectively. Sensitivity of veterinarian performed TRUS at 28-35days post-AI varied between 77.8% and 90.5% and specificity varied between 94.7% and 99.8%. In summary, indirect detection of pregnancy using PAG ELISA is an accurate method for use in dairy cattle. The method is descriptively more sensitive than veterinarian-performed TRUS and therefore could be an economically viable addition to a reproductive management plan. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-03-01
The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks, that take time-series inputs, for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched-filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enables the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real-time.
Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy
2018-05-01
The genetic effective population size, N e , can be estimated from the average gametic disequilibrium (r2^) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of matlab computer code functions to estimate Ne^ from r2^ with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of Ne^ estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on Ne^, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact Ne^. Estimators differed greatly in precision in the scenarios examined, and a widely employed Ne^ estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of Ne^ from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare Ne^ estimators. © 2018 John Wiley & Sons Ltd.
On function classes related pertaining to strong approximation of double Fourier series
NASA Astrophysics Data System (ADS)
Baituyakova, Zhuldyz
2015-09-01
The investigation of embedding of function classes began a long time ago. After Alexits [1], Leindler [2], and Gogoladze[3] investigated estimates of strong approximation by Fourier series in 1965, G. Freud[4] raised the corresponding saturation problem in 1969. The list of the authors dealing with embedding problems partly is also very long. It suffices to mention some names: V. G. Krotov, W. Lenski, S. M. Mazhar, J. Nemeth, E. M. Nikisin, K. I. Oskolkov, G. Sunouchi, J. Szabados, R. Taberski and V. Totik. Study on this topic has since been carried on over a decade, but it seems that most of the results obtained are limited to the case of one dimension. In this paper, embedding results are considered which arise in the strong approximation by double Fourier series. We prove theorem on the interrelation between the classes Wr1,r2HS,M ω and H(λ, p, r1, r2, ω(δ1, δ2)), in the one-dimensional case proved by L. Leindler.
Class III dento-skeletal anomalies: rotational growth and treatment timing.
Mosca, G; Grippaudo, C; Marchionni, P; Deli, R
2006-03-01
The interception of a Class III malocclusion requires a long-term growth prediction in order to estimate the subject's evolution from the prepubertal phase to adulthood. The aim of this retrospective longitudinal study was to highlight the differences in facial morphology in relation to the direction of mandibular growth in a sample of subjects with Class III skeletal anomalies divided on the basis of their Petrovic's auxological categories and rotational types. The study involved 20 patients (11 females and 9 males) who started therapy before reaching their pubertal peak and were followed up for a mean of 4.3 years (range: 3.9-5.5 years). Despite the small sample size, the definition of the rotational type of growth was the main diagnostic element for setting the correct individualised therapy. We therefore believe that the observation of a larger sample would reinforce the diagnostic-therapeutic validity of Petrovic's auxological categories, allow an evaluation off all rotational types, and improve the statistical significance of the results obtained.
Green, Kerry M.; Musci, Rashelle J.; Johnson, Renee M.; Matson, Pamela A.; Reboussin, Beth A.; Ialongo, Nicholas S.
2015-01-01
Objective This study identifies and compares outcomes in young adulthood associated with longitudinal patterns of alcohol and marijuana use during adolescence among urban youth. Method Data come from a cohort of 678 urban, predominantly Black children followed from ages 6–25 (1993–2012). Analyses are based on the 608 children who participated over time (53.6% male). Longitudinal patterning of alcohol and marijuana use were based on annual frequency reports from grades 8–12 and estimated through latent profile analysis. Results We identified four classes of alcohol and marijuana use including Non-Use (47%), Moderate Alcohol Use (28%), Moderate Alcohol/Increasing Marijuana Use (12%) and High Dual Use (13%). A marijuana only class was not identified. Analyses show negative outcomes in adulthood associated with all three adolescent substance use classes. Compared to the non-use class, all use classes had statistically significantly higher rates of substance dependence. Those in the ‘High Dual Use’ class had the lowest rate of high school graduation. Comparing classes with similar alcohol but different marijuana patterns, the ‘Moderate Alcohol/Increasing Marijuana Use’ class had a statistically significant increased risk of having a criminal justice record and developing substance use dependence in adulthood. Conclusion Among urban youth, heterogeneous patterns of alcohol and marijuana use across adolescence are evident, and these patterns are associated with distinct outcomes in adulthood. These findings suggest a need for targeted education and intervention efforts to address the needs of youth using both marijuana and alcohol, as well as the importance of universal early preventive intervention efforts. PMID:26517712
Rein, David B
2005-01-01
Objective To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
Delineating Area of Review in a System with Pre-injection Relative Overpressure
Oldenburg, Curtis M.; Cihan, Abdullah; Zhou, Quanlin; ...
2014-12-31
The Class VI permit application for geologic carbon sequestration (GCS) requires delineation of an area of review (AoR), defined as the region surrounding the (GCS) project where underground sources of drinking water (USDWs) may be endangered. The methods for estimating AoR under the Class VI regulation were developed assuming that GCS reservoirs would be in hydrostatic equilibrium with overlying aquifers. Here we develop and apply an approach to estimating AoR for sites with preinjection relative overpressure for which standard AoR estimation methods produces an infinite AoR. The approach we take is to compare brine leakage through a hypothetical open flowmore » path in the base-case scenario (no-injection) to the incrementally larger leakage that would occur in the CO 2-injection case. To estimate AoR by this method, we used semi-analytical solutions to single-phase flow equations to model reservoir pressurization and flow up (single) leaky wells located at progressively greater distances from the injection well. We found that the incrementally larger flow rates for hypothetical leaky wells located 6 km and 4 km from the injection well are ~20% and 30% greater, respectively, than hypothetical baseline leakage rates. If total brine leakage is considered, the results depend strongly on how the incremental increase in total leakage is calculated, varying from a few percent to up to 40% greater (at most at early time) than base-case total leakage.« less
NASA Astrophysics Data System (ADS)
Wild, Walter James
1988-12-01
External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe is the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) is discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D
2017-12-13
Effective cigarette smoking prevention and intervention programming is enhanced by accurate understanding of developmental smoking pathways across the life span. This study investigated within-person patterns of cigarette smoking from ages 18 to 50 among a US national sample of high school graduates, focusing on identifying ages of particular importance for smoking involvement change. Using data from approximately 15,000 individuals participating in the longitudinal Monitoring the Future study, trichotomous measures of past 30-day smoking obtained at 11 time points were modeled using repeated-measures latent class analyses. Sex differences in latent class structure and membership were examined. Twelve latent classes were identified: three characterized by consistent smoking patterns across age (no smoking; smoking < pack per day; smoking pack + per day); three showing uptake to a higher category of smoking across age; four reflecting successful quit behavior by age 50; and two defined by discontinuous shifts between smoking categories. The same latent class structure was found for both males and females, but membership probabilities differed between sexes. Although evidence of increases or decreases in smoking behavior was observed at virtually all ages through 35, 21/22 and 29/30 appeared to be particularly key for smoking category change within class. This examination of latent classes of cigarette smoking among a national US longitudinal sample of high school graduates from ages 18 to 50 identified unique patterns and critical ages of susceptibility to change in smoking category within class. Such information may be of particular use in developing effective smoking prevention and intervention programming. This study examined cigarette smoking among a national longitudinal US sample of high school graduates from ages 18 to 50 and identified distinct latent classes characterized by patterns of movement between no cigarette use, light-to-moderate smoking, and the conventional definition of heavy smoking at 11 time points via repeated-measures latent class analysis. Membership probabilities for each smoking class were estimated, and critical ages of susceptibility to change in smoking behaviors were identified. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.
2018-06-01
We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results showflo that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9%, and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.
Evans, R W; Orians, C E; Ascher, N L
1992-01-08
To estimate the potential supply of organ donors and to measure the efficiency of organ procurement efforts in the United States. A geographic database has been developed consisting of multiple cause of death and sociodemographic data compiled by the National Center for Health Statistics. All deaths are evaluated as to their potential for organ donation. Two classes of potential donors are identified: class 1 estimates are restricted to causes of death involving significant head trauma only, and class 2 estimates include class 1 estimates as well as deaths in which brain death was less probable. Over 23,000 people are currently awaiting a kidney, heart, liver, heart-lung, pancreas, or lung transplantation. Donor supply is inadequate, and the number of donors remained unchanged at approximately 4000 annually for 1986 through 1989, with a modest 9.1% increase in 1990. Between 6900 and 10,700 potential donors are available annually (eg, 28.5 to 43.7 per million population). Depending on the class of donor considered, organ procurement efforts are between 37% and 59% efficient. Efficiency greatly varies by state and organ procurement organization. Many more organ donors are available than are being accessed through existing organ procurement efforts. Realistically, it may be possible to increase by 80% the number of donors available in the United States (up to 7300 annually). It is conceivable, although unlikely, that the supply of donor organs could achieve a level to meet demand.
Satellite inventory of Minnesota forest resources
NASA Technical Reports Server (NTRS)
Bauer, Marvin E.; Burk, Thomas E.; Ek, Alan R.; Coppin, Pol R.; Lime, Stephen D.; Walsh, Terese A.; Walters, David K.; Befort, William; Heinzen, David F.
1993-01-01
The methods and results of using Landsat Thematic Mapper (TM) data to classify and estimate the acreage of forest covertypes in northeastern Minnesota are described. Portions of six TM scenes covering five counties with a total area of 14,679 square miles were classified into six forest and five nonforest classes. The approach involved the integration of cluster sampling, image processing, and estimation. Using cluster sampling, 343 plots, each 88 acres in size, were photo interpreted and field mapped as a source of reference data for classifier training and calibration of the TM data classifications. Classification accuracies of up to 75 percent were achieved; most misclassification was between similar or related classes. An inverse method of calibration, based on the error rates obtained from the classifications of the cluster plots, was used to adjust the classification class proportions for classification errors. The resulting area estimates for total forest land in the five-county area were within 3 percent of the estimate made independently by the USDA Forest Service. Area estimates for conifer and hardwood forest types were within 0.8 and 6.0 percent respectively, of the Forest Service estimates. A trial of a second method of estimating the same classes as the Forest Service resulted in standard errors of 0.002 to 0.015. A study of the use of multidate TM data for change detection showed that forest canopy depletion, canopy increment, and no change could be identified with greater than 90 percent accuracy. The project results have been the basis for the Minnesota Department of Natural Resources and the Forest Service to define and begin to implement an annual system of forest inventory which utilizes Landsat TM data to detect changes in forest cover.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
The Non-Cognitive Returns to Class Size
ERIC Educational Resources Information Center
Dee, Thomas S.; West, Martin R.
2011-01-01
The authors use nationally representative survey data and a research design that relies on contemporaneous within-student and within-teacher comparisons across two academic subjects to estimate how class size affects certain non-cognitive skills in middle school. Their results indicate that smaller eighth-grade classes are associated with…
Assessing dry weather flow contribution in TSS and COD storm events loads in combined sewer systems.
Métadier, M; Bertrand-Krajewski, J L
2011-01-01
Continuous high resolution long term turbidity measurements along with continuous discharge measurements are now recognised as an appropriate technique for the estimation of in sewer total suspended solids (TSS) and Chemical Oxygen Demand (COD) loads during storm events. In the combined system of the Ecully urban catchment (Lyon, France), this technique is implemented since 2003, with more than 200 storm events monitored. This paper presents a method for the estimation of the dry weather (DW) contribution to measured total TSS and COD event loads with special attention devoted to uncertainties assessment. The method accounts for the dynamics of both discharge and turbidity time series at two minutes time step. The study is based on 180 DW days monitored in 2007-2008. Three distinct classes of DW days were evidenced. Variability analysis and quantification showed that no seasonal effect and no trend over the year were detectable. The law of propagation of uncertainties is applicable for uncertainties estimation. The method has then been applied to all measured storm events. This study confirms the interest of long term continuous discharge and turbidity time series in sewer systems, especially in the perspective of wet weather quality modelling.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
Weyl calculus in QED I. The unitary group
NASA Astrophysics Data System (ADS)
Amour, L.; Lascar, R.; Nourrigat, J.
2017-01-01
In this work, we consider fixed 1/2 spin particles interacting with the quantized radiation field in the context of quantum electrodynamics. We investigate the time evolution operator in studying the reduced propagator (interaction picture). We first prove that this propagator belongs to the class of infinite dimensional Weyl pseudodifferential operators recently introduced in Amour et al. [J. Funct. Anal. 269(9), 2747-2812 (2015)] on Wiener spaces. We give a semiclassical expansion of the symbol of the reduced propagator up to any order with estimates on the remainder terms. Next, taking into account analyticity properties for the Weyl symbol of the reduced propagator, we derive estimates concerning transition probabilities between coherent states.
A time delay controller for magnetic bearings
NASA Technical Reports Server (NTRS)
Youcef-Toumi, K.; Reddy, S.
1991-01-01
The control of systems with unknown dynamics and unpredictable disturbances has raised some challenging problems. This is particularly important when high system performance needs to be guaranteed at all times. Recently, the Time Delay Control has been suggested as an alternative control scheme. The proposed control system does not require an explicit plant model nor does it depend on the estimation of specific plant parameters. Rather, it combines adaptation with past observations to directly estimate the effect of the plant dynamics. A control law is formulated for a class of dynamic systems and a sufficient condition is presented for control systems stability. The derivation is based on the bounded input-bounded output stability approach using L sub infinity function norms. The control scheme is implemented on a five degrees of freedom high speed and high precision magnetic bearing. The control performance is evaluated using step responses, frequency responses, and disturbance rejection properties. The experimental data show an excellent control performance despite the system complexity.
Effects of regulation on drug launch and pricing in interdependent markets.
Danzon, Patricia M; Epstein, Andrew J
2012-01-01
This study examines the effect of price regulation and competition on launch timing and pricing of new drugs. Our data cover launch experience in 15 countries from 1992 to 2003 for drugs in 12 major therapeutic classes. We estimate a two-equation model of launch hazard and launch price of new drugs. We find that launch timing and prices of new drugs are related to a country's average prices of established products in a class. Thus to the extent that price regulation reduces price levels, such regulation directly contributes to launch delay in the regulating country. Regulation by external referencing, whereby high-price countries reference low-price countries, also has indirect or spillover effects, contributing to launch delay and higher launch prices in low-price referenced countries. Referencing policies adopted in high-price countries indirectly impose welfare loss on low-price countries. These findings have implications for US proposals to constrain pharmaceutical prices through external referencing and drug importation.
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in the brain-computer interface systems. Due to inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, a multiclass posterior probability solution for twin SVM is proposed by the ranking continuous output and pairwise coupling in this paper. First, two-class posterior probability model is constructed to approximate the posterior probability by the ranking continuous output techniques and Platt's estimating method. Secondly, a solution of multiclass probabilistic outputs for twin SVM is provided by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and multiclass posterior probability SVM using different coupling approaches. The efficacy on the classification accuracy and time complexity of the proposed method has been demonstrated by both the UCI benchmark datasets and real world EEG data from BCI Competition IV Dataset 2a, respectively.
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly and ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme.
The dim light melatonin onset following fixed and free sleep schedules.
Burgess, Helen J; Eastman, Charmane I
2005-09-01
The time at which the dim light melatonin onset (DLMO) occurs can be used to ensure the correct timing of light and/or melatonin administration in order to produce desired circadian phase shifts. Sometimes however, measuring the DLMO is not feasible. Here we determined if the DLMO was best estimated from fixed sleep times (based on habitual sleep times) or free (ad libitum) sleep times. Young healthy sleepers on fixed (n=60) or free (n=60) sleep schedules slept at home for 6 days. Sleep times were recorded with sleep logs verified with wrist actigraphy. Half-hourly saliva samples were then collected during a dim light phase assessment and were later assayed to determine the DLMO. We found that the DLMO was more highly correlated with sleep times in the free sleepers than in the fixed sleepers (DLMO versus wake time, r=0.70 and r=0.44, both P<0.05). The regression equation between wake time and the DLMO in the free sleepers predicted the DLMO in an independent sample of free sleepers (n=23) to within 1.5 h of the actual DLMO in 96% of cases. These results indicate that the DLMO can be readily estimated in people whose sleep times are minimally affected by work, class and family commitments. Further work is necessary to determine if the DLMO can be accurately estimated in people with greater work and family responsibilities that affect their sleep times, perhaps by using weekend wake times, and if this method will apply to the elderly and patients with circadian rhythm disorders.
The dim light melatonin onset following fixed and free sleep schedules
Burgess, Helen J.; Eastman, Charmane I.
2013-01-01
Summary The time at which the dim light melatonin onset (DLMO) occurs can be used to ensure the correct timing of light and/or melatonin administration in order to produce desired circadian phase shifts. Sometimes however, measuring the DLMO is not feasible. Here we determined if the DLMO was best estimated from fixed sleep times (based on habitual sleep times) or free (ad libitum) sleep times. Young healthy sleepers on fixed (n = 60) or free (n = 60) sleep schedules slept at home for 6 days. Sleep times were recorded with sleep logs verified with wrist actigraphy. Half-hourly saliva samples were then collected during a dim light phase assessment and were later assayed to determine the DLMO. We found that the DLMO was more highly correlated with sleep times in the free sleepers than in the fixed sleepers (DLMO versus wake time, r = 0.70 and r = 0.44, both P < 0.05). The regression equation between wake time and the DLMO in the free sleepers predicted the DLMO in an independent sample of free sleepers (n = 23) to within 1.5 h of the actual DLMO in 96% of cases. These results indicate that the DLMO can be readily estimated in people whose sleep times are minimally affected by work, class and family commitments. Further work is necessary to determine if the DLMO can be accurately estimated in people with greater work and family responsibilities that affect their sleep times, perhaps by using weekend wake times, and if this method will apply to the elderly and patients with circadian rhythm disorders. PMID:16120097
Real-time Mainshock Forecast by Statistical Discrimination of Foreshock Clusters
NASA Astrophysics Data System (ADS)
Nomura, S.; Ogata, Y.
2016-12-01
Foreshock discremination is one of the most effective ways for short-time forecast of large main shocks. Though many large earthquakes accompany their foreshocks, discreminating them from enormous small earthquakes is difficult and only probabilistic evaluation from their spatio-temporal features and magnitude evolution may be available. Logistic regression is the statistical learning method best suited to such binary pattern recognition problems where estimates of a-posteriori probability of class membership are required. Statistical learning methods can keep learning discreminating features from updating catalog and give probabilistic recognition of forecast in real time. We estimated a non-linear function of foreshock proportion by smooth spline bases and evaluate the possibility of foreshocks by the logit function. In this study, we classified foreshocks from earthquake catalog by the Japan Meteorological Agency by single-link clustering methods and learned spatial and temporal features of foreshocks by the probability density ratio estimation. We use the epicentral locations, time spans and difference in magnitudes for learning and forecasting. Magnitudes of main shocks are also predicted our method by incorporating b-values into our method. We discuss the spatial pattern of foreshocks from the classifier composed by our model. We also implement a back test to validate predictive performance of the model by this catalog.
Caria, Maria Paola; Faggiano, Fabrizio; Bellocco, Rino; Galanti, Maria Rosaria
2013-12-01
Partial implementation may explain modest effectiveness of many school-based preventive programmes against substance use. We studied whether specific characteristics of the class could predict the level of implementation of a curriculum delivered by class teachers in schools from some European countries. Secondary analysis of data from an evaluation trial. In seven European countries, 78 schools (173 classes) were randomly assigned to a 12-unit, interactive, standardized programme based on the comprehensive social influence model. Curriculum completeness, application fidelity, average unit duration and use of role-play were monitored using structured report forms. Predictors of implementation were measured by aggregating at class level information from the baseline student survey. Class size, gender composition, mean age, factors related to substance use and to affection to school were analysed, with associations estimated by multilevel regression models. Implementation was not significantly predicted by mean age, proportion of students with positive academic expectation or liking school. Proportion of boys was associated with a shorter time devoted to each unit [β = -0.19, 95% confidence intervals (CI) -0.32 to -0.06]. Class size was inversely related to application fidelity [Odds ratio (OR) 0.92, 95% CI 0.85 to 0.99]. Prevalence of substance use was associated with a decreased odds of implementing all the curriculum units (OR 0.81, 95% CI 0.65 to 0.99). Students' connectedness to their class was associated with increased odds of teachers using role-play (OR 1.52, 95% CI 1.03 to 2.29). Teachers' implementation of preventive programmes may be affected by structural and social characteristics of classes and therefore benefit from organizational strategies and teachers' training in class management techniques.
Estimating the quadratic mean diameters of fine woody debris in forests of the United States
Christopher W. Woodall; Vicente J. Monleon
2010-01-01
Most fine woody debris (FWD) line-intersect sampling protocols and associated estimators require an approximation of the quadratic mean diameter (QMD) of each individual FWD size class. There is a lack of empirically derived QMDs by FWD size class and species/forest type across the U.S. The objective of this study is to evaluate a technique known as the graphical...
Robust passive control for a class of uncertain neutral systems based on sliding mode observer.
Liu, Zhen; Zhao, Lin; Kao, Yonggui; Gao, Cunchen
2017-01-01
The passivity-based sliding mode control (SMC) problem for a class of uncertain neutral systems with unmeasured states is investigated. Firstly, a particular non-fragile state observer is designed to generate the estimations of the system states, based upon which a novel integral-type sliding surface function is established for the control process. Secondly, a new sufficient condition for robust asymptotic stability and passivity of the resultant sliding mode dynamics (SMDs) is obtained in terms of linear matrix inequalities (LMIs). Thirdly, the finite-time reachability of the predesigned sliding surface is ensured by resorting to a novel adaptive SMC law. Finally, the validity and superiority of the scheme are justified via several examples. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
Infrasonic emissions from local meteorological events: A summary of data taken throughout 1984
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.
1986-01-01
Records of infrasonic signals, propagating through the Earth's atmosphere in the frequency band 2 to 16 Hz, were gathered on a three microphone array at Langley Research Center throughout the year 1984. Digital processing of these records fulfilled three functions: time delay estimation, based on an adaptive filter; source location, determined from the time delay estimates; and source identification, based on spectral analysis. Meteorological support was provided by significant meteorological advisories, lightning locator plots, and daily reports from the Air Weather Service. The infrasonic data are organized into four characteristic signatures, one of which is believed to contain emissions from local meteorological sources. This class of signature prevailed only on those days when major global meteorological events appeared in or near to eastern United States. Eleven case histories are examined. Practical application of the infrasonic array in a low level wing shear alert system is discussed.
Global, finite energy, weak solutions for the NLS with rough, time-dependent magnetic potentials
NASA Astrophysics Data System (ADS)
Antonelli, Paolo; Michelangeli, Alessandro; Scandone, Raffaele
2018-04-01
We prove the existence of weak solutions in the space of energy for a class of nonlinear Schrödinger equations in the presence of a external, rough, time-dependent magnetic potential. Under our assumptions, it is not possible to study the problem by means of usual arguments like resolvent techniques or Fourier integral operators, for example. We use a parabolic regularisation, and we solve the approximating Cauchy problem. This is achieved by obtaining suitable smoothing estimates for the dissipative evolution. The total mass and energy bounds allow to extend the solution globally in time. We then infer sufficient compactness properties in order to produce a global-in-time finite energy weak solution to our original problem.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Song, Qiankun
2006-07-01
In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using the analysis method, inequality technique and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Li, Kelin
2010-02-01
In this article, a class of impulsive bidirectional associative memory (BAM) fuzzy cellular neural networks (FCNNs) with time-varying delays is formulated and investigated. By employing delay differential inequality and M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of equilibrium point for impulsive BAM FCNNs with time-varying delays are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on system parameters and impulsive perturbation intention. It is believed that these results are significant and useful for the design and applications of BAM FCNNs. An example is given to show the effectiveness of the results obtained here.
Non-Linear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a non-linear aeroelastic pitch-plunge system as a model of the NARMAX class is considered. A non-linear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (i) the outputs of the NARMAX model match closely those generated using continuous-time methods and (ii) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
A global map of rainfed cropland areas (GMRCA) at the end of last millennium using remote sensing
Biradar, C.M.; Thenkabail, P.S.; Noojipady, P.; Li, Y.; Dheeravath, V.; Turral, H.; Velpuri, M.; Gumma, M.K.; Gangalakunta, O.R.P.; Cai, X.L.; Xiao, X.; Schull, M.A.; Alankara, R.D.; Gunasinghe, S.; Mohideen, S.
2009-01-01
The overarching goal of this study was to produce a global map of rainfed cropland areas (GMRCA) and calculate country-by-country rainfed area statistics using remote sensing data. A suite of spatial datasets, methods and protocols for mapping GMRCA were described. These consist of: (a) data fusion and composition of multi-resolution time-series mega-file data-cube (MFDC), (b) image segmentation based on precipitation, temperature, and elevation zones, (c) spectral correlation similarity (SCS), (d) protocols for class identification and labeling through uses of SCS R2-values, bi-spectral plots, space-time spiral curves (ST-SCs), rich source of field-plot data, and zoom-in-views of Google Earth (GE), and (e) techniques for resolving mixed classes by decision tree algorithms, and spatial modeling. The outcome was a 9-class GMRCA from which country-by-country rainfed area statistics were computed for the end of the last millennium. The global rainfed cropland area estimate from the GMRCA 9-class map was 1.13 billion hectares (Bha). The total global cropland areas (rainfed plus irrigated) was 1.53 Bha which was close to national statistics compiled by FAOSTAT (1.51 Bha). The accuracies and errors of GMRCA were assessed using field-plot and Google Earth data points. The accuracy varied between 92 and 98% with kappa value of about 0.76, errors of omission of 2-8%, and the errors of commission of 19-36%. ?? 2008 Elsevier B.V.
NASA Technical Reports Server (NTRS)
Chelette, T. L.; Repperger, Daniel W.; Albery, W. B.
1991-01-01
An effort was initiated at the Armstrong Aerospace Medical Research Laboratory (AAMRL) to investigate the improvement of the situational awareness of a pilot with respect to his aircraft's spatial orientation. The end product of this study is a device to alert a pilot to potentially disorienting situations. Much like a ground collision avoidance system (GCAS) is used in fighter aircraft to alert the pilot to 'pull up' when dangerous flight paths are predicted, this device warns the pilot to put a higher priority on attention to the orientation instrument. A Kalman filter was developed which estimates the pilot's perceived position and orientation. The input to the Kalman filter consists of two classes of data. The first class of data consists of noise parameters (indicating parameter uncertainty), conflict signals (e.g. vestibular and kinesthetic signal disagreement), and some nonlinear effects. The Kalman filter's perceived estimates are now the sum of both Class 1 data (good information) and Class 2 data (distorted information). When the estimated perceived position or orientation is significantly different from the actual position or orientation, the pilot is alerted.
Li, Yunji; Wu, QingE; Peng, Li
2018-01-23
In this paper, a synthesized design of fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of the randomly occurring deception attacks. To achieve a fault-detection residual is only sensitive to faults while robust to disturbances, a coordinate transformation approach is exploited. This approach can transform the considered system into two subsystems and the unknown disturbances are removed from one of the subsystems. The gain of fault-detection filter is derived by minimizing an upper bound of filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. An recursive approach is developed to obtain fault estimator gains as well as guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving working-life of the wireless sensor node when measurement information are aperiodically transmitted. Finally, a scaled version of an industrial system consisting of local PC, remote estimator and wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault-detection is guaranteed when the event condition is triggered.
Osnas, E.E.; Heisey, D.M.; Rolley, R.E.; Samuel, M.D.
2009-01-01
Emerging infectious diseases threaten wildlife populations and human health. Understanding the spatial distributions of these new diseases is important for disease management and policy makers; however, the data are complicated by heterogeneities across host classes, sampling variance, sampling biases, and the space-time epidemic process. Ignoring these issues can lead to false conclusions or obscure important patterns in the data, such as spatial variation in disease prevalence. Here, we applied hierarchical Bayesian disease mapping methods to account for risk factors and to estimate spatial and temporal patterns of infection by chronic wasting disease (CWD) in white-tailed deer (Odocoileus virginianus) of Wisconsin, USA. We found significant heterogeneities for infection due to age, sex, and spatial location. Infection probability increased with age for all young deer, increased with age faster for young males, and then declined for some older animals, as expected from disease-associated mortality and age-related changes in infection risk. We found that disease prevalence was clustered in a central location, as expected under a simple spatial epidemic process where disease prevalence should increase with time and expand spatially. However, we could not detect any consistent temporal or spatiotemporal trends in CWD prevalence. Estimates of the temporal trend indicated that prevalence may have decreased or increased with nearly equal posterior probability, and the model without temporal or spatiotemporal effects was nearly equivalent to models with these effects based on deviance information criteria. For maximum interpretability of the role of location as a disease risk factor, we used the technique of direct standardization for prevalence mapping, which we develop and describe. These mapping results allow disease management actions to be employed with reference to the estimated spatial distribution of the disease and to those host classes most at risk. Future wildlife epidemiology studies should employ hierarchical Bayesian methods to smooth estimated quantities across space and time, account for heterogeneities, and then report disease rates based on an appropriate standardization. ?? 2009 by the Ecological Society of America.
The vasovagal tonus index as a prognostic indicator in dogs with dilated cardiomyopathy.
Pereira, Y Martinez; Woolley, R; Culshaw, G; French, A; Martin, M
2008-11-01
To investigate the prognostic and diagnostic value of heart rate variability (HRV) using the vasovagal tonus index (VVTI) in dogs suffering from idiopathic dilated cardiomyopathy (DCM). Electrocardiographic (ECG) recordings of 369 patients presented to a referral centre between 1993 and 2006 were reviewed. VVTI values were calculated from 132 dogs. Lower VVTI values were found in patients in International Small Animal Cardiac Health Council (ISACHC) heart failure (HF) class 2 and 3 compared with class 1. VVTI was found to be positively correlated with survival time (ST) in class 2 and 3 patients. When a cut-off value of 7.59 for VVTI was used, the test could differentiate patients in ISACHC HF class 1 versus 2 and 3 with a sensitivity of 89 per cent and a specificity of 62.5 per cent. The ST for patients with VVTI values less than 7.59 was significantly lower. The VVTI is a useful index, obtained from a standard ECG recording that estimates HRV in dogs and does not require any specific equipment for its calculation. It can be useful as a diagnostic tool to assess the severity of HF and is a useful prognostic tool in dogs with DCM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered include the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed including geochemical datamore » from the National Uranium Resource Evaluation Program.« less
NASA Astrophysics Data System (ADS)
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods such as the Moderate Resolution Imaging Spectroradiometer (MODIS), is an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic to it, which is used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time-series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF), due to linearization of the harmonic model. As these parameters are representative of a given land cover, its applicability in near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by considering change as a rare class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
Heritable and non-heritable genetic effects on retained placenta in Meuse-Rhine-Yssel cattle.
Benedictus, L; Koets, A P; Kuijpers, F H J; Joosten, I; van Eldik, P; Heuven, H C M
2013-02-01
Failure of the timely expulsion of the fetal membranes, called retained placenta, leads to reduced fertility, increased veterinary costs and reduced milk yields. The objectives of this study were to concurrently look at the heritable and non-heritable genetic effects on retained placenta and test the hypothesis that a greater coefficient of relationship between dam and calf increases the risk of retained placenta in the dam. The average incidence of retained placenta in 43,661 calvings of Meuse-Rhine-Yssel cattle was 4.5%, ranging from 0% to 29.6% among half-sib groups. The average pedigree based relationship between the sire and the maternal grandsire was 0.05 and ranged from 0 to 1.04. Using a sire-maternal grandsire model the heritability was estimated at 0.22 (SEM=0.07) which is comparable with estimates for other dual purpose breeds. The coefficient of relationship between the sire and the maternal grandsire had an effect on retained placenta. The coefficient of relationship between the sire and the maternal grandsire was used as a proxy for the coefficient of relationship between dam and calf, which is correlated with the probability of major histocompatibility complex (MHC) class I compatibility between dam and calf. MHC class I compatibility is an important risk factor for retained placenta. Although the MHC class I haplotype is genetically determined, MHC class I compatibility is not heritable. This study shows that selection against retained placenta is possible and indicates that preventing the mating of related parents may play a role in the prevention of retained placenta. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Gutmann, Ethan D.; Small, Eric E.
2007-01-01
Soil hydraulic properties (SHPs) regulate the movement of water in the soil. This in turn plays an important role in the water and energy cycles at the land surface. At present, SHPS are commonly defined by a simple pedotransfer function from soil texture class, but SHPs vary more within a texture class than between classes. To examine the impact of using soil texture class to predict SHPS, we run the Noah land surface model for a wide variety of measured SHPs. We find that across a range of vegetation cover (5 - 80% cover) and climates (250 - 900 mm mean annual precipitation), soil texture class only explains 5% of the variance expected from the real distribution of SHPs. We then show that modifying SHPs can drastically improve model performance. We compare two methods of estimating SHPs: (1) inverse method, and (2) soil texture class. Compared to texture class, inverse modeling reduces errors between measured and modeled latent heat flux from 88 to 28 w/m(exp 2). Additionally we find that with increasing vegetation cover the importance of SHPs decreases and that the van Genuchten m parameter becomes less important, while the saturated conductivity becomes more important.
Estimation of diagnostic test accuracy without full verification: a review of latent class methods
Collins, John; Huynh, Minh
2014-01-01
The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
Thoughts of Death and Suicide in Early Adolescence
ERIC Educational Resources Information Center
Vander Stoep, Ann; McCauley, Elizabeth; Flynn, Cynthia; Stone, Andrea
2009-01-01
The prevalence and persistence of thoughts of death and suicide during early adolescence were estimated in a community-based cohort. A latent class approach was used to identify distinct subgroups based on endorsements to depression items administered repeatedly over 24 months. Two classes emerged, with 75% in a low ideation class across four…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Toth, Damon J. A.; Leecaster, Molly; Pettey, Warren B. P.; Gundlapalli, Adi V.; Gao, Hongjiang; Rainey, Jeanette J.; Uzicanin, Amra; Samore, Matthew H.
2015-01-01
Influenza poses a significant health threat to children, and schools may play a critical role in community outbreaks. Mathematical outbreak models require assumptions about contact rates and patterns among students, but the level of temporal granularity required to produce reliable results is unclear. We collected objective contact data from students aged 5–14 at an elementary school and middle school in the state of Utah, USA, and paired those data with a novel, data-based model of influenza transmission in schools. Our simulations produced within-school transmission averages consistent with published estimates. We compared simulated outbreaks over the full resolution dynamic network with simulations on networks with averaged representations of contact timing and duration. For both schools, averaging the timing of contacts over one or two school days caused average outbreak sizes to increase by 1–8%. Averaging both contact timing and pairwise contact durations caused average outbreak sizes to increase by 10% at the middle school and 72% at the elementary school. Averaging contact durations separately across within-class and between-class contacts reduced the increase for the elementary school to 5%. Thus, the effect of ignoring details about contact timing and duration in school contact networks on outbreak size modelling can vary across different schools. PMID:26063821
Vannoni, Francesca; Mamo, C; Demaria, M; Ceccarelli, C; Costa, G
2005-01-01
Knowledge on the occupational and social factors that influence the relationship between illness, absence from work and occupational mobility is at present insufficient. To map out, by social class and occupational group, the impact of health problems on work and the distribution of accidents and morbidity associated with occupation. Using data from the National Survey of the Italian Labour Force (ISTAT, 1999), covering a sample of 200,384 subjects, prevalence odds ratios of morbidity, work injuries and change of occupation due to health problems were calculated by social class and occupation, adjusting for age and residence. The working class showed a higher risk, due to health problems, of a reduction in time worked (OR = 3.70 in men and OR = 4.10 in women), of choosing to work part-time (OR = 2.04 in men and OR = 2.27 in women), or of withdrawing from the workforce (for artisans, skilled manual workers, farmers and agricultural labourers OR = 1.63 in men and OR = 1.47 in women). This class was also at a greater disadvantage not only with respect to accident rates (OR = 1.85 in men and OR = 1.88 in women), but also with respect to the time needed for post-trauma rehabilitation and return to work (for absences of one week to one month: OR = 1.67 and 1.83 for men and women, respectively; for absences of more than one month: OR = 1.29 and OR = 1.69). Moreover, the working class, when compared to other social classes, had a higher rate of suffering from illness, physical impairment or other physical and psychological problems caused or aggravated by working activity (25% in men and 32% in women). The ISTAT National Survey provides an estimate of minor accidents with prognoses of less than three days, including those not reported to the National Institute for Insurance against Occupational Accidents and Diseases (INAIL). This allows a preliminary exploration of the relationship between health problems and occupational mobility; however, it seems necessary to collect more detailed information in order to more exhaustively explore the mechanisms which generate the inequalities observed.
[Dose and image quality in intraoral radiography].
Hjardemaal, O
1991-11-01
The technique factors when performing intraoral X-ray exposures must be selected in such a way that sufficient diagnostic information is obtained at a reasonable patient dose. The Danish National Institute of Radiation Hygiene has performed a study comprising 32 dental X-ray sets. The mean value of the skin dose for a maxillary molar was 9.6 mGy and the value for the dental colleges 7.0 mGy. For a mandibular incisor the corresponding doses were 7.7 mGy and 3.6 mGy. After the conclusion of the mentioned study it has been part of the institute's inspection procedure for dental X-ray sets to measure patient skin doses. 243 measurements were performed and the mean value of the entrance skin dose was 6.5 mGy and the dose interval was 0.7-57 mGy. All doses are normalised to speed class D film. At 16% of the inspected sets films of speed class E were used. The remainder used class D films. The spread in doses cannot be explained by variation in equipment parameters alone but is to a high degree due to a combination of inappropriate film processing and exposure time. Interviews with staff in dental clinics confirm that films are frequently processed until the desired density is obtained by visual estimation. It is shown that the skin dose when using film of speed class D can be kept below 7 mGy for a mandibular incisor. Concluding is stated that film processing shall be performed in accordance with specifications from the manufacturer of the developer. Film of speed class E must be used. Finally must the exposure time be graduated according to the object exposed.
Does movement behaviour predict population densities? A test with 25 butterfly species.
Schultz, Cheryl B; Pe'er, B Guy; Damiani, Christine; Brown, Leone; Crone, Elizabeth E
2017-03-01
Diffusion, which approximates a correlated random walk, has been used by ecologists to describe movement, and forms the basis for many theoretical models. However, it is often criticized as too simple a model to describe animal movement in real populations. We test a key prediction of diffusion models, namely, that animals should be more abundant in land cover classes through which they move more slowly. This relationship between density and diffusion has rarely been tested across multiple species within a given landscape. We estimated diffusion rates and corresponding densities of 25 Israeli butterfly species from flight path data and visual surveys. The data were collected across 19 sites in heterogeneous landscapes with four land cover classes: semi-natural habitat, olive groves, wheat fields and field margins. As expected from theory, species tended to have higher densities in land cover classes through which they moved more slowly and lower densities in land cover classes through which they moved more quickly. Two components of movement (move length and turning angle) were not associated with density, nor was expected net squared displacement. Move time, however, was associated with density, and animals spent more time per move step in areas with higher density. The broad association we document between movement behaviour and density suggests that diffusion is a good first approximation of movement in butterflies. Moreover, our analyses demonstrate that dispersal is not a species-invariant trait, but rather one that depends on landscape context. Thus, land cover classes with high diffusion rates are likely to have low densities and be effective conduits for movement. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
NASA Astrophysics Data System (ADS)
Loukili, Y.; Woodbury, A. D.; Snelgrove, K. R.
2006-12-01
The Canadian Land Surface Scheme (CLASS) is a numerical model developed at the Canadian Atmospheric Environment Service by Verseghy et al. [1991, 1993, 2000] and used to evaluate the vertical transfer of energy and water between the land surface and three soil layers. Among the features of CLASS its treatment of the land surface as a composite of four primary subareas: canopy and snow covered ground, snow-covered ground, canopy covered soil, and bare soil. The vegetation properties are also related via weighted averages to four types: needleaf trees, broadleaf trees, grass and crops. The incorporation of meteorological data as forcing inputs drives the model through advanced formulae describing the earth surface physics. These include canopy radiation and evapotranspiration, sensible and latent heat fluxes, rainfall interception, infiltration and ponding, snow melt and soil freezing. Such treatment allows for a realistic estimation of the surface energy balance. In this work, a major revision of CLASS, called AccuCLASS, is introduced, which permits a user specified depth and as many soil layers as needed. Almost all the physically based calculations of heat and moisture transfer in CLASS are kept and adequately extended to fit the desired refined mesh. In the resolution of soil temperature and heat flux terms, the GMRES iterative method replaced the explicit algebraic manipulation. Moreover, in the moisture regime, a water table lower boundary condition is added for the future coupling with groundwater models. The results of AccuCLASS are extensively validated for some synthetic runs under real-like seasonal weather conditions and different soil types, through inter-comparing to simulation outputs from SHAW [Flerchinger and Saxon, 1989], HYDRUS-1D [Simunek et al., 1998] and HELP [Schroeder et al., 1994] models. We find that AccuCLASS and SHAW accurately predict moisture and bottom drainage amounts; and that the original CLASS code does not have sufficient grid refinement to track precisely the unsaturated flow below the soil surface. On the other hand, when considering short time scale responses, HELP overestimates the recharge for sandy soils and underestimates it for clayey soils. An improvement of surface energy terms estimation is also carried out by AccuCLASS. Furthermore, some stand-alone tests forced by actual meteorological data over two land squares representative of the Assiniboine Delta Aquifer (ADA) show the importance of our contributions and the ability to provide a more accurate forecast of water mass balance terms. The coupling of this novel version of CLASS to other GCM components will help study objectively the cyclic drought phenomenon on the Canadian Prairies as well as its medium and long term ecological and socio-economic impacts in the region.
NASA Astrophysics Data System (ADS)
Reddy, Ramakrushna; Nair, Rajesh R.
2013-10-01
This work deals with a methodology applied to seismic early warning systems which are designed to provide real-time estimation of the magnitude of an event. We will reappraise the work of Simons et al. (2006), who on the basis of wavelet approach predicted a magnitude error of ±1. We will verify and improve upon the methodology of Simons et al. (2006) by applying an SVM statistical learning machine on the time-scale wavelet decomposition methods. We used the data of 108 events in central Japan with magnitude ranging from 3 to 7.4 recorded at KiK-net network stations, for a source-receiver distance of up to 150 km during the period 1998-2011. We applied a wavelet transform on the seismogram data and calculating scale-dependent threshold wavelet coefficients. These coefficients were then classified into low magnitude and high magnitude events by constructing a maximum margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both the classes were picked up and linear regressions were plotted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn helped us to estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 seconds. This means that a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVM as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This would improve the approaches from Simons et al. (2006) which use an average of the two regression functions to estimate the magnitude.
Status of pelagic prey fishes in Lake Michigan, 2013
Warner, David M.; Farha, Steven A.; O'Brien, Timothy P.; Ogilvie, Lynn; Claramunt, Randall M.; Hanson, Dale
2014-01-01
Acoustic surveys were conducted in late summer/early fall during the years 1992-1996 and 2001-2013 to estimate pelagic prey fish biomass in Lake Michigan. Midwater trawling during the surveys as well as target strength provided a measure of species and size composition of the fish community for use in scaling acoustic data and providing species-specific abundance estimates. The 2013 survey consisted of 27 acoustic transects (546 km total) and 31 midwater trawl tows. Mean prey fish biomass was 6.1 kg/ha (relative standard error, RSE = 11%) or 29.6 kilotonnes (kt = 1,000 metric tons), which was similar to the estimate in 2012 (31.1 kt) and 23.5% of the long-term (18 years) mean. The numeric density of the 2013 alewife year class was 6% of the time series average and this year-class contributed 4% of total alewife biomass (5.2 kg/ha, RSE = 12%). Alewife ≥age-1 comprised 96% of alewife biomass. In 2013, alewife comprised 86% of total prey fish biomass, while rainbow smelt and bloater were 4 and 10% of total biomass, respectively. Rainbow smelt biomass in 2013 (0.24 kg/ha, RSE = 17%) was essentially identical to the rainbow smelt biomass in 2012 and was 6% of the long term mean. Bloater biomass in 2013 was 0.6 kg/ha, only half the 2012 biomass, and 6% of the long term mean. Mean density of small bloater in 2013 (29 fish/ha, RSE = 29%) was lower than peak values observed in 2007-2009 and was 23% of the time series mean. In 2013, pelagic prey fish biomass in Lake Michigan was similar to Lake Huron, but pelagic community composition differs in the two lakes, with Lake Huron dominated by bloater.
Reichert, Brian E.; Kendall, William L.; Fletcher, Robert J.; Kitchens, Wiley M.
2016-01-01
While variation in age structure over time and space has long been considered important for population dynamics and conservation, reliable estimates of such spatio-temporal variation in age structure have been elusive for wild vertebrate populations. This limitation has arisen because of problems of imperfect detection, the potential for temporary emigration impacting assessments of age structure, and limited information on age. However, identifying patterns in age structure is important for making reliable predictions of both short- and long-term dynamics of populations of conservation concern. Using a multistate superpopulation estimator, we estimated region-specific abundance and age structure (the proportion of individuals within each age class) of a highly endangered population of snail kites for two separate regions in Florida over 17 years (1997–2013). We find that in the southern region of the snail kite—a region known to be critical for the long-term persistence of the species—the population has declined significantly since 1997, and during this time, it has increasingly become dominated by older snail kites (> 12 years old). In contrast, in the northern region—a region historically thought to serve primarily as drought refugia—the population has increased significantly since 2007 and age structure is more evenly distributed among age classes. Given that snail kites show senescence at approximately 13 years of age, where individuals suffer higher mortality rates and lower breeding rates, these results reveal an alarming trend for the southern region. Our work illustrates the importance of accounting for spatial structure when assessing changes in abundance and age distribution and the need for monitoring of age structure in imperiled species.
Winter wheat mapping combining variations before and after estimated heading dates
NASA Astrophysics Data System (ADS)
Qiu, Bingwen; Luo, Yuhan; Tang, Zhenghong; Chen, Chongcheng; Lu, Difei; Huang, Hongyu; Chen, Yunzhi; Chen, Nan; Xu, Weiming
2017-01-01
Accurate and updated information on winter wheat distribution is vital for food security. The intra-class variability of the temporal profiles of vegetation indices presents substantial challenges to current time series-based approaches. This study developed a new method to identify winter wheat over large regions through a transformation and metric-based approach. First, the trend surfaces were established to identify key phenological parameters of winter wheat based on altitude and latitude with references to crop calendar data from the agro-meteorological stations. Second, two phenology-based indicators were developed based on the EVI2 differences between estimated heading and seedling/harvesting dates and the change amplitudes. These two phenology-based indicators revealed variations during the estimated early and late growth stages. Finally, winter wheat data were extracted based on these two metrics. The winter wheat mapping method was applied to China based on the 250 m 8-day composite Moderate Resolution Imaging Spectroradiometer (MODIS) 2-band Enhanced Vegetation Index (EVI2) time series datasets. Accuracy was validated with field survey data, agricultural census data, and Landsat-interpreted results in test regions. When evaluated with 653 field survey sites and Landsat image interpreted data, the overall accuracy of MODIS-derived images in 2012-2013 was 92.19% and 88.86%, respectively. The MODIS-derived winter wheat areas accounted for over 82% of the variability at the municipal level when compared with agricultural census data. The winter wheat mapping method developed in this study demonstrates great adaptability to intra-class variability of the vegetation temporal profiles and has great potential for further applications to broader regions and other types of agricultural crop mapping.
Binoculars with mil scale as a training aid for estimating form class
H.W. Camp, J.R.; C.A. Bickford
1949-01-01
In an extensive forest inventory, estimates involving personal judgment cannot be eliminated. However, every means should be taken to keep these estimates to a minimum and to provide on-the-job training that is adequate for obtaining the best estimates possible.
Identification and feedback control in structures with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Wang, Y.
1992-01-01
In this lecture we give fundamental well-posedness results for a variational formulation of a class of damped second order partial differential equations with unbounded input or control coefficients. Included as special cases in this class are structures with piezoceramic actuators. We consider approximation techniques leading to computational methods in the context of both parameter estimation and feedback control problems for these systems. Rigorous convergence results for parameter estimates and feedback gains are discussed.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextural classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextural classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextural uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang
2017-11-01
The identification of the nonlinearity and coupling is crucial in nonlinear target tracking problem in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems which are rapidly changing. To deal with the problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor by the received measurements and state estimates from its connected sensors instead of the time window. A new cost function is set as the weighted sum of the bias and oscillation of the state to estimate the "best" estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting a time window of state estimates and measurements of the sensor and its neighbors weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. The sensor selection method is in addition to the algorithm to decrease the computation load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF against other filtering algorithms for a large class of systems.
PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James
We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experimentmore » containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.« less
Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.
Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo
2014-04-01
Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.
Semiparametric Estimation of Treatment Effect in a Pretest–Posttest Study with Missing Data
Davidian, Marie; Tsiatis, Anastasios A.; Leon, Selene
2008-01-01
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates. PMID:19081743
Semiparametric Estimation of Treatment Effect in a Pretest-Posttest Study with Missing Data.
Davidian, Marie; Tsiatis, Anastasios A; Leon, Selene
2005-08-01
The pretest-posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest-posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates.
NuSTAR Detection of X-Ray Heating Events in the Quiet Sun
NASA Astrophysics Data System (ADS)
Kuhar, Matej; Krucker, Säm; Glesener, Lindsay; Hannah, Iain G.; Grefenstette, Brian W.; Smith, David M.; Hudson, Hugh S.; White, Stephen M.
2018-04-01
The explanation of the coronal heating problem potentially lies in the existence of nanoflares, numerous small-scale heating events occurring across the whole solar disk. In this Letter, we present the first imaging spectroscopy X-ray observations of three quiet Sun flares during the Nuclear Spectroscopic Telescope ARray (NuSTAR) solar campaigns on 2016 July 26 and 2017 March 21, concurrent with the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) observations. Two of the three events showed time lags of a few minutes between peak X-ray and extreme ultraviolet emissions. Isothermal fits with rather low temperatures in the range 3.2–4.1 MK and emission measures of (0.6–15) × 1044 cm‑3 describe their spectra well, resulting in thermal energies in the range (2–6) × 1026 erg. NuSTAR spectra did not show any signs of a nonthermal or higher temperature component. However, as the estimated upper limits of (hidden) nonthermal energy are comparable to the thermal energy estimates, the lack of a nonthermal component in the observed spectra is not a constraining result. The estimated Geostationary Operational Environmental Satellite (GOES) classes from the fitted values of temperature and emission measure fall between 1/1000 and 1/100 A class level, making them eight orders of magnitude fainter in soft X-ray flux than the largest solar flares.
The Improved Estimation of Ratio of Two Population Proportions
ERIC Educational Resources Information Center
Solanki, Ramkrishna S.; Singh, Housila P.
2016-01-01
In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaston, G.G.; Kolchugina, T.P.
1995-12-01
Forty-two regions with similar vegetation and landcover were identified in the former Soviet Union (FSU) by classifying Global Vegetation Index (GVI) images. Image classes were described in terms of vegetation and landcover. Image classes appear to provide more accurate and precise descriptions for most ecosystems when compared to general thematic maps. The area of forest lands were estimated at 1,330 Mha and the actual area of forest ecosystems at 875 Mha. Arable lands were estimated to be 211 Mha. The area of the tundra biome was estimated at 261 Mha. The areas of the forest-tundra/dwarf forest, taiga, mixed-deciduous forest andmore » forest-steppe biomes were estimated t 153, 882, 196, and 144 Mha, respectively. The areas of desert-semidesert biome and arable land with irrigated land and meadows, were estimated at 126 and 237 Mha, respectively. Vegetation and landcover types were associated with the Bazilevich database of phytomass and NPP for vegetation in the FSU. The phytomass in the FSU was estimated at 97.1 Gt C, with 86.8 in forest vegetation, 9.7 in natural non-forest and 0.6 Gt C in arable lands. The NPP was estimated at 8.6 Gt C/yr, with 3.2, 4.8, and 0.6 Gt C/yr of forest, natural non-forest, and arable ecosystems, respectively. The phytomass estimates for forests were greater than previous assessments which considered the age-class distribution of forest stands in the FSU. The NPP of natural ecosystems estimated in this study was 23% greater than previous estimates which used thematic maps to identify ecosystems. 47 refs., 4 figs., 2 tabs.« less
Depression and Alcohol Use in a National Sample of Hispanic Adolescents.
Merianos, Ashley L; Swoboda, Christopher M; Oluwoye, Oladunni A; Gilreath, Tamika D; Unger, Jennifer B
2018-04-16
Underage alcohol use and depression remain public health concerns for Hispanic adolescents nationwide. The study purpose was to identify the profiles of depression among Hispanic adolescents who reported experiencing depressive symptoms in their lifetime and classify them into groups based on their symptoms. Based on classifications, we examined the relationship between past year alcohol use and severity of depressive symptoms while controlling for sex and age. A secondary analysis of the 2013 NSDUH was conducted among Hispanic adolescents from 12 to 17 years of age (n = 585) who reported experiencing depressive symptoms. Latent class analysis was used to identify latent classes of depressive symptoms among Hispanic adolescents. A zero-inflated negative-binomial regression model was used to examine the relationship between alcohol use and depressive symptoms. "High depressive" and "moderate depressive" classes were formed. The items that highly differentiated among the groups were felt worthless nearly every day, others noticed they were restless or lethargic, and had changes in appetite or weight. There was a significant difference (p = 0.03) between the classes based on alcohol use; those in the moderate depressive class were 1.71 times more likely to be identified as not reporting past alcohol use. Results indicated the high depressive class was estimated to have 1.62 more days of past year alcohol use than those in the moderate depressive class for adolescents who used alcohol (p < 0.001). Conclusions/Importance: Study findings can be used to address these significant public health issues impacting Hispanic adolescents. Recommendations are included.
NASA Astrophysics Data System (ADS)
Gao, Tian; Zhu, Jiaojun; Deng, Songqiu; Zheng, Xiao; Zhang, Jinxin; Shang, Guiduo; Huang, Liyan
2016-10-01
Timber production is the purpose for managing plantation forests, and its spatial and quantitative information is critical for advising management strategies. Previous studies have focused on growing stock volume (GSV), which represents the current potential of timber production, yet few studies have investigated historical process-harvested timber. This resulted in a gap in a synthetical ecosystem service assessment of timber production. In this paper, we established a Management Process-based Timber production (MPT) framework to integrate the current GSV and the harvested timber derived from historical logging regimes, trying to synthetically assess timber production for a historical period. In the MPT framework, age-class and current GSV determine the times of historical thinning and the corresponding harvested timber, by using a ;space-for-time; substitution. The total timber production can be estimated by the historical harvested timber in each thinning and the current GSV. To test this MPT framework, an empirical study on a larch plantation (LP) with area of 43,946 ha was conducted in North China for a period from 1962 to 2010. Field-based inventory data was integrated with ALOS PALSAR (Advanced Land-Observing Satellite Phased Array L-band Synthetic Aperture Radar) and Landsat-8 OLI (Operational Land Imager) data for estimating the age-class and current GSV of LP. The random forest model with PALSAR backscatter intensity channels and OLI bands as input predictive variables yielded an accuracy of 67.9% with a Kappa coefficient of 0.59 for age-class classification. The regression model using PALSAR data produced a root mean square error (RMSE) of 36.5 m3 ha-1. The total timber production of LP was estimated to be 7.27 × 106 m3, with 4.87 × 106 m3 in current GSV and 2.40 × 106 m3 in harvested timber through historical thinning. The historical process-harvested timber accounts to 33.0% of the total timber production, which component has been neglected in the assessments for current status of plantation forests. Synthetically considering the RMSE for predictive GSV and misclassification of age-class, the error in timber production were supposed to range from -55.2 to 56.3 m3 ha-1. The MPT framework can be used to assess timber production of other tree species at a larger spatial scale, providing crucial information for a better understanding of forest ecosystem service.
Rodríguez, Libia M; París, Sara C; Arbeláez, Mario; Cotes, José M; Süsal, Caner; Torres, Yolanda; García, Luís F
2007-08-01
In the present study, we investigated whether pretransplantation HLA class I and class II antibodies and pretransplantation levels of soluble CD30 (sCD30) and IgA anti-Fab autoantibodies are predictive of kidney allograft survival. Pretransplantation sera of 504 deceased-donor kidney recipients were tested for IgG HLA class I and class II antibodies, sCD30, and IgA anti-Fab levels using the CTS 4 ELISA kit. Kidney graft survival was estimated by Kaplan-Meier method and multivariate Cox regression. Regardless of the presence of HLA class II antibodies, recipients with high HLA class I reactivity had lower 1-year graft survival than recipients with low reactivity (p < 0.01). Recipients with high sCD30 had lower 5-year graft survival rate than those with low sCD30 (p < 0.01). The sCD30 effect was observed in presensitized and nonsensitized recipients, demonstrated a synergistic effect with HLA class I antibodies (p < 0.001), and appeared to be neutralized in recipients with no HLA class II mismatches. IgA anti-Fab did not influence kidney graft survival. Our results indicate that high pretransplantation sCD30 levels and HLA class I positivity increase the risk of kidney graft loss regardless of other factors. Consequently, such determinations should be routinely performed to estimate recipients' risks of graft rejection before transplantation.
Separation of Powers in Classifying International Agreements
1996-01-01
SEPARATION OF POWERS IN CLASSIFYING INTERNATIONAL AGREEMENTS CORE COURSE III ESSAY CDR James F Duffy, JAGC, USN, Class of 96 The National Secmty Policy Process SemmrH Faculty Semmar Instructor Dr John Rexhart Faculty Adviser CAPT J Kelso, USN Report Documentation Page Form ApprovedOMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing
Continuous estimates of Survival through Eight Years of Service Using FY 1979 Cross-Sectional Data.
1981-07-01
performed for Class A school attendees and non-A school attendees, holding constant the effects of age, educational level, and mental group.* Mean...through eight years of service for _ non-prior service mail recruits. Average survival 0 times by education , mental group, and age are calculated from...attendees is 35 months and for non-A school attendees is 28 months. As expected, we found that educational level has the great- est impact on survival
HLA-G and MHC Class II Protein Expression in Diffuse Large B-Cell Lymphoma.
Jesionek-Kupnicka, Dorota; Bojo, Marcin; Prochorec-Sobieszek, Monika; Szumera-Ciećkiewicz, Anna; Jabłońska, Joanna; Kalinka-Warzocha, Ewa; Kordek, Radzisław; Młynarski, Wojciech; Robak, Tadeusz; Warzocha, Krzysztof; Lech-Maranda, Ewa
2016-06-01
The expression of human leukocyte antigen-G (HLA-G) and HLA class II protein was studied by immunohistochemical staining of lymph nodes from 148 patients with diffuse large B-cell lymphoma (DLBCL) and related to the clinical course of the disease. Negative HLA-G expression was associated with a lower probability of achieving a complete remission (p = 0.04). Patients with negative HLA-G expression tended towards a lower 3-year overall survival (OS) rate compared to those with positive expression of HLA-G (p = 0.08). When restricting the analysis to patients receiving chemotherapy with rituximab, the estimated 3-year OS rate of patients with positive HLA-G expression was 73.3 % compared with 47.5 % (p = 0.03) in those with negative expression. Patients with negative HLA class II expression presented a lower 3-year OS rate compared to subjects with positive expression (p = 0.04). The loss of HLA class II expression (p = 0.05) and belonging to the intermediate high/high IPI risk group (p = 0.001) independently increased the risk of death. HLA class II expression also retained its prognostic value in patients receiving rituximab; the 3-year OS rate was 65.3 % in patients with positive HLA class II expression versus 29.6 % (p = 0.04) in subjects that had loss of HLA class II expression. To our knowledge, for the first time, the expression of HLA-G protein in DLBCL and its association with the clinical course of the disease was demonstrated. Moreover, the link between losing HLA class II protein expression and poor survival of patients treated with immunochemotherapy was confirmed.
Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.
2014-01-01
The age structure of harvested animals provides the basis for many demographic analyses. Ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies however, examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but factors that influence consistency of cementum annuli counts are poorly known. We estimated age of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification resulted in an impression of a younger age-structure than actually was the case. Additionally, we obtained paired age-estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age-estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicated that errors in the wear-and-replacement techniques are substantial and could impact demographic analyses that use age-structure information.
Class Size Effects on Student Achievement: Heterogeneity across Abilities and Fields
ERIC Educational Resources Information Center
De Paola, Maria; Ponzo, Michela; Scoppa, Vincenzo
2013-01-01
In this paper, we analyze class size effects on college students exploiting data from a project offering special remedial courses in mathematics and language skills to freshmen enrolled at an Italian medium-sized public university. To estimate the effects of class size, we exploit the fact that students and teachers are virtually randomly assigned…
ERIC Educational Resources Information Center
Chapman, Michael; McBride, Michelle L.
1992-01-01
Children of 4 to 10 years of age were given 2 class inclusion tasks. Younger children's performance was inflated by guessing. Scores were higher in the marked task than in the unmarked task as a result of differing rates of inclusion logic. Children's verbal justifications closely approximated estimates of their true competence. (GLR)
ERIC Educational Resources Information Center
Fleary, Sasha A.
2017-01-01
Background: Several studies have used latent class analyses to explore obesogenic behaviors and substance use in adolescents independently. We explored a variety of health risks jointly to identify distinct patterns of risk behaviors among adolescents. Methods: Latent class models were estimated using Youth Risk Behavior Surveillance System…
A Multinomial Logit Approach to Estimating Regional Inventories by Product Class
Lawrence Teeter; Xiaoping Zhou
1998-01-01
Current timber inventory projections generally lack information on inventory by product classes. Most models available for inventory projection and linked to supply analyses are limited to projecting aggregate softwood and hardwood. The objective of this research is to develop a methodology to distribute the volume on each FIA survey plot to product classes and...
Experimental Estimates of the Impacts of Class Size on Test Scores: Robustness and Heterogeneity
ERIC Educational Resources Information Center
Ding, Weili; Lehrer, Steven F.
2011-01-01
Proponents of class size reductions (CSRs) draw heavily on the results from Project Student/Teacher Achievement Ratio to support their initiatives. Adding to the political appeal of these initiative are reports that minority and economically disadvantaged students received the largest benefits from smaller classes. We extend this research in two…
Vendor compliance with Ontario's tobacco point of sale legislation.
Dubray, Jolene M; Schwartz, Robert M; Garcia, John M; Bondy, Susan J; Victor, J Charles
2009-01-01
On May 31, 2006, Ontario joined a small group of international jurisdictions to implement legislative restrictions on tobacco point of sale promotions. This study compares the presence of point of sale promotions in the retail tobacco environment from three surveys: one prior to and two following implementation of the legislation. Approximately 1,575 tobacco vendors were randomly selected for each survey. Each regionally-stratified sample included equal numbers of tobacco vendors categorized into four trade classes: chain convenience, independent convenience and discount, gas stations, and grocery. Data regarding the six restricted point of sale promotions were collected using standardized protocols and inspection forms. Weighted estimates and 95% confidence intervals were produced at the provincial, regional and vendor trade class level using the bootstrap method for estimating variance. At baseline, the proportion of tobacco vendors who did not engage in each of the six restricted point of sale promotions ranged from 41% to 88%. Within four months following implementation of the legislation, compliance with each of the six restricted point of sale promotions exceeded 95%. Similar levels of compliance were observed one year later. Grocery stores had the fewest point of sale promotions displayed at baseline. Compliance rates did not differ across vendor trade classes at either follow-up survey. Point of sale promotions did not differ across regions in any of the three surveys. Within a short period of time, a high level of compliance with six restricted point of sale promotions was achieved.
Controlling prescription drug expenditures: a report of success.
Miller, David P; Furberg, Curt D; Small, Ronald H; Millman, Franklyn M; Ambrosius, Walter T; Harshbarger, Julia S; Ohl, Christopher A
2007-08-01
To determine whether a multi-interventional program can limit increases in prescription drug expenditures while maintaining utilization of needed medications. Quasi-experimental, pre-post design. The program included formulary changes, quantity limits, and mandatory pill splitting for select drugs implemented in phases. We assessed the short-term effects of each intervention by comparing class-specific drug spending and generic medication use before and after benefit changes. Long-term effects were determined by comparing overall spending with projected spending estimates, and by examining changes in the planwide use of generic medications over time. Effects on medication utilization were assessed by examining members' use of selected classes of chronic medications before and after the policy changes. Over 3 years, the plan and members saved $6.6 million attributed to the interventions. Most of the savings were due to the reclassification of select brand-name drugs to nonpreferred status (estimated annual savings, $941,000), followed by the removal of nonsedating antihistamines from the formulary (annual savings, $565,000), and the introduction of pill splitting (annual savings, $342,000). Limiting quantities of select medications had the smallest impact (annual savings, $135,000). Members' use of generic medications steadily increased from 40% to 57%. Although 17.5% of members stopped using at least 1 class of selected medications, members' total use of chronic medications remained constant. A combination of interventions can successfully manage prescription drug spending while preserving utilization of chronic medications. Additional studies are needed to determine the effect of these cost-control interventions on other health outcomes.
t'Kindt, Ruben; Jorge, Lucie; Dumont, Emmie; Couturon, Pauline; David, Frank; Sandra, Pat; Sandra, Koen
2012-01-03
An LC-MS based method for the profiling and characterization of ceramide species in the upper layer of human skin is described. Ceramide samples, collected by tape stripping of human skin, were analyzed by reversed-phase liquid chromatography coupled to high-resolution quadrupole time-of-flight mass spectrometry operated in both positive and negative electrospray ionization mode. All known classes of ceramides could be measured in a repeatable manner. Furthermore, the data set showed several undiscovered ceramides, including a class with four hydroxyl functionalities in its sphingoid base. High-resolution MS/MS fragmentation spectra revealed that each identified ceramide species is composed of several skeletal isomers due to variation in carbon length of the respective sphingoid bases and fatty acyl building blocks. The resulting variety in skeletal isomers has not been previously demonstrated. It is estimated that over 1000 unique ceramide structures could be elucidated in human stratum corneum. Ceramide species with an even and odd number of carbon atoms in both chains were detected in all ceramide classes. Acid hydrolysis of the ceramides, followed by LC-MS analysis of the end-products, confirmed the observed distribution of both sphingoid bases and fatty acyl groups in skin ceramides. The study resulted in an accurate mass retention time library for targeted profiling of skin ceramides. It is furthermore demonstrated that targeted data processing results in an improved repeatability versus untargeted data processing (72.92% versus 62.12% of species display an RSD < 15%). © 2011 American Chemical Society
Changes in occupational class differences in leisure-time physical activity: a follow-up study.
Seiluri, Tina; Lahti, Jouni; Rahkonen, Ossi; Lahelma, Eero; Lallukka, Tea
2011-03-01
Physical activity is known to have health benefits across population groups. However, less is known about changes over time in socioeconomic differences in leisure-time physical activity and the reasons for the changes. We hypothesised that class differences in leisure-time physical activity would widen over time due to declining physical activity among the lower occupational classes. We examined whether occupational class differences in leisure-time physical activity change over time in a cohort of Finnish middle-aged women and men. We also examined whether a set of selected covariates could account for the observed changes. The data were derived from the Helsinki Health Study cohort mail surveys; the respondents were 40-60-year-old employees of the City of Helsinki at baseline in 2000-2002 (n = 8960, response rate 67%). Follow-up questionnaires were sent to the baseline respondents in 2007 (n = 7332, response rate 83%). The outcome measure was leisure-time physical activity, including commuting, converted to metabolic equivalent tasks (MET). Socioeconomic position was measured by occupational class (professionals, semi-professionals, routine non-manual employees and manual workers). The covariates included baseline age, marital status, limiting long-lasting illness, common mental disorders, job strain, physical and mental health functioning, smoking, body mass index, and employment status at follow-up. Firstly the analyses focused on changes over time in age adjusted prevalence of leisure-time physical activity. Secondly, logistic regression analysis was used to adjust for covariates of changes in occupational class differences in leisure-time physical activity. At baseline there were no occupational class differences in leisure-time physical activity. Over the follow-up leisure-time physical activity increased among those in the higher classes and decreased among manual workers, suggesting the emergence of occupational class differences at follow-up. Women in routine non-manual and manual classes and men in the manual class tended to be more often physically inactive in their leisure-time (<14 MET hours/week) and to be less often active (>30 MET hours/week) than those in the top two classes. Adjustment for the covariates did not substantially affect the observed occupational class differences in leisure-time physical activity at follow-up. Occupational class differences in leisure-time physical activity emerged over the follow-up period among both women and men. Leisure-time physical activity needs to be promoted among ageing employees, especially among manual workers.
The Feedback of Star Formation Based on Large-scale Spectroscopic Mapping Technology
NASA Astrophysics Data System (ADS)
Li, H. X.
2017-05-01
Star Formation is a fundamental topic in astrophysics. Although there is a popular model of low-mass star formation, every step of the process is full of physical and chemical complexity. One of the key questions is the dynamical feedback during the process of star formation. The answer of this question will help us to understand the star formation and the evolution of molecular clouds. We have identified outflows and bubbles in the Taurus molecular cloud based on the ˜ 100 deg2 Five College Radio Astronomy Observatory 12CO(1-0) and 13CO(1-0) maps and the Spitzer young stellar object (YSO) catalog. In the main 44 deg2 area of Taurus, we found 55 outflows, of which 31 were previously unknown. We also found 37 bubbles in the entire 100 deg2 area of Taurus, all of which had not been identified before. After visual inspection, we developed an interactive IDL pipeline to confirm the outflows and bubbles. This sample covers a contiguous region with a linear spatial dynamic range of ˜ 1000. Among the 55 outflows, we found that bipolar, monopolar redshifted, and monopolar blueshifted outflows account for 45%, 44%, and 11%, respectively. There are more red lobes than blue ones. The occurrence of more red lobes may result from the fact that Taurus is thin. Red lobes tend to be smaller and younger. The total mass and energy of red lobes are similar to blue lobes on average. There are 3 expanding bubbles and 34 broken bubbles among all the bubbles in Taurus. There are more outflow-driving YSOs in Class I, Flat, and Class II while few outflow-driving YSOs in Class III, which indicates that outflows more likely appear in the earlier stage (Class I) than in the later phase (Class III) of star formation. There are more bubble-driving YSOs of Class II and Class III while there are few bubble-driving YSOs of Class I and Flat, implying that the bubble structures are more likely to occur in the later stage of star formation. The total kinetic energy of the identified outflows is estimated to be ˜ 3.9 × 1045 erg, which is 1% of the cloud turbulent energy. The total kinetic energy of the detected bubbles is estimated to be ˜ 9.2 × 1046 erg, which is 29% of the turbulent energy of Taurus. The energy injection rate from the outflows is ˜ 1.3 × 1033 erg s-1, 0.4-2 times the turbulent dissipation rate of the cloud. The energy injection rate from bubbles is ˜ 6.4 × 1033 erg s-1, 2-10 times the turbulent dissipation rate of the cloud. The gravitational binding energy of the cloud is ˜ 1.5 × 1048 erg, 385 and 16 times the energy of outflows and bubbles, respectively. We conclude that neither outflows nor bubbles can provide sufficient energy to balance the overall gravitational binding energy and the turbulent energy of Taurus. However, in the current epoch, stellar feedback is sufficient to maintain the observed turbulence in Taurus. We studied the methods of spectral data processing for large-scale surveys, which is helpful in developing the data-processing software of FAST (Five-hundred-meter Aperture Spherical radio Telescope).
Polynomial fuzzy observer designs: a sum-of-squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O
2012-10-01
This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be separately designed without lack of guaranteeing the stability of the overall control system in addition to converging state-estimation error (via the observer) to zero. Although, for the last class (Class III), the separation principle does not hold, we propose an algorithm to design polynomial fuzzy controller and observer satisfying the stability of the overall control system in addition to converging state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approaches for the existing LMI approaches to T-S fuzzy observer designs.
Odendaal, Lieza; Fosgate, Geoffrey T; Romito, Marco; Coetzer, Jacobus A W; Clift, Sarah J
2014-01-01
Real-time reverse transcription polymerase chain reaction (real-time RT-PCR), histopathology, and immunohistochemical labeling (IHC) were performed on liver specimens from 380 naturally infected cattle and sheep necropsied during the 2010 Rift Valley fever (RVF) epidemic in South Africa. Sensitivity (Se) and specificity (Sp) of real-time RT-PCR, histopathology, and IHC were estimated in a latent-class model using a Bayesian framework. The Se and Sp of real-time RT-PCR were estimated as 97.4% (95% confidence interval [CI] = 95.2-98.8%) and 71.7% (95% CI = 65-77.9%) respectively. The Se and Sp of histopathology were estimated as 94.6% (95% CI = 91-97.2%) and 92.3% (95% CI = 87.6-95.8%), respectively. The Se and Sp of IHC were estimated as 97.6% (95% CI = 93.9-99.8%) and 99.4% (95% CI = 96.9-100%), respectively. Decreased Sp of real-time RT-PCR was ascribed to cross-contamination of samples. Stratified analysis of the data suggested variations in test accuracy with fetuses and severely autolyzed specimens. The Sp of histopathology in fetuses (83%) was 9.3% lower than the sample population (92.3%). The Se of IHC decreased from 97.6% to 81.5% in the presence of severe autolysis. The diagnostic Se and Sp of histopathology was higher than expected, confirming the value of routine postmortem examinations and histopathology of liver specimens. Aborted fetuses, however, should be screened using a variety of tests in areas endemic for RVF, and results from severely autolyzed specimens should be interpreted with caution. The most feasible testing option for countries lacking suitably equipped laboratories seems to be routine histology in combination with IHC.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
Drummond, Mark A.; Stier, Michael P.; Coffin, Alisa W.
2015-01-01
This report summarizes baseline land-cover change information for four time intervals from between 1973 and 2000 for the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (LCC). The study used sample data from the USGS Land Cover Trends dataset to develop estimates of change for 10 land-cover classes in the LCC. The results show that an estimated 17.7 percent of the LCC land cover had a change during the 27-year period. Cyclic forest dynamics—of timber harvest and regrowth—are the most extensive types of land conversion. Agricultural land had an estimated net decline of 3.5 percent as cropland and pasture were urbanized and developed and converted to forest use. Urban and other developed land covers expanded from 2.0 percent of the LCC in 1973 to 3.1 percent in 2000. The report also highlights causes and challenges of land-cover change.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.
Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei
2014-01-01
Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.
Optimized quantum sensing with a single electron spin using real-time adaptive measurements.
Bonato, C; Blok, M S; Dinani, H T; Berry, D W; Markham, M L; Twitchen, D J; Hanson, R
2016-03-01
Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz(-1/2) over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
Optimized quantum sensing with a single electron spin using real-time adaptive measurements
NASA Astrophysics Data System (ADS)
Bonato, C.; Blok, M. S.; Dinani, H. T.; Berry, D. W.; Markham, M. L.; Twitchen, D. J.; Hanson, R.
2016-03-01
Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance.
Short-term droughts forecast using Markov chain model in Victoria, Australia
NASA Astrophysics Data System (ADS)
Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.
2017-07-01
A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six (6) homogenous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and drought predictions up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and the predictions were computed for six clusters that depict similar drought characteristics in Victoria, Australia. Overall, the drought severity class predicted was quite similar for all the clusters, with the non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near normal class had a probability of occurrence varying from 27 to 38 %. For the more moderate and severe classes, the probabilities ranged from 2 to 13 % and 3 to 1 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2 and 3 months ahead predictions should be used with caution until the models are developed further.
NASA Astrophysics Data System (ADS)
Abreu-Vicente, J.; Kainulainen, J.; Stutz, A.; Henning, Th.; Beuther, H.
2015-09-01
We present the first study of the relationship between the column density distribution of molecular clouds within nearby Galactic spiral arms and their evolutionary status as measured from their stellar content. We analyze a sample of 195 molecular clouds located at distances below 5.5 kpc, identified from the ATLASGAL 870 μm data. We define three evolutionary classes within this sample: starless clumps, star-forming clouds with associated young stellar objects, and clouds associated with H ii regions. We find that the N(H2) probability density functions (N-PDFs) of these three classes of objects are clearly different: the N-PDFs of starless clumps are narrowest and close to log-normal in shape, while star-forming clouds and H ii regions exhibit a power-law shape over a wide range of column densities and log-normal-like components only at low column densities. We use the N-PDFs to estimate the evolutionary time-scales of the three classes of objects based on a simple analytic model from literature. Finally, we show that the integral of the N-PDFs, the dense gas mass fraction, depends on the total mass of the regions as measured by ATLASGAL: more massive clouds contain greater relative amounts of dense gas across all evolutionary classes. Appendices are available in electronic form at http://www.aanda.org
Random regression models using different functions to model milk flow in dairy cows.
Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G
2014-09-12
We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milking by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait Random Regression Models that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. Fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability in milk flow estimated by the most parsimonious model was of moderate to high magnitude.
Crop Characteristics Research: Growth and Reflectance Analysis
NASA Technical Reports Server (NTRS)
Badhwar, G. D. (Principal Investigator)
1985-01-01
Much of the early research in remote sensing follows along developing spectral signatures of cover types. It was found, however, that a signature from an unknown cover class could not always be matched to a catalog value of known cover class. This approach was abandoned and supervised classification schemes followed. These were not efficient and required extensive training. It was obvious that data acquired at a single time could not separate cover types. A large portion of the proposed research has concentrated on modeling the temporal behavior of agricultural crops and on removing the need for any training data in remote sensing surveys; the key to which is the solution of the so-called 'signature extension' problem. A clear need to develop spectral estimaters of crop ontogenic stages and yield has existed even though various correlations have been developed. Considerable effort in developing techniques to estimate these variables was devoted to this work. The need to accurately evaluate existing canopy reflectance model(s), improve these models, use them to understand the crop signatures, and estimate leaf area index was the third objective of the proposed work. A synopsis of this research effort is discussed.
A classification of U.S. estuaries based on physical and hydrologic attributes
Engle, V.D.; Kurtz, J.C.; Smith, L.M.; Chancy, C.; Bourgeois, P.
2007-01-01
A classification of U.S. estuaries is presented based on estuarine characteristics that have been identified as important for quantifying stressor-response relationships in coastal systems. Estuaries within a class have similar physical and hydrologic characteristics and would be expected to demonstrate similar biological responses to stressor loads from the adjacent watersheds. Nine classes of estuaries were identified by applying cluster analysis to a database for 138 U.S. estuarine drainage areas. The database included physical measures of estuarine areas, depth and volume, as well as hydrologic parameters (i.e., tide height, tidal prism volume, freshwater inflow rates, salinity, and temperature). The ability of an estuary to dilute or flush pollutants can be estimated using physical and hydrologic properties such as volume, bathymetry, freshwater inflow and tidal exchange rates which influence residence time and affect pollutant loading rates. Thus, physical and hydrologic characteristics can be used to estimate the susceptibility of estuaries to pollutant effects. This classification of estuaries can be used by natural resource managers to describe and inventory coastal systems, understand stressor impacts, predict which systems are most sensitive to stressors, and manage and protect coastal resources. ?? Springer Science+Business Media B.V. 2007.
Blackout detection as a multiobjective optimization problem.
Chaudhary, A M; Trachtenberg, E A
1991-01-01
We study new fast computational procedures for a pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariancies of signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures in terms of the number of computer operations required for their implementation is investigated. Various classes of such prediction procedures are investigated, analyzed and trade offs are established. Also we investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection and the closeness of all procedures to a certain bench mark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above mentioned criteria. Their main advantage is the fast computational processing.
Development of a Refined Space Vehicle Rollout Forcing Function
NASA Technical Reports Server (NTRS)
James, George; Tucker, Jon-Michael; Valle, Gerard; Grady, Robert; Schliesing, John; Fahling, James; Emory, Benjamin; Armand, Sasan
2016-01-01
For several decades, American manned spaceflight vehicles and the associated launch platforms have been transported from final assembly to the launch pad via a pre-launch phase called rollout. The rollout environment is rich with forced harmonics and higher order effects can be used for extracting structural dynamics information. To enable this utilization, processing tools are needed to move from measured and analytical data to dynamic metrics such as transfer functions, mode shapes, modal frequencies, and damping. This paper covers the range of systems and tests that are available to estimate rollout forcing functions for the Space Launch System (SLS). The specific information covered in this paper includes: the different definitions of rollout forcing functions; the operational and developmental data sets that are available; the suite of analytical processes that are currently in-place or in-development; and the plans and future work underway to solve two immediate problems related to rollout forcing functions. Problem 1 involves estimating enforced accelerations to drive finite element models for developing design requirements for the SLS class of launch vehicles. Problem 2 involves processing rollout measured data in near real time to understand structural dynamics properties of a specific vehicle and the class to which it belongs.
NASA Astrophysics Data System (ADS)
Sun, W.; Dryer, M.; Fry, C. D.; Deehr, C. S.; Smith, Z.; Akasofu, S.-I.; Kartalev, M. D.; Grigorov, K. G.
2002-07-01
The Sun was extremely active during the "April Fool’s Day" epoch of 2001. We chose this period between a solar flare on 28 March 2001 to a final shock arrival at Earth on 21 April 2001. The activity consisted of two presumed helmet-streamer blowouts, seven M-class flares, and nine X-class flares, the last of which was behind the west limb. We have been experimenting since February 1997 with real-time, end-to-end forecasting of interplanetary coronal mass ejection (ICME) shock arrival times. Since August 1998, these forecasts have been distributed in real-time by e-mail to a list of interested scientists and operational USAF and NOAA forecasters. They are made using three different solar wind models. We describe here the solar events observed during the April Fool’s 2001 epoch, along with the predicted and actual shock arrival times, and the ex post facto correction to the real-time coronal shock speed observations. It appears that the initial estimates of coronal shock speeds from Type II radio burst observations and coronal mass ejections were too high by as much as 30%. We conclude that a 3-dimensional coronal density model should be developed for application to observations of solar flares and their Type II radio burst observations.
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
van Wagtendonk, J.W.; Moore, P.E.
2010-01-01
Fire managers and researchers need information on fuel deposition rates to estimate future changes in fuel bed characteristics, determine when forests transition to another fire behavior fuel model, estimate future changes in fuel bed characteristics, and parameterize and validate ecosystem process models. This information is lacking for many ecosystems including the Sierra Nevada in California, USA. We investigated fuel deposition rates and stand characteristics of seven montane and four subalpine conifers in the Sierra Nevada. We collected foliage, miscellaneous bark and crown fragments, cones, and woody fuel classes from four replicate plots each in four stem diameter size classes for each species, for a total of 176 sampling sites. We used these data to develop predictive equations for each fuel class and diameter size class of each species based on stem and crown characteristics. There were consistent species and diameter class differences in the annual amount of foliage and fragments deposited. Foliage deposition rates ranged from just over 50 g m-2 year-1 in small diameter mountain hemlock stands to ???300 g m-2 year-1 for the three largest diameter classes of giant sequoia. The deposition rate for most woody fuel classes increased from the smallest diameter class stands to the largest diameter class stands. Woody fuel deposition rates varied among species as well. The rates for the smallest woody fuels ranged from 0.8 g m-2 year-1 for small diameter stands of Jeffrey pine to 126.9 g m-2 year-1 for very large diameter stands of mountain hemlock. Crown height and live crown ratio were the best predictors of fuel deposition rates for most fuel classes and species. Both characteristics reflect the amount of crown biomass including foliage and woody fuels. Relationships established in this study allow predictions of fuel loads to be made on a stand basis for each of these species under current and possible future conditions. These predictions can be used to estimate fuel treatment longevity, assist in determining fuel model transitions, and predict future changes in fuel bed characteristics.
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results.
NASA Astrophysics Data System (ADS)
Frazer, Gordon J.; Anderson, Stuart J.
1997-10-01
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal plus noise model: xt equals st plus [vt plus (eta) t] equals (summation)i equals o P minus 1 Aiej2(pi f(i)/f(s)t) plus vt plus (eta) t, t (epsilon) 0, . . ., N minus 1, fi equals kfI plus fo where the received signal xt corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components, P, unknown complex amplitudes Ai and frequencies fi. The frequency parameters fo and fI are unknown, although constrained such that fo less than fI/2 and parameter k (epsilon) {minus u, . . ., minus 2, minus 1, 0, 1, 2, . . ., v} is constrained such that the component frequencies fi are bound by (minus fs/2, fs/2). The noise term vt, is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that (summation)tvt2 less than infinity; in general, vt is not well modelled as an auto-regressive process of known order. The additional noise term (eta) t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter, fI, representing the frequency interval between harmonic lines. It is desired to determine an estimate of fI from N samples of xt. We propose an algorithm to estimate fI based on Thomson's harmonic line F-Test, which is part of the multi-window spectrum estimation method and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
Air Pollutants from Jeddah Desalination—Power Plant (KSA)
NASA Astrophysics Data System (ADS)
Al-Seroury, F. A.; Mayhoub, A. B.
2011-10-01
Ground level concentrations due to emissions from the Jeddah dual—purpose plant (sea water desalination and electric power production) have been estimated using the standard Gaussian plume model (GPM). The main types of pollutants emitted from the plant are: Hydro-carbons HC, carbon monoxide CO, Nitrogen oxides NOx and sulfur dioxide SO2. Thermal stability classes for Jeddah city are estimated for the months of the year (2007). It was found that the dominant stability class for the city is the moderately unstable class B (according to pasquill classification). The results of stability classes' evaluation together with the meteorological wind—data are used to predict the ground level concentration (glc) of the pollutants against the downwind distance from the plant location. The month and day of each calculated value of the pollutant concentration during the year (2007) have been specified. The maximum (glc) and their positions on the ground for each pollutant are found.
NASA Technical Reports Server (NTRS)
Liu, Wilson M.; Padgett, Deborah L.; Terebey, Susan; Angione, John; Rebull, Luisa M.; McCollum, Bruce; Fajardo-Acosta, Sergio; Leisawitz, David
2015-01-01
The Wide-Field Infrared Survey Explorer (WISE) has uncovered a striking cluster of young stellar object (YSO) candidates associated with the L1509 dark cloud in Auriga. The WISE observations, at 3.4, 4.6, 12, and 22 microns, show a number of objects with colors consistent with YSOs, and their spectral energy distributions suggest the presence of circumstellar dust emission, including numerous Class I, flat spectrum, and Class II objects. In general, the YSOs in L1509 are much more tightly clustered than YSOs in other dark clouds in the Taurus-Auriga star forming region, with Class I and flat spectrum objects confined to the densest aggregates, and Class II objects more sparsely distributed. We estimate a most probable distance of 485-700 pc, and possibly as far as the previously estimated distance of 2 kpc.
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting
Husen, Mohd Nizam; Lee, Sukhan
2016-01-01
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown finer by 40% with the execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis. PMID:27845711
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting.
Husen, Mohd Nizam; Lee, Sukhan
2016-11-11
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown finer by 40% with the execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis.
Levasseur, Pierre
2015-07-01
Associated with overweight, obesity and chronic diseases, the nutrition transition process reveals important socioeconomic issues in Mexico. Using panel data from the Mexican Family Life Survey, the purpose of the study is to estimate the causal effect of household socioeconomic status (SES) on nutritional outcomes among urban adults. We divide the analysis into two steps. First, using a mixed clustering procedure, we distinguish four socioeconomic classes based on income, educational and occupational dimensions: (i) a poor class; (ii) a lower-middle class; (iii) an upper-middle class; (iv) a rich class. Second, using an econometric framework adapted to our study (the Hausman-Taylor estimator), we measure the impact of belonging to these socioeconomic groups on individual anthropometric indicators, based on the body-mass index (BMI) and the waist-to-height ratio (WHtR). Our results make several contributions: (i) we show that a new middle class, rising out of poverty, is the most exposed to the risks of adiposity; (ii) as individuals from the upper class seem to be fatter than individuals from the upper-middle class, we can reject the assumption of an inverted U-shaped relationship between socioeconomic and anthropometric status as commonly suggested in emerging economies; (iii) the influence of SES on central adiposity appears to be particularly strong for men. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
Manipulating flexible parts using a teleoperated system with time delay: An experiment
NASA Technical Reports Server (NTRS)
Kotoku, T.; Takamune, K.; Tanie, K.; Komoriya, K.; Matsuhira, N.; Asakura, M.; Bamba, H.
1994-01-01
This paper reports experiments involving the handling of flexible parts (e.g. wires) when using a teleoperated system with time delay. The task is principally a peg-in-hole task involving the wrapping of a wire around two posts on the task-board. It is difficult to estimate the effects of the flexible parts; therefore, on-line teleoperation is indispensable for this class of unpredictable task. We first propose a teleoperation system based on the predictive image display, then describe an experimental teleoperation testbed with a four second transmission time delay. Finally, we report on wire handling operations that were performed to evaluate the performance of this system. Those experiments will contribute to future advanced experiments for the MITI ETS-7 mission.
NASA Astrophysics Data System (ADS)
Thomann, Enrique A.; Guenther, Ronald B.
2006-02-01
Explicit formulae for the fundamental solution of the linearized time dependent Navier Stokes equations in three spatial dimensions are obtained. The linear equations considered in this paper include those used to model rigid bodies that are translating and rotating at a constant velocity. Estimates extending those obtained by Solonnikov in [23] for the fundamental solution of the time dependent Stokes equations, corresponding to zero translational and angular velocity, are established. Existence and uniqueness of solutions of these linearized problems is obtained for a class of functions that includes the classical Lebesgue spaces L p (R 3), 1 < p < ∞. Finally, the asymptotic behavior and semigroup properties of the fundamental solution are established.
NASA Astrophysics Data System (ADS)
Wei, Xinjiang; Sun, Shixiang
2018-03-01
An elegant anti-disturbance control (EADC) strategy for a class of discrete-time stochastic systems with both nonlinearity and multiple disturbances, which include the disturbance with partially known information and a sequence of random vectors, is proposed in this paper. A stochastic disturbance observer is constructed to estimate the disturbance with partially known information, based on which, an EADC scheme is proposed by combining pole placement and linear matrix inequality methods. It is proved that the two different disturbances can be rejected and attenuated, and the corresponding desired performances can be guaranteed for discrete-time stochastic systems with known and unknown nonlinear dynamics, respectively. Simulation examples are given to demonstrate the effectiveness of the proposed schemes compared with some existing results.
Critical short-time dynamics in a system with interacting static and diffusive populations
NASA Astrophysics Data System (ADS)
Argolo, C.; Quintino, Yan; Gleria, Iram; Lyra, M. L.
2012-01-01
We study the critical short-time dynamical behavior of a one-dimensional model where diffusive individuals can infect a static population upon contact. The model presents an absorbing phase transition from an active to an inactive state. Previous calculations of the critical exponents based on quasistationary quantities have indicated an unusual crossover from the directed percolation to the diffusive contact process universality classes. Here we show that the critical exponents governing the slow short-time dynamic evolution of several relevant quantities, including the order parameter, its relative fluctuations, and correlation function, reinforce the lack of universality in this model. Accurate estimates show that the critical exponents are distinct in the regimes of low and high recovery rates.
Evidence for a bound on the lifetime of de Sitter space
NASA Astrophysics Data System (ADS)
Freivogel, Ben; Lippert, Matthew
2008-12-01
Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(1022) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.
NASA Astrophysics Data System (ADS)
Padokhin, A. M.; Kurbatov, G. A.; Yasyukevich, Y.; Yasyukevich, A.
2017-12-01
With the development of GNSS and SBAS constellations, the coherent multi-frequency L band transmissions are now available from a number of geostationary satellites. These signals can be used for ionospheric TEC estimations in the same way as widely used GPS/GLONASS signals. In this work, we compare noise patterns in TEC estimations based on different geostationary satellites data: augmentation systems (Indian GAGAN, European EGNOS and American WAAS), and Chinese COMPASS/Beidou navigation system. We show that noise level in geostationary COMPASS/Beidou TEC estimations is times smaller than noise in SBAS TEC estimation and corresponds to those of GPS/GLONASS at the same elevation angles. We discuss the capabilities of geostationary TEC data for studying ionospheric variability driven by space weather and meteorological sources at different time scales. Analyzing data from IGS/MGEX receivers we present geostationary TEC response on X-class Solar flares of current cycle, moderate and strong geomagnetic storms, including G4 St. Patrick's day Storm 2015 and recent G3 storm of the end of May 2017. We also discuss geostationary TEC disturbances in near equatorial ionosphere caused by two SSW events (minor and major final warming of 2015-2016 winter season) as well as geostationary TEC response on typhoons activity near Taiwan in autumn 2016. Our results show large potential of geostationary TEC estimations with GNSS and SBAS signals for continuous ionospheric monitoring.
The Impact of a Universal Class-Size Reduction Policy: Evidence from Florida's Statewide Mandate
ERIC Educational Resources Information Center
Chingos, Matthew M.
2012-01-01
Class-size reduction (CSR) mandates presuppose that resources provided to reduce class size will have a larger impact on student outcomes than resources that districts can spend as they see fit. I estimate the impact of Florida's statewide CSR policy by comparing the deviations from prior achievement trends in districts that were required to…
Changes in land use in western Oregon between 1971-74 and 1982.
Donald R. Gedney; Bruce A. Hiserote
1989-01-01
Statistics are presented by county for western Oregon for four dominant land use classes on non-Federally owned land. The classes were primary forest, primary agriculture, low-density urban, and urban. Classes were determined from aerial photographs taken in 1971-74 and in 1982; by using these data, estimates of change for the period between photography were developed...
ERIC Educational Resources Information Center
Araya, Roberto; Plana, Francisco; Dartnell, Pablo; Soto-Andrade, Jorge; Luci, Gina; Salinas, Elena; Araya, Marylen
2012-01-01
Teacher practice is normally assessed by observers who watch classes or videos of classes. Here, we analyse an alternative strategy that uses text transcripts and a support vector machine classifier. For each one of the 710 videos of mathematics classes from the 2005 Chilean National Teacher Assessment Programme, a single 4-minute slice was…
40 CFR Appendix B to Subpart A of... - Class II Controlled Substances a
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 17 2010-07-01 2010-07-01 false Class II Controlled Substances a B..., Subpt. A, App. B Appendix B to Subpart A of Part 82—Class II Controlled Substances a Controlled... the highest ODP, and the lower value is the estimate of the ODP of the isomer with the lowest ODP...
Ebejer, Jane L; Medland, Sarah E; van der Werf, Julius; Lynskey, Michael; Martin, Nicholas G; Duffy, David L
2016-11-01
The findings of genetic, imaging and neuropsychological studies of attention-deficit hyperactivity disorder (ADHD) are mixed. To understand why this might be the case we use both dimensional and categorical symptom measurement to provide alternate and detailed perspectives of symptom expression. Interviewers collected ADHD, conduct problems (CP) and sociodemographic data from 3793 twins and their siblings aged 22 to 49 (M = 32.6). We estimate linear weighting of symptoms across ADHD and CP items. Latent class analyses and regression describe associations between measured variables, environmental risk factors and subsequent disadvantage. Additionally, the clinical relevance of each class was estimated. Five classes were found for women and men; few symptoms, hyperactive-impulsive, CP, inattentive, combined symptoms with CP. Women within the inattentive class reported more symptoms and reduced emotional health when compared to men and to women within other latent classes. Women and men with combined ADHD symptoms reported comorbid conduct problems but those with either inattention or hyperactivity-impulsivity only did not. The dual perspective of dimensional and categorical measurement of ADHD provides important detail about symptom variation across sex and with environmental covariates. © The Author(s) 2013.
Accessing and constructing driving data to develop fuel consumption forecast model
NASA Astrophysics Data System (ADS)
Yamashita, Rei-Jo; Yao, Hsiu-Hsen; Hung, Shih-Wei; Hackman, Acquah
2018-02-01
In this study, we develop a forecasting models, to estimate fuel consumption based on the driving behavior, in which vehicles and routes are known. First, the driving data are collected via telematics and OBDII. Then, the driving fuel consumption formula is used to calculate the estimate fuel consumption, and driving behavior indicators are generated for analysis. Based on statistical analysis method, the driving fuel consumption forecasting model is constructed. Some field experiment results were done in this study to generate hundreds of driving behavior indicators. Based on data mining approach, the Pearson coefficient correlation analysis is used to filter highly fuel consumption related DBIs. Only highly correlated DBI will be used in the model. These DBIs are divided into four classes: speed class, acceleration class, Left/Right/U-turn class and the other category. We then use K-means cluster analysis to group to the driver class and the route class. Finally, more than 12 aggregate models are generated by those highly correlated DBIs, using the neural network model and regression analysis. Based on Mean Absolute Percentage Error (MAPE) to evaluate from the developed AMs. The best MAPE values among these AM is below 5%.
Information criteria for quantifying loss of reversibility in parallelized KMC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot bemore » computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.« less
Information criteria for quantifying loss of reversibility in parallelized KMC
NASA Astrophysics Data System (ADS)
Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Filho, P. H.; Shimabukuro, Y. E.; Demedeiros, J. S.; Desantana, C. C.; Alves, E. C. M.
1981-01-01
The state of Mato Grosso do Sul was selected as the study area to define the recognizable classes of Eucalyptus spp. and Pinus spp. by visual and automatic analyses. For visual analysis, a preliminary interpretation key and a legend of 6 groups were derived. Based on these six groups, three final classes were defined for analysis: (1) area prepared for reforestation; (2) area reforested with Eucalyptus spp.; and (3) area reforested with Pinus spp. For automatic interpretation the area along the highway from Ribas do Rio Pardo to Agua Clara was classified into the following classes: eucalytus, bare soil, plowed soil, pine and "cerrado". The results of visual analysis show that 67% of the reforested farms have relative differences in area estimate below 5%, 22%, between 5% and 10%; and 11% between 10% and 20%. The reforested eucalyptus area is 17 times greater than the area of reforested pine. Automatic classification of eucalyptus ranged from 73.03% to 92.30% in the training areas.
Methods for estimating dispersal probabilities and related parameters using marked animals
Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.
2001-01-01
Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a `patch? depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zellmer, S.D.; Rastorfer, J.R.; Van Dyke, G.D.
Implementation of recent federal and state regulations promulgated to protect wetlands makes information on effects of gas pipeline rights-of-way (ROWs) in wetlands essential to the gas pipeline industry. This study is designed to record vegetational changes induced by the construction of a large-diameter gas pipeline through deciduous forested wetlands. Two second-growth forested wetland sites mapped as Lenawee soils, one mature and one subjected to recent selective logging, were selected in Midland County, Michigan. Changes in the adjacent forest and successional development on the ROW are being documented. Cover-class estimates are being made for understory and ROW plant species using 1more » {times}1-m quadrats. Counts are also being made for all woody species with stems < 2 cm in diameter at breast height (dbh) in the same plots used for cover-class estimates. Individual stem diameters and species counts are being recorded for all woody understory and overstory plants with stems {ge}2 cm dbh in 10 {times} 10-m plots. Although analyses of the data have not been completed, preliminary analyses indicate that some destruction of vegetation at the ROW forest edge may have been avoidable during pipeline construction. Rapid regrowth of many native wetland plant species on the ROW occurred because remnants of native vegetation and soil-bearing propagules of existing species survived on the ROW after pipeline construction and seeding operations. 91 refs., 11 figs., 3 tabs.« less
NuSTAR Hard X-Ray Observation of a Sub-A Class Solar Flare
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glesener, Lindsay; Krucker, Säm; Hudson, Hugh
We report a Nuclear Spectroscopic Telescope Array ( NuSTAR ) observation of a solar microflare, SOL2015-09-01T04. Although it was too faint to be observed by the GOES X-ray Sensor, we estimate the event to be an A0.1 class flare in brightness. This microflare, with only ∼5 counts s{sup −1} detector{sup −1} observed by the Reuven Ramaty High Energy Solar Spectroscopic Imager ( RHESSI ), is fainter than any hard X-ray (HXR) flare in the existing literature. The microflare occurred during a solar pointing by the highly sensitive NuSTAR astrophysical observatory, which used its direct focusing optics to produce detailed HXRmore » microflare spectra and images. The microflare exhibits HXR properties commonly observed in larger flares, including a fast rise and more gradual decay, earlier peak time with higher energy, spatial dimensions similar to the RHESSI microflares, and a high-energy excess beyond an isothermal spectral component during the impulsive phase. The microflare is small in emission measure, temperature, and energy, though not in physical size; observations are consistent with an origin via the interaction of at least two magnetic loops. We estimate the increase in thermal energy at the time of the microflare to be 2.4 × 10{sup 27} erg. The observation suggests that flares do indeed scale down to extremely small energies and retain what we customarily think of as “flare-like” properties.« less
A frame selective dynamic programming approach for noise robust pitch estimation.
Yarra, Chiranjeevi; Deshmukh, Om D; Ghosh, Prasanta Kumar
2018-04-01
The principles of the existing pitch estimation techniques are often different and complementary in nature. In this work, a frame selective dynamic programming (FSDP) method is proposed which exploits the complementary characteristics of two existing methods, namely, sub-harmonic to harmonic ratio (SHR) and sawtooth-wave inspired pitch estimator (SWIPE). Using variants of SHR and SWIPE, the proposed FSDP method classifies all the voiced frames into two classes-the first class consists of the frames where a confidence score maximization criterion is used for pitch estimation, while for the second class, a dynamic programming (DP) based approach is proposed. Experiments are performed on speech signals separately from KEELE, CSLU, and PaulBaghsaw corpora under clean and additive white Gaussian noise at 20, 10, 5, and 0 dB SNR conditions using four baseline schemes including SHR, SWIPE, and two DP based techniques. The pitch estimation performance of FSDP, when averaged over all SNRs, is found to be better than those of the baseline schemes suggesting the benefit of applying smoothness constraint using DP in selected frames in the proposed FSDP scheme. The VuV classification error from FSDP is also found to be lower than that from all four baseline schemes in almost all SNR conditions on three corpora.
Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K
2017-03-01
Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p<0.001) and a large effect size (Cohen's d >1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height.
O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin
2015-06-01
Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K
2016-01-01
Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found; even a systematic difference in jump height was consistently observed between FT and double integration of force methods (-31% to -27%; p<0.001) and a large effect size (Cohen’s d>1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations of each of the three jump modalities were presented in order to obtain a better estimation of the jump height. PMID:28416900
Time Is of the Essence: Factors Encouraging Out-of-Class Study Time
ERIC Educational Resources Information Center
Fukuda, Steve T.; Yoshida, Hiroshi
2013-01-01
Out-of-class study time is essential in students' language learning, but few studies in ELT measure out-of-class study time or investigate how teachers can encourage, rather than demand it. In Japan, out-of-class study time is lower than might be expected, ranging from zero to an hour per week. This study therefore sets out to establish those…
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
Hidden Semi-Markov Models and Their Application
NASA Astrophysics Data System (ADS)
Beyreuther, M.; Wassermann, J.
2008-12-01
In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique showing major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a double stochastic model and therefore allow for a broader class description than e.g. template based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure of an estimated classification. Furthermore and in contrast to classic artificial neuronal networks or support vector machines, HMM are incorporating the time dependence explicitly in the models thus providing a adequate representation of the seismic signal. As the majority of detection algorithms, HMM are not based on the time and amplitude dependent seismogram itself but on features estimated from the seismogram which characterize the different classes. Features, or in other words characteristic functions, are e.g. the sonogram bands, instantaneous frequency, instantaneous bandwidth or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The duration probability of a HMM is an exponentially decaying function of the time, which is not a realistic representation of the duration of an earthquake. In contrast HSMM use Gaussians as duration probabilities, which results in an more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals that are to be classified simply differ in epicentral distance. This makes it possible to easily decide whether a classification is correct or wrong and thus allows to better evaluate the advantages and disadvantages of the proposed algorithm. The evaluation is based on several month long continuous data and the results are additionally compared to the previously published discrete HMM, continuous HMM and a classic STA/LTA. The intermediate evaluation results are very promising.
Survival and recovery rates of American woodcock banded in Michigan
Krementz, David G.; Hines, James E.; Luukkonen, David R.
2003-01-01
American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and 4 models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 +/- 0.1048 (SE) for locals, 0.2646 +/- 0.0423 (SE) for juveniles, and 0.4898 +/- 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 +/- 0.0053 (SE) for juveniles and 0.0313 +/- 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. juvenile survival rates were low.
Estimating proportions of objects from multispectral scanner data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Lewis, J. T.; Pentland, A. P.
1975-01-01
Progress is reported in developing and testing methods of estimating, from multispectral scanner data, proportions of target classes in a scene when there are a significiant number of boundary pixels. Procedures were developed to exploit: (1) prior information concerning the number of object classes normally occurring in a pixel, and (2) spectral information extracted from signals of adjoining pixels. Two algorithms, LIMMIX and nine-point mixtures, are described along with supporting processing techniques. An important by-product of the procedures, in contrast to the previous method, is that they are often appropriate when the number of spectral bands is small. Preliminary tests on LANDSAT data sets, where target classes were (1) lakes and ponds, and (2) agricultural crops were encouraging.
Lehner, T.; Buckley, Helen R.; Murray, I. G.
1972-01-01
A parallel study of fluorescent, agglutinating, and precipitating antibodies to Candida albicans revealed that precipitating antibodies belong to the IgG class, whereas agglutinating antibodies reside in the IgG, IgM, and IgA classes. The three types as well as the three classes of antibodies were found in Candida endocarditis and mucocutaneous candidiasis. Immuno-absorption studies suggest that the three serological tests estimate antibodies to mannan determinants of Candida albicans. Images PMID:4555044
Gene regulatory networks: a coarse-grained, equation-free approach to multiscale computation.
Erban, Radek; Kevrekidis, Ioannis G; Adalsteinsson, David; Elston, Timothy C
2006-02-28
We present computer-assisted methods for analyzing stochastic models of gene regulatory networks. The main idea that underlies this equation-free analysis is the design and execution of appropriately initialized short bursts of stochastic simulations; the results of these are processed to estimate coarse-grained quantities of interest, such as mesoscopic transport coefficients. In particular, using a simple model of a genetic toggle switch, we illustrate the computation of an effective free energy Phi and of a state-dependent effective diffusion coefficient D that characterize an unavailable effective Fokker-Planck equation. Additionally we illustrate the linking of equation-free techniques with continuation methods for performing a form of stochastic "bifurcation analysis"; estimation of mean switching times in the case of a bistable switch is also implemented in this equation-free context. The accuracy of our methods is tested by direct comparison with long-time stochastic simulations. This type of equation-free analysis appears to be a promising approach to computing features of the long-time, coarse-grained behavior of certain classes of complex stochastic models of gene regulatory networks, circumventing the need for long Monte Carlo simulations.
Dziak, John J.; Bray, Bethany C.; Zhang, Jieting; Zhang, Minqiang; Lanza, Stephanie T.
2016-01-01
Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt’s (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, & Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but not as well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators. PMID:28630602
Burro, Roberto; Raccanello, Daniela; Pasini, Margherita; Brondino, Margherita
2018-01-01
Conceptualizing affect as a complex nonlinear dynamic process, we used latent class extended mixed models (LCMM) to understand whether there were unobserved groupings in a dataset including longitudinal measures. Our aim was to identify affect profiles over time in people vicariously exposed to terrorism, studying their relations with personality traits. The participants were 193 university students who completed online measures of affect during the seven days following two terrorist attacks (Paris, November 13, 2015; Brussels, March 22, 2016); Big Five personality traits; and antecedents of affect. After selecting students whose negative affect was influenced by the two attacks (33%), we analysed the data with the LCMM package of R. We identified two affect profiles, characterized by different trends over time: The first profile comprised students with lower positive affect and higher negative affect compared to the second profile. Concerning personality traits, conscientious-ness was lower for the first profile compared to the second profile, and vice versa for neuroticism. Findings are discussed for both their theoretical and applied relevance.
People's Risk Recognition Preceding Evacuation and Its Role in Demand Modeling and Planning.
Urata, Junji; Pel, Adam J
2018-05-01
Evacuation planning and management involves estimating the travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set, which shows tsunami evacuation behavior, to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model along with a risk recognition class can evaluate quantitatively the influence of disaster mitigation measures, risk education, and risk information. The results obtained from the risk recognition model show that risk information has a greater impact in the sense that people recognize their high risk. The results of the evacuation choice model show that people who are unaware of their risk take a longer time to evacuate. © 2017 Society for Risk Analysis.
Malmusi, Davide; Vives, Alejandra; Benach, Joan; Borrell, Carme
2014-01-01
Women experience poorer health than men despite their longer life expectancy, due to a higher prevalence of non-fatal chronic illnesses. This paper aims to explore whether the unequal gender distribution of roles and resources can account for inequalities in general self-rated health (SRH) by gender, across social classes, in a Southern European population. Cross-sectional study of residents in Catalonia aged 25-64, using data from the 2006 population living conditions survey (n=5,817). Poisson regression models were used to calculate the fair/poor SRH prevalence ratio (PR) by gender and to estimate the contribution of variables assessing several dimensions of living conditions as the reduction in the PR after their inclusion in the model. Analyses were stratified by social class (non-manual and manual). SRH was poorer for women among both non-manual (PR 1.39, 95% CI 1.09-1.76) and manual social classes (PR 1.36, 95% CI 1.20-1.56). Adjustment for individual income alone eliminated the association between sex and SRH, especially among manual classes (PR 1.01, 95% CI 0.85-1.19; among non-manual 1.19, 0.92-1.54). The association was also reduced when adjusting by employment conditions among manual classes, and household material and economic situation, time in household chores and residential environment among non-manual classes. Gender inequalities in individual income appear to contribute largely to women's poorer health. Individual income may indicate the availability of economic resources, but also the history of access to the labour market and potentially the degree of independence and power within the household. Policies to facilitate women's labour market participation, to close the gender pay gap, or to raise non-contributory pensions may be helpful to improve women's health.
NASA Astrophysics Data System (ADS)
Prather, E. E.; Rudolph, A. L.; Brissenden, G.; Schlingman, W. M.
2011-09-01
We present the results of a national study on the teaching and learning of astronomy taught in general education, non-science major, introductory astronomy courses (Astro 101). Nearly 4000 students enrolled in 69 sections of Astro 101 taught at 31 institutions completed (pre- and post- instruction) the Light and Spectroscopy Concept Inventory (LSCI) from Fall 2006 to Fall 2007. The classes varied in size from very small (N < 10) to large (N˜180) and were from all types of institutions, including both 2-year and 4-year colleges and universities. To study how the instruction in different classrooms affected student learning, we developed and administered an Interactivity Assessment Instrument (IAI). This short survey, completed by instructors, allowed us to estimate the fraction of classroom time spent on learner- centered, active-engagement instruction such as Peer Instruction and collaborative tutorials. Pre-instruction LSCI scores were clustered around ˜25% (24 ± 2%), independent of class size and institution type; however, the gains measured varied from about (-)0.07-0.50. The distribution of gain scores indicates that differences were due to instruction in the classroom, not the type of class or institution. Interactivity Assessment Scores (IAS's) ranged from 0%-50%, showing that our IAI was able to distinguish between classes with higher and lower levels of interactivity. A comparison of class-averaged gain score to IAS showed that higher interactivity classes (IAS > 25%) were the only instructional environments capable of reaching the highest gains (
Adaptive hybrid simulations for multiscale stochastic reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such amore » partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.« less
Adaptive hybrid simulations for multiscale stochastic reaction networks.
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
Impact of triple-negative phenotype on prognosis of patients with breast cancer brain metastases.
Xu, Zhiyuan; Schlesinger, David; Toulmin, Sushila; Rich, Tyvin; Sheehan, Jason
2012-11-01
To elucidate survival times and identify potential prognostic factors in patients with triple-negative (TN) phenotype who harbored brain metastases arising from breast cancer and who underwent stereotactic radiosurgery (SRS). A total of 103 breast cancer patients with brain metastases were treated with SRS and then studied retrospectively. Twenty-four patients (23.3%) were TN. Survival times were estimated using the Kaplan-Meier method, with a log-rank test computing the survival time difference between groups. Univariate and multivariate analyses to predict potential prognostic factors were performed using a Cox proportional hazard regression model. The presence of TN phenotype was associated with worse survival times, including overall survival after the diagnosis of primary breast cancer (43 months vs. 82 months), neurologic survival after the diagnosis of intracranial metastases, and radiosurgical survival after SRS, with median survival times being 13 months vs. 25 months and 6 months vs. 16 months, respectively (p < 0.002 in all three comparisons). On multivariate analysis, radiosurgical survival benefit was associated with non-TN status and lower recursive partitioning analysis class at the initial SRS. The TN phenotype represents a significant adverse prognostic factor with respect to overall survival, neurologic survival, and radiosurgical survival in breast cancer patients with intracranial metastasis. Recursive partitioning analysis class also served as an important and independent prognostic factor. Copyright © 2012 Elsevier Inc. All rights reserved.
Shih, Peter; Kaul, Brian C; Jagannathan, S; Drallmeier, James A
2008-08-01
A novel reinforcement-learning-based dual-control methodology adaptive neural network (NN) controller is developed to deliver a desired tracking performance for a class of complex feedback nonlinear discrete-time systems, which consists of a second-order nonlinear discrete-time system in nonstrict feedback form and an affine nonlinear discrete-time system, in the presence of bounded and unknown disturbances. For example, the exhaust gas recirculation (EGR) operation of a spark ignition (SI) engine is modeled by using such a complex nonlinear discrete-time system. A dual-controller approach is undertaken where primary adaptive critic NN controller is designed for the nonstrict feedback nonlinear discrete-time system whereas the secondary one for the affine nonlinear discrete-time system but the controllers together offer the desired performance. The primary adaptive critic NN controller includes an NN observer for estimating the states and output, an NN critic, and two action NNs for generating virtual control and actual control inputs for the nonstrict feedback nonlinear discrete-time system, whereas an additional critic NN and an action NN are included for the affine nonlinear discrete-time system by assuming the state availability. All NN weights adapt online towards minimization of a certain performance index, utilizing gradient-descent-based rule. Using Lyapunov theory, the uniformly ultimate boundedness (UUB) of the closed-loop tracking error, weight estimates, and observer estimates are shown. The adaptive critic NN controller performance is evaluated on an SI engine operating with high EGR levels where the controller objective is to reduce cyclic dispersion in heat release while minimizing fuel intake. Simulation and experimental results indicate that engine out emissions drop significantly at 20% EGR due to reduction in dispersion in heat release thus verifying the dual-control approach.
Finite-time braiding exponents
NASA Astrophysics Data System (ADS)
Budišić, Marko; Thiffeault, Jean-Luc
2015-08-01
Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.
Finite-time braiding exponents.
Budišić, Marko; Thiffeault, Jean-Luc
2015-08-01
Topological entropy of a dynamical system is an upper bound for the sum of positive Lyapunov exponents; in practice, it is strongly indicative of the presence of mixing in a subset of the domain. Topological entropy can be computed by partition methods, by estimating the maximal growth rate of material lines or other material elements, or by counting the unstable periodic orbits of the flow. All these methods require detailed knowledge of the velocity field that is not always available, for example, when ocean flows are measured using a small number of floating sensors. We propose an alternative calculation, applicable to two-dimensional flows, that uses only a sparse set of flow trajectories as its input. To represent the sparse set of trajectories, we use braids, algebraic objects that record how trajectories exchange positions with respect to a projection axis. Material curves advected by the flow are represented as simplified loop coordinates. The exponential rate at which a braid stretches loops over a finite time interval is the Finite-Time Braiding Exponent (FTBE). We study FTBEs through numerical simulations of the Aref Blinking Vortex flow, as a representative of a general class of flows having a single invariant component with positive topological entropy. The FTBEs approach the value of the topological entropy from below as the length and number of trajectories is increased; we conjecture that this result holds for a general class of ergodic, mixing systems. Furthermore, FTBEs are computed robustly with respect to the numerical time step, details of braid representation, and choice of initial conditions. We find that, in the class of systems we describe, trajectories can be re-used to form different braids, which greatly reduces the amount of data needed to assess the complexity of the flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akpinar, Berkcan; Mousavi, Seyed H., E-mail: mousavish@upmc.edu; McDowell, Michael M.
Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent earlymore » (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001, respectively). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal hearing patients resulted in improved retention of all hearing measures compared with later SRS.« less
Longitudinal Course of Risk for Parental Post-Adoption Depression
Foli, Karen J.; South, Susan C.; Lim, Eunjung; Hebdon, Megan
2016-01-01
Objective To determine whether the Postpartum Depression Predictors Inventory-Revised (PDPI-R) could be used to reveal distinct classes of adoptive parents across time. Design Longitudinal data were collected via online surveys at 4-6 weeks pre-placement, 4-6 weeks post-placement, and 5-6 months post-placement. Setting Participants were primarily clients of the largest adoption agency in the United States. Participants Participants included 127 adoptive parents (68 mothers and 59 fathers). Methods We applied a latent class growth analysis to the PDPI-R and conducted mixed effects modeling of class, time, and class×time interaction for the following categories of explanatory variables: parental expectations; interpersonal variables; psychological symptoms; and life orientation. Results Four latent trajectory classes were found. Class 1 (55% of sample) showed a stably low level of PDPI-R scores over time. Class 2 (32%) reported mean scores below the cut-off points at all three time points. Class 3 (8%) started at an intermediate level and increased after post-placement, but decreased at 5-6 months post-placement. Class 4 (5%) had high mean scores at all three time points. Significant main effects were found for almost all explanatory variables for class and for several variables for time. Significant interactions between class and time were found for expectations about the child and amount of love and ambivalence in parent's intimate relationship. Conclusion Findings may assist nurses to be alert to trajectories of risk for post-adoption depression. Additional factors, not included in the PDPI-R, to determine risk for post-adoption depression may be needed for adoptive parents. PMID:26874267
SURE Estimates for a Heteroscedastic Hierarchical Model
Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.
2014-01-01
Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak adhoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as P I-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, WSVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
A comparison of methods for DPLL loop filter design
NASA Technical Reports Server (NTRS)
Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.
1986-01-01
Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes in discrete time weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the miminization method is significantly superior to the other methods.
The inverse problem of estimating the gravitational time dilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.
2016-11-15
Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class ofmore » distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f){sup γ}, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.« less
Estimation in a semi-Markov transformation model
Dabrowska, Dorota M.
2012-01-01
Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe evolution of a disease and assume that patient may move among a finite number of states representing different phases in the disease progression. Several authors developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
Fire danger rating over Mediterranean Europe based on fire radiative power derived from Meteosat
NASA Astrophysics Data System (ADS)
Pinto, Miguel M.; DaCamara, Carlos C.; Trigo, Isabel F.; Trigo, Ricardo M.; Feridun Turkman, K.
2018-02-01
We present a procedure that allows the operational generation of daily forecasts of fire danger over Mediterranean Europe. The procedure combines historical information about radiative energy released by fire events with daily meteorological forecasts, as provided by the Satellite Application Facility for Land Surface Analysis (LSA SAF) and the European Centre for Medium-Range Weather Forecasts (ECMWF). Fire danger is estimated based on daily probabilities of exceedance of daily energy released by fires occurring at the pixel level. Daily probability considers meteorological factors by means of the Canadian Fire Weather Index (FWI) and is estimated using a daily model based on a generalized Pareto distribution. Five classes of fire danger are then associated with daily probability estimated by the daily model. The model is calibrated using 13 years of data (2004-2016) and validated against the period of January-September 2017. Results obtained show that about 72 % of events releasing daily energy above 10 000 GJ belong to the extreme
class of fire danger, a considerably high fraction that is more than 1.5 times the values obtained when using the currently operational Fire Danger Forecast module of the European Forest Fire Information System (EFFIS) or the Fire Risk Map (FRM) product disseminated by the LSA SAF. Besides assisting in wildfire management, the procedure is expected to help in decision making on prescribed burning within the framework of agricultural and forest management practices.
Use of three-point taper systems in timber cruising
James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes
2000-01-01
Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...
NASA Astrophysics Data System (ADS)
Stewart, John
2015-04-01
The amount of time spent on out-of-class activities such as working homework, reading, and studying for examinations is presented for 10 years of an introductory, calculus-based physics class at a large public university. While the class underwent significant change in the 10 years studied, the amount of time invested by students in weeks not containing an in-semester examination was constant and did not vary with the length of the reading or homework assignments. The amount of time spent preparing for examinations did change as the course was modified. The time spent on class assignments, both reading and homework, did not scale linearly with the length of the assignment. The time invested in both reading and homework per length of the assignment decreased as the assignments became longer. The class average time invested in examination preparation did change with the average performance on previous examinations in the same class, with more time spent in preparation for lower previous examination scores (R2 = 0 . 70).