NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
ERIC Educational Resources Information Center
Coughlin, Kevin B.
2013-01-01
This study is intended to provide researchers with empirically derived guidelines for conducting factor analytic studies in research contexts that include dichotomous and continuous levels of measurement. This study is based on the hypotheses that ordinary least squares (OLS) factor analysis will yield more accurate parameter estimates than…
Estimation of delays and other parameters in nonlinear functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
Vector Graph Assisted Pedestrian Dead Reckoning Using an Unconstrained Smartphone
Qian, Jiuchao; Pei, Ling; Ma, Jiabin; Ying, Rendong; Liu, Peilin
2015-01-01
The paper presents a hybrid indoor positioning solution based on a pedestrian dead reckoning (PDR) approach using built-in sensors on a smartphone. To address the challenges of flexible and complex contexts of carrying a phone while walking, a robust step detection algorithm based on motion-awareness has been proposed. Given the fact that step length is influenced by different motion states, an adaptive step length estimation algorithm based on motion recognition is developed. Heading estimation is carried out by an attitude acquisition algorithm, which contains a two-phase filter to mitigate the distortion of magnetic anomalies. In order to estimate the heading for an unconstrained smartphone, principal component analysis (PCA) of acceleration is applied to determine the offset between the orientation of smartphone and the actual heading of a pedestrian. Moreover, a particle filter with vector graph assisted particle weighting is introduced to correct the deviation in step length and heading estimation. Extensive field tests, including four contexts of carrying a phone, have been conducted in an office building to verify the performance of the proposed algorithm. Test results show that the proposed algorithm can achieve sub-meter mean error in all contexts. PMID:25738763
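A minimal sketch of the step-detection idea: peak picking on the accelerometer magnitude with a refractory period between steps. The sampling rate, thresholds, and synthetic signal are illustrative assumptions, not the paper's motion-aware algorithm.

```python
import numpy as np

def detect_steps(acc, fs=50.0, min_peak=1.5, min_interval=0.3):
    """Toy step detector: count peaks in the gravity-removed
    acceleration magnitude. Thresholds are illustrative only."""
    mag = np.linalg.norm(acc, axis=1) - 9.81   # remove gravity offset
    min_gap = int(min_interval * fs)           # refractory period in samples
    steps, last = [], -min_gap
    for i in range(1, len(mag) - 1):
        if mag[i] > min_peak and mag[i] >= mag[i-1] and mag[i] >= mag[i+1]:
            if i - last >= min_gap:
                steps.append(i)
                last = i
    return steps

# Synthetic walk: ~2 Hz step impacts on top of gravity, plus sensor noise
rng = np.random.default_rng(0)
fs, t = 50.0, np.arange(0, 10, 1/50.0)
acc = np.zeros((len(t), 3))
acc[:, 2] = 9.81 + 3.0*np.maximum(0, np.sin(2*np.pi*2.0*t))**4 \
            + 0.1*rng.normal(size=len(t))
print(len(detect_steps(acc, fs)), "steps detected")
```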
Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking
ERIC Educational Resources Information Center
Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas
2012-01-01
This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements
NASA Astrophysics Data System (ADS)
Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga
The increased awareness of non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to early requirements-based effort estimation, built on a Non-Functional Requirements ontology. It combines a standard functional size measurement model with a linear regression technique. We report on a case study which illustrates the application of our approach in context and also helps evaluate our experiences in using it.
Leveraging prognostic baseline variables to gain precision in randomized trials
Colantuoni, Elizabeth; Rosenblum, Michael
2015-01-01
We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
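A minimal sketch of the ANCOVA estimator on simulated trial data (the data-generating values are assumptions for illustration): with a prognostic baseline covariate, the adjusted treatment coefficient is typically more precise than the unadjusted difference in means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                     # prognostic baseline variable
a = rng.integers(0, 2, size=n)             # randomized treatment indicator
y = 1.0*a + 2.0*x + rng.normal(size=n)     # continuous outcome; true effect = 1.0

# Unadjusted estimator: difference in means
unadj = y[a == 1].mean() - y[a == 0].mean()

# ANCOVA: OLS of outcome on intercept, treatment, and baseline covariate;
# the treatment coefficient estimates the same mean difference with less variance
X = np.column_stack([np.ones(n), a, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"unadjusted: {unadj:.3f}, ANCOVA: {beta[1]:.3f}")
```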
Zhang, Guomin; Sandanayake, Malindu; Setunge, Sujeeva; Li, Chunqing; Fang, Jun
2017-02-01
Emissions from equipment usage and transportation at the construction stage are classified as direct emissions, which include both greenhouse gas (GHG) and non-GHG emissions due to partial combustion of fuel. The unavailability of a reliable and complete inventory restricts accurate emission evaluation of construction work. The study attempts to review emission factor standards readily available worldwide for estimating emissions from construction equipment. Emission factors published by the United States Environmental Protection Agency (US EPA), Australian National Greenhouse Accounts (AUS NGA), Intergovernmental Panel on Climate Change (IPCC) and European Environmental Agency (EEA) are critically reviewed to identify their strengths and weaknesses. A selection process based on availability and applicability is then developed to help identify the most suitable emission factor standards for estimating emissions from construction equipment in the Australian context. A case study indicates that a fuel-based emission factor is more suitable for GHG emission estimation and a time-based emission factor is more appropriate for estimation of non-GHG emissions. However, the selection of emission factor standards also depends on factors like the place of analysis (country of origin), data availability and the scope of analysis. Therefore, suitable modifications and assumptions should be incorporated in order to represent these factors.
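A toy calculation contrasting the two conventions; the factor values below are placeholders, not figures from US EPA, AUS NGA, IPCC, or EEA.

```python
# Illustrative comparison of fuel-based vs. time-based emission factors;
# both factor values are assumed for the example.
fuel_used_l    = 1200.0  # diesel consumed by an excavator (litres)
hours_operated = 80.0    # engine operating hours

EF_FUEL_CO2 = 2.7        # kg CO2 per litre of diesel (assumed)
EF_TIME_NOX = 0.12       # kg NOx per operating hour (assumed)

co2_kg = fuel_used_l * EF_FUEL_CO2       # fuel-based: GHG scales with fuel burned
nox_kg = hours_operated * EF_TIME_NOX    # time-based: non-GHG tied to runtime
print(f"CO2: {co2_kg:.0f} kg, NOx: {nox_kg:.1f} kg")
```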
Form-To-Expectation Matching Effects on First-Pass Eye Movement Measures During Reading
Farmer, Thomas A.; Yan, Shaorong; Bicknell, Klinton; Tanenhaus, Michael K.
2015-01-01
Recent EEG/MEG studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze duration, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy. PMID:25915072
An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth
NASA Astrophysics Data System (ADS)
Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge
2017-01-01
A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data, which might not be available in several practical situations. To address this issue, practitioners might require self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use Autoregressive (AR) models for this purpose, which are adequate when the asset operating context is constant; however, if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) was evaluated on the aluminum crack-growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimates were based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains focus on the current degradation level of an asset.
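A minimal sketch of recursive parameter estimation with a forgetting factor for an ARX(1) model with one exogenous input; the simulated degradation signal, noise level, and forgetting-factor value are illustrative assumptions, not the paper's models.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameters, P: covariance, phi: regressor, y: new output."""
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # prediction-error correction
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update with forgetting
    return theta, P

# Toy ARX(1) degradation signal whose input gain shifts mid-series,
# mimicking a change in operating conditions
rng = np.random.default_rng(1)
n, theta_hat, P = 300, np.zeros(2), 1e3*np.eye(2)
y, u = np.zeros(n), rng.uniform(0.5, 1.5, n)
for t in range(1, n):
    b = 0.05 if t < 150 else 0.12              # operating-condition change
    y[t] = 0.9*y[t-1] + b*u[t] + 0.01*rng.normal()
    theta_hat, P = rls_update(theta_hat, P, np.array([y[t-1], u[t]]), y[t])
print("estimated [a, b]:", np.round(theta_hat, 3))   # tracks the new b
```

The forgetting factor discounts old residuals geometrically, which is what lets the fitted model follow the current degradation level rather than a long-run average.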
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
NASA Technical Reports Server (NTRS)
Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G
2015-01-01
The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.
Evaluating a Pivot-Based Approach for Bilingual Lexicon Extraction
Kim, Jae-Hoon; Kwon, Hong-Seok; Seo, Hyeong-Won
2015-01-01
A pivot-based approach for bilingual lexicon extraction is based on the similarity of context vectors represented by words in a pivot language like English. In this paper, in order to show the validity and usability of the pivot-based approach, we evaluate it together with two different methods for estimating context vectors: one estimates them from two parallel corpora based on word association between source words (resp., target words) and pivot words, and the other estimates them from two parallel corpora based on word alignment tools for statistical machine translation. Empirical results on two language pairs (Korean-Spanish and Korean-French) show that the pivot-based approach is very promising for resource-poor languages, confirming its validity and usability. Furthermore, the method also performs well for words with low frequency. PMID:25983745
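A minimal sketch of ranking translation candidates by the similarity of context vectors in the pivot-language space; the pivot vocabulary and co-occurrence counts are invented for illustration.

```python
import numpy as np

# Toy pivot-language (English) context vectors: co-occurrence counts between
# a source/target word and English pivot words. All values are invented.
pivots = ["water", "drink", "river", "cold"]
src_vec = np.array([8.0, 5.0, 1.0, 3.0])   # e.g. a Korean word's pivot profile
tgt_vecs = {
    "agua":  np.array([9.0, 4.0, 2.0, 2.0]),
    "fuego": np.array([0.0, 0.0, 1.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank target candidates by context-vector similarity in the pivot space
for w, v in sorted(tgt_vecs.items(), key=lambda kv: -cosine(src_vec, kv[1])):
    print(f"{w}: {cosine(src_vec, v):.3f}")   # "agua" ranks first
```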
NASA Astrophysics Data System (ADS)
Scanlon, B. R.; Zhang, Z.; Reitz, M.; Rodell, M.; Sanford, W. E.; Save, H.; Wiese, D. N.; Croteau, M. J.; McGuire, V. L.; Pool, D. R.; Faunt, C. C.; Zell, W.
2017-12-01
Groundwater storage depletion is a critical issue for many of the major aquifers in the U.S., particularly during intense droughts. GRACE (Gravity Recovery and Climate Experiment) satellite-based estimates of groundwater storage changes have attracted considerable media attention in the U.S. and globally, and interest in GRACE products continues to increase. For this reason, a Powell Research Group was formed to: (1) Assess variations in groundwater storage using a variety of GRACE products and other storage components (snow, surface water, and soil moisture) for major aquifers in the U.S., (2) Quantify long-term trends in groundwater storage from ground-based monitoring and regional and national modeling, and (3) Use ground-based monitoring and modeling to interpret GRACE water storage changes within the context of extreme droughts and over-exploitation of groundwater. The group now has preliminary estimates of long-term trends and seasonal fluctuations in water storage using different GRACE solutions, including CSR, JPL and GSFC. Approaches to quantifying uncertainties in GRACE data are included. This work also shows how GRACE sees groundwater depletion in unconfined versus confined aquifers, and plans for future work include linking GRACE data to regional groundwater models. The wealth of ground-based observations for the U.S. provides a unique opportunity to assess the reliability of GRACE-based estimates of groundwater storage changes.
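The disaggregation underlying objective (1) can be sketched as simple storage bookkeeping: subtract the non-groundwater components from the total water storage anomaly. The anomaly values below are illustrative placeholders, not GRACE products.

```python
# Disaggregating a GRACE total water storage (TWS) anomaly into its
# groundwater component; numbers are illustrative, in cm of equivalent
# water height averaged over an aquifer.
tws_anomaly   = -4.2  # GRACE mascon solution (e.g., CSR/JPL/GSFC)
snow          = -0.3  # snow water equivalent anomaly (model/obs)
surface_water = -0.5  # reservoir + river storage anomaly
soil_moisture = -1.8  # land-surface-model soil moisture anomaly

gws_anomaly = tws_anomaly - (snow + surface_water + soil_moisture)
print(f"groundwater storage anomaly: {gws_anomaly:.1f} cm")  # -> -1.6 cm
```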
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Lin, Qing; Han, Youngjoon
2014-01-01
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. PMID:25302812
Carroll, Raymond J; Delaigle, Aurore; Hall, Peter
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction), then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However, in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
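A minimal sketch of comparing sampling distributions through FIM-based asymptotic standard errors, using the Verhulst-Pearl logistic model from the paper's examples; the parameter values, noise level, finite-difference sensitivities, and the two candidate designs are assumptions for illustration.

```python
import numpy as np

def logistic(t, K, r, x0=1.0):
    """Closed-form Verhulst-Pearl logistic solution."""
    return K * x0 * np.exp(r*t) / (K + x0*(np.exp(r*t) - 1.0))

def standard_errors(times, K, r, sigma=0.5, h=1e-6):
    """Asymptotic SEs from the Fisher Information Matrix, with the
    sensitivity matrix built by central finite differences."""
    S = np.column_stack([
        (logistic(times, K + h, r) - logistic(times, K - h, r)) / (2*h),
        (logistic(times, K, r + h) - logistic(times, K, r - h)) / (2*h),
    ])
    fim = S.T @ S / sigma**2                  # FIM for i.i.d. Gaussian noise
    return np.sqrt(np.diag(np.linalg.inv(fim)))

K, r = 17.5, 0.7
designs = {
    "uniform": np.linspace(0.1, 15, 15),      # naive uniform sampling
    "early":   np.linspace(0.1, 8, 15),       # mass near the growth phase
}
for name, ts in designs.items():
    se = standard_errors(ts, K, r)
    print(f"{name:8s} SE(K)={se[0]:.3f}  SE(r)={se[1]:.3f}")
```

Comparing designs by the resulting standard errors is the spirit of SE-optimality; D- and E-optimal criteria would instead score the determinant or smallest eigenvalue of the same FIM.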
Development of an OSSE Framework for a Global Atmospheric Data Assimilation System
NASA Technical Reports Server (NTRS)
Gelaro, Ronald; Errico, Ronald M.; Prive, N.
2012-01-01
Observing system simulation experiments (OSSEs) are powerful tools for estimating the usefulness of various configurations of envisioned observing systems and data assimilation techniques. Their utility stems from their being conducted in an entirely simulated context, utilizing simulated observations having simulated errors and drawn from a simulation of the earth's environment. Observations are generated by applying physically based algorithms to the simulated state, such as performed during data assimilation or using other appropriate algorithms. Adding realistic instrument plus representativeness errors, including their biases and correlations, can be critical for obtaining realistic assessments of the impact of a proposed observing system or analysis technique. If estimates of the expected accuracy of proposed observations are realistic, then the OSSE can be also used to learn how best to utilize the new information, accelerating its transition to operations once the real data are available. As with any inferences from simulations, however, it is first imperative that some baseline OSSEs are performed and well validated against corresponding results obtained with a real observing system. This talk provides an overview of, and highlights critical issues related to, the development of an OSSE framework for the tropospheric weather prediction component of the NASA GEOS-5 global atmospheric data assimilation system. The framework includes all existing observations having significant impact on short-term forecast skill. Its validity has been carefully assessed using a range of metrics that can be evaluated in both the OSSE and real contexts, including adjoint-based estimates of observation impact. A preliminary application to the Aeolus Doppler wind lidar mission, scheduled for launch by the European Space Agency in 2014, has also been investigated.
Esteban, Segundo; Girón-Sierra, Jose M.; Polo, Óscar R.; Angulo, Manuel
2016-01-01
Most satellites use an on-board attitude estimation system based on the available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from the sensors and from a mathematical model of the satellite motion. It would also be convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of measured vectors cause estimation problems. Accommodation of the Kalman filter gives rise to convergence difficulties. The article includes a new proposal that solves these problems without needing changes in the Kalman filter algorithm. In addition, the article includes an assessment of different errors and initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework. PMID:27809250
Sources of interference in item and associative recognition memory.
Osth, Adam F; Dennis, Simon
2015-04-01
A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). We present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to 10 recognition memory datasets that use manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired before the learning episode.
Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae
2016-01-01
Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249
Patient Experience-based Value Sets: Are They Stable?
Pickard, A Simon; Hung, Yu-Ting; Lin, Fang-Ju; Lee, Todd A
2017-11-01
Although societal preference weights are desirable to inform resource-allocation decision-making, patient experience-based value sets can be useful for clinical decision-making, but context may matter. To estimate EQ-5D value sets using visual analog scale (VAS) ratings for patients undergoing knee replacement surgery and compare the estimates before and after surgery. We used the Patient Reported Outcome Measures data collected by the UK National Health Service on patients undergoing knee replacement from 2009 to 2012. Generalized least squares regression models were used to derive value sets based on the EQ-5D-3L using a development sample before and after surgery, and model performance was examined using a validation sample. A total of 90,450 preoperative and postoperative valuations were included. For preoperative valuations, the largest decrement in VAS values was associated with the dimension of anxiety/depression, followed by self-care, mobility, usual activities, and pain/discomfort. However, pain/discomfort had a greater impact on the VAS value decrement in postoperative valuations. Compared with preoperative health problems, postsurgical health problems were associated with larger value decrements, with significant differences in several levels and dimensions, including level 2 of mobility, level 2/3 of usual activities, level 3 of pain/discomfort, and level 3 of anxiety/depression. Similar results were observed across subgroups stratified by age and sex. Findings suggest patient experience-based value sets are not stable (i.e., context such as timing matters). However, the knowledge that lower values are assigned to health states postsurgery compared with presurgery may be useful for the patient-doctor decision-making process.
EEG-based workload estimation across affective contexts
Mühl, Christian; Jeunet, Camille; Lotte, Fabien
2014-01-01
Workload estimation from electroencephalographic signals (EEG) offers a highly sensitive tool to adapt the human–computer interaction to the user state. To create systems that work reliably in the complexity of the real world, robustness against contextual changes (e.g., mood) has to be achieved. To study the resilience of state-of-the-art EEG-based workload classification against stress, we devise a novel experimental protocol in which we manipulated the affective context (stressful/non-stressful) while the participant solved a task with two workload levels. We recorded self-ratings, behavior, and physiology from 24 participants to validate the protocol. We test the capability of different, subject-specific workload classifiers using either frequency-domain, time-domain, or both feature varieties to generalize across contexts. We show that the classifiers are able to transfer between affective contexts, though performance suffers independent of the feature domain used. However, cross-context training is a simple and powerful remedy allowing the extraction of features in all studied feature varieties that are more resilient to task-unrelated variations in signal characteristics. Especially for frequency-domain features, across-context training leads to a performance comparable to within-context training and testing. We discuss the significance of the result for neurophysiology-based workload detection in particular and for the construction of reliable passive brain–computer interfaces in general. PMID:24971046
A Framework for Context Sensitive Risk-Based Access Control in Medical Information Systems
Choi, Donghee; Kim, Dohoon; Park, Seog
2015-01-01
Since the access control environment has changed and the threat of insider information leakage has come to the fore, studies on risk-based access control models that decide access permissions dynamically have been conducted vigorously. Medical information systems should protect sensitive data such as medical information from insider threat and enable dynamic access control depending on the context such as life-threatening emergencies. In this paper, we suggest an approach and framework for context sensitive risk-based access control suitable for medical information systems. This approach categorizes context information, estimating and applying risk through context- and treatment-based permission profiling and specifications by expanding the eXtensible Access Control Markup Language (XACML) to apply risk. The proposed framework supports quick responses to medical situations and prevents unnecessary insider data access through dynamic access authorization decisions in accordance with the severity of the context and treatment. PMID:26075013
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
NASA Astrophysics Data System (ADS)
Mantas, V. M.; Liu, Z.; Pereira, A. J. S. C.
2015-04-01
The full potential of Satellite Rainfall Estimates (SRE) can only be realized if timely access to the datasets is possible. Existing data distribution web portals are often focused on global products and offer limited customization options, especially for the purpose of routine regional monitoring. Furthermore, most online systems are designed to meet the needs of desktop users, limiting compatibility with mobile devices. In response to the growing demand for SRE, and to address the current limitations of available web portals, a project was devised to create a set of freely available applications and services, available at a common portal, that can: (1) simplify cross-platform access to Tropical Rainfall Measuring Mission Online Visualization and Analysis System (TOVAS) data (including from Android mobile devices), (2) provide customized and continuous monitoring of SRE in response to user demands and (3) combine data from different online data distribution services, including rainfall estimates, river gauge measurements and imagery from Earth Observation missions, at a single portal, known as the Tropical Rainfall Measuring Mission (TRMM) Explorer. The TRMM Explorer project suite includes a Python-based web service and Android applications capable of providing SRE and ancillary data in different intuitive formats, with a focus on regional and continuous analysis. The outputs include dynamic plots, tables and data files that can also be used to feed downstream applications and services. A case study in Southern Angola is used to describe the potential of the TRMM Explorer for SRE distribution and analysis in the context of ungauged watersheds. The development of a collection of data distribution instances helped to validate the concept and identify the limitations of the program in a real context, based on user feedback. The TRMM Explorer can successfully supplement existing web portals distributing SRE and provide a cost-efficient resource to small and medium-sized organizations with specific SRE monitoring needs, namely in developing and transition countries.
Mishra, Sharmistha; Boily, Marie-Claude; Schwartz, Sheree; Beyrer, Chris; Blanchard, James F; Moses, Stephen; Castor, Delivette; Phaswana-Mafuya, Nancy; Vickerman, Peter; Drame, Fatou; Alary, Michel; Baral, Stefan D
2016-08-01
In the context of generalized human immunodeficiency virus (HIV) epidemics, there has been limited recent investment in HIV surveillance and prevention programming for key populations including female sex workers. Often implicit in the decision to limit investment in these epidemic settings are assumptions including that commercial sex is not significant to the sustained transmission of HIV, and HIV interventions designed to reach "all segments of society" will reach female sex workers and clients. Emerging empiric and model-based evidence is challenging these assumptions. This article highlights the frameworks and estimates used to characterize the role of sex work in HIV epidemics as well as the relevant empiric data landscape on sex work in generalized HIV epidemics and their strengths and limitations. Traditional approaches to estimate the contribution of sex work to HIV epidemics do not capture the potential for upstream and downstream sexual and vertical HIV transmission. Emerging approaches such as the transmission population attributable fraction from dynamic mathematical models can address this gap. To move forward, the HIV scientific community must begin by replacing assumptions about the epidemiology of generalized HIV epidemics with data and more appropriate methods of estimating the contribution of unprotected sex in the context of sex work.
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
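The core arithmetic of such a simplified model is a cost-per-QALY ratio; the inputs below are placeholder values chosen only to land inside the range quoted above, not the paper's parameters.

```python
# Back-of-the-envelope incremental cost-effectiveness ratio (ICER);
# all inputs are assumed values for illustration.
vaccination_cost = 360.0   # cost per vaccinated 12-year-old (USD)
offset_costs     = 150.0   # averted screening/treatment costs (USD)
qalys_gained     = 0.02    # discounted QALYs gained per vaccinee

icer = (vaccination_cost - offset_costs) / qalys_gained
print(f"cost per QALY gained: ${icer:,.0f}")   # -> $10,500
```

Assumptions such as herd immunity or additional cancers averted enter by raising `qalys_gained` or `offset_costs`, which is why the reported range spans roughly $3,906 to $14,723.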
An analysis of context-based similarity tasks in textbooks from Brazil and the United States
NASA Astrophysics Data System (ADS)
Barcelos Amaral, Rúbia; Hollebrands, Karen
2017-11-01
Three textbooks from Brazil and three textbooks from the United States were analysed with a focus on similarity and context-based tasks. Students' opportunities to learn similarity were examined by considering whether students were provided context-based tasks of high cognitive demand and whether those tasks included missing or superfluous information. Although books in the United States included more tasks, the proportion of tasks focused on similarity were about the same. Context-based similarity tasks accounted for 9%-29% of the similarity tasks, and many of these contextual tasks were of low cognitive demand. In addition, the types of contexts that were included in the textbooks were critiqued and examples provided.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
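A minimal sketch of the plug-in method on i.i.d. binary data, which also illustrates the undersampling bias at longer word lengths noted in conclusion (iv); the word lengths and sample size are arbitrary choices.

```python
import numpy as np
from collections import Counter

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimate of the entropy rate (bits per symbol): empirical
    block entropy of length-word_len words, divided by word_len."""
    words = [tuple(bits[i:i+word_len]) for i in range(len(bits) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() / word_len

rng = np.random.default_rng(2)
x = (rng.random(100_000) < 0.3).astype(int)    # i.i.d. Bernoulli(0.3)
true_h = -(0.3*np.log2(0.3) + 0.7*np.log2(0.7))
for L in (2, 6, 12):                           # longer words -> downward bias
    print(f"word length {L:2d}: {plugin_entropy_rate(x, L):.4f} "
          f"(true {true_h:.4f})")
```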
The Azimuth Structure of Nuclear Collisions — I
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.; Kettler, David T.
We describe azimuth structure commonly associated with elliptic and directed flow in the context of 2D angular autocorrelations for the purpose of precise separation of so-called nonflow (mainly minijets) from flow. We extend the Fourier-transform description of azimuth structure to include power spectra and autocorrelations related by the Wiener-Khintchine theorem. We analyze several examples of conventional flow analysis in that context and question the relevance of reaction plane estimation to flow analysis. We introduce the 2D angular autocorrelation with examples from data analysis and describe a simulation exercise which demonstrates precise separation of flow and nonflow using the 2D autocorrelation method. We show that an alternative correlation measure based on Pearson's normalized covariance provides a more intuitive measure of azimuth structure.
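A minimal sketch of the Wiener-Khintchine route from an azimuth density to its autocorrelation: inverse-transform the power spectrum. The synthetic cos(2φ) ("elliptic flow like") signal is an illustrative assumption.

```python
import numpy as np

def azimuth_autocorrelation(rho):
    """Autocorrelation via the Wiener-Khintchine theorem:
    inverse FFT of the power spectrum |FFT(rho)|^2."""
    f = np.fft.fft(rho - rho.mean())            # drop the constant (monopole) term
    return np.real(np.fft.ifft(np.abs(f)**2)) / len(rho)

phi = np.linspace(0, 2*np.pi, 360, endpoint=False)
rng = np.random.default_rng(3)
rho = 1.0 + 0.2*np.cos(2*phi) + 0.05*rng.normal(size=360)  # v2-like modulation
ac = azimuth_autocorrelation(rho)
# A pure cos(2*phi) component of amplitude 0.2 appears in the
# autocorrelation as ~0.02*cos(2*dphi), i.e. a period-180-bin ridge:
print(np.round(ac[:4], 4))
```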
How Metastrategic Considerations Influence the Selection of Frequency Estimation Strategies
ERIC Educational Resources Information Center
Brown, Norman R.
2008-01-01
Prior research indicates that enumeration-based frequency estimation strategies become increasingly common as memory for relevant event instances improves and that moderate levels of context memory are associated with moderate rates of enumeration [Brown, N. R. (1995). Estimation strategies and the judgment of event frequency. Journal of…
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time using a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. This study estimates the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate based on a rotating panel survey. The goodness of fit of both models was almost identical. Both models produced similar estimates and performed better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this capability was reduced over time.
Power Recycled Weak Value Based Metrology
2015-04-29
By adding a power recycling cavity, one is able to more efficiently use the input laser power by increasing the total power inside the interferometer, in the context of these weak-value-based measurements [Phys. Rev. Lett. 114, 170801 (2015)].
Document localization algorithms based on feature points and straight lines
NASA Astrophysics Data System (ADS)
Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor
2018-04-01
An important part of a planar rectangular object analysis system is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes subsystems such as the selection and recognition of text fields, the use of context, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.
A tale of two "forests": random forest machine learning AIDS tropical forest carbon mapping.
Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana
2014-01-01
Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including, in the latter case, x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best performing run of Random Forest, and explained 59% of the variation in LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% for Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
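A minimal sketch of the with/without-spatial-context comparison using scikit-learn's RandomForestRegressor on synthetic data with a spatially structured carbon field; the data-generating model, covariates, and holdout split are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 4000
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)   # map coordinates
covars = rng.normal(size=(n, 3))                        # e.g. remotely sensed layers
# Carbon field with smooth spatial structure plus covariate effects and noise
carbon = (80 + 10*np.sin(x/15) + 8*np.cos(y/20)
          + covars @ np.array([5.0, 3.0, 1.0]) + 5*rng.normal(size=n))

half = n // 2                                           # held-out half, as in the study design
for label, X in [("without xy", covars),
                 ("with xy   ", np.column_stack([covars, x, y]))]:
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[:half], carbon[:half])
    print(f"{label}: validation R^2 = {rf.score(X[half:], carbon[half:]):.2f}")
```

Because the spatial signal is invisible to the covariates alone, only the run that includes x and y can exploit it, mirroring the study's gain from spatial contextual modeling.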
Assessment of Physical Activity and Energy Expenditure: An Overview of Objective Measures
Hills, Andrew P.; Mokhtar, Najat; Byrne, Nuala M.
2014-01-01
The ability to assess energy expenditure (EE) and estimate physical activity (PA) in free-living individuals is extremely important in the global context of non-communicable diseases including malnutrition, overnutrition (obesity), and diabetes. It is also important to appreciate that PA and EE are different constructs: PA is defined as any bodily movement that results in EE, and accordingly, energy is expended as a result of PA. However, total energy expenditure, best assessed using the criterion doubly labeled water (DLW) technique, includes components in addition to physical activity energy expenditure, namely resting energy expenditure and the thermic effect of food. Given the large number of assessment techniques currently used to estimate PA in humans, it is imperative to understand the relative merits of each. The goal of this review is to provide information on the utility and limitations of a range of objective measures of PA and their relationship with EE. The measures discussed include those based on EE or oxygen uptake, including DLW, activity energy expenditure, physical activity level, and metabolic equivalent; those based on heart rate monitoring and motion sensors; and, because of their widespread use, selected subjective measures. PMID:25988109
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Vision-Aided Context-Aware Framework for Personal Navigation Services
NASA Astrophysics Data System (ADS)
Saeedi, S.; Moussa, A.; El-Sheimy, N.
2012-07-01
The ubiquity of mobile devices (such as smartphones and tablet-PCs) has encouraged the use of location-based services (LBS) that are relevant to the current location and context of a mobile user. The main challenge of LBS is to find a pervasive and accurate personal navigation system (PNS) for the different situations of a mobile user. In this paper, we propose a method of personal navigation for pedestrians that allows a user to move freely in outdoor environments. The system aims at detecting context information that is useful for improving personal navigation. The context information for a PNS consists of user activity modes (e.g., walking, stationary, driving) and the mobile device's orientation and placement with respect to the user. After detecting the context information, a low-cost integrated positioning algorithm is employed to estimate the pedestrian navigation parameters. The method integrates estimates of the user's relative motion (changes in velocity and heading angle), obtained by video image matching, with absolute position information provided by GPS. A Kalman filter (KF) is used to improve the navigation solution when the user is walking and the phone is in his/her hand. Experimental results demonstrate the capabilities of this method for outdoor personal navigation systems.
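A minimal sketch of the integration idea: dead-reckon position from step vectors and correct with occasional GPS fixes through a linear Kalman filter on the 2D position. The noise covariances, step-length bias, and fix rate are assumptions, and the attitude/image-matching machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
I = np.eye(2)
P = 1e2*I                                   # initial position uncertainty
Q, R = 0.05*I, 4.0*I                        # PDR drift vs. GPS noise (assumed)
pos_est, pos_true = np.zeros(2), np.zeros(2)

for k in range(200):
    heading = 0.01*k                        # slowly turning walk
    step = 0.7*np.array([np.cos(heading), np.sin(heading)])
    pos_true = pos_true + step
    # Predict: dead-reckon with a biased step estimate (PDR drift)
    pos_est = pos_est + 1.05*step
    P = P + Q
    if k % 10 == 0:                         # GPS fix once per 10 steps
        z = pos_true + rng.normal(0, 2.0, 2)
        K = P @ np.linalg.inv(P + R)        # Kalman gain (H = I)
        pos_est = pos_est + K @ (z - pos_est)
        P = (I - K) @ P
print("final error (m):", np.round(np.linalg.norm(pos_est - pos_true), 2))
```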
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
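A minimal sketch of a discrete chirp-Fourier transform and peak search for a single noisy chirp; the signal parameters are illustrative, and a prime transform length is used so the chirp peak is unambiguous.

```python
import numpy as np

def dcft(x):
    """Discrete chirp-Fourier transform
    X[k, l] = (1/sqrt(N)) * sum_n x[n] * exp(-2j*pi*(k*n + l*n**2)/N),
    computed as one FFT per chirp-rate index l."""
    N = len(x)
    n = np.arange(N)
    X = np.empty((N, N), dtype=complex)
    for l in range(N):
        X[:, l] = np.fft.fft(x * np.exp(-2j*np.pi*l*n**2 / N)) / np.sqrt(N)
    return X

N = 67                                      # prime length: unambiguous peak
n = np.arange(N)
k0, l0 = 9, 3                               # true frequency / chirp-rate indices
x = np.exp(2j*np.pi*(k0*n + l0*n**2) / N)   # unit-amplitude chirp
x = x + 0.2*np.random.default_rng(6).normal(size=N)
k, l = np.unravel_index(np.abs(dcft(x)).argmax(), (N, N))
# The peak indices encode the phase derivatives:
# d(phase)/dn ~ 2*pi*(k + 2*l*n)/N, so l is the (second-derivative) chirp rate.
print("estimated (k, l) =", (k, l))         # -> (9, 3)
```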
Short-term memory affects color perception in context.
Olkkonen, Maria; Allred, Sarah R
2014-01-01
Color-based object selection - for instance, looking for ripe tomatoes in the market - places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly.
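The Bayesian-estimation account can be sketched as precision-weighted Gaussian fusion: as memory precision decays over the retention interval, the weight shifts toward prior information, drawing the retained percept toward the prior. All numbers below are illustrative assumptions.

```python
import numpy as np

def posterior_hue(measured, prior_mean, sigma_mem, sigma_prior):
    """Precision-weighted (Gaussian) fusion of a remembered color
    estimate with prior knowledge of typical surface colors."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_mem**2)   # weight on memory
    return w*measured + (1 - w)*prior_mean

measured, prior = 30.0, 50.0   # hue of the reference vs. the prior expectation
for label, sigma_mem in [("immediate", 2.0), ("after delay", 8.0)]:
    est = posterior_hue(measured, prior, sigma_mem, sigma_prior=10.0)
    print(f"{label:12s}: retained percept = {est:.1f}")   # delay -> closer to 50
```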
Sotardi, Valerie A
2018-05-01
Educational measures of anxiety focus heavily on students' experiences with tests yet overlook other assessment contexts. In this research, two brief multiscale questionnaires were developed and validated to measure trait evaluation anxiety (MTEA-12) and state evaluation anxiety (MSEA-12) for use in various assessment contexts in non-clinical, educational settings. The research included a cross-sectional analysis of self-report data using authentic assessment settings in which evaluation anxiety was measured. Instruments were tested using a validation sample of 241 first-year university students in New Zealand. Scale development included component structures for state and trait scales based on existing theoretical frameworks. Analyses using confirmatory factor analysis and descriptive statistics indicate that the scales are reliable and structurally valid. Multivariate general linear modeling using subscales from the MTEA-12, MSEA-12, and student grades suggests adequate criterion-related validity. Initial evidence of predictive validity was also found: one relevant MTEA-12 factor explained between 21% and 54% of the variance in three MSEA-12 factors. Results document the MTEA-12 and MSEA-12 as reliable measures of trait and state dimensions of evaluation anxiety for test and writing contexts. Initial estimates suggest the scales have promising validity, and recommendations for further validation are outlined.
Application of the quantum spin glass theory to image restoration.
Inoue, J I
2001-04-01
Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.
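A classical (thermal) MPM estimator illustrates the baseline the paper compares against; the quantum version replaces the thermal fluctuation with a transverse field, which this sketch does not simulate. The model, coupling values, and 1-D toy signal are assumptions:

```python
import numpy as np

def mpm_restore(y, beta=0.9, h=1.0, sweeps=200, burn=50, seed=1):
    """Thermal MPM restoration of a 1-D binary signal via Gibbs sampling.
    Posterior: p(s|y) ~ exp(beta * sum_i s_i s_{i+1} + h * sum_i s_i y_i).
    The MPM estimate is the sign of the posterior mean of each spin."""
    rng = np.random.default_rng(seed)
    n = len(y)
    s = np.where(y >= 0, 1, -1)
    acc = np.zeros(n)
    for t in range(sweeps):
        for i in range(n):
            nb = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            field = beta * nb + h * y[i]
            s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
        if t >= burn:
            acc += s
    return np.where(acc >= 0, 1, -1)

# Toy run: a piecewise-constant signal observed through Gaussian noise.
rng = np.random.default_rng(0)
truth = np.repeat([1, -1, 1], 50)
y = truth + rng.normal(scale=1.0, size=truth.size)
print((mpm_restore(y) == truth).mean())   # fraction of spins restored
```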
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Swain, P. H. (Principal Investigator); Vardeman, S. B.
1981-01-01
A key input to a statistical classification algorithm, which exploits the tendency of certain ground cover classes to occur more frequently in some spatial context than in others, is a statistical characterization of the context: the context distribution. An unbiased estimator of the context distribution is discussed which, besides having the advantage of statistical unbiasedness, has the additional advantage over other estimation techniques of being amenable to an adaptive implementation in which the context distribution estimate varies according to local contextual information. Results from applying the unbiased estimator to the contextual classification of three real LANDSAT data sets are presented and contrasted with results from non-contextual classifications and from contextual classifications utilizing other context distribution estimation techniques.
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
ARE LITERACY SKILLS ASSOCIATED WITH YOUNG ADULTS’ HEALTH IN AFRICA? EVIDENCE FROM MALAWI
Smith-Greenaway, Emily
2014-01-01
This study investigates whether literacy skills are a distinct dimension of education that influences young adults' health in the southeast African context of Malawi. It uses new data from Tsogolo la Thanzi, a study of young adults in southern Malawi, to achieve three aims. The first is descriptive: to demonstrate a direct assessment for measuring literacy in a population-based survey, and show that it captures variability in skills among young adults, including those with comparable levels of educational attainment. The second aim is to identify whether literacy influences young adults' health, net of their educational attainment and other confounding factors. Multivariate analyses reveal that literacy is associated with two measures of physical health: self-rated health and prolonged sickness. Because literacy is a key determinant of health, the third aim is to provide insight into how to measure it: can commonly used indirect approaches to estimating literacy (e.g., based on educational attainment or self-reports) accurately capture its prevalence and relationship with health? In a second set of analyses, bivariate results show whether, and the extent to which, indirect measures of literacy overestimate literacy's prevalence, and multivariate models assess whether indirect estimates of literacy capture its relationship with health. The findings support future efforts to incorporate literacy assessments into population surveys to accurately estimate literacy's prevalence and health benefits, particularly in contexts like Malawi where access to high-quality schools remains limited. PMID:25164414
Networked high-speed auroral observations combined with radar measurements for multi-scale insights
NASA Astrophysics Data System (ADS)
Hirsch, M.; Semeter, J. L.
2015-12-01
Networks of ground-based instruments have been established to study the terrestrial aurora and analyze the particle precipitation characteristics that drive it. Additional funding is flowing into future ground-based auroral observation networks consisting of combinations of tossable, portable, and fixed-installation legacy equipment. Our approach to this problem, the High Speed Tomography (HiST) system, combines tightly synchronized, filtered auroral optical observations capturing temporal features of order 10 ms with supporting measurements from incoherent scatter radar (ISR). ISR provides a broader spatial context, up to order 100 km laterally on one-minute time scales, while our camera field of view (FOV) is chosen to be of order 10 km at auroral altitudes in order to capture 100-m-scale lateral auroral features. The dual-scale ISR observations and HiST fine-scale optical observations may be coupled through a physical model using linear basis functions to estimate important ionospheric quantities such as electron number density in 3-D (time, perpendicular and parallel to the geomagnetic field). Field measurements and analysis using HiST and PFISR are presented from experiments conducted at the Poker Flat Research Range in central Alaska. Other multiscale configuration candidates include supplementing networks of all-sky cameras such as THEMIS with co-located HiST-like instruments to fuse wide-FOV measurements with the fine-scale HiST precipitation-characteristic estimates. Candidate models for this coupling include GLOW and TRANSCAR. Future extensions of this work may include incorporating line-of-sight total electron content estimates from ground-based networks of GPS receivers in a sensor fusion problem.
Byrne, Abbey; Morgan, Alison; Soto, Eliana Jimenez; Dettrick, Zoe
2012-11-12
Unmet need for family planning is responsible for 7.4 million disability-adjusted life years and 30% of the maternity-related disease burden. An estimated 35% of births are unintended, and some 200 million couples state a desire to delay pregnancy or cease fertility but are not using contraception. Unmet need is higher among the poorest, the less educated, rural residents, and women under 19 years. The barriers to, and successful strategies for, satisfying all demand for modern contraceptives are heavily influenced by context. Successfully overcoming this to increase the uptake of family planning is estimated to reduce the risk of maternal death by up to 58%, as well as contribute to poverty reduction, women's empowerment, educational, social and economic participation, national development and environmental protection. To strengthen health systems for delivery of context-specific, equity-focused reproductive, maternal, newborn and child health services (RMNCH), the Investment Case study was applied in the Asia-Pacific region. Staff of local and central government and non-government organisations analysed data indicative of health service delivery through a supply-demand oriented framework to identify constraints to RMNCH scale-up. Planners developed contextualised strategies, and the projected coverage increases were modelled for estimates of marginal impact on maternal mortality and costs over a five-year period. In Indonesia, the Philippines, and Nepal, the constraints behind incomplete coverage of family planning services included: weaknesses in commodities logistics management; geographical inaccessibility; limitations in health worker skills and numbers; legislation; and religious and cultural ideologies. Planned activities included: streamlining supply systems; establishment of Community Health Teams for integrated RMNCH services; local recruitment of staff and refresher training; task-shifting; and follow-up cards. Modelling showed varying marginal impact and costs for each setting, with potential for significant reductions in the maternal mortality rate: up to 28% (25.1-30.7) over five years, costing up to a marginal USD 1.34 (1.32-1.35) per capita in the first year. Local health planners are in a prime position to devise feasible context-specific activities to overcome constraints and increase met need for family planning to accelerate progress towards MDG 5.
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters, and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
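One simple way to operationalize space-varying noise estimation (a crude stand-in for the dictionary-learning model, which the abstract does not specify) is a blockwise robust scale estimate of a high-pass residual. The block size and filter choice below are assumptions:

```python
import numpy as np

def local_noise_map(img, block=16):
    """Blockwise noise-level map via the robust MAD of a high-pass
    residual (accurate up to a small constant factor, since the
    box-filter residual is not pure noise)."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="reflect")
    box = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    resid = img - box                       # simple high-pass residual
    h, w = img.shape
    sigma = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            patch = resid[bi * block:(bi + 1) * block,
                          bj * block:(bj + 1) * block]
            sigma[bi, bj] = 1.4826 * np.median(np.abs(patch))
    return sigma

rng = np.random.default_rng(0)
img = rng.normal(0, 1, (64, 64)) * np.linspace(1, 4, 64)  # noise grows left to right
print(local_noise_map(img).round(1))
```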
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
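The essence of a context modeler, per-context adaptive probability estimates handed to the entropy coder, can be sketched with Laplace-smoothed bit counts. This is a generic illustration, not ICER-3D's actual modeler:

```python
import numpy as np

class BinaryContextModel:
    """Adaptive per-context probability estimates for an entropy coder.
    Each context keeps Laplace-smoothed counts of the bits seen so far;
    the coder queries p before updating with the actual bit."""
    def __init__(self, n_contexts):
        self.ones = np.ones(n_contexts)     # Laplace smoothing
        self.total = 2 * np.ones(n_contexts)

    def prob_one(self, ctx):
        return self.ones[ctx] / self.total[ctx]

    def update(self, ctx, bit):
        self.ones[ctx] += bit
        self.total[ctx] += 1

# Usage: the previous bit serves as a (very simple) causal context.
model = BinaryContextModel(n_contexts=2)
prev = 0
for b in [1, 1, 0, 1, 1, 0, 1]:
    p = model.prob_one(prev)   # estimate fed to the arithmetic coder
    model.update(prev, b)
    prev = b
```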
NASA Astrophysics Data System (ADS)
Clark, Katherine; van Tongeren, Martie; Christensen, Frans M.; Brouwer, Derk; Nowack, Bernd; Gottschalk, Fadri; Micheletti, Christian; Schmid, Kaspar; Gerritsen, Rianda; Aitken, Rob; Vaquero, Celina; Gkanis, Vasileios; Housiadas, Christos; de Ipiña, Jesús María López; Riediker, Michael
2012-09-01
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Iothalamate versus estimated GFR in a Hispanic-dominant pediatric renal transplant population.
Papez, Karen E; Barletta, Gina-Marie; Hsieh, Stephanie; Joseph, Mark; Morgenstern, Bruce Z
2013-12-01
Accurate knowledge of glomerular filtration rate (GFR) is essential to the practice of nephrology. Routine surveillance of GFR is most commonly executed using estimated GFR (eGFR) calculations, most often from serum creatinine measurements. However, cystatin C-based equations have demonstrated earlier sensitivity to decline in renal function. The literature regarding eGFR from cystatin C has few references that include transplant recipients. Additionally, for most of the published eGFR equations, patients of Hispanic ethnicity have not been enrolled in sufficient numbers. The applicability of several eGFR equations to the pediatric kidney transplant population at our center was compared in the context of determining whether Hispanic ethnicity was associated with equation performance. Updated Schwartz, CKiD, and Zappitelli eGFR estimation equations demonstrated the highest correlations. The authors recommend further prospective investigations to validate and identify factors contributing to these findings.
An introduction to Bayesian statistics in health psychology.
Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske
2017-09-01
The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
Sharing simulation-based training courses between institutions: opportunities and challenges.
Laack, Torrey A; Lones, Ellen A; Schumacher, Donna R; Todd, Frances M; Cook, David A
2017-01-01
Sharing simulation-based training (SBT) courses between institutions could reduce time to develop new content but also presents challenges. We evaluate the process of sharing SBT courses across institutions in a mixed method study estimating the time required and identifying barriers and potential solutions. Two US academic medical institutions explored instructor experiences with the process of sharing four courses (two at each site) using personal interviews and a written survey and estimated the time needed to develop new content vs implement existing SBT courses. The project team spent approximately 618 h creating a collaboration infrastructure to support course sharing. Sharing two SBT courses was estimated to save 391 h compared with developing two new courses. In the qualitative analysis, participants noted the primary benefit of course sharing was time savings. Barriers included difficulty finding information and understanding overall course flow. Suggestions for improvement included establishing a standardized template, clearly identifying the target audience, providing a course overview, communicating with someone familiar with the original SBT course, employing an intuitive file-sharing platform, and considering local culture, context, and needs. Sharing SBT courses between institutions is feasible but not without challenges. An initial investment in a sharing infrastructure may facilitate downstream time savings compared with developing content de novo.
ERIC Educational Resources Information Center
Matteucci, Maria Cristina; Tomasetto, Carlo; Selleri, Patrizia; Carugati, Felice
2008-01-01
Achievement evaluation in school contexts may be considered as a kind of social judgment, which is affected by social and moral determinants since it is not merely an estimation of pupils' accomplishment (Dompnier, Pansu, & Bressoux, 2006; Weiner, 2003). Teachers' judgments have been investigated starting from the analysis of two theoretical…
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business-process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually an expensive and time-consuming task that requires substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g., throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides organizations with a low-cost and rapid assessment before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with proven quantitative simulation and estimation technology.
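A discrete-event simulation of a production line is the kind of engine such a tool wraps; the tiny flow-shop sketch below (station names and service times invented) estimates makespan and per-job turnaround, two of the metrics mentioned above. It is a generic illustration, not the paper's Petri-net engine:

```python
def simulate_workflow(arrivals, stations):
    """Tiny flow-shop sketch: jobs visit single-server stations in order;
    returns the makespan and per-job turnaround times (minutes)."""
    free_at = {s: 0.0 for s in stations}
    turnaround = []
    for arrival in arrivals:
        t = arrival
        for station, service in stations.items():
            start = max(t, free_at[station])   # wait for job and server
            free_at[station] = start + service
            t = start + service
        turnaround.append(t - arrival)
    return max(free_at.values()), turnaround

stations = {"prepress": 5.0, "press": 12.0, "finishing": 7.0}
makespan, tat = simulate_workflow(arrivals=[0, 3, 6, 9], stations=stations)
print(makespan, tat)   # feeds throughput and turnaround-time estimates
```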
Honesty-humility in contemporary students: manipulations of self-image by inflated IQ estimations.
Kajonius, P J
2014-08-01
The HEXACO model offers a complement to the Big Five model, including a sixth factor, Honesty-Humility, and its four facets (Sincerity, Fairness, Greed-avoidance, and Modesty). The four facets of Honesty-Humility and three indicators of intelligence (one performance-based cognitive ability test, one self-estimated academic potential, and one self-report of previous IQ test results) were assessed in students entering higher education (N = 187). A significant negative correlation was observed between Honesty-Humility and self-reported intelligence (r = -.37), most evident in the Modesty facet. These results may be interpreted as tendencies of exaggeration, using a theoretical frame of psychological image-management, concluding that the Honesty-Humility trait captures students' self-ambitions, particularly within the context of an individualistic, competitive culture such as Sweden.
NASA Astrophysics Data System (ADS)
Nilsen, K.; van Soesbergen, A.; Matthews, Z.
2016-12-01
Socioeconomic development depends on local environments. However, the scientific evidence quantifying the impact of environmental factors on health, nutrition and poverty at subnational levels is limited. This is because socioeconomic indicators are derived from sample surveys representative only at aggregate levels, whereas environmental variables are mostly available in high-resolution grids. Cambodia was selected because of its commitment to development in the context of a rapidly deteriorating environment. Although the country has made considerable progress since 2005, access to health services is limited, a quarter of the population is still poor, and 40% of rural children are malnourished. Cambodia is also facing considerable environmental challenges, including high deforestation rates, land degradation and natural hazards. Addressing existing gaps in the knowledge of environmental impacts on health and livelihoods, this study applies small area estimation (SAE) to quantify health, nutritional and poverty outcomes in the context of local environments. SAE produces reliable subnational estimates of socioeconomic outcomes available only from sample surveys by combining them with information from auxiliary sources (census). A model is used to explain common trends across areas, and a random-effect structure is applied to explain the observed extra heterogeneity. SAE models predicting health, nutrition and poverty outcomes excluding and including contextual environmental variables on natural-hazard vulnerability, forest cover, climate, and agricultural production are compared. Results are mapped at regional and district levels to spatially assess the impacts of environmental variation on the outcomes. Inter- and intra-regional inequalities are also estimated to examine the efficacy of health/socioeconomic policy targeting based on geographic location. Preliminary results suggest that localised environmental factors have considerable impacts on the indicators estimated and should therefore not be ignored. While there are large regional differences, pockets of malnutrition, poverty and inequitable health outcomes within regions are identified. The inequality decomposition shows under- and over-coverage of geographical targeting when environmental factors are taken into account.
Schempf, Ashley H; Kaufman, Jay S
2012-10-01
A common epidemiologic objective is to evaluate the contribution of residential context to individual-level disparities by race or socioeconomic position. We reviewed analytic strategies to account for the total (observed and unobserved factors) contribution of environmental context to health inequalities, including conventional fixed effects (FE) and hybrid FE implemented within a random effects (RE) or a marginal model. To illustrate results and limitations of the various analytic approaches of accounting for the total contextual component of health disparities, we used data on births nested within neighborhoods as an applied example of evaluating neighborhood confounding of racial disparities in gestational age at birth, including both a continuous and a binary outcome. Ordinary and RE models provided disparity estimates that can be substantially biased in the presence of neighborhood confounding. Both FE and hybrid FE models can account for cluster level confounding and provide disparity estimates unconfounded by neighborhood, with the latter having greater flexibility in allowing estimation of neighborhood-level effects and intercept/slope variability when implemented in a RE specification. Given the range of models that can be implemented in a hybrid approach and the frequent goal of accounting for contextual confounding, this approach should be used more often. Published by Elsevier Inc.
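A hybrid FE model can be fit as a random-effects model augmented with cluster means, so the within-cluster coefficient is purged of neighborhood confounding. A minimal sketch with simulated data (variable names and effect sizes invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: births nested in neighborhoods; exposure x is confounded
# by an unobserved neighborhood effect u.
rng = np.random.default_rng(0)
n_hood, per = 100, 30
hood = np.repeat(np.arange(n_hood), per)
u = rng.normal(size=n_hood)[hood]                    # neighborhood confounder
x = 0.8 * u + rng.normal(size=hood.size)             # e.g., a disparity marker
y = 1.0 * x + 2.0 * u + rng.normal(size=hood.size)   # e.g., gestational age

df = pd.DataFrame({"y": y, "x": x, "hood": hood})
df["x_mean"] = df.groupby("hood")["x"].transform("mean")
df["x_within"] = df["x"] - df["x_mean"]

# Hybrid FE in a RE specification: the cluster-mean term absorbs the
# between-neighborhood contrast, leaving x_within unconfounded.
fit = smf.mixedlm("y ~ x_within + x_mean", df, groups=df["hood"]).fit()
print(fit.params["x_within"])   # close to the true disparity of 1.0
```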
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Estimation of risks to children from exposure to airborne pollutants is often complicated by the lack of reliable epidemiological data specific to this age group. As a result, risks are generally estimated from extrapolations based on data obtained in other human age groups (e.g....
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Estimation of Item Response Theory Parameters in the Presence of Missing Data
ERIC Educational Resources Information Center
Finch, Holmes
2008-01-01
Missing data are a common problem in a variety of measurement settings, including responses to items on both cognitive and affective assessments. Researchers have shown that such missing data may create problems in the estimation of item difficulty parameters in the Item Response Theory (IRT) context, particularly if they are ignored. At the same…
Takač, Boris; Català, Andreu; Rodríguez Martín, Daniel; van der Aa, Nico; Chen, Wei; Rauterberg, Matthias
2013-07-15
Freezing of gait (FoG) is one of the most disturbing and least understood symptoms in Parkinson disease (PD). Although the majority of existing assistive systems assume accurate detections of FoG episodes, the detection itself is still an open problem. The specificity of FoG is its dependency on the context of a patient, such as the current location or activity. Knowing the patient's context might improve FoG detection. One of the main technical challenges that needs to be solved in order to start using contextual information for FoG detection is accurate estimation of the patient's position and orientation toward key elements of his or her indoor environment. The objectives of this paper are to (1) present the concept of the monitoring system, based on wearable and ambient sensors, which is designed to detect FoG using the spatial context of the user, (2) establish a set of requirements for the application of position and orientation tracking in FoG detection, (3) evaluate the accuracy of the position estimation for the tracking system, and (4) evaluate two different methods for human orientation estimation. We developed a prototype system to localize humans and track their orientation, as an important prerequisite for a context-based FoG monitoring system. To setup the system for experiments with real PD patients, the accuracy of the position and orientation tracking was assessed under laboratory conditions in 12 participants. To collect the data, the participants were asked to wear a smartphone, with and without known orientation around the waist, while walking over a predefined path in the marked area captured by two Kinect cameras with non-overlapping fields of view. We used the root mean square error (RMSE) as the main performance measure. The vision based position tracking algorithm achieved RMSE = 0.16 m in position estimation for upright standing people. The experimental results for the proposed human orientation estimation methods demonstrated the adaptivity and robustness to changes in the smartphone attachment position, when the fusion of both vision and inertial information was used. The system achieves satisfactory accuracy on indoor position tracking for the use in the FoG detection application with spatial context. The combination of inertial and vision information has the potential for correct patient heading estimation even when the inertial wearable sensor device is put into an a priori unknown position.
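For reference, the RMSE reported above is the root of the mean squared Euclidean error over tracked positions; a short function makes the convention explicit (array shapes assumed for illustration):

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square position error in meters.
    estimated, reference: arrays of shape (n_samples, 2) holding x, y."""
    return float(np.sqrt(np.mean(np.sum((estimated - reference) ** 2, axis=1))))
```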
Aorta modeling with the element-based zero-stress state and isogeometric discretization
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Sasaki, Takafumi
2017-02-01
Patient-specific arterial fluid-structure interaction computations, including aorta computations, require an estimation of the zero-stress state (ZSS), because the image-based arterial geometries do not come from a ZSS. We have earlier introduced a method for estimation of the element-based ZSS (EBZSS) in the context of finite element discretization of the arterial wall. The method has three main components. 1. An iterative method, which starts with a calculated initial guess, is used for computing the EBZSS such that when a given pressure load is applied, the image-based target shape is matched. 2. A method for straight-tube segments is used for computing the EBZSS so that we match the given diameter and longitudinal stretch in the target configuration and the "opening angle." 3. An element-based mapping between the artery and straight-tube is extracted from the mapping between the artery and straight-tube segments. This provides the mapping from the arterial configuration to the straight-tube configuration, and from the estimated EBZSS of the straight-tube configuration back to the arterial configuration, to be used as the initial guess for the iterative method that matches the image-based target shape. Here we present the version of the EBZSS estimation method with isogeometric wall discretization. With isogeometric discretization, we can obtain the element-based mapping directly, instead of extracting it from the mapping between the artery and straight-tube segments. That is because all we need for the element-based mapping, including the curvatures, can be obtained within an element. With NURBS basis functions, we may be able to achieve a similar level of accuracy as with linear basis functions, but with larger and far fewer elements. Higher-order NURBS basis functions allow representation of more complex shapes within an element. To show how the new EBZSS estimation method performs, we first present 2D test computations with straight-tube configurations. Then we show how the method can be used in a 3D computation where the target geometry comes from a medical image of a human aorta.
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
NASA Technical Reports Server (NTRS)
Krajewski, Witold F.; Rexroth, David T.; Kiriaki, Kiriakie
1991-01-01
Two problems related to radar rainfall estimation are described. The first part is a description of a preliminary data analysis for the purpose of statistical estimation of rainfall from multiple (radar and raingage) sensors. Raingage, radar, and joint radar-raingage estimation is described, and some results are given. Statistical parameters of rainfall spatial dependence are calculated and discussed in the context of optimal estimation. Quality control of radar data is also described. The second part describes radar scattering by ellipsoidal raindrops. An analytical solution is derived for the Rayleigh scattering regime. Single and volume scattering are presented. Comparison calculations with the known results for spheres and oblate spheroids are shown.
Bayesian effect estimation accounting for adjustment uncertainty.
Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca
2012-09-01
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome, while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC, with ω > 1, estimates the exposure effect with smaller bias than traditional BMA, and improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
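A heavily simplified, BIC-based caricature of the BAC idea: enumerate outcome models, upweight (by prior odds ω) those that include predictors flagged by the exposure model, and average the exposure coefficient. The real method is fully Bayesian with MCMC over both models; the t-value screen, data, and settings below are invented for illustration:

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
u = rng.normal(size=(n, 2))                     # candidate confounders
x = 0.7 * u[:, 0] + rng.normal(size=n)          # exposure depends on u1 only
y = 0.5 * x + 0.7 * u[:, 0] + rng.normal(size=n)

# Crude stand-in for the exposure model: which u's predict the exposure.
in_exposure = np.abs(sm.OLS(x, sm.add_constant(u)).fit().tvalues[1:]) > 2

omega = 10.0   # prior odds of outcome-model inclusion given exposure-model inclusion
bics, priors, betas = [], [], []
for subset in itertools.product([0, 1], repeat=2):
    cols = np.column_stack([x] + [u[:, j] for j in range(2) if subset[j]])
    fit = sm.OLS(y, sm.add_constant(cols)).fit()
    bics.append(fit.bic)
    betas.append(fit.params[1])                 # exposure coefficient
    priors.append(np.prod([omega if (s and e) else 1.0
                           for s, e in zip(subset, in_exposure)]))

w = np.exp(-0.5 * (np.array(bics) - min(bics))) * np.array(priors)
w /= w.sum()
print(w @ np.array(betas))   # model-averaged exposure effect, near 0.5
```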
Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia
2018-06-01
The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.
A framework for quantifying net benefits of alternative prognostic models.
Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G
2012-01-30
New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
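The paper's net benefit is denominated in life years; the simpler decision-analytic form it generalizes (treat when predicted risk exceeds a threshold) is easy to state in code. The data and threshold below are invented:

```python
import numpy as np

def net_benefit(risk, event, threshold):
    """Decision-curve net benefit of treating patients whose predicted
    risk exceeds `threshold` (the standard Vickers-Elkin form; the
    paper's framework extends this to life years and event times)."""
    treat = risk >= threshold
    n = len(risk)
    tp = np.sum(treat & (event == 1)) / n
    fp = np.sum(treat & (event == 0)) / n
    return tp - fp * threshold / (1 - threshold)

# Compare a discriminating model against a weak one at a 20% threshold.
rng = np.random.default_rng(0)
event = rng.binomial(1, 0.15, size=2000)
good = np.clip(0.15 + 0.2 * (event - 0.15) + rng.normal(0, 0.05, 2000), 0.01, 0.99)
weak = np.clip(0.15 + rng.normal(0, 0.05, size=2000), 0.01, 0.99)
print(net_benefit(good, event, 0.20) - net_benefit(weak, event, 0.20))
```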
ERF1_2 -- Enhanced River Reach File 2.0
Nolan, Jacqueline V.; Brakebill, John W.; Alexander, Richard B.; Schwarz, Gregory E.
2003-01-01
The digital segmented network based on watershed boundaries, ERF1_2, includes enhancements to the U.S. Environmental Protection Agency's River Reach File 1 (RF1) (USEPA, 1996; DeWald and others, 1985) to support national and regional-scale surface water-quality modeling. Alexander and others (1999) developed ERF1, which assessed the hydrologic integrity of the digital reach traces and calculated the mean water time-of-travel in river reaches and reservoirs. ERF1_2 serves as the foundation for SPARROW (Spatially Referenced Regressions (of nutrient transport) on Watershed) modeling. Within the context of a Geographic Information System, SPARROW estimates the proportion of watersheds in the conterminous U.S. with outflow concentrations of several nutrients, including total nitrogen and total phosphorus (Smith, R.A., Schwarz, G.E., and Alexander, R.B., 1997). This version of the network expands on ERF1 (Version 1.2; Alexander et al., 1999) and includes the incremental and total drainage area derived from 1-kilometer (km) elevation data for North America. Previous estimates of the water time-of-travel were recomputed for reaches with water-quality monitoring sites that included two reaches. The mean flow and velocity estimates for these split reaches are based on previous estimation methods (Alexander et al., 1999) and are unchanged in ERF1_2. Drainage area calculations provide data used to estimate the contribution of a given nutrient to the outflow. Data estimates depend on the accuracy of node connectivity. Reaches split at water-quality or pesticide-monitoring sites indicate the source point for estimating the contribution and transport of nutrients and their loads throughout the watersheds. The ERF1_2 coverage extends the earlier drainage area founded on the 1-kilometer data for North America (Verdin, 1996; Verdin and Jenson, 1996). A 1-kilometer raster grid of ERF1_2 projected to Lambert Azimuthal Equal Area, NAD 27 Datum (Snyder, 1987), was merged with the HYDRO1K flow direction data set (Verdin and Jenson, 1996) to generate a DEM-based watershed grid, ERF1_2WS_LG. The watershed boundaries are maintained in a raster (grid cell) format as well as a vector (polygon) format for subsequent model analysis. Both the coverage, ERF1_2, and the grid, ERF1_2WS_LG, are available at: URL:http://water.usgs.gov/lookup/getspatial?erf1_2
Context retrieval and description benefits for recognition of unfamiliar faces.
Jones, Todd C; Robinson, Kealagh; Steel, Brenna C
2018-04-19
Describing unfamiliar faces during or immediately after their presentation in a study phase can produce better recognition memory performance compared with a view-only control condition. We treated descriptions as elaborative information that is part of the study context and investigated how context retrieval influences recognition memory. Following general dual-process theories, we hypothesized that recollection would be used to recall descriptions and that description recall would influence recognition decisions, including the level of recognition confidence. In four experiments description conditions produced higher hit rates and higher levels of recognition confidence than control conditions. Participants recalled descriptive content on some trials, and this context retrieval was linked to an increase in the recognition confidence level. Repeating study faces in description conditions increased recognition scores, recognition confidence level, and context retrieval. Estimates of recollection from Yonelinas' (1994) dual-process signal detection ROCs were, on average, very close to the measures of context recall. Description conditions also produced higher estimates of familiarity. Finally, we found evidence that participants engaged in description activity in some ostensibly view-only trials. An emphasis on the information participants use in making their recognition decisions can advance understanding on description effects when descriptions are part of the study trial context. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Hekler, Eric B; Buman, Matthew P; Grieco, Lauren; Rosenberger, Mary; Winter, Sandra J; Haskell, William; King, Abby C
2015-04-15
There is increasing interest in using smartphones as stand-alone physical activity monitors via their built-in accelerometers, but there is presently limited data on the validity of this approach. The purpose of this work was to determine the validity and reliability of 3 Android smartphones for measuring physical activity among midlife and older adults. A laboratory (study 1) and a free-living (study 2) protocol were conducted. In study 1, individuals engaged in prescribed activities including sedentary (eg, sitting), light (sweeping), moderate (eg, walking 3 mph on a treadmill), and vigorous (eg, jogging 5 mph on a treadmill) activity over a 2-hour period wearing both an ActiGraph and 3 Android smartphones (ie, HTC MyTouch, Google Nexus One, and Motorola Cliq). In the free-living study, individuals engaged in usual daily activities over 7 days while wearing an Android smartphone (Google Nexus One) and an ActiGraph. Study 1 included 15 participants (age: mean 55.5, SD 6.6 years; women: 56%, 8/15). Correlations between the ActiGraph and the 3 phones were strong to very strong (ρ=.77-.82). Further, after excluding bicycling and standing, cut-point derived classifications of activities yielded a high percentage of activities classified correctly according to intensity level (eg, 78%-91% by phone) that were similar to the ActiGraph's percent correctly classified (ie, 91%). Study 2 included 23 participants (age: mean 57.0, SD 6.4 years; women: 74%, 17/23). Within the free-living context, results suggested a moderate correlation (ie, ρ=.59, P<.001) between the raw ActiGraph counts/minute and the phone's raw counts/minute and a strong correlation on minutes of moderate-to-vigorous physical activity (MVPA; ie, ρ=.67, P<.001). Results from Bland-Altman plots suggested close mean absolute estimates of sedentary (mean difference=-26 min/day of sedentary behavior) and MVPA (mean difference=-1.3 min/day of MVPA) although there was large variation. Overall, results suggest that an Android smartphone can provide comparable estimates of physical activity to an ActiGraph in both a laboratory-based and free-living context for estimating sedentary and MVPA and that different Android smartphones may reliably confer similar estimates.
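The cut-point classification used for the laboratory comparison can be sketched as a simple digitize over counts/min thresholds. The specific cut-points below are commonly used ActiGraph conventions, assumed here rather than taken from the study:

```python
import numpy as np

def classify_intensity(counts_per_min, cuts=(100, 2020, 5999)):
    """Cut-point classification of accelerometer counts/min into
    sedentary / light / moderate / vigorous (assumed thresholds)."""
    labels = np.array(["sedentary", "light", "moderate", "vigorous"])
    return labels[np.digitize(counts_per_min, cuts)]

minutes = np.array([50, 800, 3000, 7000])
print(classify_intensity(minutes))                           # one label per minute
print(np.sum(np.digitize(minutes, (100, 2020, 5999)) >= 2))  # MVPA minutes
```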
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
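A toy version of the ensemble Kalman update for a scalar Manning-type coefficient, with multiplicative inflation, shows the mechanics being compared; the forward model below is an invented stand-in for ADCIRC, and all settings are assumptions:

```python
import numpy as np

def enkf_update(params, obs, obs_sd, forward, inflation=1.05, seed=0):
    """One stochastic EnKF analysis step for a scalar parameter ensemble.
    Multiplicative inflation is applied before the update, as is common
    practice to counteract ensemble collapse."""
    rng = np.random.default_rng(seed)
    params = params.mean() + inflation * (params - params.mean())
    hx = np.array([forward(p) for p in params])          # predicted obs
    gain = np.cov(params, hx)[0, 1] / (np.var(hx, ddof=1) + obs_sd**2)
    perturbed = obs + rng.normal(0.0, obs_sd, size=params.size)
    return params + gain * (perturbed - hx)

# Toy inverse problem: recover a Manning-like coefficient n = 0.03 from
# a scalar nonlinear response standing in for the ADCIRC forward model.
forward = lambda n: 2.0 / np.sqrt(n)
obs = forward(0.03) + 0.01
ens = np.random.default_rng(1).uniform(0.01, 0.09, size=50)
for i in range(4):               # iterate the update, as in iterative EnKFs
    ens = enkf_update(ens, obs, 0.02, forward, seed=i)
print(ens.mean(), ens.std())     # mean approaches 0.03
```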
Inference and Prediction of Metabolic Network Fluxes
Nikoloski, Zoran; Perez-Storey, Richard; Sweetlove, Lee J.
2015-01-01
In this Update, we cover the basic principles of the estimation and prediction of the rates of the many interconnected biochemical reactions that constitute plant metabolic networks. This includes metabolic flux analysis approaches that utilize the rates or patterns of redistribution of stable isotopes of carbon and other atoms to estimate fluxes, as well as constraints-based optimization approaches such as flux balance analysis. Some of the major insights that have been gained from analysis of fluxes in plants are discussed, including the functioning of metabolic pathways in a network context, the robustness of the metabolic phenotype, the importance of cell maintenance costs, and the mechanisms that enable energy and redox balancing at steady state. We also discuss methodologies to exploit 'omic data sets for the construction of tissue-specific metabolic network models and to constrain the range of permissible fluxes in such models. Finally, we consider the future directions and challenges faced by the field of metabolic network flux phenotyping. PMID:26392262
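Flux balance analysis, one of the constraints-based approaches mentioned, is a linear program: maximize an objective flux subject to steady state S·v = 0 and capacity bounds. A minimal sketch on an invented four-reaction network:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
# R1: -> A (uptake), R2: A -> B, R3: B -> biomass, R4: B -> (export)
S = np.array([[1.0, -1.0,  0.0,  0.0],
              [0.0,  1.0, -1.0, -1.0]])
c = np.array([0.0, 0.0, -1.0, 0.0])                  # maximize biomass flux v3
bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)   # optimal flux distribution; v3 hits the uptake limit
```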
NASA Astrophysics Data System (ADS)
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image-processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both under ideal (e.g., in spherical objects with uniform radioactivity concentration) and non-ideal (e.g., in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of combining commercially available standard anthropomorphic phantoms with irregular molds generated using 3D-printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a clinical context and showed good accuracy both in ideal and in realistic conditions.
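A minimal sketch of the described pipeline: k-means (k = 2) on voxel intensities estimates the background, and a background-relative threshold yields the MTV mask. The threshold fraction and toy image are assumptions; the real method calibrates against the NEMA IQ phantom:

```python
import numpy as np

def estimate_mtv(pet, frac=0.4, iters=20):
    """Automatic threshold-based MTV mask: a 1-D two-cluster k-means
    separates background from uptake, then voxels above
    bg + frac * (peak - bg) are counted as tumor. `frac` is an
    assumed calibration constant."""
    vals = pet.ravel().astype(float)
    lo, hi = vals.min(), vals.max()          # initialize the two centroids
    for _ in range(iters):
        assign = np.abs(vals - lo) < np.abs(vals - hi)
        lo, hi = vals[assign].mean(), vals[~assign].mean()
    background, peak = lo, vals.max()
    thr = background + frac * (peak - background)
    return pet > thr, thr

rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.1, size=(64, 64))    # background uptake
img[20:30, 20:28] += 4.0                     # synthetic irregular lesion
mask, thr = estimate_mtv(img)
print(mask.sum(), thr)                       # ~80 voxels above threshold
```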
Precise orbit determination based on raw GPS measurements
NASA Astrophysics Data System (ADS)
Zehentner, Norbert; Mayer-Gürr, Torsten
2016-03-01
Precise orbit determination is an essential part of most scientific satellite missions. Highly accurate knowledge of the satellite position is used to geolocate measurements of the onboard sensors. For applications in the field of gravity field research, the position itself can be used as an observation. In this context, kinematic orbits of low Earth orbiters (LEO) are widely used, because they do not include a priori information about the gravity field. The limiting factor for the achievable accuracy of the gravity field through LEO positions is the orbit accuracy. We make use of raw global positioning system (GPS) observations to estimate the kinematic satellite positions. The method is based on the principles of precise point positioning. Systematic influences are reduced by modeling and correcting for all known error sources. Remaining effects, such as the ionospheric influence on signal propagation, are either unknown or not known to a sufficient level of accuracy. These effects are modeled as unknown parameters in the estimation process. Although the redundancy in the adjustment is thereby reduced, the improvement in orbit accuracy leads to a better gravity field estimation. This paper describes our orbit determination approach and its mathematical background. Some examples of real data applications highlight the feasibility of the orbit determination method based on raw GPS measurements. Its suitability for gravity field estimation is presented in a second step.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
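For concreteness, the generic machinery behind such initial pose estimators can be sketched with a standard Perspective-n-Point (PnP) solver. This is a hedged illustration of the problem setup only, using OpenCV's EPnP; it is not one of the three algorithms the paper characterizes, and the hard part, matching image features to model features, is assumed already solved.

```python
import numpy as np
import cv2

def initial_pose(model_points, image_points, camera_matrix):
    """Coarse pose from n >= 4 matched 2D-3D correspondences via EPnP.

    model_points: (n, 3) features from the 3-D model of the target (metres)
    image_points: (n, 2) matched features in the single 2-D image (pixels)
    camera_matrix: (3, 3) intrinsic calibration matrix
    """
    dist_coeffs = np.zeros(4)   # assume an undistorted, calibrated image
    ok, rvec, tvec = cv2.solvePnP(model_points.astype(np.float64),
                                  image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed to converge")
    R, _ = cv2.Rodrigues(rvec)   # attitude of the target w.r.t. the camera
    return R, tvec               # rotation matrix and translation vector
```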
Christopher W. Woodall; Vicente J. Monleon
2008-01-01
The USDA Forest Service's Forest Inventory and Analysis program conducts an inventory of forests of the United States including down woody materials (DWM). In this report we provide the rationale and context for a national inventory of DWM, describe the components sampled, discuss the sampling protocol used and corresponding estimation procedures, and provide...
Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim
2017-12-01
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Gender, age, and psychosocial context of the perception of facial esthetics.
Tole, Nikoleta; Lajnert, Vlatka; Kovacevic Pavicic, Daniela; Spalj, Stjepan
2014-01-01
To explore the effects of gender, age, and psychosocial context on the perception of facial esthetics. The study included 1,444 Caucasian subjects aged 16 to 85 years. Two sets of color photographs illustrating 13 male and 13 female Caucasian facial type alterations, representing different skeletal and dentoalveolar components of sagittal maxillary-mandibular relationships, were used to estimate facial profile attractiveness. The examinees graded the profiles on a 0 to 10 numerical rating scale. The examinees graded profiles of their own sex only from a social perspective, whereas opposite-sex profiles were graded from the social and the emotional perspectives separately. The perception of facial esthetics was found to be related to the gender, age, and psychosocial context of evaluation (p < 0.05). The most attractive profiles to men are the orthognathic female profile from the social perspective and the moderate bialveolar protrusion from the emotional perspective. The most attractive profile to women is the orthognathic male profile, when graded from the social aspect, and the mild bialveolar retrusion when graded from the emotional aspect. Older assessors gave higher attractiveness grades. When planning treatment that modifies the facial profile, the clinician should bear in mind that the perception of facial profile esthetics is a complex phenomenon influenced by biopsychosocial factors. This study allows a better understanding of the concept of perception of facial esthetics that includes gender, age, and psychosocial context. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael
2013-05-01
This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and the demonstration of context assessment of non-traditional data by comparison to an intelligence, surveillance, and reconnaissance fusion product based upon an IED POIs workflow.
An Analysis of Context-Based Similarity Tasks in Textbooks from Brazil and the United States
ERIC Educational Resources Information Center
Barcelos Amaral, Rúbia; Hollebrands, Karen
2017-01-01
Three textbooks from Brazil and three textbooks from the United States were analysed with a focus on similarity and context-based tasks. Students' opportunities to learn similarity were examined by considering whether students were provided context-based tasks of high cognitive demand and whether those tasks included missing or superfluous…
2012-01-01
Background Unmet need for family planning is responsible for 7.4 million disability-adjusted life years and 30% of the maternity-related disease burden. An estimated 35% of births are unintended and some 200 million couples state a desire to delay pregnancy or cease fertility but are not using contraception. Unmet need is higher among the poorest, lesser educated, rural residents and women under 19 years. The barriers to, and successful strategies for, satisfying all demand for modern contraceptives are heavily influenced by context. Successfully overcoming this to increase the uptake of family planning is estimated to reduce the risk of maternal death by up to 58% as well as contribute to poverty reduction, women’s empowerment and educational, social and economic participation, national development and environmental protection. Methods To strengthen health systems for delivery of context-specific, equity-focused reproductive, maternal, newborn and child health services (RMNCH), the Investment Case study was applied in the Asia-Pacific region. Staff of local and central government and non-government organisations analysed data indicative of health service delivery through a supply–demand oriented framework to identify constraints to RMNCH scale-up. Planners developed contextualised strategies and the projected coverage increases were modelled for estimates of marginal impact on maternal mortality and costs over a five year period. Results In Indonesia, Philippines and Nepal the constraints behind incomplete coverage of family planning services included: weaknesses in commodities logistic management; geographical inaccessibility; limitations in health worker skills and numbers; legislation; and religious and cultural ideologies. Planned activities included: streamlining supply systems; establishment of Community Health Teams for integrated RMNCH services; local recruitment of staff and refresher training; task-shifting; and follow-up cards. Modelling showed varying marginal impact and costs for each setting with potential for significant reductions in the maternal mortality rate; up to 28% (25.1-30.7) over five years, costing up to a marginal USD 1.34 (1.32-1.35) per capita in the first year. Conclusion Local health planners are in a prime position to devise feasible context-specific activities to overcome constraints and increase met need for family planning to accelerate progress towards MDG 5. PMID:23140196
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
Behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual person behavior diary. The pipeline includes person detection, tracking, and behavior classification. The deep convolutional neural network YOLOv2 (You Only Look Once) is adopted for the person detection module. Multi-person tracking is built on this detection framework, with the Hungarian assignment algorithm used for matching. The person appearance model integrates an HSV color model and a hash code model, and person motion is estimated by a Kalman filter. Detected objects are matched to existing tracklets through appearance and motion-location distances using the Hungarian assignment method. A long continuous trajectory for each person is obtained by a spatial-temporal linking algorithm, and face recognition information is used to identify the trajectory. The identified trajectories can then be used to generate a visual diary of person behavior based on scene context information and person action estimation. The relevant modules are tested on public data sets and our own captured video sets. The test results show that the method can generate a visual person behavior diary with reasonable accuracy.
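The data-association step in this kind of pipeline is easy to sketch. Below is a minimal, assumption-laden illustration: the cost matrix combines an appearance term (HSV histogram intersection) and a motion term (distance from a Kalman-predicted position), and SciPy's Hungarian solver does the matching. The weights, the 200-pixel normaliser, and the gating threshold are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracklets, detections, w_app=0.5, w_mot=0.5, max_cost=0.8):
    """Match current-frame detections to tracklets (Hungarian algorithm).

    tracklets: list of dicts with 'hist' (normalised HSV histogram) and
        'pred' (Kalman-predicted box centre, pixels)
    detections: list of dicts with 'hist' and 'centre'
    """
    cost = np.zeros((len(tracklets), len(detections)))
    for i, t in enumerate(tracklets):
        for j, d in enumerate(detections):
            # Appearance: histogram-intersection distance in [0, 1]
            app = 1.0 - np.minimum(t['hist'], d['hist']).sum()
            # Motion: detection distance from the Kalman prediction
            mot = np.linalg.norm(t['pred'] - d['centre']) / 200.0
            cost[i, j] = w_app * app + w_mot * min(mot, 1.0)
    rows, cols = linear_sum_assignment(cost)
    # Gate out pairs too dissimilar to be the same person
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
```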
Conditions under which Arousal Does and Does Not Elevate Height Estimates
Storbeck, Justin; Stefanucci, Jeanine K.
2014-01-01
We present a series of experiments that explore the boundary conditions for how emotional arousal influences height estimates. Four experiments are presented, which investigated the influence of context, situation-relevance, intensity, and attribution of arousal on height estimates. In Experiment 1, we manipulated the environmental context to signal either danger (viewing a height from above) or safety (viewing a height from below). High arousal only increased height estimates made from above. In Experiment 2, two arousal inductions were used that contained either 1) height-relevant arousing images or 2) height-irrelevant arousing images. Regardless of theme, arousal increased height estimates compared to a neutral group. In Experiment 3, arousal intensity was manipulated by inserting an intermediate or long delay between the induction and height estimates. A brief, but not a long, delay from the arousal induction served to increase height estimates. In Experiment 4, an attribution manipulation was included, and those participants who were made aware of the source of their arousal reduced their height estimates compared to participants who received no attribution instructions. Thus, arousal that is attributed to its true source is discounted from feelings elicited by the height, thereby reducing height estimates. Overall, we suggest that misattributed, embodied arousal is used as a cue when estimating heights from above that can lead to overestimation. PMID:24699393
Context-Dependent Piano Music Transcription With Convolutional Sparse Coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt
This study presents a novel approach to automatic transcription of piano music in a context-dependent setting. This approach employs convolutional sparse coding to approximate the music waveform as the summation of piano note waveforms (dictionary elements) convolved with their temporal activations (onset transcription). The piano note waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. During transcription, the note waveforms are fixed and their temporal activations are estimated and post-processed to obtain the pitch and onset transcription. This approach works in the time domain, models the temporal evolution of piano notes, and estimates pitches and onsets simultaneously in the same framework. Finally, experiments show that it significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.
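The signal model this transcription inverts is compact enough to state directly in code. The sketch below implements only the forward synthesis (a sum of note templates convolved with sparse activations); the actual transcription solves the inverse problem, minimising reconstruction error plus a sparsity penalty on the activations, and that solver is not shown.

```python
import numpy as np

def synthesize(templates, activations, length):
    """Forward model of convolutional sparse coding for piano audio:
    y(t) = sum_k (x_k * d_k)(t), where d_k is the pre-recorded waveform
    of note k and x_k is its sparse activation (nonzero only at onsets).

    templates, activations: dicts keyed by note id, values 1-D arrays.
    """
    y = np.zeros(length)
    for k, d in templates.items():
        conv = np.convolve(activations[k], d)[:length]
        y[:len(conv)] += conv
    return y

# Transcription estimates the x_k by minimising
#   || y_observed - synthesize(...) ||^2 + lambda * sum_k || x_k ||_1
# with the d_k held fixed; the optimiser is omitted from this sketch.
```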
Single-cell entropy for accurate estimation of differentiation potency from a cell's transcriptome
NASA Astrophysics Data System (ADS)
Teschendorff, Andrew E.; Enver, Tariq
2017-06-01
The ability to quantify differentiation potential of single cells is a task of critical importance. Here we demonstrate, using over 7,000 single-cell RNA-Seq profiles, that differentiation potency of a single cell can be approximated by computing the signalling promiscuity, or entropy, of a cell's transcriptome in the context of an interaction network, without the need for feature selection. We show that signalling entropy provides a more accurate and robust potency estimate than other entropy-based measures, driven in part by a subtle positive correlation between the transcriptome and connectome. Signalling entropy identifies known cell subpopulations of varying potency and drug resistant cancer stem-cell phenotypes, including those derived from circulating tumour cells. It further reveals that expression heterogeneity within single-cell populations is regulated. In summary, signalling entropy allows in silico estimation of the differentiation potency and plasticity of single cells and bulk samples, providing a means to identify normal and cancer stem-cell phenotypes.
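The core computation, the entropy rate of an expression-weighted random walk on the interaction network, can be sketched as follows. This is a simplified reading under stated assumptions (a connected network in which every gene has at least one expressed neighbour, and a crude log-n normalisation rather than the authors' maximum-entropy-rate normalisation); the function and variable names are ours.

```python
import numpy as np

def signalling_entropy(adj, expr):
    """Entropy rate of a random walk on a protein interaction network,
    weighted by a cell's transcriptome.

    adj: (n, n) symmetric 0/1 adjacency matrix of the network
    expr: (n,) non-negative expression values for the same genes
    """
    w = adj * expr[None, :]               # neighbour expression as edge weight
    P = w / w.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

    # Stationary distribution: leading left eigenvector of P
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()

    # Local entropies and their stationary average (the entropy rate)
    S_local = -np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0).sum(axis=1)
    return float(pi @ S_local) / np.log(len(expr))   # crude normalisation
```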
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
Slip-based terrain estimation with a skid-steer vehicle
NASA Astrophysics Data System (ADS)
Reina, Giulio; Galati, Rocco
2016-10-01
In this paper, a novel approach for online terrain characterisation is presented using a skid-steer vehicle. In the context of this research, terrain characterisation refers to the estimation of physical parameters that affect the terrain's ability to support vehicular motion. These parameters are inferred from the modelling of the kinematic and dynamic behaviour of a skid-steer vehicle, which reveals the underlying relationships governing the vehicle-terrain interaction. The concept of slip track is introduced as a measure of the slippage experienced by the vehicle during turning motion. The proposed terrain estimation system includes common onboard sensors, that is, wheel encoders, electrical current sensors and a yaw rate gyroscope. Using these components, the system can characterise terrain online during normal vehicle operations. Experimental results obtained from different surfaces are presented to validate the system in the field, showing its effectiveness and its potential benefits for implementing adaptive driving assistance systems or automatically updating the parameters of onboard control and planning algorithms.
Estimation of images degraded by film-grain noise.
Naderi, F; Sawchuk, A A
1978-04-15
Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First, a detailed model for the overall photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first-order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for modeling and the adaptive estimation filter are presented.
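A locally adaptive Wiener-type filter with signal-dependent noise variance can be written compactly. The version below is a generic, hedged illustration (close in spirit to a Lee filter), not the paper's filter: it assumes the noise variance is proportional to the local mean, with `noise_scale` an invented calibration constant.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(img, noise_scale=0.01, win=7):
    """Locally adaptive Wiener-type filter for signal-dependent noise.

    img: 2-D float image; noise_scale: assumed proportionality constant
    between local mean intensity and noise variance; win: window size.
    """
    # Local first-order statistics over a sliding window
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img * img, win)
    var = np.maximum(mean_sq - mean**2, 1e-12)

    # Signal-dependent noise model: variance grows with local mean
    noise_var = noise_scale * np.maximum(mean, 0.0)

    # Wiener gain shrinks the image toward its local mean where the
    # local variance is dominated by noise
    gain = np.clip((var - noise_var) / var, 0.0, 1.0)
    return mean + gain * (img - mean)
```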
Petrou, Stavros; Kwon, Joseph; Madan, Jason
2018-05-10
Economic analysts are increasingly likely to rely on systematic reviews and meta-analyses of health state utility values to inform the parameter inputs of decision-analytic modelling-based economic evaluations. Beyond the context of economic evaluation, evidence from systematic reviews and meta-analyses of health state utility values can be used to inform broader health policy decisions. This paper provides practical guidance on how to conduct a systematic review and meta-analysis of health state utility values. The paper outlines a number of stages in conducting a systematic review, including identifying the appropriate evidence, study selection, data extraction and presentation, and quality and relevance assessment. The paper outlines three broad approaches that can be used to synthesise multiple estimates of health utilities for a given health state or condition, namely fixed-effect meta-analysis, random-effects meta-analysis and mixed-effects meta-regression. Each approach is illustrated by a synthesis of utility values for a hypothetical decision problem, and software code is provided. The paper highlights a number of methodological issues pertinent to the conduct of meta-analysis or meta-regression. These include the importance of limiting synthesis to 'comparable' utility estimates, for example those derived using common utility measurement approaches and sources of valuation; the effects of reliance on limited or poorly reported published data from primary utility assessment studies; the use of aggregate outcomes within analyses; approaches to generating measures of uncertainty; handling of median utility values; challenges surrounding the disentanglement of utility estimates collected serially within the context of prospective observational studies or prospective randomised trials; challenges surrounding the disentanglement of intervention effects; and approaches to measuring model validity. Areas of methodological debate and avenues for future research are highlighted.
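As an illustration of the random-effects synthesis option, a minimal DerSimonian-Laird pooling of utility estimates can be written as follows. The numbers in the usage line are hypothetical, and the paper's own worked example and software code should be consulted for the mixed-effects meta-regression case.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of comparable health state utility values.

    estimates, variances: per-study utilities and their within-study
    variances, assumed derived from a common valuation approach.
    """
    u = np.asarray(estimates, float)
    v = np.asarray(variances, float)

    # Fixed-effect weights and Cochran's Q heterogeneity statistic
    w = 1.0 / v
    mu_fe = np.sum(w * u) / np.sum(w)
    Q = np.sum(w * (u - mu_fe) ** 2)

    # DerSimonian-Laird between-study variance (truncated at zero)
    k = len(u)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects pooled utility and its standard error
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * u) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se, tau2

# e.g. dersimonian_laird([0.71, 0.64, 0.69], [0.0004, 0.0009, 0.0002])
```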
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated as the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length until the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species sampled along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH parameter had a correlation R2 value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
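The RANSAC circle-fitting step for DBH lends itself to a short sketch. The implementation below is generic (three-point circle hypotheses, inlier counting), with an assumed 2 cm inlier tolerance and iteration count; the paper's exact parameters are not reproduced here.

```python
import numpy as np

def ransac_circle(xy, n_iter=500, tol=0.02, seed=0):
    """Fit a circle to the points of one stem height bin; the diameter
    of the best circle gives the DBH estimate.

    xy: (n, 2) horizontal coordinates of LiDAR points, in metres.
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        p = xy[rng.choice(len(xy), 3, replace=False)]
        # Circle through 3 points via the linearised system
        #   2*cx*x + 2*cy*y + c = x^2 + y^2, with r^2 = c + cx^2 + cy^2
        A = np.c_[2.0 * p, np.ones(3)]
        b = (p ** 2).sum(axis=1)
        try:
            cx, cy, c = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue                      # degenerate (collinear) sample
        r = np.sqrt(c + cx**2 + cy**2)
        # Points within tol of the hypothesised circle are inliers
        inliers = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r) < tol
        if inliers.sum() > best_inliers:
            best, best_inliers = (cx, cy, r), inliers.sum()
    return best   # (cx, cy, r); DBH estimate = 2 * r
```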
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
How much e-waste is there in US basements and attics? Results from a national survey.
Saphores, Jean-Daniel M; Nixon, Hilary; Ogunseitan, Oladele A; Shapiro, Andrew A
2009-08-01
The fate of used electronic products (e-waste) is of increasing concern because of their toxicity and the growing volume of e-waste. Addressing these concerns requires developing the recycling infrastructure, but good estimates of the volume of e-waste stored by US households are still unavailable. In this context, we make two contributions based on a national random survey of 2136 US households. First, we explain how much e-waste is stored by US households using count models. Significant explanatory variables include age, marital and employment status, ethnicity, household size, previous e-waste recycling behavior, and to some extent education, home ownership, and understanding the consequences of recycling, but neither income nor knowledge of e-waste recycling laws. Second, we estimate that on average, each US household has 4.1 small (
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
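A toy version of the Fisher-information machinery makes the design comparison concrete. The sketch below computes the information matrix for the Verhulst-Pearl logistic model under i.i.d. Gaussian error via finite-difference sensitivities, then picks a D-optimal subset of sampling times by exhaustive search over a small grid. The parameter values in the usage line are invented, and the paper's SE-optimal criterion is not implemented here.

```python
import numpy as np
from itertools import combinations

def logistic(t, theta):
    """Verhulst-Pearl logistic solution x(t) with theta = (K, r, x0)."""
    K, r, x0 = theta
    e = np.exp(r * t)
    return K * x0 * e / (K + x0 * (e - 1.0))

def fim(times, theta, sigma=1.0, h=1e-6):
    """Fisher information for i.i.d. Gaussian observation error, using
    central-difference sensitivities d x(t) / d theta."""
    times = np.asarray(times, float)
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += h
        tm[j] -= h
        S[:, j] = (logistic(times, tp) - logistic(times, tm)) / (2.0 * h)
    return S.T @ S / sigma**2

def d_optimal(candidates, n_pts, theta):
    """Exhaustive D-optimal choice of n_pts sampling times from a small
    candidate grid (illustrative; real designs optimise over
    distributions of sampling times)."""
    return max(combinations(candidates, n_pts),
               key=lambda ts: np.linalg.det(fim(ts, theta)))

# e.g. d_optimal(np.linspace(0.0, 10.0, 11), 3, theta=(17.5, 0.7, 0.1))
```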
Context-Awareness Based Personalized Recommendation of Anti-Hypertension Drugs.
Chen, Dexin; Jin, Dawei; Goh, Tiong-Thye; Li, Na; Wei, Leiru
2016-09-01
The World Health Organization estimates that almost one-third of the world's adult population are suffering from hypertension which has gradually become a "silent killer". Due to the varieties of anti-hypertensive drugs, patients are interested in how these drugs can be selected to match their respective conditions. This study provides a personalized recommendation service system of anti-hypertensive drugs based on context-awareness and designs a context ontology framework of the service. In addition, this paper introduces a Semantic Web Rule Language (SWRL)-based rule to provide high-level context reasoning and information recommendation and to overcome the limitation of ontology reasoning. To make the information recommendation of the drugs more personalized, this study also devises three categories of information recommendation rules that match different priority levels and uses a ranking algorithm to optimize the recommendation. The experiment conducted shows that combining the anti-hypertensive drugs personalized recommendation service context ontology (HyRCO) with the optimized rule reasoning can achieve a higher-quality personalized drug recommendation service. Accordingly this exploratory study of the personalized recommendation service for hypertensive drugs and its method can be easily adopted for other diseases.
Misspecification of Cox regression models with composite endpoints
Wu, Longyang; Cook, Richard J
2012-01-01
Researchers routinely adopt composite endpoints in multicenter randomized trials designed to evaluate the effect of experimental interventions in cardiovascular disease, diabetes, and cancer. Despite their widespread use, relatively little attention has been paid to the statistical properties of estimators of treatment effect based on composite endpoints. We consider this here in the context of multivariate models for time to event data in which copula functions link marginal distributions with a proportional hazards structure. We then examine the asymptotic and empirical properties of the estimator of treatment effect arising from a Cox regression model for the time to the first event. We point out that even when the treatment effect is the same for the component events, the limiting value of the estimator based on the composite endpoint is usually inconsistent for this common value. We find that in this context the limiting value is determined by the degree of association between the events, the stochastic ordering of events, and the censoring distribution. Within the framework adopted, marginal methods for the analysis of multivariate failure time data yield consistent estimators of treatment effect and are therefore preferred. We illustrate the methods by application to a recent asthma study. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22736519
Comparison of Globally Complete Versions of GPCP and CMAP Monthly Precipitation Analyses
NASA Technical Reports Server (NTRS)
Curtis, Scott; Adler, Robert; Huffman, George
1998-01-01
In this study two global observational precipitation products, namely the Global Precipitation Climatology Project's (GPCP) community data set and CPC's Merged Analysis of Precipitation (CMAP), are compared on global to regional scales in the context of the different satellite and gauge data inputs and merger techniques. The average annual global precipitation rates, calculated from data common in regions/times to both GPCP and CMAP, are similar for the two. However, CMAP is larger than GPCP in the tropics because: (1) CMAP values in the tropics are adjusted month-by-month to atoll gauge data in the West Pacific, which are greater than any satellite observations used; and (2) CMAP is produced from a linear combination of data inputs, which tends to give higher values than the microwave emission estimates alone to which the inputs are adjusted in the GPCP merger over the ocean. The CMAP month-to-month adjustment to the atolls also appears to introduce temporal variations throughout the tropics which are not detected by satellite-only products. On the other hand, GPCP is larger than CMAP in the high-latitude oceans, where CMAP includes the scattering-based microwave estimates, which are consistently smaller than the emission estimates used in both techniques. Also, in the polar regions GPCP transitions from the emission microwave estimates to the larger TOVS-based estimates. Finally, in high-latitude land areas GPCP can be significantly larger than CMAP because GPCP attempts to correct the gauge estimates for errors due to wind loss effects.
Assessment of Receiver Signal Strength Sensing for Location Estimation Based on Fisher Information
Nielsen, John; Nielsen, Christopher
2016-01-01
Currently there is almost ubiquitous availability of wireless signaling for data communications within commercial building complexes resulting in receiver signal strength (RSS) observables that are typically sufficient for generating viable location estimates of mobile wireless devices. However, while RSS observables are generally plentiful, achieving an accurate estimation of location is difficult due to several factors affecting the electromagnetic coupling between the mobile antenna and the building access points that are not modeled and hence contribute to the overall estimation uncertainty. Such uncertainty is typically mitigated with a moderate redundancy of RSS sensor observations in combination with other constraints imposed on the mobile trajectory. In this paper, the Fisher Information (FI) of a set of RSS sensor observations in the context of variables related to the mobile location is developed. This provides a practical method of determining the potential location accuracy for the given set of wireless signals available. Furthermore, the information value of individual RSS measurements can be quantified and the RSS observables weighted accordingly in estimation combining algorithms. The practical utility of using FI in this context was demonstrated experimentally with an extensive set of RSS measurements recorded in an office complex. The resulting deviation of the mobile location estimation based on application of weighted likelihood processing to the experimental RSS data was shown to agree closely with the Cramer Rao bound determined from the FI analysis. PMID:27669262
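To make the Fisher-information idea concrete, here is a minimal sketch of the position-error Cramer-Rao bound for RSS measurements under a log-distance path-loss model with log-normal shadowing. The path-loss exponent and shadowing standard deviation are assumed values for illustration; the paper's empirical calibration to the office environment is not reproduced.

```python
import numpy as np

def rss_position_crb(anchors, pos, n_exp=3.0, sigma_db=4.0):
    """Cramer-Rao lower bound on 2-D position error from RSS readings.

    Model: RSS_i = P0 - 10*n_exp*log10(||pos - a_i||) + noise(sigma_db).
    anchors: (m, 2) access point positions; pos: (2,) mobile position.
    """
    pos = np.asarray(pos, float)
    k = 10.0 * n_exp / (np.log(10.0) * sigma_db)
    F = np.zeros((2, 2))
    for a in np.asarray(anchors, float):
        d = pos - a
        r2 = d @ d
        # Each AP contributes information along its line of sight,
        # falling off with the square of the range
        F += (k**2 / r2**2) * np.outer(d, d)
    crb = np.linalg.inv(F)
    return np.sqrt(np.trace(crb))   # lower bound on RMS position error
```

Evaluating this bound over candidate positions also quantifies the information value of individual RSS measurements, which is what motivates the weighted-likelihood combining described in the abstract.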
Hoffman, Paul; Lambon Ralph, Matthew A; Rogers, Timothy T
2013-09-01
Semantic ambiguity is typically measured by summing the number of senses or dictionary definitions that a word has. Such measures are somewhat subjective and may not adequately capture the full extent of variation in word meaning, particularly for polysemous words that can be used in many different ways, with subtle shifts in meaning. Here, we describe an alternative, computationally derived measure of ambiguity based on the proposal that the meanings of words vary continuously as a function of their contexts. On this view, words that appear in a wide range of contexts on diverse topics are more variable in meaning than those that appear in a restricted set of similar contexts. To quantify this variation, we performed latent semantic analysis on a large text corpus to estimate the semantic similarities of different linguistic contexts. From these estimates, we calculated the degree to which the different contexts associated with a given word vary in their meanings. We term this quantity a word's semantic diversity (SemD). We suggest that this approach provides an objective way of quantifying the subtle, context-dependent variations in word meaning that are often present in language. We demonstrate that SemD is correlated with other measures of ambiguity and contextual variability, as well as with frequency and imageability. We also show that SemD is a strong predictor of performance in semantic judgments in healthy individuals and in patients with semantic deficits, accounting for unique variance beyond that of other predictors. SemD values for over 30,000 English words are provided as supplementary materials.
Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum
NASA Astrophysics Data System (ADS)
Schmittner, A.; Urban, N.; Shakun, J. D.; Mahowald, N. M.; Clark, P. U.; Bartlein, P. J.; Mix, A. C.; Rosell-Melé, A.
2011-12-01
In 1959 I. J. Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "Dynamic probability", a probability one expects to change without any additional empirical evidence; the probabilities assigned by a chess-playing program when it is only halfway through its analysis being an example. This case is contrasted with the case of "Mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision-making? Mature probability is suggested as a useful sixth kind; although Good would doubtlessly argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like contexts and climate-like contexts. In the former context one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter context that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative. This subjective "probability of a big surprise" is one way to communicate the probability of model-based information holding in practice, that is, the probability that the information the model-based probability is conditioned on holds. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support, and can shape uncertainty quantification more widely.
[Basic assessment of needs for training in evidence-based medicine in Slovakia].
Bacharova, L; Hlavacka, S; Rusnakova, V
2001-01-01
The health care reform in Slovakia produces a desire for greater responsibility for and control of strategic decisions, and for a better ability to evaluate international knowledge and experience in the specific national social and professional contexts. Evidence based medicine (EBM) provides an increasingly organised and accessible database of international knowledge in health and health care, capable of informing decisions at the macro and micro levels. The aim of this pilot study was to assess education, training and other capacity building needs in EBM and evidence based health care. This study was primarily qualitative and based on a triangular approach, which included: (1) an analysis of the situation in pre- and postgraduate education in Slovakia, aimed at estimating needs in EBM and critical appraisal skills training; (2) an analysis of questionnaires distributed to a sample of 50 medical doctors and university-educated public health workers undergoing postgraduate training; (3) discussion in focus groups. The findings and analysis uncovered a gap in knowledge and experience of EBM approaches, particularly in searching for evidence, utilising information technology, undertaking critical appraisals of the validity and quality of external evidence, and knowledge of English. On the other hand, the findings revealed high access to information, including Internet access at the workplace, an increasing awareness of the need for up-to-date information, a demand for training, and potential opportunities for action. The effective introduction of the EBM approach would require changes in broader political, cultural and behavioural contexts, including changes in pre- and postgraduate systems of professional and managerial education, changes in professional and managerial attitudes, changes in emphasis in skills and capacity building, and improvements in knowledge management systems at the national level.
The effects of environmental context on recognition memory and claims of remembering.
Hockley, William E
2008-11-01
Recognition memory for words was tested in same or different contexts using the remember/know response procedure. Context was manipulated by presenting words in different screen colors and locations and by presenting words against real-world photographs. Overall hit and false-alarm rates were higher for tests presented in an old context compared to a new context. This concordant effect was seen in both remember responses and estimates of familiarity. Similar results were found for rearranged pairings of old study contexts and targets, for study contexts that were unique or were repeated with different words, and for new picture contexts that were physically similar to old contexts. Similar results were also found when subjects focused attention on the study words, but a different pattern of results was obtained when subjects explicitly associated the study words with their picture context. The results show that subjective feelings of recollection play a role in the effects of environmental context but are likely based more on a sense of familiarity that is evoked by the context than on explicit associations between targets and their study context.
A History-based Estimation for LHCb job requirements
NASA Astrophysics Data System (ADS)
Rauschmayr, Nathalie
2015-12-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for given tasks. The more and better job information the WMS receives, the easier it is to accomplish this task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. LHCb's Workload Management System provides no mechanisms that automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint. Therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Building on these features, a supervised learning algorithm is developed based on history-based prediction. The aim is to learn over time how jobs' runtime and memory evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
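A minimal version of such a history-based predictor can be sketched with an off-the-shelf regressor. The features and the synthetic target below are placeholders, not the production schema; the quantile-loss objective is one plausible way to turn point predictions into a conservative CPU-time request.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical job features: columns stand in for attributes such as
# number of events, application version, conditions tag and input size.
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
runtime_s = (3600.0 * (1.0 + 4.0 * X[:, 0] + 0.5 * X[:, 1])
             + rng.normal(0.0, 300.0, 5000))        # synthetic ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, runtime_s, random_state=0)

# Quantile regression at the 95th percentile: the prediction becomes a
# runtime request that is rarely exceeded, replacing the fixed arbitrary
# default request; retraining periodically tracks changing conditions.
model = GradientBoostingRegressor(loss="quantile", alpha=0.95)
model.fit(X_tr, y_tr)
requested = model.predict(X_te)
print("fraction of jobs exceeding their request:",
      float(np.mean(y_te > requested)))
```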
Cost-of-illness studies in heart failure: a systematic review 2004-2016.
Lesyuk, Wladimir; Kriza, Christine; Kolominsky-Rabas, Peter
2018-05-02
Heart failure is a major and growing medical and economic problem worldwide, as 1-2% of the healthcare budget is spent on heart failure. The prevalence of heart failure has increased over the past decades, and a further rise is expected due to the higher proportion of elderly people in western societies. In this context, cost-of-illness studies can significantly contribute to a better understanding of the drivers and problems which lead to the increasing costs of heart failure. The aim of this study was to perform a systematic review of published cost-of-illness studies related to heart failure to highlight its increasing cost impact. A systematic review was conducted from 2004 to 2016 to identify cost-of-illness studies related to heart failure, searching PubMed (Medline), Cochrane, Science Direct (Embase), Scopus and the CRD York Database. Of the total of 16 studies identified, 11 studies reported prevalence-based estimates, 2 studies focused on incidence-based data and 3 articles presented both types of cost data. A large variation concerning cost components and estimates can be noted. Only three studies estimated indirect costs. Most of the included studies have shown that the costs for hospital admission are the most expensive cost element. Estimates of annual prevalence-based costs for heart failure patients range from $868 for South Korea to $25,532 for Germany. The lifetime costs for heart failure patients have been estimated at $126,819 per patient. Our review highlights the considerable and growing economic burden of heart failure on health care systems. The cost-of-illness studies included in this review show large variations in the methodology used, and the cost results vary accordingly. High quality data from cost-of-illness studies with a robust methodology can inform policy makers about the major cost drivers of heart failure and can be used as the basis of further economic evaluations.
An Estimation of the Logarithmic Timescale in Ergodic Dynamics
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.
An estimation of the logarithmic timescale in quantum systems having an ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras and uses the time-rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature, but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of the Poincaré recurrence theorem are also explored.
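For orientation, the quantity being estimated is usually written in the following heuristic semiclassical form; this is stated as background, not as the paper's precise result.

```latex
% Logarithmic (Ehrenfest-type) timescale: the time for a minimal
% phase-space cell of size ~ \hbar to spread to a characteristic
% action scale S of the system, under chaotic dynamics with
% Kolmogorov-Sinai entropy h_{KS}.
\tau_{\log} \;\sim\; \frac{1}{h_{KS}} \,\ln\!\left(\frac{S}{\hbar}\right)
```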
Regional Development Impacts Multi-Regional - Multi-Industry Model (MRMI) Users Manual,
1982-09-01
indicators, described in Chapter 2, are estimated as well. Finally, MRMI is flexible, as it can incorporate alternative macroeconomic, national inter...national and regional economic contexts and data sources for estimating macroeconomic and direct impacts data. Considerations for ensuring consistency... Chapter 4 is devoted to model execution and the interpretation of its output. As MRMI forecasts are based upon macroeconomic, national inter-industry
A bootstrap estimation scheme for chemical compositional data with nondetects
Palarea-Albaladejo, J; Martín-Fernández, J.A; Olea, Ricardo A.
2014-01-01
The bootstrap method is commonly used to estimate the distribution of estimators and their associated uncertainty when explicit analytic expressions are not available or are difficult to obtain. It has been widely applied in environmental and geochemical studies, where the data generated often represent parts of a whole, typically chemical concentrations. This kind of constrained data is generically called compositional data, and it requires specialised statistical methods to properly account for its particular covariance structure. On the other hand, it is not unusual in practice that those data contain labels denoting nondetects, that is, concentrations falling below detection limits. Nondetects impede the implementation of the bootstrap and represent an additional source of uncertainty that must be taken into account. In this work, a bootstrap scheme is devised that handles nondetects by adding an imputation step within the resampling process and conveniently propagates their associated uncertainty. In doing so, it considers the constrained relationships between chemical concentrations originating from their compositional nature. Bootstrap estimates using a range of imputation methods, including new stochastic proposals, are compared across scenarios of increasing difficulty. They are formulated to meet compositional principles following the log-ratio approach, and an adjustment is introduced in the multivariate case to deal with nonclosed samples. Results suggest that nondetect bootstrap based on model-based imputation is generally preferable. A robust approach based on isometric log-ratio transformations appears to be particularly suited in this context. Computer routines in the R statistical programming language are provided.
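The key idea, re-imputing nondetects inside every bootstrap resample so that imputation uncertainty is propagated, can be shown in a univariate sketch. The paper's own routines are in R and operate on log-ratio coordinates of full compositions; here the uniform-on-(0, DL) draw is a simple stand-in for the model-based imputations the paper recommends.

```python
import numpy as np

def nondetect_bootstrap_mean(x, detected, dl, n_boot=2000, seed=0):
    """Bootstrap the mean of a concentration variable with nondetects.

    x: observed values (entries flagged as nondetects may hold any
       placeholder); detected: boolean mask of detected values;
    dl: detection limit for the nondetects.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    detected = np.asarray(detected, bool)
    n = len(x)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)        # resample with replacement
        xb, db = x[idx].copy(), detected[idx]
        # Stochastic imputation step inside each resample: draw the
        # nondetects uniformly on (0, DL); model-based (e.g. lognormal)
        # draws are preferable in practice
        xb[~db] = rng.uniform(0.0, dl, size=int((~db).sum()))
        boot_means[b] = xb.mean()
    return boot_means.mean(), np.percentile(boot_means, [2.5, 97.5])
```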
2015-04-30
from the MIT Sloan School that provide a relative complexity score for functions (Product and Context Complexity). The PMA assesses the complexity...
ERIC Educational Resources Information Center
Fennell, Francis (Skip)
1998-01-01
Presents two activities involving number sense in and around the shopping mall. Activities include estimation, measurement, and applications using percent. Concludes that it is appropriate to help students visualize numbers, particularly large numbers, in a context that is familiar and will be constantly reinforced. (ASK)
Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.
2009-01-01
Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that use only cases with precise tumor location information, and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood-based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject-level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297
ERIC Educational Resources Information Center
Brammeier, Monique; Chow, Joan M.; Samuel, Michael C.; Organista, Kurt C.; Miller, Jamie; Bolan, Gail
2008-01-01
Context: The prevalence of sexually transmitted diseases and associated risk behaviors among California farmworkers is not well described. Purpose: To estimate the prevalence of sexually transmitted diseases (STDs) and associated risk behaviors among California farmworkers. Methods: Cross-sectional analysis of population-based survey data from 6…
Value Added Based on Educational Positions in Dutch Secondary Education
ERIC Educational Resources Information Center
Timmermans, Anneke C.; Bosker, Roel J.; de Wolf, Inge F.; Doolaard, Simone; van der Werf, Margaretha P. C.
2014-01-01
Estimating added value as an indicator of school effectiveness in the context of educational accountability often occurs using test or examination scores of students. This study investigates the possibilities for using scores of educational positions as an alternative indicator. A number of advantages of a value added indicator based on…
Before the N400: Effects of Lexical-Semantic Violations in Visual Cortex
ERIC Educational Resources Information Center
Dikker, Suzanne; Pylkkanen, Liina
2011-01-01
There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show…
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
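A minimal sketch of the MIte recipe the abstract describes — impute, estimate a propensity score and IPTW effect per completed dataset, then pool — might look as follows in Python. The normal-draw imputation is a crude stand-in for proper multiple imputation (which, as the study stresses, should include the outcome in the imputation model), and only the between-imputation component of Rubin's variance is computed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def impute_once(X):
    # Crude stochastic imputation: draw each missing entry from the normal
    # fit of its column. A real analysis would use chained equations and,
    # as the study recommends, include the outcome in the imputation model.
    X = X.copy()
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        obs = X[~miss, j]
        X[miss, j] = rng.normal(obs.mean(), obs.std(ddof=1), size=miss.sum())
    return X

def iptw_effect(X, t, y):
    # Propensity score by logistic regression, then the weighted difference
    # in mean outcomes (inverse probability of treatment weighting).
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    w = t / ps + (1 - t) / (1 - ps)
    return (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))

def mite(X, t, y, m=20):
    # MIte: one IPTW effect per imputed dataset, then Rubin's rules for the
    # point estimate (the average). Rubin's total variance would also add
    # the mean within-imputation variance (e.g. from a bootstrap) to
    # (1 + 1/m) times the between-imputation variance shown here.
    est = np.array([iptw_effect(impute_once(X), t, y) for _ in range(m)])
    return est.mean(), est.var(ddof=1)

# Toy data: true marginal effect of 1.0, 20% of covariates missing at random
n = 500
X = rng.normal(size=(n, 2))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 1.0 * t + X[:, 0] + rng.normal(size=n)
X[rng.random((n, 2)) < 0.2] = np.nan
print(mite(X, t, y))
```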
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' estimates of the distance between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄error = -9 m, s.d.error = 47 m) and when estimating distances to real birds during field trials (x̄error = 39 m, s.d.error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their inclusion in distance-analysis software.
Bhalla, Kavi; Harrison, James E
2016-04-01
Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-based DALYs due to injury, given data on the incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
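The underlying arithmetic of such a calculator is simple: incidence DALYs are years of life lost to mortality plus years lived with disability. A minimal sketch, with made-up parameter values standing in for the tool's default parameter sets:

```python
def dalys(deaths, cases, life_expectancy, disability_weight, duration):
    # Incidence-based DALYs = YLL + YLD (no discounting or age weights,
    # matching the simplest parameter choices such a calculator might use).
    #   YLL = deaths * standard remaining life expectancy at age of death
    #   YLD = incident cases * disability weight * average duration (years)
    yll = deaths * life_expectancy
    yld = cases * disability_weight * duration
    return yll + yld

# e.g. 100 road-injury deaths (mean remaining life expectancy 40 years)
# and 2,000 non-fatal cases with weight 0.2 lasting 0.5 years on average
print(dalys(100, 2000, 40, 0.2, 0.5))  # 4000 + 200 = 4200 DALYs
```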
Estimating the Success of OD Applications.
ERIC Educational Resources Information Center
Golembiewski, Robert T.; And Others
1982-01-01
Organizational development (OD) and its future are discussed. Examines the implications of a database of OD applications. Reports an effort to transcend the limitations of the literature, based on a very intensive search for OD applications in both business and government contexts. (CT)
Design of double fuzzy clustering-driven context neural networks.
Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold
2018-08-01
In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study is focused on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. Conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct local spaces called cluster and context spaces. Cluster space refers to a local space positioned in the input space, whereas context space concerns a local space formed in the output space. By partitioning the output space into several local spaces, each context space is used as the desired (target) local output to construct local models. To accomplish this, the proposed network includes a new context layer for reasoning about context space in the output space. In this sense, Fuzzy C-Means (FCM) clustering is used to form local spaces in both the input and output spaces: the first application forms clusters and trains the weights positioned between the input and hidden layers, whereas the second is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) the parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms that are in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by the FCM are used as activation functions located at the hidden layer of "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer represents an individual local model. The outputs of the context layer are specified as a combination of weights formed as linear functions and the outputs of the hidden layer. The weights are updated using a least square estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output. Here the weighted average is used, and the weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and experimented with the aid of benchmark machine learning datasets. The experiments show that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
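The FCM machinery the abstract builds on is compact enough to sketch. The routine below is a plain Fuzzy C-Means implementation, not the authors' code; applied to the input data it yields the hidden-layer activations (the partition matrix), and applied to the output variable it would carve out the context spaces.

```python
import numpy as np

rng = np.random.default_rng(3)

def fcm(X, c=3, m=2.0, iters=100):
    # Plain Fuzzy C-Means. The columns of the partition matrix U play the
    # role of hidden-layer activation functions in fuzzy clustering-based
    # networks of the kind described above.
    U = rng.dirichlet(np.ones(c), size=len(X))            # memberships, n x c
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # standard FCM update
    return centres, U

X = np.vstack([rng.normal(off, 0.5, size=(50, 2)) for off in (0.0, 3.0, 6.0)])
centres, U = fcm(X)
# Local-model weights between layers can then be fitted by least squares
# (the LSE step), e.g. np.linalg.lstsq(U, y, rcond=None) for a target y.
```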
Matuszewski, Szymon; Frątczak-Łagiewska, Katarzyna
2018-02-05
Insects colonizing human or animal cadavers may be used to estimate the post-mortem interval (PMI), usually by aging larvae or pupae sampled at a crime scene. The accuracy of insect age estimates in a forensic context is reduced by large intraspecific variation in insect development time. Here we test the concept that insect size at emergence may be used to predict insect physiological age and accordingly to improve the accuracy of age estimates in forensic entomology. Using the results of a laboratory study on the development of the forensically useful beetle Creophilus maxillosus (Linnaeus, 1758) (Staphylinidae), we demonstrate that its physiological age at emergence [i.e. the thermal summation value (K) needed for emergence] falls with increasing beetle size. In a validation study, K estimated from adult insect size was significantly closer to the true K than K from the general thermal summation model. Using beetle length at emergence as a predictor variable and a male- or female-specific model regressing K against beetle length gave the most accurate predictions of age. These results demonstrate that the size of C. maxillosus at emergence improves the accuracy of age estimates in a forensic context.
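The thermal summation logic behind such estimates can be illustrated with a short sketch. The regression intercept and slope below are invented placeholders; the paper fits sex-specific regressions of K on beetle length for C. maxillosus.

```python
import numpy as np

def thermal_summation(daily_means_c, base_temp_c):
    # Accumulated degree-days: daily mean temperatures above the
    # developmental threshold, summed. Emergence occurs once this sum
    # reaches the species' thermal summation value K.
    t = np.asarray(daily_means_c, dtype=float)
    return np.clip(t - base_temp_c, 0.0, None).sum()

def k_from_length(length_mm, intercept=520.0, slope=-8.0):
    # Size-conditional K (degree-days): K falls as beetle length at
    # emergence increases. Intercept and slope are invented placeholders.
    return intercept + slope * length_mm

# Days to emergence implied for a 22 mm specimen reared at a constant
# 20 C, assuming (hypothetically) a 10 C developmental threshold:
K = k_from_length(22.0)
print(K / (20.0 - 10.0), "days")   # 344 degree-days -> 34.4 days
```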
Modeling emission rates and exposures from outdoor cooking
NASA Astrophysics Data System (ADS)
Edwards, Rufus; Princevac, Marko; Weltman, Robert; Ghasemian, Masoud; Arora, Narendra K.; Bond, Tami
2017-09-01
Approximately 3 billion individuals rely on solid fuels for cooking globally. For a large portion of these - an estimated 533 million - cooking is outdoors, where emissions from cookstoves pose a health risk to both cooks and other household and village members. Models that estimate stove emission rates that would meet WHO air quality guidelines (AQG) in indoor environments explicitly do not account for outdoor cooking. The objectives of this paper are to link health-based exposure guidelines with emissions from outdoor cookstoves, using a Monte Carlo simulation of cooking times from Haryana, India, coupled with inverse Gaussian dispersion models. Mean emission rates for outdoor cooking that would result in incremental increases in personal exposure equivalent to the WHO AQG during a 24-h period were 126 ± 13 mg/min for cooking while squatting and 99 ± 10 mg/min while standing. Emission rates modeled for outdoor cooking are substantially higher than emission rates for indoor cooking to meet the AQG, because the models estimate the impact of emissions on personal exposure concentrations rather than microenvironment concentrations, and because the smoke disperses more readily outdoors than in indoor environments. As a result, many more stoves, including the best performing solid-fuel biomass stoves, would meet the AQG when cooking outdoors, but may also result in substantial localized neighborhood pollution depending on housing density. The neighborhood impact of pollution should be addressed more formally both in guidelines on stove emission rates that would be protective of health, and in wider health impact evaluation efforts and burden of disease estimates. Emissions guidelines should better represent the different contexts in which stoves are being used, especially because in these contexts the best performing solid-fuel stoves have the potential to provide significant benefits.
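For intuition, the forward half of such a calculation — from emission rate to downwind concentration — follows the standard Gaussian plume formula, which is linear in the emission rate and therefore easy to invert against a guideline value. The dispersion parameters and source height below are illustrative assumptions, and the sketch omits the paper's Monte Carlo over cooking times and its personal-exposure weighting.

```python
import numpy as np

def ground_level_conc(Q_mg_min, u_ms, sigma_y_m, sigma_z_m, H_m=0.5):
    # Steady-state Gaussian plume, ground-level centreline concentration
    # (mg/m^3) for a point source at height H with total ground reflection:
    #   C = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2))
    # sigma_y and sigma_z grow with downwind distance and stability class;
    # the values used below are illustrative, not the paper's.
    Q_mg_s = Q_mg_min / 60.0
    return (Q_mg_s / (np.pi * u_ms * sigma_y_m * sigma_z_m)
            * np.exp(-H_m**2 / (2.0 * sigma_z_m**2)))

# C is linear in Q, so the emission rate consistent with a guideline
# concentration follows by inverting the per-unit-Q concentration.
c_per_Q = ground_level_conc(1.0, u_ms=1.0, sigma_y_m=0.5, sigma_z_m=0.3)
print("Q_max (mg/min) for a 0.025 mg/m^3 guideline:", 0.025 / c_per_Q)
```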
Survey-based socio-economic data from slums in Bangalore, India
NASA Astrophysics Data System (ADS)
Roy, Debraj; Palavalli, Bharath; Menon, Niveditha; King, Robin; Pfeffer, Karin; Lees, Michael; Sloot, Peter M. A.
2018-01-01
In 2010, an estimated 860 million people were living in slums worldwide, with around 60 million added to the slum population between 2000 and 2010. In 2011, 200 million people in urban Indian households were considered to live in slums. In order to design slum development programmes and poverty-alleviation methods, it is necessary to understand the needs of these communities; this requires data with high granularity in the Indian context. Unfortunately, there is a paucity of highly granular data at the level of individual slums. We collected the data presented in this paper in partnership with slum dwellers in order to overcome challenges such as the validity and efficacy of self-reported data. Our survey of Bangalore covered 36 slums across the city. The slums were chosen based on stratification criteria, which included the geographical location of the slum, whether the slum was resettled or rehabilitated, the notification status of the slum, the size of the slum and the religious profile. This paper describes the relational model of the slum dataset, the variables in the dataset, the variables constructed for analysis and the issues identified with the dataset. The data collected include around 267,894 data points spread over 242 questions for 1,107 households. The dataset can facilitate interdisciplinary research on the spatial and temporal dynamics of urban poverty and well-being in the context of the rapid urbanization of cities in developing countries.
Williams, Matthew L; Burnap, Pete; Sloan, Luke
2017-01-01
New and emerging forms of data, including posts harvested from social media sites such as Twitter, have become part of the sociologist’s data diet. In particular, some researchers see an advantage in the perceived ‘public’ nature of Twitter posts, representing them in publications without seeking informed consent. While such practice may not be at odds with Twitter’s terms of service, we argue there is a need to interpret these through the lens of social science research methods that imply a more reflexive ethical approach than provided in ‘legal’ accounts of the permissible use of these data in research publications. To challenge some existing practice in Twitter-based research, this article brings to the fore: (1) views of Twitter users through analysis of online survey data; (2) the effect of context collapse and online disinhibition on the behaviours of users; and (3) the publication of identifiable sensitive classifications derived from algorithms. PMID:29276313
DaMatta, Fábio M; Avila, Rodrigo T; Cardoso, Amanda A; Martins, Samuel C V; Ramalho, José C
2018-05-30
Coffee is one of the most important global crops and provides a livelihood to millions of people living in developing countries. Coffee species have been described as being highly sensitive to climate change, as largely deduced from modeling studies based on predictions of rising temperatures and changing rainfall patterns. Here, we discuss the physiological responses of the coffee tree in the context of present and ongoing climate changes, including drought, heat, and light stresses, and interactions between these factors. We also summarize recent insights on the physiological and agronomic performance of coffee at elevated atmospheric CO2 concentrations and highlight the key role of CO2 in mitigating the harmful effects of heat stress. Evidence is shown suggesting that warming, per se, may be less harmful to coffee suitability than previously estimated, at least under the conditions of an adequate water supply. Finally, we discuss several mitigation strategies to improve crop performance in a changing world.
Identification and feedback control in structures with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Wang, Y.
1992-01-01
In this lecture we give fundamental well-posedness results for a variational formulation of a class of damped second order partial differential equations with unbounded input or control coefficients. Included as special cases in this class are structures with piezoceramic actuators. We consider approximation techniques leading to computational methods in the context of both parameter estimation and feedback control problems for these systems. Rigorous convergence results for parameter estimates and feedback gains are discussed.
The role of different social contexts in shaping influenza transmission during the 2009 pandemic
NASA Astrophysics Data System (ADS)
Ajelli, Marco; Poletti, Piero; Melegaro, Alessia; Merler, Stefano
2014-11-01
Evaluating the relative importance of the different social contexts in which infection transmission occurs is critical for identifying optimal intervention strategies. Nonetheless, an overall picture of influenza transmission in different social contexts has yet to emerge. Here we provide estimates of the fraction of infections generated in different social contexts during the 2009 H1N1 pandemic in Italy by making use of a highly detailed individual-based model accounting for time use data and parametrized on the basis of observed age-specific seroprevalence. We found that 41.6% (95%CI: 39-43.7%) of infections occurred in households, 26.7% (95%CI: 21-33.2%) in schools, 3.3% (95%CI: 1.7-5%) in workplaces, and 28.4% (95%CI: 24.6-31.9%) in the general community. These estimates depend strongly on the lower susceptibility to infection of individuals 19+ years old compared to younger ones, estimated to be 0.2 (95%CI: 0.12-0.28). We also found that school closure over the weekends contributed to decreasing the effective reproduction number by about 8% and significantly affected the pattern of transmission. These results highlight the pivotal role played by schools in the transmission of the 2009 H1N1 influenza. They may be relevant in the evaluation of intervention options and, hence, for informing policy decisions.
Direct health care costs of occupational asthma in Spain: an estimation from 2008.
García Gómez, Montserrat; Urbanos Garrido, Rosa; Castañeda López, Rosario; López Menduiña, Patricia
2012-10-01
Occupational asthma (OA) is the most common work-related disease in industrialized countries. In 2008, only 556 cases of OA had been diagnosed in Spain, which is quite far from even the most conservative estimates. In this context, the aim of this paper is to estimate the number of asthma cases attributable to the work setting in Spain in 2008, as well as the related health care costs for the same year. The number of cases of OA was calculated from estimates of attributable risk given by previous studies. The cost estimation focused on direct health care costs and was based both on data from the National Health System's (NHS) analytical accounting and on secondary sources. The number of prevalent cases of work-related asthma in Spain during 2008 ranges between 168,713 and 204,705 cases based on symptomatic diagnosis, entailing an associated cost of 318.1 to 355.8 million Euros. These figures fall to a range between 82,635 and 100,264 cases when bronchial hyperreactivity is included as a diagnostic criterion, at a cost of 155.8-174.3 million Euros. Slightly more than 18 million Euros correspond to the health care costs of those cases requiring specialized care. Accurate estimates of OA are essential to adequately prevent this disease. The treatment of OA, which involves a significant cost, is being financed by the NHS, although it should be covered by Social Security. Copyright © 2012 SEPAR. Published by Elsevier España, S.L. All rights reserved.
NASA Astrophysics Data System (ADS)
Paiva, Rodrigo C. D.; Durand, Michael T.; Hossain, Faisal
2015-01-01
Recent efforts have sought to estimate river discharge and other surface water-related quantities using spaceborne sensors, with better spatial coverage but worse temporal sampling as compared with in situ measurements. The Surface Water and Ocean Topography (SWOT) mission will provide river discharge estimates globally from space. However, questions remain on how to optimally use the spatially distributed but asynchronous satellite observations to generate continuous fields. This paper presents a statistical model, River Kriging (RK), for estimating discharge time series in a river network in the context of the SWOT mission. RK uses discharge estimates at different locations and times to produce a continuous field using spatiotemporal kriging. A key component of RK is the space-time river discharge covariance, which was derived analytically from the diffusive wave approximation of the Saint-Venant equations. The RK covariance also accounts for the loss of correlation at confluences. The model performed well in a case study of the Ganges-Brahmaputra-Meghna (GBM) river system in Bangladesh using synthetic SWOT observations. The correlation model reproduced empirically derived values. RK (R2=0.83) outperformed other kriging-based methods (R2=0.80), as well as a simple time series linear interpolation (R2=0.72). RK was used to combine discharge from SWOT and in situ observations, improving estimates when the latter are included (R2=0.91). The proposed statistical concepts may eventually provide a feasible framework to estimate continuous discharge time series across a river network based on SWOT data, other altimetry missions, and/or in situ data.
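The kriging mechanics (though not the paper's analytically derived river covariance, nor its handling of confluences) can be sketched with a separable exponential space-time kernel as a stand-in:

```python
import numpy as np

def st_cov(d_km, dt_days, sill=1.0, range_km=200.0, range_days=10.0):
    # Separable exponential space-time covariance. The paper derives its
    # covariance from the diffusive-wave equations; this simple kernel is
    # only a placeholder to show the kriging mechanics.
    return sill * np.exp(-d_km / range_km) * np.exp(-dt_days / range_days)

def simple_krige(obs_pts, obs_vals, query_pt, mean=0.0):
    # Simple kriging predictor applied to anomalies: weights = C^{-1} c0.
    d = np.abs(obs_pts[:, None, :] - obs_pts[None, :, :])
    C = st_cov(d[..., 0], d[..., 1])
    c0 = st_cov(np.abs(obs_pts[:, 0] - query_pt[0]),
                np.abs(obs_pts[:, 1] - query_pt[1]))
    w = np.linalg.solve(C + 1e-9 * np.eye(len(C)), c0)
    return mean + w @ (obs_vals - mean)

# Three asynchronous "overpass" discharge anomalies at (chainage km, day);
# predict at km 120 on day 5.
pts = np.array([[100.0, 0.0], [150.0, 3.0], [80.0, 8.0]])
vals = np.array([1.2, 0.7, -0.4])
print(simple_krige(pts, vals, np.array([120.0, 5.0])))
```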
NASA Astrophysics Data System (ADS)
Anugrah, I. R.; Mudzakir, A.; Sumarna, O.
2017-09-01
Teaching materials used in Indonesia generally emphasize only recall, so students' science literacy is low. Innovation is needed to transform traditional teaching materials so that they can stimulate students' science literacy; one approach is context-based. This study focused on the construction of a context-based module for high school using the Organic Light-Emitting Diode (OLED) as its topic. The OLED was chosen because it is an up-to-date topic that is relevant to real life. The study used the Model of Educational Reconstruction (MER) to reconstruct the science content structure of OLEDs by combining scientists' perspectives with students' preconceptions and the national curriculum. The literature review of OLEDs covers their definition, components, characteristics and working principle. Students' preconceptions about OLEDs were obtained through interviews. The results show that students' preconceptions are not yet fully aligned with the scientists' perspective, partly because some of the related Chemistry concepts are too complicated. Through curriculum analysis, the OLED-related Chemistry topics appropriate for high school are Bohr's atomic theory, redox, and organic chemistry, including polymers and aromatics. The OLED context and its Chemistry concepts were developed into a context-based module by adapting science literacy-based learning. This module is expected to increase students' science literacy performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
Location Based Services (LBS), context aware applications, and people and object tracking depend on the ability to locate mobile devices, also known as localization, in the wireless landscape. Localization enables a diverse set of applications that include, but are not limited to, vehicle guidance in an industrial environment, security monitoring, self-guided tours, personalized communications services, resource tracking, mobile commerce services, guiding emergency workers during fire emergencies, habitat monitoring, environmental surveillance, and receiving alerts. This paper presents a new neural network approach (LENSR) based on a competitive topological Counter Propagation Network (CPN) with k-nearest neighborhood vector mapping, for indoor location estimation based on received signal strength. The advantage of this approach is both speed and accuracy. The tested accuracy of the algorithm was 90.6% within 1 meter and 96.4% within 1.5 meters. Several approaches for location estimation using WLAN technology were reviewed for comparison of results.
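The k-nearest neighborhood vector-mapping step is easy to illustrate. The sketch below is plain weighted kNN fingerprinting on a toy radio map, not the LENSR network itself, which couples this mapping with a trained counter propagation network.

```python
import numpy as np

def knn_locate(fingerprints, positions, rss, k=3):
    # Weighted k-nearest-neighbour fingerprinting: find the k stored RSS
    # vectors closest to the observation and average their surveyed
    # positions, weighted by inverse distance in signal space.
    d = np.linalg.norm(fingerprints - rss, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-6)
    return (positions[nn] * w[:, None]).sum(axis=0) / w.sum()

# Toy radio map: 4 calibration points, RSS from 3 access points (dBm)
fp = np.array([[-40.0, -70.0, -60.0], [-50.0, -60.0, -65.0],
               [-65.0, -50.0, -55.0], [-70.0, -45.0, -50.0]])
xy = np.array([[0.0, 0.0], [0.0, 5.0], [5.0, 0.0], [5.0, 5.0]])
print(knn_locate(fp, xy, rss=np.array([-48.0, -62.0, -63.0])))  # near (0, 5)
```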
Estimation and applications of size-based distributions in forestry
Jeffrey H. Gove
2003-01-01
Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...
Do U.S. States’ Socioeconomic and Policy Contexts Shape Adult Disability?
Hayward, Mark D.; Wolf, Douglas A.
2017-01-01
Growing disparities in adult mortality across U.S. states point to the importance of assessing disparities in other domains of health. Here, we estimate state-level differences in disability, and draw on the WHO socio-ecological framework to assess the role of ecological factors in explaining these differences. Our study is based on data from 5.5 million adults aged 25–94 years in the 2010–2014 waves of the American Community Survey. Disability is defined as difficulty with mobility, independent living, self-care, vision, hearing, or cognition. We first provide estimates of age-standardized and age-specific disability prevalence by state. We then estimate multilevel models to assess how states’ socioeconomic and policy contexts shape the probability of having a disability. Age-standardized disability prevalence differs markedly by state, from 12.9% in North Dakota and Minnesota to 23.5% in West Virginia. Disability was lower in states with stronger economic output, more income equality, longer histories of tax credits for low-income workers, and higher cigarette taxes (for middle-age women), net of individuals’ socio-demographic characteristics. States’ socioeconomic and policy contexts appear particularly important for older adults. Findings underscore the importance of socio-ecological influences on disability. PMID:28219027
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void, as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
Volcanic stratospheric sulfur injections and aerosol optical depth from 500 BCE to 1900 CE
NASA Astrophysics Data System (ADS)
Toohey, Matthew; Sigl, Michael
2017-11-01
The injection of sulfur into the stratosphere by explosive volcanic eruptions is the cause of significant climate variability. Based on sulfate records from a suite of ice cores from Greenland and Antarctica, the eVolv2k database includes estimates of the magnitudes and approximate source latitudes of major volcanic stratospheric sulfur injection (VSSI) events from 500 BCE to 1900 CE, constituting an update of prior reconstructions and an extension of the record by 1000 years. The database incorporates improvements to the ice core records (in terms of synchronisation and dating) and refinements to the methods used to estimate VSSI from ice core records, and it includes first estimates of the random uncertainties in VSSI values. VSSI estimates for many of the largest eruptions, including Samalas (1257), Tambora (1815), and Laki (1783), are within 10 % of prior estimates. A number of strong events are included in eVolv2k which are largely underestimated or not included in earlier VSSI reconstructions, including events in 540, 574, 682, and 1108 CE. The long-term annual mean VSSI from major volcanic eruptions is estimated to be ~0.5 Tg [S] yr-1, ~50 % greater than a prior reconstruction due to the identification of more events and an increase in the magnitude of many intermediate events. A long-term latitudinally and monthly resolved stratospheric aerosol optical depth (SAOD) time series is reconstructed from the eVolv2k VSSI estimates, and the resulting global mean SAOD is found to be similar (within 33 %) to a prior reconstruction for most of the largest eruptions. The long-term (500 BCE-1900 CE) average global mean SAOD estimated from the eVolv2k VSSI estimates including a constant background injection of stratospheric sulfur is ~0.014, 30 % greater than a prior reconstruction. These new long-term reconstructions of past VSSI and SAOD variability give context to recent volcanic forcing, suggesting that the 20th century was a period of somewhat weaker than average volcanic forcing, with current best estimates of 20th century mean VSSI and SAOD values being 25 and 14 % less, respectively, than the mean of the 500 BCE to 1900 CE period. The reconstructed VSSI and SAOD data are available at https://doi.org/10.1594/WDCC/eVolv2k_v2.
Boehler, Christian E H; Lord, Joanne
2016-01-01
Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%-19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. These findings are for a single clinical question and may be atypical. © The Author(s) 2015.
Water quality monitoring records for estimating tap water arsenic and nitrate: a validation study.
Searles Nielsen, Susan; Kuehn, Carrie M; Mueller, Beth A
2010-01-28
Tap water may be an important source of exposure to arsenic and nitrate. Obtaining and analyzing samples in the context of large studies of health effects can be expensive. As an alternative, studies might estimate contaminant levels in individual homes by using publicly available water quality monitoring records, either alone or in combination with geographic information systems (GIS). We examined the validity of records-based methods in Washington State, where arsenic and nitrate contamination is prevalent but generally observed at modest levels. Laboratory analysis of samples from 107 homes (median 0.6 microg/L arsenic, median 0.4 mg/L nitrate as nitrogen) served as our "gold standard." Using Spearman's rho we compared these measures to estimates obtained using only the homes' street addresses and recent and/or historical measures from publicly monitored water sources within specified distances (radii) ranging from one half mile to 10 miles. Agreement improved as distance decreased, but the proportion of homes for which we could estimate summary measures also decreased. When including all homes, agreement was 0.05-0.24 for arsenic (8 miles), and 0.31-0.33 for nitrate (6 miles). Focusing on the closest source yielded little improvement. Agreement was greatest among homes with private wells. For homes on a water system, agreement improved considerably if we included only sources serving the relevant system (rho = 0.29 for arsenic, rho = 0.60 for nitrate). Historical water quality databases show some promise for categorizing epidemiologic study participants in terms of relative tap water nitrate levels. Nonetheless, such records-based methods must be used with caution, and their use for arsenic may be limited.
A place-based model of local activity spaces: individual place exposure and characteristics
NASA Astrophysics Data System (ADS)
Hasanzadeh, Kamyar; Laatikainen, Tiina; Kyttä, Marketta
2018-01-01
Researchers have long hypothesized relationships between mobility, urban context, and health. Despite ample discussion, empirical findings corroborating such associations remain marginal in the literature. It is increasingly believed that the weakness of the observed associations can be largely explained by the common misspecification of the geographical context. Researchers from different fields have developed a wide range of methods for estimating the extents of these geographical contexts. In this article, we argue that no single approach has yet sufficiently captured the complexity of human mobility patterns. Subsequently, we discuss that a better understanding of individual activity spaces is possible through a spatially sensitive estimation of place exposure. Following this discussion, we take an integrative person- and place-based approach to create an individualized residential exposure model (IREM) to estimate the local activity spaces (LAS) of individuals. This model is created using data collected through public participation GIS. Following a brief comparison of the IREM with other commonly used LAS models, the article presents an empirical study of aging citizens in the Helsinki area to demonstrate the usability of the proposed framework. In this study, we identify the main dimensions of LASs and seek their associations with the socio-demographic characteristics of individuals and their location in the region. The promising results from the comparisons and the findings from the empirical part suggest both methodological and conceptual improvements in capturing the complexity of local activity spaces.
2013-01-01
Background: Insect diversity typically declines with increasing latitude, but previous studies have shown conflicting latitude-richness gradients for some hymenopteran parasitoids. However, historical estimates of insect diversity and species richness can be difficult to confirm or compare, because they may be based upon dissimilar methods. As a proxy for species identification, we used DNA barcoding to identify molecular operational taxonomic units (MOTUs) for 7870 Hymenoptera specimens collected near Churchill, Manitoba, from 2004 through 2010. Results: We resolved 1630 MOTUs for this collection, of which 75% (1228) were ichneumonoids (Ichneumonidae + Braconidae) and 91% (1484) were parasitoids. We estimate the total number of Hymenoptera MOTUs in this region at 2624-2840. Conclusions: The diversity of parasitoids in this sub-Arctic environment implies a high diversity of potential host species throughout the same range. We discuss these results in the contexts of resolving interspecific interactions that may include cryptic species, and developing reproducible methods to estimate and compare species richness across sites and between surveys, especially when morphological specialists are not available to identify every specimen. PMID:23351160
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. In this way, the framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
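The alternation the abstract describes can be seen in a toy consensus-ADMM problem. Here the prior is a static sparsity penalty, so the consensus step is a plain average; in the paper's framework that step is replaced by a Kalman smoother so that dynamic (and non-Markov) priors can be handled.

```python
import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of the l1 (sparsity) prior.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm(y, lam=0.5, rho=1.0, iters=200):
    # Consensus ADMM for a MAP problem split into a Gaussian likelihood
    # term 0.5*||y - x||^2 and a sparsity prior lam*||x||_1. Each iteration
    # computes separate likelihood- and prior-based estimates and then
    # "averages" them; in the paper that averaging is performed by a
    # Kalman smoother so dynamic, non-Markov priors can be accommodated.
    z = np.zeros_like(y)
    u1 = np.zeros_like(y)
    u2 = np.zeros_like(y)
    for _ in range(iters):
        x1 = (y + rho * (z - u1)) / (1.0 + rho)  # likelihood proximal step
        x2 = soft(z - u2, lam / rho)             # prior proximal step
        z = 0.5 * (x1 + u1 + x2 + u2)            # consensus (averaging) step
        u1 += x1 - z                             # dual updates
        u2 += x2 - z
    return z

y = np.array([0.1, 2.3, -0.2, 1.8, 0.05])
print(consensus_admm(y))  # small entries shrink to zero: soft(y, lam)
```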
The effect of model uncertainty on some optimal routing problems
NASA Technical Reports Server (NTRS)
Mohanty, Bibhu; Cassandras, Christos G.
1991-01-01
The effect of model uncertainties on optimal routing in a system of parallel queues is examined. The uncertainty arises in modeling the service time distribution for the customers (jobs, packets) to be served. For a Poisson arrival process and Bernoulli routing, the optimal mean system delay generally depends on the variance of this distribution. However, as the input traffic load approaches the system capacity the optimal routing assignment and corresponding mean system delay are shown to converge to a variance-invariant point. The implications of these results are examined in the context of gradient-based routing algorithms. An example of a model-independent algorithm using online gradient estimation is also included.
Evaluation of alternative model-data fusion approaches in water balance estimation across Australia
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Renzullo, L. J.
2009-04-01
Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required mean that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date have mostly relied on static parameter fitting against observations and have made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.
Context-Based Questions: Optics in Animal Eyes
ERIC Educational Resources Information Center
Kaltakci, Derya; Eryilmaz, Ali
2011-01-01
Context is important as a motivational factor for student involvement with physics. The diversity in the types and the functions of animal eyes is an excellent context in which to achieve this goal. There exists a range of subtopics in optics including pinhole, reflection, refraction, and superposition that can be discussed in the context of the…
ERIC Educational Resources Information Center
Guilherme, Elsa; Faria, Cláudia; Boaventura, Diana
2016-01-01
The purpose of the study was to investigate how young students engage in an inquiry-based project driven by real-life contexts. Elementary school children were engaged in a small inquiry project centred on marine biodiversity and species adaptations. All activities included the exploration of an out-of-school setting as a learning context. A total…
Context-Aided Sensor Fusion for Enhanced Urban Navigation
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-01-01
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
Overdiagnosis across medical disciplines: a scoping review
de Groot, Joris A H; Reitsma, Johannes B; Moons, Karel G M; Hooft, Lotty; Naaktgeboren, Christiana A
2017-01-01
Objective: To provide insight into how and in what clinical fields overdiagnosis is studied and give directions for further applied and methodological research. Design: Scoping review. Data sources: Medline up to August 2017. Study selection: All English studies on humans, in which overdiagnosis was discussed as a dominant theme. Data extraction: Studies were assessed on clinical field, study aim (ie, methodological or non-methodological), article type (eg, primary study, review), the type and role of diagnostic test(s) studied and the context in which these studies discussed overdiagnosis. Results: From 4896 studies, 1851 were included for analysis. Half of all studies on overdiagnosis were performed in the field of oncology (50%). Other prevalent clinical fields included mental disorders, infectious diseases and cardiovascular diseases accounting for 9%, 8% and 6% of studies, respectively. Overdiagnosis was addressed from a methodological perspective in 20% of studies. Primary studies were the most common article type (58%). The type of diagnostic tests most commonly studied were imaging tests (32%), although these were predominantly seen in oncology and cardiovascular disease (84%). Diagnostic tests were studied in a screening setting in 43% of all studies, but as high as 75% of all oncological studies. The context in which studies addressed overdiagnosis related most frequently to its estimation, accounting for 53%. Methodology on overdiagnosis estimation and definition provided a source for extensive discussion. Other contexts of discussion included definition of disease, overdiagnosis communication, trends in increasing disease prevalence, drivers and consequences of overdiagnosis, incidental findings and genomics. Conclusions: Overdiagnosis is discussed across virtually all clinical fields and in different contexts. The variability in characteristics between studies and lack of consensus on overdiagnosis definition indicate the need for a uniform typology to improve coherence and comparability of studies on overdiagnosis. PMID:29284720
A Pseudorange Measurement Scheme Based on Snapshot for Base Station Positioning Receivers.
Mo, Jun; Deng, Zhongliang; Jia, Buyun; Bian, Xinmei
2017-12-01
Digital multimedia broadcasting signals show promise as wireless positioning signals. This paper studies one multimedia broadcasting technology, China Mobile Multimedia Broadcasting (CMMB), in the context of positioning. Theoretical and practical analysis of the CMMB signal suggests that the existing CMMB signal does not support meter-level positioning. The CMMB system was therefore modified to achieve meter-level positioning capability by multiplexing the CMMB signal and pseudo codes in the same frequency band. The time difference of arrival (TDOA) estimation method is used in base station positioning receivers. Due to the influence of a complex fading channel and the limited bandwidth of receivers, the regular tracking method based on pseudo code ranging has difficulty providing continuous and accurate TDOA estimates. A pseudorange measurement scheme based on snapshots is proposed to solve this problem. The algorithm extracts the TDOA estimate from stored signal fragments and utilizes a Taylor expansion of the autocorrelation function to improve the TDOA estimation accuracy. Monte Carlo simulations and real data tests show that the proposed algorithm significantly reduces the TDOA estimation error for base station positioning receivers, allowing the modified CMMB system to achieve meter-level positioning accuracy.
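The snapshot idea — estimate the TDOA from a stored signal fragment by correlation, then refine the peak with a local expansion — can be sketched as follows. The parabolic three-point refinement stands in for the paper's Taylor expansion of the autocorrelation function, and the signal parameters are toy values, not CMMB's.

```python
import numpy as np

rng = np.random.default_rng(7)

def tdoa_estimate(x, y, fs):
    # TDOA by FFT-based cross-correlation over a stored snapshot, with a
    # sub-sample parabolic refinement of the correlation peak.
    n = len(x) + len(y) - 1
    r = np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n)), n)
    r = np.concatenate((r[-(len(y) - 1):], r[:len(x)]))  # lags -(Ny-1)..Nx-1
    k = int(np.argmax(r))
    delta = 0.0
    if 0 < k < len(r) - 1:
        denom = r[k - 1] - 2.0 * r[k] + r[k + 1]
        if denom != 0.0:
            delta = 0.5 * (r[k - 1] - r[k + 1]) / denom
    return (k - (len(y) - 1) + delta) / fs  # seconds by which x lags y

fs = 1e6                                    # toy sample rate
t = np.arange(1024) / fs
code = np.sign(np.sin(2 * np.pi * 31250 * t))        # toy ranging code
snapshot = np.roll(code, 7) + 0.1 * rng.normal(size=code.size)
print(tdoa_estimate(snapshot, code, fs) * fs)        # ~7 samples
```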
Molecular insights into the colonization and chromosomal diversification of Madeiran house mice.
Förster, D W; Gündüz, I; Nunes, A C; Gabriel, S; Ramalhinho, M G; Mathias, M L; Britton-Davidian, J; Searle, J B
2009-11-01
The colonization history of Madeiran house mice was investigated by analysing the complete mitochondrial (mt) D-loop sequences of 156 mice from the island of Madeira and mainland Portugal, extending previous studies. The numbers of mtDNA haplotypes from Madeira and mainland Portugal were substantially increased (17 and 14 new haplotypes, respectively), and phylogenetic analysis confirmed the previously reported link between the Madeiran archipelago and northern Europe. Sequence analysis revealed the presence of four mtDNA lineages in mainland Portugal, of which one was particularly common and widespread (termed the 'Portugal Main Clade'). There was no support for population bottlenecks during the formation of the six Robertsonian chromosome races on the island of Madeira, and D-loop sequence variation was not found to be structured according to karyotype. The colonization time of the Madeiran archipelago by Mus musculus domesticus was estimated using two molecular dating methods (mismatch distribution and Bayesian skyline plot). Time estimates based on D-loop sequence variation at mainland sites (including previously published data from France and Turkey) were evaluated in the context of the zooarchaeological record of M. m. domesticus. A range of values for the mutation rate (mu) and the number of mouse generations per year was considered in these analyses because of the uncertainty surrounding these two parameters. The colonization of Portugal and Madeira by house mice is discussed in the context of the best-supported parameter values. In keeping with recent studies, our results suggest that mutation rate estimates based on interspecific divergence lead to gross overestimates of the timing of recent within-species events.
Powell, Byron J; Mandell, David S; Hadley, Trevor R; Rubin, Ronnie M; Evans, Arthur C; Hurford, Matthew O; Beidas, Rinad S
2017-05-12
Examining the role of modifiable barriers and facilitators is a necessary step toward developing effective implementation strategies. This study examines whether both general (organizational culture, organizational climate, and transformational leadership) and strategic (implementation climate and implementation leadership) organizational-level factors predict therapist-level determinants of implementation (knowledge of and attitudes toward evidence-based practices). Within the context of a system-wide effort to increase the use of evidence-based practices (EBPs) and recovery-oriented care, we conducted an observational, cross-sectional study of 19 child-serving agencies in the City of Philadelphia, including 23 sites, 130 therapists, 36 supervisors, and 22 executive administrators. Organizational variables included characteristics such as EBP initiative participation, program size, and proportion of independent contractor therapists; general factors such as organizational culture and climate (Organizational Social Context Measurement System) and transformational leadership (Multifactor Leadership Questionnaire); and strategic factors such as implementation climate (Implementation Climate Scale) and implementation leadership (Implementation Leadership Scale). Therapist-level variables included demographics, attitudes toward EBPs (Evidence-Based Practice Attitudes Scale), and knowledge of EBPs (Knowledge of Evidence-Based Services Questionnaire). We used linear mixed-effects regression models to estimate the associations between the predictor (organizational characteristics, general and strategic factors) and dependent (knowledge of and attitudes toward EBPs) variables. Several variables were associated with therapists' knowledge of EBPs. Clinicians in organizations with more proficient cultures or higher levels of transformational leadership (idealized influence) had greater knowledge of EBPs; conversely, clinicians in organizations with more resistant cultures, more functional organizational climates, and implementation climates characterized by higher levels of financial reward for EBPs had less knowledge of EBPs. A number of organizational factors were associated with the therapists' attitudes toward EBPs. For example, more engaged organizational cultures, implementation climates characterized by higher levels of educational support, and more proactive implementation leadership were all associated with more positive attitudes toward EBPs. This study provides evidence for the importance of both general and strategic organizational determinants as predictors of knowledge of and attitudes toward EBPs. The findings highlight the need for longitudinal and mixed-methods studies that examine the influence of organizational factors on implementation.
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
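As a minimal illustration of the iid case described above (not the regenerative multiple-chain machinery of the paper), the following sketch computes the self-normalized importance sampling estimator and its delta-method standard error; the densities and test function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draws from pi1 = Student-t with 4 df (heavier-tailed than the target,
# so the importance weights are well behaved).
x = rng.standard_t(df=4, size=100_000)
log_pi1 = -2.5 * np.log1p(x**2 / 4.0)   # t_4 density, up to a constant
log_pi = -0.5 * x**2                    # target pi = N(0,1), up to a constant
w = np.exp(log_pi - log_pi1)            # unnormalized importance weights
f = x**2                                # estimate E_pi[X^2] = 1

mu_hat = np.sum(w * f) / np.sum(w)      # self-normalized (ratio) estimator

# Delta-method standard error -- valid for iid draws only; with Markov chain
# draws this simple formula breaks down, which is the problem the
# regenerative approach above addresses.
se = np.sqrt(np.mean((w * (f - mu_hat))**2) / x.size) / w.mean()
print(mu_hat, se)
```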
Geography of Adolescent Obesity in the U.S., 2007-2011.
Kramer, Michael R; Raskind, Ilana G; Van Dyke, Miriam E; Matthews, Stephen A; Cook-Smith, Jessica N
2016-12-01
Obesity remains a significant threat to the current and long-term health of U.S. adolescents. The authors developed county-level estimates of adolescent obesity for the contiguous U.S., and then explored the association between 23 conceptually derived area-based correlates of adolescent obesity and ecologic obesity prevalence. Multilevel small area regression methods applied to the 2007 and 2011-2012 National Survey of Children's Health produced county-level obesity prevalence estimates for children aged 10-17 years. Exploratory multivariable Bayesian regression estimated the cross-sectional association between nutrition, activity, and macrosocial characteristics of counties and states, and county-level obesity prevalence. All analyses were conducted in 2015. Adolescent obesity varies geographically with clusters of high prevalence in the Deep South and Southern Appalachian regions. Geographic disparities and clustering in observed data are largely explained by hypothesized area-based variables. In adjusted models, activity-environment variables, but not nutrition-environment variables, were associated with county-level obesity prevalence. County violent crime was associated with higher obesity, whereas recreational facility density was associated with lower obesity. Measures of the macrosocial and relational domain, including community SES, community health, and social marginalization, were the strongest correlates of county-level obesity. County-level estimates of adolescent obesity demonstrate notable geographic disparities, which are largely explained by conceptually derived area-based contextual measures. This ecologic exploratory study highlights the importance of taking a multidimensional approach to understanding the social and community context in which adolescents make obesity-relevant behavioral choices. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography
Sánchez, Adrian A.
2016-01-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
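A hedged sketch of the validation strategy described above, computing a pixel variance map and sample covariance from repeated noise realizations; scikit-image's Chambolle TV denoiser stands in for the paper's TV-penalized IIR, and the toy object is an assumption.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # stand-in for TV-penalized IIR

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0                    # toy object, not CT data

# Sample covariance over repeated noise realizations (the validation strategy).
recons = np.stack([
    denoise_tv_chambolle(truth + 0.2 * rng.standard_normal(truth.shape), weight=0.1)
    for _ in range(200)
])
var_map = recons.var(axis=0)               # pixel variance map (signal-dependent)
cov = np.cov(recons.reshape(200, -1), rowvar=False)  # feasible for small images only
print(var_map.max(), cov.shape)
```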
Neural control of fast nonlinear systems--application to a turbocharged SI engine with VCT.
Colin, Guillaume; Chamaillard, Yann; Bloch, Gérard; Corde, Gilles
2007-07-01
Today, (engine) downsizing using turbocharging appears to be a major way of reducing the fuel consumption and pollutant emissions of spark ignition (SI) engines. In this context, an efficient control of the air actuators [throttle, turbo wastegate, and variable camshaft timing (VCT)] is needed for engine torque control. This paper proposes a nonlinear model-based control scheme which combines separate, but coordinated, control modules. These modules are based on different control strategies: internal model control (IMC), model predictive control (MPC), and optimal control. It is shown how neural models can be used at different levels and included in the control modules to replace physical models, which are too complex to be embedded online, or to estimate non-measured variables. The results obtained from two different test benches show the real-time applicability and good control performance of the proposed methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N; White, Devin A; Urban, Marie L
2013-01-01
The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
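The paper's bivariate-Gaussian encoding is not reproduced here; as a hedged illustration of expert elicitation for a Beta prior, the sketch below uses a standard mode-plus-effective-sample-size parameterization instead.

```python
def beta_from_mode(mode, strength):
    """Encode an expert's 'most likely' proportion and a confidence weight
    (an effective sample size, strength > 2) as a Beta(a, b) prior.
    This is a common mode-based elicitation, not the bivariate-Gaussian
    method described above."""
    a = mode * (strength - 2.0) + 1.0
    b = (1.0 - mode) * (strength - 2.0) + 1.0
    return a, b

print(beta_from_mode(0.3, 20))   # -> (6.4, 13.6); mode = (a-1)/(a+b-2) = 0.3
```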
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
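A minimal sketch of the likelihood-ratio test described above, assuming the joint and independent models have already been fitted and their log-likelihoods are available; the numeric values are made up.

```python
from scipy.stats import chi2

def lr_test(loglik_joint, loglik_indep, df):
    """LRT for H0: the random effects of the two outcomes are uncorrelated.
    df = number of cross-outcome covariance parameters set to zero under H0."""
    stat = 2.0 * (loglik_joint - loglik_indep)
    return stat, chi2.sf(stat, df)

stat, p = lr_test(loglik_joint=-1041.3, loglik_indep=-1048.9, df=4)  # made-up values
print(stat, p)
```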
Application of neural based estimation algorithm for gait phases of above knee prosthesis.
Tileylioğlu, E; Yilmaz, A
2015-01-01
In this study, two gait phase estimation methods, which utilize a rule-based quantization and an artificial neural network model respectively, are developed and applied to a microcontroller-based semi-active knee prosthesis in order to respond to user demands and adapt to environmental conditions. In this context, an experimental environment has been set up in which gait data are collected synchronously from both inertial and image-based measurement systems. The inertial measurement system, which incorporates MEMS accelerometers and gyroscopes, is used to perform direct motion measurement through the microcontroller, while the image-based measurement system is employed for producing the verification data and assessing the success of the prosthesis. Embedded algorithms dynamically normalize the input data prior to gait phase estimation. The real-time analyses of the two methods revealed that the embedded ANN-based approach performs slightly better in comparison with the rule-based algorithm and has the advantage of being easily scalable, thus able to accommodate additional input parameters considering the microcontroller constraints.
Simon, Steven L.; Bouville, André; Kleinerman, Ruth
2009-01-01
Biodosimetry measurements can potentially be an important and integral part of the dosimetric methods used in long-term studies of health risk following radiation exposure. Such studies rely on accurate estimation of doses to the whole body or to specific organs of individuals in order to derive reliable estimates of cancer risk. However, dose estimates based on analytical dose reconstruction (i.e., models) or personnel monitoring measurements, e.g., film badges, can have substantial uncertainty. Biodosimetry can potentially reduce uncertainty in health risk studies by corroborating model-based dose estimates or by using them to assess bias in dose models. While biodosimetry has begun to play a more significant role in long-term health risk studies, its use is still generally limited in that context due to one or more factors, including inadequate limits of detection, large inter-individual variability of the signal measured, high per-sample cost, and invasiveness. Presently, the most suitable biodosimetry methods for epidemiologic studies are chromosome aberration frequencies from fluorescence in situ hybridization (FISH) of peripheral blood lymphocytes and electron paramagnetic resonance (EPR) measurements made on tooth enamel. Both types of measurements, however, are usually invasive and require difficult-to-obtain biological samples. Moreover, doses derived from these methods are not always directly relevant to the tissues of interest. To increase the value of biodosimetry to epidemiologic studies, a number of issues need to be considered, including limits of detection, effects of inhomogeneous exposure of the body, how to extrapolate from the tissue sampled to the tissues of interest, and how to adjust dosimetry models applied to large populations based on sparse biodosimetry measurements. The requirements of health risk studies suggest a set of characteristics that, if satisfied by new biodosimetry methods, would increase the overall usefulness of biodosimetry in determining radiation health risks. PMID:20065672
Age diagnosis based on incremental lines in dental cementum: a critical reflection.
Grosskopf, Birgit; McGlynn, George
2011-01-01
Age estimation based on the counting of incremental lines in dental cementum is a method frequently used for the estimation of the age at death for humans in bioarchaeology, and increasingly, forensic anthropology. Assessment of applicability, precision, and method reproducibility continue to be the focus of research in this area, and are occasionally accompanied by significant controversy. Differences in methodological techniques for data collection (e.g. number of sections, factor of magnification for counting or interpreting "outliers") are presented. Potential influences on method reliability are discussed, especially for their applicability in forensic contexts.
Towards a Context-Aware Proactive Decision Support Framework
2013-11-15
…initiative that has developed text analytic technology that crosses the semantic gap into the area of event recognition and representation. The […] recognizing operational context, and techniques for recognizing context shift. Additional research areas include: adequately capturing users […], for which the Universal Interaction Context Ontology [12] might serve as a foundation; and instantiating formal models of decision making based on information seeking.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
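For contrast with the likelihood-based models the abstract advocates, here is a sketch of the moment-based (DerSimonian-Laird) random-effects estimator it argues against; the toy effect sizes and variances are placeholders.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Moment-based random-effects pooling. y: study effect sizes
    (e.g. log odds ratios); v: their within-study variances."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                 # weights including heterogeneity
    mu = np.sum(w_star * y) / np.sum(w_star)
    return mu, np.sqrt(1.0 / np.sum(w_star)), tau2

y = np.log([0.8, 0.55, 1.1, 0.7])             # toy log odds ratios
v = np.array([0.09, 0.16, 0.25, 0.12])        # toy within-study variances
print(dersimonian_laird(y, v))
```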
Estimation of the cost of large-scale school deworming programmes with benzimidazoles
Montresor, A.; Gabrielli, A.F.; Engels, D.
2017-01-01
This study estimates the cost of distributing benzimidazole tablets in the context of school deworming programmes: we analysed studies reporting the cost of school deworming from seven countries in four WHO regions. The estimated cost for drug procurement to cover one million children (including customs clearance and international transport) is approximately US$20 000. The estimated financial costs (including the cost of training of personnel, drug transport, social mobilization and monitoring) are, on average, equivalent to US$33 000 per million school-age children, with minimal variation across countries and continents. The estimated economic costs of distribution (including the time spent by teachers and health personnel at central, provincial and district levels) to cover one million children correspond to approximately US$19 000. This study shows the minimal cost of school deworming activities, but also shows the significant contribution (corresponding to a quarter of the entire cost of the programme) provided by health and education systems in endemic countries, even in the case of drug donations and donor support of distribution costs. PMID:19926104
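A minimal worked sum of the per-child cost implied by the three figures above:

```python
# Per-million-children components quoted in the abstract (US$).
drugs, financial, economic = 20_000, 33_000, 19_000
total = drugs + financial + economic
print(total, total / 1_000_000)  # ~US$72,000 per million, i.e. ~US$0.07 per child
# Note the economic component (19k/72k ~ 26%) is indeed about a quarter of the
# total, consistent with the abstract's claim about health/education systems.
```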
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagation environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described in the context of previous work; it estimates model parameters by fitting an assumed propagation model to the received signals. This approach builds upon previous work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
Mahboub-Ahari, Alireza; Pourreza, Abolghasem; Sari, Ali Akbari; Rahimi Foroushani, Abbas; Heydari, Hassan
2014-01-01
The present study aimed to provide better insight into methodological issues related to time preference studies, and to estimate private and social discount rates, using a rigorous systematic review and meta-analysis. We searched the PubMed, EMBASE and Proquest databases in June 2013. All studies that estimated private and social time preference rates for health outcomes through a stated-preference approach were considered eligible for inclusion. We conducted both fixed- and random-effect meta-analyses using the mean discount rate and standard deviation of the included studies. The I-squared statistic was used to test heterogeneity across studies. Private and social discount rates were estimated separately using Stata 11. Out of 44 screened full texts, 8 population-based empirical studies were included in the qualitative synthesis. Reported time preference rates ranged from 0.036 to 0.07 for own health and from 0.04 to 0.2 for social health. Private and social discount rates were estimated at 0.056 (95% CI: 0.038, 0.074) and 0.066 (95% CI: 0.064, 0.068), respectively. Considering the impact of time preference on healthy behaviors and because of timing issues, individuals' time preference, as a key determinant of policy making, should be taken into account. Direct translation of elicited discount rates into official discount rates remains questionable. Decisions about the proper discount rate for the health context may need a cross-party consensus among health economists and policy makers.
Vitezica, Zulma G; Varona, Luis; Legarra, Andres
2013-12-01
Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
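A hedged numpy sketch of the additive (G) and dominance (D) genomic relationship matrices in the breeding-value/dominance-deviation parameterization discussed above, assuming 0/1/2 genotype coding and marker allele frequencies estimated from the data.

```python
import numpy as np

def genomic_relationships(M):
    """G and D relationship matrices from a genotype matrix M
    (individuals x markers, coded 0/1/2); a sketch of the parameterization
    based on substitution effects and dominance deviations."""
    p = M.mean(axis=0) / 2.0
    q = 1.0 - p
    Z = M - 2.0 * p                                   # additive covariates
    G = Z @ Z.T / np.sum(2.0 * p * q)
    # dominance-deviation covariates: AA -> -2q^2, Aa -> 2pq, aa -> -2p^2
    W = np.where(M == 2, -2.0 * q**2,
                 np.where(M == 1, 2.0 * p * q, -2.0 * p**2))
    D = W @ W.T / np.sum((2.0 * p * q) ** 2)
    return G, D

rng = np.random.default_rng(0)
G, D = genomic_relationships(rng.integers(0, 3, size=(50, 500)))  # toy genotypes
print(G.shape, D.shape)
```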
An ordination of life histories using morphological proxies: capital vs. income breeding in insects.
Davis, Robert B; Javoiš, Juhan; Kaasik, Ants; Õunap, Erki; Tammaru, Toomas
2016-08-01
Predictive classifications of life histories are essential for evolutionary ecology. While attempts to apply a single approach to all organisms may be overambitious, recent advances suggest that more narrow ordination schemes can be useful. However, these schemes mostly lack easily observable proxies of the position of a species on the respective axes. It has been proposed that, in insects, the degree of capital (vs. income) breeding, reflecting the importance of adult feeding for reproduction, correlates with various ecological traits at the level of among-species comparison. We sought to test these ideas via rigorous phylogenetic comparative analyses. We used experimentally derived life-history data for 57 species of European Geometridae (Lepidoptera), and an original phylogenetic reconstruction. The degree of capital breeding was estimated based on morphological proxies, including the relative abdomen size of females. Applying Brownian-motion-based comparative analyses (with an original update to include error estimates), we demonstrated associations between the degree of capital breeding and larval diet breadth, sexual size dimorphism, and reproductive season. An Ornstein-Uhlenbeck-based phylogenetic analysis suggested a causal relationship between the degree of capital breeding and diet breadth. Our study indicates that the gradation from capital to income breeding is an informative axis along which to ordinate life-history strategies in flying insects affected by the fecundity vs. mobility trade-off, with the availability of easy-to-record proxies contributing to its predictive power in practical contexts. © 2016 by the Ecological Society of America.
Alcohol demand and risk preference.
Dave, Dhaval; Saffer, Henry
2008-12-01
Both economists and psychologists have studied the concept of risk preference. Economists categorize individuals as more or less risk-tolerant based on the marginal utility of income. Psychologists categorize individuals' propensity towards risk based on harm avoidance, novelty seeking and reward dependence traits. The two concepts of risk are related, although the instruments used for empirical measurement are quite different. Psychologists have found risk preference to be an important determinant of alcohol consumption; however, economists have not included risk preference in studies of alcohol demand. This is the first study to examine the effect of risk preference on alcohol consumption in the context of a demand function. The specifications employ multiple waves from the Panel Study of Income Dynamics (PSID) and the Health and Retirement Study (HRS), which permit the estimation of age-specific models based on nationally representative samples. Both of these data sets include a unique and consistent survey instrument designed to directly measure risk preference in accordance with the economist's definition. This study estimates the direct impact of risk preference on alcohol demand and also explores how risk preference affects the price elasticity of demand. The empirical results indicate that risk preference has a significant negative effect on alcohol consumption, with the prevalence and consumption among risk-tolerant individuals being 6-8% higher. Furthermore, the tax elasticity is similar across both risk-averse and risk-tolerant individuals. This suggests that tax policies are equally effective in deterring alcohol consumption among those who have a higher versus a lower propensity for alcohol use.
Using SAS PROC MCMC for Item Response Theory Models
Samonte, Kelli
2014-01-01
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian methods in the context of item response theory to serve as a useful guide for practitioners in estimating and interpreting item response theory (IRT) models. Included is a description of the estimation procedure used by SAS PROC MCMC. Syntax is provided for estimation of both dichotomous and polytomous IRT models, as well as a discussion on how to extend the syntax to accommodate more complex IRT models. PMID:29795834
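SAS PROC MCMC itself is not reproduced here; as a hedged stand-in, the sketch below estimates a Rasch (1PL) model on simulated data with a Metropolis-within-Gibbs sampler, exploiting the conditional independence of person and item parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Rasch data: P(Y_ij = 1) = logistic(theta_i - b_j).
n_persons, n_items = 200, 10
theta_true = rng.standard_normal(n_persons)
b_true = np.linspace(-1.5, 1.5, n_items)
eta = theta_true[:, None] - b_true[None, :]
Y = (rng.random(eta.shape) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

def loglik(theta, b):
    eta = theta[:, None] - b[None, :]
    return Y * eta - np.log1p(np.exp(eta))   # elementwise Bernoulli log-lik

theta, b = np.zeros(n_persons), np.zeros(n_items)
b_draws = []
for it in range(3000):
    # Persons are conditionally independent given b, so each person's
    # random-walk proposal is accepted or rejected on its own terms.
    prop = theta + 0.5 * rng.standard_normal(n_persons)
    cur = loglik(theta, b).sum(axis=1) - 0.5 * theta**2    # N(0,1) priors
    new = loglik(prop, b).sum(axis=1) - 0.5 * prop**2
    theta = np.where(np.log(rng.random(n_persons)) < new - cur, prop, theta)
    # Same trick for item difficulties (independent given theta).
    prop = b + 0.3 * rng.standard_normal(n_items)
    cur = loglik(theta, b).sum(axis=0) - 0.5 * b**2
    new = loglik(theta, prop).sum(axis=0) - 0.5 * prop**2
    b = np.where(np.log(rng.random(n_items)) < new - cur, prop, b)
    if it >= 1000:
        b_draws.append(b.copy())
print(np.round(np.mean(b_draws, axis=0), 2))   # compare with b_true
```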
Agile science: creating useful products for behavior change in the real world.
Hekler, Eric B; Klasnja, Predrag; Riley, William T; Buman, Matthew P; Huberty, Jennifer; Rivera, Daniel E; Martin, Cesar A
2016-06-01
Evidence-based practice is important for behavioral interventions, but there is debate on how best to support real-world behavior change. The purpose of this paper is to define products and a preliminary process for efficiently and adaptively creating and curating a knowledge base for behavior change for real-world implementation. We look to evidence-based practice suggestions and draw parallels to software development. We argue for targeting three products: (1) the smallest, meaningful, self-contained, and repurposable behavior change modules of an intervention; (2) "computational models" that define the interaction between modules, individuals, and context; and (3) "personalization" algorithms, which are decision rules for intervention adaptation. The "agile science" process includes a generation phase, whereby contender operational definitions and constructs of the three products are created and assessed for feasibility, and an evaluation phase, whereby effect size estimates/causal inferences are created. The process emphasizes early-and-often sharing. If correct, agile science could enable a more robust knowledge base for behavior change.
Cross-correlating Planck tSZ with RCSLenS weak lensing: implications for cosmology and AGN feedback
NASA Astrophysics Data System (ADS)
Hojjati, Alireza; Tröster, Tilman; Harnois-Déraps, Joachim; McCarthy, Ian G.; van Waerbeke, Ludovic; Choi, Ami; Erben, Thomas; Heymans, Catherine; Hildebrandt, Hendrik; Hinshaw, Gary; Ma, Yin-Zhe; Miller, Lance; Viola, Massimo; Tanimura, Hideki
2017-10-01
We present measurements of the spatial mapping between (hot) baryons and the total matter in the Universe, via the cross-correlation between the thermal Sunyaev-Zeldovich (tSZ) map from Planck and the weak gravitational lensing maps from the Red Cluster Sequence Lensing Survey (RCSLenS). The cross-correlations are performed at the map level, where all the sources (including diffuse intergalactic gas) contribute to the signal. We consider two configuration-space correlation function estimators, ξ^{y-κ} and ξ^{y-γt}, and a Fourier-space estimator, C_ℓ^{y-κ}, in our analysis. We detect a significant correlation out to 3° of angular separation on the sky. Based on statistical noise only, we can report 13σ and 17σ detections of the cross-correlation using the configuration-space y-κ and y-γt estimators, respectively. Including a heuristic estimate of the sampling variance yields detection significances of 7σ and 8σ, respectively. A similar level of detection is obtained from the Fourier-space estimator, C_ℓ^{y-κ}. As each estimator probes different dynamical ranges, their combination improves the significance of the detection. We compare our measurements with predictions from the cosmo-OverWhelmingly Large Simulations suite of cosmological hydrodynamical simulations, in which different galactic feedback models are implemented. We find that a model with considerable active galactic nuclei (AGN) feedback, which removes large quantities of hot gas from galaxy groups, together with Wilkinson Microwave Anisotropy Probe 7-yr best-fitting cosmological parameters, provides the best match to the measurements. All baryonic models in the context of a Planck cosmology overpredict the observed signal. Similar cosmological conclusions are drawn when we employ a halo model with the observed 'universal' pressure profile.
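As a hedged illustration of a Fourier-space cross-spectrum estimator like C_ℓ^{y-κ} (a toy flat-sky version, not the pipeline used in the paper), one can bin the cross power of the two maps by multipole:

```python
import numpy as np

def binned_cross_spectrum(map1, map2, pix_rad, n_bins=20):
    """Toy flat-sky cross power spectrum of two square maps (e.g. y and kappa).
    pix_rad: pixel size in radians. Bins may be empty for very small maps."""
    n = map1.shape[0]
    f1, f2 = np.fft.fft2(map1), np.fft.fft2(map2)
    cross = np.real(f1 * np.conj(f2)) * pix_rad**2 / n**2  # flat-sky normalization
    freq2 = np.add.outer(np.fft.fftfreq(n)**2, np.fft.fftfreq(n)**2)
    ell = 2.0 * np.pi * np.sqrt(freq2) / pix_rad           # multipole of each mode
    bins = np.linspace(ell[ell > 0].min(), ell.max(), n_bins + 1)
    idx = np.digitize(ell.ravel(), bins)
    cl = np.array([cross.ravel()[idx == k].mean() for k in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), cl
```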
Perquin, Magali; Diederich, Nico; Pastore, Jessica; Lair, Marie-Lise; Stranges, Saverio; Vaillant, Michel
2015-01-01
This study aimed to assess the prevalence of dementia and cognitive complaints in a cross-sectional sample of Luxembourg seniors, and to discuss the results in the societal context of high cognitive reserve resulting from multilingualism. A population sample of 1,377 people representative of Luxembourg residents aged over 64 years was initially identified via the national social insurance register. There were three different levels of contribution: full participation in the study, partial participation, and non-participation. We examined the profiles of these three different samples so that we could infer the prevalence estimates in the Luxembourgish senior population as a whole using the prevalence estimates obtained in this study. After careful attention to potential bias and the possibility of underestimation, we considered the obtained prevalence estimates of 3.8% for dementia (with corresponding 95% confidence limits (CL) of 2.8% and 4.8%) and 26.1% for cognitive complaints (CL = [17.8-34.3]) as trustworthy. Based on these findings, we postulate that high cognitive reserve may result in surprisingly low prevalence estimates of cognitive complaints and dementia in adults over the age of 64 years, which thereby corroborates the longer disability-free life expectancy observed in the Luxembourg population. To the best of our knowledge, this study is the first to report such Luxembourgish public health data.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of AVS 2.0[1] for two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] engine of AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] of the HEVC[4]. For fair comparison, the CABAC is embedded in the reference code RD10.1, because the CBAC was embedded in the HEVC in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we also reduce the complexity of the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC. It seems that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
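A toy illustration of the fractional bit-depth reduction in a rate-estimation table; the state-to-probability mapping below is a placeholder, not the actual CBAC/CABAC state machine.

```python
import numpy as np

def rate_table(n_states=64, frac_bits=2):
    """Entropy -log2(p) per probability state, stored in fixed point with
    `frac_bits` fractional bits (the paper reduces this from 8 to 2)."""
    p = (np.arange(n_states) + 0.5) / n_states   # placeholder state probabilities
    bits = -np.log2(p)                           # ideal code length per bin
    scale = 1 << frac_bits
    return np.round(bits * scale) / scale        # quantized estimated bits

print(rate_table(frac_bits=2)[:8])
print(rate_table(frac_bits=8)[:8])               # finer, costlier representation
```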
Estimating post-marketing exposure to pharmaceutical products using ex-factory distribution data.
Telfair, Tamara; Mohan, Aparna K; Shahani, Shalini; Klincewicz, Stephen; Atsma, Willem Jan; Thomas, Adrian; Fife, Daniel
2006-10-01
The pharmaceutical industry has an obligation to identify adverse reactions to drug products during all phases of drug development, including the post-marketing period. Estimates of population exposure to pharmaceutical products are important to the post-marketing surveillance of drugs, and provide a context for assessing the various risks and benefits, including drug safety, associated with drug treatment. This paper describes a systematic approach to estimating post-marketing drug exposure using ex-factory shipment data to estimate the quantity of medication available, and dosage information (stratified by indication or other factors as appropriate) to convert the quantity of medication to person time of exposure. Unlike the non-standardized methods often used to estimate exposure, this approach provides estimates whose calculations are explicit, documented, and consistent across products and over time. The methods can readily be carried out by an individual or small group specializing in this function, and lend themselves to automation. The present estimation approach is practical and relatively uncomplicated to implement. We believe it is a useful innovation. Copyright 2006 John Wiley & Sons, Ltd.
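A minimal sketch of the conversion described above, assuming a single defined daily dose (in practice stratified by indication or other factors); the numbers are illustrative.

```python
def person_years_exposed(mg_shipped, daily_dose_mg, days_per_year=365.25):
    """Convert ex-factory quantity shipped into person-time of exposure."""
    return mg_shipped / daily_dose_mg / days_per_year

# e.g. 5.0e9 mg shipped and a typical maintenance dose of 20 mg/day
print(person_years_exposed(5.0e9, 20.0))   # ~684,000 person-years
```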
ERIC Educational Resources Information Center
Harvey, Roger; Averill, Robin
2012-01-01
The importance of using real-life contexts in teaching mathematics is emphasised in many policy and curriculum statements. The literature indicates using contexts to teach mathematics can be difficult and few detailed exemplars exist. This article describes the use of real-life contexts in one New Zealand Year 11 algebra lesson. Data included a…
Multilevel analyses of school and children's characteristics associated with physical activity.
Gomes, Thayse Natacha; dos Santos, Fernanda K; Zhu, Weimo; Eisenmann, Joey; Maia, José A R
2014-10-01
Children spend most of their waking time at school, and it is important to identify individual- and school-level correlates of their physical activity (PA) levels. This study aimed to identify the between-school variability in Portuguese children's PA and to investigate student and school PA correlates using multilevel modeling. The sample included 1075 Portuguese children of both sexes, aged 6-10 years, from 24 schools. Height and weight were measured and body mass index (BMI) was estimated. Physical activity was estimated using the Godin and Shephard questionnaire (the total PA score was used); cardiorespiratory fitness was estimated with the 1-mile run/walk test. A structured inventory was used to gather information about the school environment. A multilevel analysis (level 1: students; level 2: schools) was used. Student-level variables (age, sex, 1-mile run/walk test) explained 7% of the 64% variance fraction of the individual-level PA; however, school context explained approximately 36% of the total PA variance. Variables included in the model (school size, school setting, playground area, frequency and duration of physical education classes, and qualification of the physical education teacher) are responsible for 80% of the context variance. The school environment is an important correlate of PA among children, enhancing children's opportunities for being active and healthy. © 2014, American School Health Association.
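A hedged sketch of such a two-level (students-in-schools) model using statsmodels on synthetic data; all variable names and effect sizes are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per = 24, 45
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(0, 2.0, n_schools)[school]            # school random intercepts
age = rng.integers(6, 11, school.size)
fitness = rng.normal(8, 1.5, school.size)            # 1-mile run/walk time (min)
pa = 20 + 1.2 * age - 1.0 * fitness + u + rng.normal(0, 4, school.size)
df = pd.DataFrame(dict(pa=pa, age=age, fitness=fitness, school=school))

# Level 1: students; level 2: schools (random intercept).
fit = smf.mixedlm("pa ~ age + fitness", df, groups=df["school"]).fit()
print(fit.summary())   # "Group Var" is the between-school variance component
```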
Yamada, Janet; Squires, Janet E; Estabrooks, Carole A; Victor, Charles; Stevens, Bonnie
2017-01-23
Despite substantial research on pediatric pain assessment and management, health care professionals do not adequately incorporate this knowledge into clinical practice. Organizational context (work environment) is a significant factor in influencing outcomes; however, the nature of the mechanisms is relatively unknown. The objective of this study was to assess how organizational context moderates the effect of research use on pain outcomes in hospitalized children. A cross-sectional survey was undertaken with 779 nurses in 32 patient care units in 8 Canadian pediatric hospitals, following implementation of a multifaceted knowledge translation intervention, Evidence-based Practice for Improving Quality (EPIQ). The influence of organizational context was assessed in relation to pain process (assessment and management) and clinical (pain intensity) outcomes. Organizational context was measured using the Alberta Context Tool, which includes: leadership, culture, evaluation, social capital, informal interactions, formal interactions, structural and electronic resources, and organizational slack (staff, space, and time). Marginal modeling estimated the effects of instrumental research use (direct use of research knowledge) and conceptual research use (indirect use of research knowledge) on pain outcomes while examining the effects of context. Six of the 10 organizational context factors (culture, social capital, informal interactions, resources, and organizational slack [space and time]) significantly moderated the effect of instrumental research use on pain assessment; four factors (culture, social capital, resources and organizational slack time) moderated the effect of conceptual research use on pain assessment. Only two factors (evaluation and formal interactions) moderated the effect of instrumental research use on pain management. All organizational factors except slack space significantly moderated the effect of instrumental research use on pain intensity; informal interactions and organizational slack space moderated the effect of conceptual research use on pain intensity. Many aspects of organizational context consistently moderated the effects of instrumental research use on pain assessment and pain intensity, while only a few influenced the conceptual use of research on pain outcomes. Organizational context factors did not generally influence the effect of research use on pain management. Further research is required to explore the relationships between organizational context and pain management outcomes.
Exploring the Potential of TanDEM-X Data in Rice Monitoring
NASA Astrophysics Data System (ADS)
Erten, E.
2015-12-01
In this work, phenological parameters of rice fields, such as growth stage, crop calendar, crop density and yield, are estimated employing TanDEM-X data. Currently, crop monitoring is country-dependent. Most countries have databases based on cadastral information and annual farmer inputs. Inaccuracies arise from wrong or missing farmer declarations and/or coarsely updated cadastral boundary definitions. This leads to inefficient regulation of the market, frauds, as well as ecological risks. An accurate crop calendar is also missing, since farmers provide estimations in advance and there is no efficient way to know the growth status over large plantations. SAR data are of particular interest for these purposes. The proposed method is a two-step approach comprising field detection and phenological state estimation. In the context of precision farming it is essential to define field borders, which usually change every cultivation period. Linking the inherent SAR properties to transplanting practices such as irrigation, the spatial database of rice-planted agricultural crops can be updated. Boundaries of agricultural fields will be defined in the database, and assignments of crops and sowing dates will be continuously updated by our monitoring system, considering that sowing practice varies depending on the field owner's decisions. To define and segment rice crops, the system will make use of the fact that rice fields are characterized as flooded parcels separated by path networks composed of soil or sparse grass. This natural segmentation is well detectable by inspecting low amplitude and coherence values of bistatic acquisitions. Once the field borders are defined, estimating the phenology of the monitored crops at any time is the key point of monitoring. In this respect, the wavelength and the polarization options of TanDEM-X are sufficient to characterize the small phenological changes. The combination of bistatic interferometry and Radiative Transfer Theory (RTT) with different polarizations provides a realistic description of plants, including their full morphology (stalks, tillers, leaves and panicles).
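A hedged sketch of the field-detection step described above, thresholding low amplitude and coherence and labelling connected parcels; the thresholds are placeholders to be calibrated per scene.

```python
import numpy as np
from scipy import ndimage

def flooded_field_mask(amplitude, coherence, a_thr=0.1, c_thr=0.3):
    """Segment candidate flooded rice parcels from a bistatic pair:
    low backscatter amplitude AND low interferometric coherence."""
    mask = (amplitude < a_thr) & (coherence < c_thr)
    labels, n_parcels = ndimage.label(mask)   # connected components = parcels
    return labels, n_parcels

rng = np.random.default_rng(0)
labels, n = flooded_field_mask(rng.random((256, 256)), rng.random((256, 256)))
print(n)
```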
FleXConf: A Flexible Conference Assistant Using Context-Aware Notification Services
NASA Astrophysics Data System (ADS)
Armenatzoglou, Nikos; Marketakis, Yannis; Kriara, Lito; Apostolopoulos, Elias; Papavasiliou, Vicky; Kampas, Dimitris; Kapravelos, Alexandros; Kartsonakis, Eythimis; Linardakis, Giorgos; Nikitaki, Sofia; Bikakis, Antonis; Antoniou, Grigoris
Integrating context-aware notification services into ubiquitous computing systems aims at the provision of the right information to the right users, at the right time, in the right place, and on the right device, and constitutes a significant step towards the realization of the Ambient Intelligence vision. In this paper, we present FleXConf, a semantics-based system that supports location-based, personalized notification services for the assistance of conference attendees. Its special features include an ontology-based representation model, rule-based context-aware reasoning, and a novel positioning system for indoor environments.
Do U.S. states' socioeconomic and policy contexts shape adult disability?
Montez, Jennifer Karas; Hayward, Mark D; Wolf, Douglas A
2017-04-01
Growing disparities in adult mortality across U.S. states point to the importance of assessing disparities in other domains of health. Here, we estimate state-level differences in disability, and draw on the WHO socio-ecological framework to assess the role of ecological factors in explaining these differences. Our study is based on data from 5.5 million adults aged 25-94 years in the 2010-2014 waves of the American Community Survey. Disability is defined as difficulty with mobility, independent living, self-care, vision, hearing, or cognition. We first provide estimates of age-standardized and age-specific disability prevalence by state. We then estimate multilevel models to assess how states' socioeconomic and policy contexts shape the probability of having a disability. Age-standardized disability prevalence differs markedly by state, from 12.9% in North Dakota and Minnesota to 23.5% in West Virginia. Disability was lower in states with stronger economic output, more income equality, longer histories of tax credits for low-income workers, and higher cigarette taxes (for middle-age women), net of individuals' socio-demographic characteristics. States' socioeconomic and policy contexts appear particularly important for older adults. Findings underscore the importance of socio-ecological influences on disability. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Schuback, N.; Schallenberg, C.; Duckham, C.; Flecken, M.; Maldonado, M. T.; Tortell, P. D.
2016-02-01
Active chlorophyll a fluorescence approaches, including fast repetition rate fluorometry (FRRF), have the potential to provide estimates of phytoplankton primary productivity at unprecedented spatial and temporal resolution. FRRF-derived productivity rates are based on estimates of charge separation in photosystem II (ETRRCII), which must be converted into ecologically relevant units of carbon fixation. Understanding sources of variability in the coupling of ETRRCII and carbon fixation provides important physiological insight into phytoplankton photosynthesis, and is critical for the application of FRRF as a primary productivity measurement tool. We present data from a series of experiments during which we simultaneously measured phytoplankton carbon fixation and ETRRCII in the iron-limited NE subarctic Pacific. Our results show significant variability of the derived conversion factor (Ve:C/nPSII), with highest values observed under conditions of excess excitation pressure at the level of photosystem II, caused by high light and/or low iron. Our results will be discussed in the context of metabolic plasticity, which evolved in phytoplankton to simultaneously maximize growth and provide photoprotection under fluctuating light and limiting nutrient availabilities. Because the derived conversion factor is associated with conditions of excess light, it correlates with the expression of non-photochemical quenching (NPQ) in the pigment antenna, also derived from FRRF measurements. Our results demonstrate a significant correlation between NPQ and the conversion factor Ve:C/nPSII, and the potential of this relationship to improve FRRF-based estimates of phytoplankton carbon fixation rates is discussed.
NASA Astrophysics Data System (ADS)
Lanorte, Antonio; Desantis, Fortunato; Aromando, Angelo; Lasaponara, Rosa
2013-04-01
This paper presents the results we obtained in the context of the FIRE-SAT project during the 2012 operational application of satellite-based tools for fire monitoring. The FIRE-SAT project was funded by the Civil Protection of the Basilicata Region in order to set up a low-cost methodology for fire danger monitoring and fire effect estimation based on satellite Earth Observation techniques. To this aim, NASA Moderate Resolution Imaging Spectroradiometer (MODIS), ASTER, and Landsat TM data were used. Novel data processing techniques have been developed by researchers of the ARGON Laboratory of the CNR-IMAA for the operational monitoring of fires. In this paper we focus only on the danger estimation model, which was fruitfully used from 2008 to 2012 as a reliable operational tool to support and optimize fire fighting strategies, from the alert to the management of resources including fire attacks. The daily updating of fire danger is carried out using satellite MODIS images, selected for their spectral capability and their availability free of charge from the NASA web site. This makes these data sets very suitable for effective systematic (daily) and sustainable low-cost monitoring of large areas. The pre-operational use of the integrated model showed that the system properly monitors spatial and temporal variations of fire susceptibility and provides useful information on both fire severity and post-fire regeneration capability.
Farooqui, Habib; Jit, Mark; Heymann, David L.; Zodpey, Sanjay
2015-01-01
The burden of severe pneumonia in terms of morbidity and mortality is unknown in India, especially at the sub-national level. In this context, we aimed to estimate the number of severe pneumonia episodes, pneumococcal pneumonia episodes and pneumonia deaths in children younger than 5 years in 2010. We adapted and parameterized a mathematical model based on the epidemiological concept of the potential impact fraction, developed by CHERG, for this analysis. The key parameters that determine the distribution of severe pneumonia episodes across Indian states were the state-specific under-5 population, the state-specific prevalence of selected definite pneumonia risk factors and meta-estimates of relative risks for each of these risk factors. We applied the incidence estimates and attributable fraction of risk factors to population estimates for 2010 of each Indian state. We then estimated the number of pneumococcal pneumonia cases by applying the vaccine probe methodology to an existing trial. We estimated mortality due to severe pneumonia and pneumococcal pneumonia by combining incidence estimates with case fatality ratios from multi-centric hospital-based studies. Our results suggest that in 2010, 3.6 million (3.3–3.9 million) episodes of severe pneumonia and 0.35 million (0.31–0.40 million) all-cause pneumonia deaths occurred in children younger than 5 years in India. The states that merit special mention include Uttar Pradesh, where 18.1% of children reside but which contributes 24% of pneumonia cases and 26% of pneumonia deaths, Bihar (11.3% of children, 16% of cases, 22% of deaths), Madhya Pradesh (6.6% of children, 9% of cases, 12% of deaths), and Rajasthan (6.6% of children, 8% of cases, 11% of deaths). Further, we estimated that 0.56 million (0.49–0.64 million) severe episodes of pneumococcal pneumonia and 105 thousand (92–119 thousand) pneumococcal deaths occurred in India. The top contributors to India's pneumococcal pneumonia burden were Uttar Pradesh, Bihar, Madhya Pradesh and Rajasthan, in that order. Our results highlight the need to improve access to care and increase the coverage and equity of pneumonia-preventing vaccines in states with a high pneumonia burden. PMID:26086700
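A minimal sketch of the potential impact fraction at the core of a CHERG-style model, with toy prevalences and relative risks (not the study's parameters).

```python
import numpy as np

def potential_impact_fraction(prev, prev_alt, rr):
    """Relative change in disease burden when risk-factor prevalences move
    from `prev` to `prev_alt` (arrays, one entry per exposure level),
    given relative risks `rr`."""
    prev, prev_alt, rr = map(np.asarray, (prev, prev_alt, rr))
    return (np.sum(prev * rr) - np.sum(prev_alt * rr)) / np.sum(prev * rr)

# toy two-level exposure (unexposed RR = 1, exposed RR = 1.8), prevalence 40% -> 25%
print(potential_impact_fraction([0.6, 0.4], [0.75, 0.25], [1.0, 1.8]))  # ~0.09
```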
Monitoring Exchange of CO2 - A KISS Workshop Report 2009
NASA Technical Reports Server (NTRS)
Miller, Charles; Wennberg, Paul
2009-01-01
The problem and context: Can top-down estimates of carbon dioxide (CO2) fluxes resolve the anthropogenic emissions of China, India, the United States, and the European Union with an accuracy of +/-10% or better?The workshop "Monitoring Exchange of Carbon Dioxide" was convened at the Keck Institute for Space Studies in Pasadena, California in February 2010 to address this question. The Workshop brought together an international, interdisciplinary group of 24 experts in carbon cycle science, remote sensing, emissions inventory estimation, and inverse modeling. The participants reviewed the potential of space-based and sub-orbital observational and modeling approaches to monitor anthropogenic CO2 emissions in the presence of much larger natural fluxes from the exchange of CO2 between the land, atmosphere, and ocean. This particular challenge was motivated in part by the NRC Report "Verifying Greenhouse Gas Emissions" [Pacala et al., 2010]. This workshop report includes several recommendations for improvements to observing strategies and modeling frameworks for optimal and cost-effective monitoring of carbon exchange
An experimental result of estimating an application volume by machine learning techniques.
Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko
2015-01-01
In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system that uses machine learning techniques to periodically detect a user's context on a smartphone. We selected the Android operating system because it has the largest market share and the most flexible development environment. In this paper, we describe an application that automatically adjusts application volume. Adjusting the volume is easily forgotten because users must push the volume buttons to alter the volume whenever the situation changes. We therefore developed an application that adjusts the volume automatically based on learned user settings. Application volume can be set separately from ringtone volume on Android devices, and these volume settings are associated with each specific application, including games. Our application records a user's location, the volume setting, the foreground application name and other such attributes as learning data, and uses machine learning techniques, via Weka, to estimate whether the volume should be adjusted.
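A sketch of the kind of context classifier such a system could use, with hypothetical feature names and data; the paper's implementation relies on Weka on Android, whereas this example substitutes scikit-learn to keep the illustration self-contained:

```python
# Minimal sketch: predict a per-application volume setting from logged
# context features (foreground app, place, hour). All data are made up.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

log = pd.DataFrame({
    "app":    ["game_a", "game_a", "music", "game_a", "music"],
    "place":  ["home", "office", "home", "office", "home"],
    "hour":   [21, 10, 20, 11, 22],
    "volume": [5, 1, 7, 1, 7],          # user-chosen volume = training label
})

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["app", "place"])],
    remainder="passthrough",
)
model = Pipeline([("pre", pre),
                  ("clf", RandomForestClassifier(random_state=0))])
model.fit(log[["app", "place", "hour"]], log["volume"])

# Periodically detect the current context and apply the predicted volume.
now = pd.DataFrame({"app": ["game_a"], "place": ["office"], "hour": [10]})
print(model.predict(now))               # e.g. [1]
```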
Cost analysis of school-based intermittent screening and treatment of malaria in Kenya
2011-01-01
Background The control of malaria in schools is receiving increasing attention, but there is currently no consensus on the optimal intervention strategy. This paper analyses the costs of intermittent screening and treatment (IST) of malaria in schools, implemented as part of a cluster-randomized controlled trial on the Kenyan coast. Methods Financial and economic costs were estimated using an ingredients approach whereby all resources required in the delivery of IST are quantified and valued. Sensitivity analysis was conducted to investigate how programme variation affects costs and to identify potential cost savings in the future implementation of IST. Results The estimated financial cost of IST per child screened is US$ 6.61 (economic cost US$ 6.24). Key contributors to cost were salary costs (36%) and malaria rapid diagnostic tests (RDTs) (22%). Almost half (47%) of the intervention cost comprises redeployment of existing resources, including health worker time and use of hospital vehicles. Sensitivity analysis identified changes to intervention delivery that can reduce programme costs by 40%, including use of alternative RDTs and removal of supervised treatment. Cost-effectiveness is also likely to be highly sensitive to the proportion of children found to be RDT-positive. Conclusion In the current context, school-based IST is a relatively expensive malaria intervention, but reducing the complexity of delivery can result in considerable savings in the cost of the intervention. (Costs are reported in US$ 2010). PMID:21933376
Survey-based socio-economic data from slums in Bangalore, India
Roy, Debraj; Palavalli, Bharath; Menon, Niveditha; King, Robin; Pfeffer, Karin; Lees, Michael; Sloot, Peter M. A.
2018-01-01
In 2010, an estimated 860 million people were living in slums worldwide, with around 60 million added to the slum population between 2000 and 2010. In 2011, 200 million people in urban Indian households were considered to live in slums. In order to design slum development programmes and poverty alleviation methods, it is necessary to understand the needs of these communities, which requires data with high granularity in the Indian context. Unfortunately, there is a paucity of highly granular data at the level of individual slums. We collected the data presented in this paper in partnership with the slum dwellers in order to overcome challenges such as the validity and efficacy of self-reported data. Our survey of Bangalore covered 36 slums across the city. The slums were chosen based on stratification criteria, which included the geographical location of the slum, whether the slum was resettled or rehabilitated, the notification status of the slum, the size of the slum and the religious profile. This paper describes the relational model of the slum dataset, the variables in the dataset, the variables constructed for analysis and the issues identified with the dataset. The data collected include around 267,894 data points spread over 242 questions for 1,107 households. The dataset can facilitate interdisciplinary research on the spatial and temporal dynamics of urban poverty and well-being in the context of the rapid urbanization of cities in developing countries. PMID:29313840
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling, and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including the requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of the evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetic analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
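The penalty formulation can be illustrated with a deliberately simplified sketch: the input function is modeled as a population-mean curve plus a smooth deviation, and the fit to image-derived blood samples is traded off against a ridge-style penalty pulling the estimate toward the a priori population shape. The curves, basis, and penalty weight below are hypothetical; the paper's model and its dose/weight/height/gender calibration are considerably more elaborate.

```python
# Sketch: penalty-based input function estimation.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 60, 61)                         # minutes
pop_mean = np.exp(-0.3 * t) + 0.1                  # a priori population AIF shape
rng = np.random.default_rng(0)
true_aif = 1.2 * np.exp(-0.25 * t) + 0.1
measured = true_aif + rng.normal(0, 0.05, t.size)  # image-derived samples

basis = np.vander(t / t.max(), 4)                  # low-order deviation basis

def objective(coef, lam=5.0):
    aif = pop_mean + basis @ coef
    data_term = np.sum((measured - aif) ** 2)      # fidelity to sampled data
    prior_term = lam * np.sum(coef ** 2)           # penalty toward the prior
    return data_term + prior_term

fit = minimize(objective, x0=np.zeros(4))
aif_hat = pop_mean + basis @ fit.x                 # estimated input function
```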
Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.
Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng
2018-04-15
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
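The stacking idea can be sketched generically: write down the estimating (score) equations of both steps, solve them jointly, and form a sandwich variance whose bread uses the full stacked Jacobian, so that step-1 uncertainty propagates into the step-2 standard error. The toy below uses a plain inverse-propensity-weighted mean rather than the paper's ratio-of-mediator-probability weights:

```python
# Sketch: stacked score functions for a two-step weighted estimator,
# with a sandwich variance V = A^{-1} B A^{-T} / n.
import numpy as np
from scipy.optimize import root
from scipy.special import expit

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
d = rng.binomial(1, expit(0.5 * x))      # step-1 model: P(D=1|X)
y = 1.0 + d + 0.5 * x + rng.normal(size=n)

def scores(params):
    a0, a1, theta = params
    e = expit(a0 + a1 * x)
    return np.column_stack([
        d - e,                           # step-1 logistic score (intercept)
        (d - e) * x,                     # step-1 logistic score (slope)
        d * y / e - theta,               # step-2 score with estimated weights
    ])

def mean_score(p):
    return scores(p).mean(axis=0)

est = root(mean_score, x0=np.array([0.0, 0.0, 1.0])).x

eps = 1e-5
A = np.zeros((3, 3))                     # A = -d(mean score)/d(params)
for j in range(3):
    step = np.zeros(3); step[j] = eps
    A[:, j] = -(mean_score(est + step) - mean_score(est - step)) / (2 * eps)
B = scores(est).T @ scores(est) / n
V = np.linalg.inv(A) @ B @ np.linalg.inv(A).T / n
print("theta:", est[2], "SE accounting for estimated weights:", V[2, 2] ** 0.5)
```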
On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.
Haneuse, Sebastien; Rivera-Rodriguez, Claudia
2018-01-01
In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.
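A sketch of the proposed analysis on simulated data: all cases and a fixed multiple of controls are sampled within each clinic, weighted by inverse sampling probabilities, and analyzed with GEE under an exchangeable working correlation. All numbers are illustrative, and the use of statsmodels' GEE `weights` argument here is an assumption of this sketch rather than the authors' software.

```python
# Sketch: within-cluster case-control sampling + IPW-GEE with sandwich SEs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
frames = []
for clinic in range(40):
    n = 500
    u = rng.normal(0, 0.3)                         # clinic-level effect
    xf = rng.normal(size=n)
    yf = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.8 * xf + u))))
    cases = np.flatnonzero(yf == 1)                # keep every case
    pool = np.flatnonzero(yf == 0)
    controls = rng.choice(pool, size=min(2 * cases.size, pool.size),
                          replace=False)           # ~2 controls per case
    keep = np.concatenate([cases, controls])
    w = np.where(yf[keep] == 1, 1.0, pool.size / controls.size)
    frames.append(pd.DataFrame({"y": yf[keep], "x": xf[keep],
                                "clinic": clinic, "w": w}))

df = pd.concat(frames, ignore_index=True)
res = sm.GEE(df["y"], sm.add_constant(df["x"]), groups=df["clinic"],
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable(),
             weights=df["w"]).fit()
print(res.summary())                               # robust SEs by default
```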
Hierarchical Context Modeling for Video Event Recognition.
Wang, Xiaoyang; Ji, Qiang
2016-10-11
Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features, the appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts, scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.
Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...
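One standard approach to below-detection-limit values, often contrasted with simple substitution of half the detection limit, is maximum likelihood for a left-censored distribution. A minimal sketch under an assumed lognormal model (illustrative only; not necessarily the specific methods compared in the study):

```python
# Sketch: MLE for lognormal data left-censored at a detection limit (DL).
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.5, sigma=1.0, size=200)   # true concentrations
dl = 1.0
detected = x >= dl                                 # non-detects fall below DL

def negloglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(np.log(x[detected]), mu, sigma).sum()
    # each non-detect contributes P(X < DL) = Phi((ln DL - mu) / sigma)
    ll += (~detected).sum() * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -ll

fit = minimize(negloglik, x0=np.array([0.0, 0.0]))
print("mu:", fit.x[0], "sigma:", np.exp(fit.x[1]))  # log-scale estimates
```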
Context Aware Middleware Architectures: Survey and Challenges
Li, Xin; Eckert, Martina; Martinez, José-Fernán; Rubio, Gregorio
2015-01-01
Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work. PMID:26307988
Estimating abundance: Chapter 27
Sutherland, Chris; Royle, J. Andrew
2016-01-01
This chapter provides a non-technical overview of ‘closed population capture–recapture’ models, a class of well-established models that are widely applied in ecology, such as removal sampling, covariate models, and distance sampling. These methods are regularly adopted for studies of reptiles, in order to estimate abundance from counts of marked individuals while accounting for imperfect detection. Thus, the chapter describes some classic closed population models for estimating abundance, with considerations for some recent extensions that provide a spatial context for the estimation of abundance, and therefore density. Finally, the chapter suggests some software for use in data analysis, such as the Windows-based program MARK, and provides an example of estimating abundance and density of reptiles using an artificial cover object survey of Slow Worms (Anguis fragilis).
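To make "estimating abundance from counts of marked individuals while accounting for imperfect detection" concrete, the simplest closed-population case is the two-occasion Lincoln-Petersen estimator, shown here in its bias-corrected Chapman form; this is far simpler than the models the chapter reviews, and the slow-worm numbers are invented:

```python
# Sketch: two-occasion closed-population abundance estimation.
def chapman_estimate(n1, n2, m2):
    """n1 marked on occasion 1; n2 caught on occasion 2; m2 recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

# e.g. 60 slow worms marked under cover objects, 55 found on a revisit,
# 20 of them already marked:
n_hat, se = chapman_estimate(60, 55, 20)
print(round(n_hat), "+/-", round(se, 1))
```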
Fenner, Jack N
2005-10-01
The length of the human generation interval is a key parameter when using genetics to date population divergence events. However, no consensus exists regarding the generation interval length, and a wide variety of interval lengths have been used in recent studies. This makes comparison between studies difficult and calls the accuracy of divergence date estimates into question: a divergence of 1,000 generations, for example, corresponds to 25,000 years under a 25-year interval but 30,000 years under a 30-year interval. Recent genealogy-based research suggests that the male generation interval is substantially longer than the female interval, and that both are greater than the values commonly used in genetics studies. This study evaluates each of these hypotheses in a broader cross-cultural context, using data from both nation states and recent hunter-gatherer societies. Both hypotheses are supported by this study; therefore, revised estimates of male, female, and overall human generation interval lengths are proposed. The nearly universal, cross-cultural nature of the evidence justifies using these proposed estimates in Y-chromosomal, mitochondrial, and autosomal DNA-based population divergence studies.
Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.
C Zapico, Sara; Ubelaker, Douglas H
2013-03-01
Age-at-death estimation is one of the main challenges in forensic sciences, since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. In adults, however, this methodology is less accurate and requires population-specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces different biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate age in adults based on these changes. Chemical approaches rely on modifications in molecules or the accumulation of certain products. Molecular biology approaches analyze modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take the other techniques into account, because the forensic context and the human remains available will determine which methodology can be applied. Copyright © 2013 Elsevier B.V. All rights reserved.
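The aspartic acid racemization method singled out above typically rests on a linear relationship between a transform of the measured D/L enantiomer ratio (e.g., in dentine) and chronological age; the following is the standard first-order formulation from the racemization-dating literature, given here for orientation rather than quoted from the review:

```latex
\ln\!\left(\frac{1 + D/L}{1 - D/L}\right) = k_{\mathrm{Asp}}\, t + c
```

where $t$ is age, $k_{\mathrm{Asp}}$ a tissue-specific racemization rate constant, and $c$ an intercept calibrated against reference samples of known age.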
NASA Astrophysics Data System (ADS)
Hoare, John L.
2014-07-01
The original choice of particulate matter mass (PM) as a realistic surrogate for gross air pollution has gradually evolved into today's routine use of epidemiologically based estimates of the monetary and other benefits expected from regulating urban air quality. Unfortunately, the statistical associations facilitating such calculations are usually based on single indices of air pollution, whereas the health effects themselves are more broadly based causally. For this and other reasons the economic benefits of control tend to be exaggerated. Primarily because of their assumed inherently inferior respirability, particles ≥10 μm are generally excluded from such considerations. Where the particles themselves are chemically heterogeneous, as in an urban context, this may be inappropriate. Clearly all airborne particles, whether coarse or fine, are susceptible to inhalation. Hence the possibility exists for any adhering, potentially harmful semi-volatile substances to be subsequently desorbed in vivo, thereby facilitating their transport deeper into the lungs. Consequently, this alone may be a sufficient reason for including rather than rejecting the relatively coarse 10-100 μm particle fraction during air quality monitoring, ideally in conjunction with routine estimation of the gaseous co-pollutants, thereby facilitating a multi-pollutant approach to regulation.
Estimating Demand for Industrial and Commercial Land Use Given Economic Forecasts
Batista e Silva, Filipe; Koomen, Eric; Diogo, Vasco; Lavalle, Carlo
2014-01-01
Current developments in the field of land use modelling point towards greater level of spatial and thematic resolution and the possibility to model large geographical extents. Improvements are taking place as computational capabilities increase and socioeconomic and environmental data are produced with sufficient detail. Integrated approaches to land use modelling rely on the development of interfaces with specialized models from fields like economy, hydrology, and agriculture. Impact assessment of scenarios/policies at various geographical scales can particularly benefit from these advances. A comprehensive land use modelling framework includes necessarily both the estimation of the quantity and the spatial allocation of land uses within a given timeframe. In this paper, we seek to establish straightforward methods to estimate demand for industrial and commercial land uses that can be used in the context of land use modelling, in particular for applications at continental scale, where the unavailability of data is often a major constraint. We propose a set of approaches based on ‘land use intensity’ measures indicating the amount of economic output per existing areal unit of land use. A base model was designed to estimate land demand based on regional-specific land use intensities; in addition, variants accounting for sectoral differences in land use intensity were introduced. A validation was carried out for a set of European countries by estimating land use for 2006 and comparing it to observations. The models’ results were compared with estimations generated using the ‘null model’ (no land use change) and simple trend extrapolations. Results indicate that the proposed approaches clearly outperformed the ‘null model’, but did not consistently outperform the linear extrapolation. An uncertainty analysis further revealed that the models’ performances are particularly sensitive to the quality of the input land use data. In addition, unknown future trends of regional land use intensity widen considerably the uncertainty bands of the predictions. PMID:24647587
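The base model's logic reduces to a simple ratio, sketched below with hypothetical numbers: land use intensity is economic output per areal unit of industrial/commercial land, and projected demand divides forecast output by that intensity (the paper's variants additionally differentiate intensity by sector).

```python
# Sketch: land demand from an economic forecast and land use intensity.
gva_base = 40.0e9            # regional sectoral output in the base year (EUR)
land_base_km2 = 80.0         # observed industrial/commercial land
intensity = gva_base / land_base_km2        # EUR of output per km^2

gva_forecast = 48.0e9                       # forecast output, target year
land_demand_km2 = gva_forecast / intensity  # assumes constant intensity
print(land_demand_km2)                      # 96.0 km^2
```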
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
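The targeting rule studied for the ensemble case can be sketched in a few lines: at each analysis, place the single observation at the state component with the largest ensemble variance. The toy below uses a stochastic (perturbed-observation) EnKF update on a static state rather than the LETKF and chaotic model of the paper:

```python
# Sketch: one analysis step with an observation targeted at the largest
# ensemble-variance component.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_ens, r = 40, 20, 0.5 ** 2
truth = rng.normal(size=n_state)
ens = truth + rng.normal(scale=1.0, size=(n_ens, n_state))   # prior ensemble

target = int(np.argmax(ens.var(axis=0, ddof=1)))             # targeting rule
y = truth[target] + rng.normal(scale=np.sqrt(r))

P = np.cov(ens.T)                                  # ensemble covariance
K = P[:, target] / (P[target, target] + r)         # Kalman gain column
obs_pert = y + rng.normal(scale=np.sqrt(r), size=n_ens)
ens = ens + np.outer(obs_pert - ens[:, target], K)           # update

post_var = np.cov(ens.T)[target, target]
print("posterior variance at targeted site:", round(float(post_var), 3))
```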
Composition, Context, and Endogeneity in School and Teacher Comparisons
ERIC Educational Resources Information Center
Castellano, Katherine E.; Rabe-Hesketh, Sophia; Skrondal, Anders
2014-01-01
Investigations of the effects of schools (or teachers) on student achievement focus on either (1) individual school effects, such as value-added analyses, or (2) school-type effects, such as comparisons of charter and public schools. Controlling for school composition by including student covariates is critical for valid estimation of either kind…
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
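For orientation, the derivative-estimation step these comparisons build on can be sketched in the GLLA style: time-delay embed the series, then project each embedded window onto a local Taylor basis of the time offsets, so the fitted weights estimate the signal and its first two derivatives. This is a simplified stand-in for the cited procedures:

```python
# Sketch: GLLA-style derivative estimation via embedding + least squares.
import numpy as np

dt, n = 0.1, 200
t = np.arange(n) * dt
x = np.sin(t)                                    # noise-free toy series

dim = 5                                          # embedding dimension
offsets = (np.arange(dim) - (dim - 1) / 2) * dt
# Taylor basis columns: 1, u, u^2/2 -> weights estimate x, x', x''
L = np.column_stack([np.ones(dim), offsets, offsets ** 2 / 2])

emb = np.lib.stride_tricks.sliding_window_view(x, dim)   # (n-dim+1, dim)
W = np.linalg.lstsq(L, emb.T, rcond=None)[0]             # (3, n-dim+1)
x_hat, dx_hat, d2x_hat = W

t_mid = t[(dim - 1) // 2 : n - dim // 2]
print("max error in x':", float(np.max(np.abs(dx_hat - np.cos(t_mid)))))
```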
García-Ubaque, César A; García-Ubaque, Juan C; Vaca-Bohórquez, Martha L
2015-12-01
Objective To estimate the economic benefits related to environment and health in the context of the implementation of the Stockholm Convention for the control of Persistent Organic Pollutants in the country. The estimation was conducted on the basis of two scenarios: non-compliance with the Convention and compliance with it. Gross profit was derived from the difference in present value between the health and environmental costs assumed in each scenario. Results The gross profit from decreased health costs arising from the implementation of the Convention was estimated at USD $511 million and USD $501 million. By introducing variables such as management costs and the agreement's potential benefits for access to international markets, the benefits to the country were estimated at between USD $1,631 and USD $3,118 million. Discussion Despite the economic benefits generated by lower health expenditure under the Convention, the costs associated with reducing pollutant emissions generated a negative balance, compensated only by the expectation of higher revenues from international market access. We consider this initial economic assessment an important contribution, but it should be revisited to include valuation methodologies involving other social profitability variables and different scenarios for emerging technologies, new scientific knowledge about these pollutants, changes in legislation and/or changes in trade agreement conditions, among others.
Vitali, Rachel V.; Cain, Stephen M.; Zaferiou, Antonia M.; Ojeda, Lauro V.; Perkins, Noel C.
2017-01-01
Three-dimensional rotations across the human knee serve as important markers of knee health and performance in multiple contexts including human mobility, worker safety and health, athletic performance, and warfighter performance. While knee rotations can be estimated using optical motion capture, that method is largely limited to the laboratory and small capture volumes. These limitations may be overcome by deploying wearable inertial measurement units (IMUs). The objective of this study is to present a new IMU-based method for estimating 3D knee rotations and to benchmark the accuracy of the results using an instrumented mechanical linkage. The method employs data from shank- and thigh-mounted IMUs and a vector constraint for the medial-lateral axis of the knee during periods when the knee joint functions predominantly as a hinge. The method is carefully validated using data from high precision optical encoders in a mechanism that replicates 3D knee rotations spanning (1) pure flexion/extension, (2) pure internal/external rotation, (3) pure abduction/adduction, and (4) combinations of all three rotations. Regardless of the movement type, the IMU-derived estimates of 3D knee rotations replicate the truth data with high confidence (RMS error < 4° and correlation coefficient r≥0.94). PMID:28846613
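The hinge-axis constraint at the heart of such methods can be sketched as follows: when the knee acts as a hinge, the medial-lateral axis expressed in the shank frame (j1) and thigh frame (j2) must satisfy |g1 × j1| = |g2 × j2| for the two gyroscope readings g1, g2 at every instant, which allows the axes to be identified by nonlinear least squares (a constraint popularized by Seel and colleagues; this toy is not the paper's full method, and its signals are synthetic):

```python
# Sketch: identify the hinge axis in each IMU frame from gyro data, then
# recover the flexion/extension rate as the difference of projected rates.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(5)
T = 500
flex_rate = np.sin(np.linspace(0, 10, T))              # hinge angular rate
j1_true = np.array([0.0, 1.0, 0.0])                    # axis in shank frame
R = Rotation.from_euler("x", 40, degrees=True).as_matrix()  # thigh frame
common = rng.normal(scale=0.2, size=(T, 3))            # shared body rotation
g1 = common + np.outer(flex_rate, j1_true) + rng.normal(scale=0.01, size=(T, 3))
g2 = common @ R.T + rng.normal(scale=0.01, size=(T, 3))

def axis(sph):                                         # spherical -> unit vec
    th, ph = sph
    return np.array([np.cos(th) * np.cos(ph),
                     np.cos(th) * np.sin(ph), np.sin(th)])

def residual(p):
    j1, j2 = axis(p[:2]), axis(p[2:])
    return (np.linalg.norm(np.cross(g1, j1), axis=1)
            - np.linalg.norm(np.cross(g2, j2), axis=1))

# initializing near the expected orientation fixes the sign ambiguity
sol = least_squares(residual, x0=np.array([0.2, 1.4, 0.5, 1.4]))
j1_hat, j2_hat = axis(sol.x[:2]), axis(sol.x[2:])
flex_hat = g1 @ j1_hat - g2 @ j2_hat                   # hinge rate estimate
print(abs(np.corrcoef(flex_hat, flex_rate)[0, 1]))     # close to 1
```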
Consistent Parameter and Transfer Function Estimation using Context Free Grammars
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
This contribution presents a method for the inference of transfer functions for rainfall-runoff models. Here, transfer functions are defined as parametrized (functional) relationships between a set of spatial predictors (e.g. elevation, slope or soil texture) and model parameters. They are ultimately used for the estimation of consistent, spatially distributed model parameters from a limited number of lumped global parameters. Additionally, they provide a straightforward method for parameter extrapolation from one set of basins to another and can even be used to derive parameterizations for multi-scale models [see: Samaniego et al., 2010]. Yet current practice often implicitly assumes that the transfer functions are known; in fact, the hypothesized transfer functions can rarely be measured and often remain unknown. Therefore, this contribution presents a general method for the concurrent estimation of the structure of transfer functions and their respective (global) parameters. Note that, as a consequence, the distributed parameters of the rainfall-runoff model are estimated as well. The method combines two steps to achieve this: the first generates different possible transfer functions; the second then estimates the respective global transfer function parameters. The structural estimation of the transfer functions is based on the context-free grammar concept. Chomsky first introduced context-free grammars in linguistics [Chomsky, 1956]. Since then, they have been widely applied in computer science but, to the knowledge of the authors, they have so far not been used in hydrology. The contribution therefore gives an introduction to context-free grammars and shows how they can be constructed and used for the structural inference of transfer functions. This is enabled by new methods from evolutionary computation, such as grammatical evolution [O'Neill, 2001], which make it possible to exploit the constructed grammar as a search space for equations. The parametrization of the transfer functions is then achieved through a second optimization routine. The contribution explores different aspects of the described procedure through a set of experiments, which can be divided into three categories: (1) the inference of transfer functions from directly measurable parameters; (2) the estimation of global parameters for given transfer functions from runoff data; and (3) the estimation of sets of completely unknown transfer functions from runoff data. The conducted tests reveal different potentials and limits of the procedure: experiments of types (1) and (2) work remarkably well, whereas type (3) is much more dependent on the setup, and in that case considerably more data are needed to derive transfer function estimates, even for simple models and setups. References: Chomsky, N. (1956): Three Models for the Description of Language. IRE Transactions on Information Theory, 2(3), pp. 113-124. O'Neill, M. (2001): Grammatical Evolution. IEEE Transactions on Evolutionary Computation, 5(4). Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale. Water Resources Research, 46, W05523, doi:10.1029/2008WR007327.
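To make the grammar-as-search-space idea concrete, here is a minimal sketch: a tiny, hypothetical context-free grammar over spatial predictors is expanded (here at random; grammatical evolution would instead map integer genomes onto these expansion choices) to propose candidate transfer-function structures, whose coefficients c0-c2 would then be fit by the second optimization step:

```python
# Sketch: random expansion of a context-free grammar over predictors,
# yielding candidate transfer-function structures.
import random

GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"],
               ["<coef>", "*", "<var>"],
               ["<coef>"]],
    "<op>":   [["+"], ["*"]],
    "<var>":  [["slope"], ["elevation"], ["sand_frac"]],
    "<coef>": [["c0"], ["c1"], ["c2"]],
}

def expand(symbol, rng, depth=0):
    if symbol not in GRAMMAR:
        return [symbol]                          # terminal symbol
    # past a depth limit, drop the recursive production to stay finite
    rules = GRAMMAR[symbol] if depth < 3 else GRAMMAR[symbol][1:]
    out = []
    for s in rng.choice(rules):
        out += expand(s, rng, depth + 1)
    return out

rng = random.Random(7)
for _ in range(3):
    print(" ".join(expand("<expr>", rng)))
# outputs look like:  c1 * slope + c0 * elevation
```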
Ryder, Robert T.; Milici, Robert C.; Swezey, Christopher S.; Trippi, Michael H.; Ruppert, Leslie F.; Ryder, Robert T.
2014-01-01
The most recent U.S. Geological Survey (USGS) assessment of undiscovered oil and gas resources of the Appalachian basin was completed in 2002 (Milici and others, 2003). This assessment was based on the total petroleum system (TPS), a concept introduced by Magoon and Dow (1994) and developed during subsequent studies such as those by the U.S. Geological Survey World Energy Assessment Team (2000) and by Biteau and others (2003a,b). Each TPS is based on specific geologic elements that include source rocks, traps and seals, reservoir rocks, and the generation and migration of hydrocarbons. This chapter identifies the TPSs defined in the 2002 Appalachian basin oil and gas assessment and places them in the context of the stratigraphic framework associated with regional geologic cross sections D–D′ (Ryder and others, 2009, which was re-released in this volume, chap. E.4.1) and E–E′ (Ryder and others, 2008, which was re-released in this volume, chap. E.4.2). Furthermore, the chapter presents a recent estimate of the ultimate recoverable oil and natural gas in the basin.
NASA Astrophysics Data System (ADS)
Langeveld, Willem G. J.
The most widely used technology for the non-intrusive active inspection of cargo containers and trucks is x-ray radiography at high energies (4-9 MeV). Technologies such as dual-energy imaging, spectroscopy, and statistical waveform analysis can be used to estimate the effective atomic number (Zeff) of the cargo from the x-ray transmission data, because the mass attenuation coefficient depends on energy as well as atomic number Z. The estimated effective atomic number, Zeff, of the cargo then leads to improved detection capability of contraband and threats, including special nuclear materials (SNM) and shielding. In this context, the exact meaning of effective atomic number (for mixtures and compounds) is generally not well-defined. Physics-based parameterizations of the mass attenuation coefficient have been given in the past, but usually for a limited low-energy range. Definitions of Zeff have been based, in part, on such parameterizations. Here, we give an improved parameterization at low energies (20-1000 keV) which leads to a well-defined Zeff. We then extend this parameterization up to energies relevant for cargo inspection (10 MeV), and examine what happens to the Zeff definition at these higher energies.
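For orientation, the classic low-energy power-law definition that such parameterizations refine is given below; this is a textbook form, subject to exactly the ambiguity the author notes, since the exponent is only appropriate where photoelectric absorption dominates:

```latex
Z_{\mathrm{eff}} = \Bigl(\sum_i \alpha_i Z_i^{\,m}\Bigr)^{1/m},
\qquad
\alpha_i = \frac{w_i Z_i / A_i}{\sum_j w_j Z_j / A_j},
\qquad m \approx 2.94,
```

where $w_i$ and $A_i$ are the mass fraction and atomic mass of element $i$, so $\alpha_i$ is its electron fraction.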
Combined monitoring, decision and control model for the human operator in a command and control desk
NASA Technical Reports Server (NTRS)
Muralidharan, R.; Baron, S.
1978-01-01
A report is given on ongoing efforts to model the human operator in the context of the task during the enroute/return phases in the ground-based control of multiple flights of remotely piloted vehicles (RPVs). The approach employed here uses models that have their analytical bases in control theory and in statistical estimation and decision theory. In particular, it draws heavily on the models and concepts of the optimal control model (OCM) of the human operator. The OCM is being extended into a combined monitoring, decision, and control model (DEMON) of the human operator by infusing decision-theoretic notions that make it suitable for application to problems in which human control actions are infrequent and in which monitoring and decision-making are the operator's main activities. Some results obtained with a specialized version of DEMON for the RPV control problem are included.
Pyke, Graham H; Ehrlich, Paul R
2010-05-01
Housed worldwide, mostly in museums and herbaria, is a vast collection of biological specimens developed over centuries. These biological collections, and associated taxonomic and systematic research, have received considerable long-term public support. The work remaining in systematics has been expanding as the estimated total number of species of organisms on Earth has risen over recent decades, as have estimated numbers of undescribed species. Despite this increasing task, support for taxonomic and systematic research, and biological collections upon which such research is based, has declined over the last 30-40 years, while other areas of biological research have grown considerably, especially those that focus on environmental issues. Reflecting increases in research that deals with ecological questions (e.g. what determines species distribution and abundance) or environmental issues (e.g. toxic pollution), the level of research attempting to use biological collections in museums or herbaria in an ecological/environmental context has risen dramatically during about the last 20 years. The perceived relevance of biological collections, and hence the support they receive, should be enhanced if this trend continues and they are used prominently regarding such environmental issues as anthropogenic loss of biodiversity and associated ecosystem function, global climate change, and decay of the epidemiological environment. It is unclear, however, how best to use biological collections in the context of such ecological/environmental issues or how best to manage collections to facilitate such use. We demonstrate considerable and increasingly realized potential for research based on biological collections to contribute to ecological/environmental understanding. However, because biological collections were not originally intended for use regarding such issues and have inherent biases and limitations, they are proving more useful in some contexts than in others. Biological collections have, for example, been particularly useful as sources of information regarding variation in attributes of individuals (e.g. morphology, chemical composition) in relation to environmental variables, and provided important information in relation to species' distributions, but less useful in the contexts of habitat associations and population sizes. Changes to policies, strategies and procedures associated with biological collections could mitigate these biases and limitations, and hence make such collections more useful in the context of ecological/environmental issues. Haphazard and opportunistic collecting could be replaced with strategies for adding to existing collections that prioritize projects that use biological collections and include, besides taxonomy and systematics, a focus on significant environmental/ecological issues. Other potential changes include increased recording of the nature and extent of collecting effort and information associated with each specimen such as nearby habitat and other individuals observed but not collected. Such changes have begun to occur within some institutions. 
Institutions that house biological collections should, we think, pursue a mission of 'understanding the life of the planet to inform its stewardship' (Krishtalka & Humphrey, 2000), as such a mission would facilitate increased use of biological collections in an ecological/environmental context and hence lead to increased appreciation, encouragement and support from the public for these collections, their associated research, and the institutions that house them.
Teutsch, T; Mesch, M; Giessen, H; Tarin, C
2015-01-01
In this contribution, a method to select discrete wavelengths that allow an accurate estimation of the glucose concentration in a biosensing system based on metamaterials is presented. The sensing concept is adapted to the particular application of ophthalmic glucose sensing by covering the metamaterial with a glucose-sensitive hydrogel, and the sensor readout is performed optically. Because a spectrometer is not suitable in a mobile context, a few discrete wavelengths must be selected to estimate the glucose concentration. The developed selection methods are based on nonlinear support vector regression (SVR) models. Two selection methods are compared, and it is shown that the wavelengths selected by a sequential forward feature selection algorithm yield an improved estimate. The presented method can be easily applied to different metamaterial layouts and hydrogel configurations.
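A sketch of sequential forward wavelength selection wrapped around support vector regression, using scikit-learn as a stand-in for the paper's SVR models; the spectra, the signal-bearing channels, and all settings are synthetic:

```python
# Sketch: forward selection of a few wavelength channels for SVR-based
# glucose estimation.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(8)
n_samples, n_wavelengths = 120, 60
spectra = rng.normal(size=(n_samples, n_wavelengths))
# synthetic truth: three channels carry the glucose-dependent signal
glucose = (spectra[:, [10, 25, 40]] @ [1.0, -0.7, 0.5]
           + rng.normal(0, 0.1, n_samples))

svr = SVR(kernel="rbf", C=10.0)
sfs = SequentialFeatureSelector(svr, n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(spectra, glucose)
chosen = np.flatnonzero(sfs.get_support())
print("selected channels:", chosen)
print("CV R^2:", cross_val_score(svr, spectra[:, chosen], glucose, cv=5).mean())
```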
Making self-care a priority for women at risk of breast cancer-related lymphedema.
Radina, M Elise; Armer, Jane M; Stewart, Bob R
2014-05-01
Estimates suggest that between 41% and 94% of breast cancer survivors may develop the chronic condition of secondary lymphedema at some point during their lifetimes. Self-care is critical for effective lymphedema management and risk-reduction. At the same time, women in general have been characterized as engaging in self-sacrificing behaviors in which they choose other-care over self-care. This study explored the self-care experiences of women with breast cancer within the contexts of complex and demanding familial and work-related responsibilities. Participants (N=14) were enrolled in a behavioral-educational intervention aimed at lymphedema risk-reduction. This feminist family theory-informed secondary analysis of qualitative data focused on women's familial roles and the balance or lack of balance between self-sacrifice and self-care. Findings included participants' struggles with time management and prioritizing self-care over care of others as well as making a commitment to self-care. Findings have implications for patient and family-level education and research with regard to gender role-based barriers to self-care and self-care within complex social contexts.
Using Novel Word Context Measures to Predict Human Ratings of Lexical Proficiency
ERIC Educational Resources Information Center
Berger, Cynthia M.; Crossley, Scott A.; Kyle, Kristopher
2017-01-01
This study introduces a model of lexical proficiency based on novel computational indices related to word context. The indices come from an updated version of the Tool for the Automatic Analysis of Lexical Sophistication (TAALES) and include associative, lexical, and semantic measures of word context. Human ratings of holistic lexical proficiency…
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
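The core idea is easy to sketch: standardize every subject's features using the control group's mean and standard deviation only, so that case-control separation does not inflate the per-feature scale and thereby attenuate separability (arrays below are illustrative; the paper applies this to structural MRI features before SVM training):

```python
# Sketch: control-based normalization vs whole-sample standardization.
import numpy as np

rng = np.random.default_rng(9)
controls = rng.normal(0.0, 1.0, size=(50, 4))
patients = rng.normal(1.5, 1.0, size=(50, 4))      # shifted disease pattern
X = np.vstack([controls, patients])

# standard approach: whole-sample SD absorbs the group separation
z_all = (X - X.mean(axis=0)) / X.std(axis=0)

# control-based approach: statistics from controls only
z_ctrl = (X - controls.mean(axis=0)) / controls.std(axis=0)

def gap(z):                                        # group gap in SD units
    return z[50:].mean() - z[:50].mean()

print(gap(z_all), gap(z_ctrl))                     # control-based is larger
```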
[Can the degree of renal artery stenosis be automatically quantified?].
Cherrak, I; Jaulent, M C; Azizi, M; Plouin, P F; Degoulet, P; Chatellier, G
2000-08-01
The objective of the reported study is to validate a computer system, QUASAR, dedicated to the quantification of renal artery stenoses. This system automatically estimates the reference diameter and calculates the minimum diameter to compute a degree of stenosis. A hundred and eighty images of atheromatous stenoses between 10% and 80% were collected from two independent French protocols. For the 49 images of the EMMA protocol, the results from QUASAR were compared with the visual estimation of an initial investigator and with the results from a reference method based on a panel of five experienced experts. For the 131 images of the ASTARTE protocol, the results from QUASAR were compared with those from a semi-automatic quantification system and with those from a system based on densitometric analysis. The present work validates QUASAR in a population of tight atheromatous stenoses (> 50%). In the context of the EMMA protocol, QUASAR is not significantly different from the mean of the five experts. It is unbiased and more precise than the estimation of a single investigator. In the context of the ASTARTE protocol, there is no significant difference between the three methods for stenoses greater than 50%; however, globally, QUASAR significantly overestimates the degree of stenosis (by up to 10%).
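The quantity QUASAR automates reduces to the standard percent-diameter-stenosis definition; the system's contribution is estimating the two diameters automatically, not the formula itself:

```python
# Sketch: degree of stenosis from reference and minimum lumen diameters.
def degree_of_stenosis(d_reference_mm, d_minimum_mm):
    return 100.0 * (1.0 - d_minimum_mm / d_reference_mm)

print(degree_of_stenosis(6.0, 2.4))   # 60.0 % diameter stenosis
```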
Kumaran, Emmanuelle; Doum, Dyna; Keo, Vanney; Sokha, Ly; Sam, BunLeng; Chan, Vibol; Alexander, Neal; Bradley, John; Liverani, Marco; Prasetyo, Didot Budi; Rachmat, Agus; Lopes, Sergio; Hii, Jeffrey; Rithea, Leang; Shafique, Muhammad; Hustedt, John
2018-01-01
Background Globally there are an estimated 390 million dengue infections per year, of which 96 million are clinically apparent. In Cambodia, estimates suggest as many as 185,850 cases annually. The World Health Organization global strategy for dengue prevention aims to reduce mortality rates by 50% and morbidity by 25% by 2020. The adoption of an integrated vector management approach using community-based methods tailored to the local context is one of the recommended strategies to achieve these objectives. Understanding local knowledge, attitudes and practices is therefore essential to designing suitable strategies to fit each local context. Methods and findings A Knowledge, Attitudes and Practices (KAP) survey of 600 randomly chosen households was administered in 30 villages in Kampong Cham, one of the most populous provinces of Cambodia. The KAP surveys were administered to a sub-sample of the households where an entomology survey was conducted (1,200 households), during which Aedes larval/pupal and adult female Aedes mosquito densities were recorded. Participants had high levels of knowledge regarding the transmission of dengue, Aedes breeding, and biting prevention methods; the majority of participants believed they were at risk and that dengue transmission is preventable. However, self-reported vector control practices did not match the observed practices recorded in our surveys, and no correlation was found between knowledge and observed practices. Conclusion An education campaign regarding dengue prevention in this setting, with its high knowledge levels, is unlikely to have any significant effect on practices unless it is incorporated into a more comprehensive strategy for behavioural change, such as the COMBI method, which includes behavioural models as well as communication and marketing theory and practice. Trial registration ISRCTN85307778. PMID:29451879
A Cyber-ITS Framework for Massive Traffic Data Analysis Using Cyber Infrastructure
Xia, Yingjie; Hu, Jia; Fontaine, Michael D.
2013-01-01
Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which aim to integrate heterogeneous traffic data from different kinds of sensors and apply it to ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to these problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), which by nature comprises parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing. PMID:23766690
NASA Astrophysics Data System (ADS)
Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya
2017-01-01
One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, with radiation transport calculated using the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we simulated gamma-camera scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator, and a similar study was performed in identical geometry with the same RP activity in the pathological target inside the patient's body. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within this technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used to implement, in clinical practice, estimation of absorbed doses in the regions of interest on the basis of planar scintigraphy of the patient with sufficient accuracy.
Reassessment of the potential economic impact of cattle parasites in Brazil.
Grisi, Laerte; Leite, Romário Cerqueira; Martins, João Ricardo de Souza; Barros, Antonio Thadeu Medeiros de; Andreotti, Renato; Cançado, Paulo Henrique Duarte; León, Adalberto Angel Pérez de; Pereira, Jairo Barros; Villela, Humberto Silva
2014-01-01
The profitability of livestock activities can be diminished significantly by the effects of parasites. Economic losses caused by cattle parasites in Brazil were estimated on an annual basis, considering the total number of animals at risk and the potential detrimental effects of parasitism on cattle productivity. Estimates in U.S. dollars (USD) were based on reported yield losses among untreated animals and reflected some of the effects of parasitic diseases. Relevant parasites that affect cattle productivity in Brazil, and their economic impact in USD billions include: gastrointestinal nematodes - $7.11; cattle tick (Rhipicephalus (Boophilus) microplus) - $3.24; horn fly (Haematobia irritans) - $2.56; cattle grub (Dermatobia hominis) - $0.38; New World screwworm fly (Cochliomyia hominivorax) - $0.34; and stable fly (Stomoxys calcitrans) - $0.34. The combined annual economic loss due to internal and external parasites of cattle in Brazil considered here was estimated to be at least USD 13.96 billion. These findings are discussed in the context of methodologies and research that are required in order to improve the accuracy of these economic impact assessments. This information needs to be taken into consideration when developing sustainable policies for mitigating the impact of parasitism on the profitability of Brazilian cattle producers.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analysis, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored or discarded in model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulations that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
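A minimal sketch of the two-loop simulation (all parameter values hypothetical): parametric uncertainty is drawn once per replicate, temporal variance once per projected year. Setting lam_se to zero recovers a projection that ignores parametric uncertainty.

    import numpy as np

    rng = np.random.default_rng(1)
    n_reps, n_years = 1000, 50
    lam_hat, lam_se = 0.98, 0.03   # point estimate and parametric SE (assumed)
    sigma_t = 0.10                 # temporal (environmental) SD (assumed)
    quasi_ext = 10                 # quasi-extinction threshold

    extinct = 0
    for _ in range(n_reps):                      # replication loop
        lam_i = rng.normal(lam_hat, lam_se)      # draw parametric uncertainty once
        n = 100.0
        for _ in range(n_years):                 # time-step loop
            lam_t = lam_i * np.exp(rng.normal(0, sigma_t))  # temporal variance
            n *= lam_t
        if n < quasi_ext:
            extinct += 1
    print("extinction risk:", extinct / n_reps)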
Progress and limitations on quantifying nutrient and carbon loading to coastal waters
NASA Astrophysics Data System (ADS)
Stets, E.; Oelsner, G. P.; Stackpoole, S. M.
2017-12-01
Riverine exports of nutrients and carbon to estuarine and coastal waters are important determinants of coastal ecosystem health and provide necessary insight into global biogeochemical cycles. Quantification of coastal solute loads typically relies upon modeling based on observations of concentration and discharge from selected rivers draining to the coast. Most large-scale river export models require unidirectional flow and thus are referenced to monitoring locations at the head of tide, which can be located far inland. As a result, the contributions of the coastal plain, tidal wetlands, and concentrated coastal development are often poorly represented in regional and continental-scale estimates of solute delivery to coastal waters. However, site-specific studies have found that these areas are disproportionately active in terms of nutrient and carbon export. Modeling efforts to upscale fluxes from these areas, while not common, also suggest an outsized importance to coastal flux estimates. This presentation will focus on illustrating how the under-representation of near-shore environments affects large-scale coastal flux estimates, in the context of recent regional and continental-scale assessments. Alternative approaches to capturing the influence of near-coastal terrestrial inputs, including recent data aggregation efforts and modeling approaches, will also be discussed.
Almiron-Roig, Eva; Aitken, Amanda; Galloway, Catherine
2017-01-01
Context: Dietary assessment in minority ethnic groups is critical for surveillance programs and for implementing effective interventions. A major challenge is the accurate estimation of portion sizes for traditional foods and dishes. Objective: The aim of this systematic review was to assess records published up to 2014 describing a portion-size estimation element (PSEE) applicable to the dietary assessment of UK-residing ethnic minorities. Data sources, selection, and extraction: Electronic databases, internet sites, and theses repositories were searched, generating 5683 titles, from which 57 eligible full-text records were reviewed. Data analysis: Forty-two publications about minority ethnic groups (n = 20) or autochthonous populations (n = 22) were included. The most common PSEEs (47%) were combination tools (eg, food models and portion-size lists), followed by portion-size lists in questionnaires/guides (19%) and image-based and volumetric tools (17% each). Only 17% of PSEEs had been validated against weighed data. Conclusions: When developing ethnic-specific dietary assessment tools, it is important to consider customary portion sizes by sex and age, traditional household utensil usage, and population literacy levels. Combining multiple PSEEs may increase accuracy, but such methods require validation. PMID:28340101
A Lessons Learned Knowledge Warehouse to Support the Army Knowledge Management Command-Centric
2004-03-01
Warehouse to Support the Army Knowledge Management Command-Centric... increase the quality and availability of information in context (knowledge) to the... information, geographical information, knowledge base, Intelligence data (HUMINT, SIGINT, etc.); and • Human Computer Interaction (HCI): allows... the Data Fusion Process from the HCI point of view? Can the LL Knowledge Base provide any valuable information to achieve better estimates of the
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
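The numerical profile-likelihood idea for standard errors can be illustrated on a toy problem where the answer is known; this is a generic second-difference approximation, not the paper's EM-aided differentiation for the nested case-control Cox model.

    # Toy target: log-likelihood of a normal mean with known variance, so the
    # true answer is SE = sigma / sqrt(n).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(2.0, 1.0, size=200)

    def profile_loglik(beta):
        return -0.5 * np.sum((x - beta) ** 2)   # sigma = 1 known

    beta_hat = x.mean()
    h = 1e-4
    curv = (profile_loglik(beta_hat + h) - 2 * profile_loglik(beta_hat)
            + profile_loglik(beta_hat - h)) / h ** 2
    se = 1.0 / np.sqrt(-curv)
    print(se, 1.0 / np.sqrt(len(x)))  # numerical vs. analytical SE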
Script-theory virtual case: A novel tool for education and research.
Hayward, Jake; Cheung, Amandy; Velji, Alkarim; Altarejos, Jenny; Gill, Peter; Scarfe, Andrew; Lewis, Melanie
2016-11-01
Context/Setting: The script theory of diagnostic reasoning proposes that clinicians evaluate cases in the context of an "illness script," iteratively testing internal hypotheses against new information eventually reaching a diagnosis. We present a novel tool for teaching diagnostic reasoning to undergraduate medical students based on an adaptation of script theory. We developed a virtual patient case that used clinically authentic audio and video, interactive three-dimensional (3D) body images, and a simulated electronic medical record. Next, we used interactive slide bars to record respondents' likelihood estimates of diagnostic possibilities at various stages of the case. Responses were dynamically compared to data from expert clinicians and peers. Comparative frequency distributions were presented to the learner and final diagnostic likelihood estimates were analyzed. Detailed student feedback was collected. Over two academic years, 322 students participated. Student diagnostic likelihood estimates were similar year to year, but were consistently different from expert clinician estimates. Student feedback was overwhelmingly positive: students found the case was novel, innovative, clinically authentic, and a valuable learning experience. We demonstrate the successful implementation of a novel approach to teaching diagnostic reasoning. Future study may delineate reasoning processes associated with differences between novice and expert responses.
Parental Perceptions of Life Context Variables for Involvement in Their Young Children's Education
ERIC Educational Resources Information Center
Tekin, Ali Kemal
2016-01-01
The purpose of this study was to discover Turkish parents' perceptions of life context variables, including personal knowledge and skills and personal time and energy for involvement activities in their young children's education. The scales used in this study were based on parents' self-report, and included: (1) Parental Perceptions of Personal…
Flood-formed dunes in Athabasca Valles, Mars: Morphology, modeling, and implications
Burr, D.M.; Carling, P.A.; Beyer, R.A.; Lancaster, N.
2004-01-01
Estimates of discharge for martian outflow channels have spanned orders of magnitude due in part to uncertainties in floodwater height. A methodology of estimating discharge based on bedforms would reduce some of this uncertainty. Such a methodology based on the morphology and granulometry of flood-formed ('diluvial') dunes has been developed by Carling (1996b, in: Branson, J., Brown, A.G., Gregory, K.J. (Eds.), Global Continental Changes: The Context of Palaeohydrology. Geological Society Special Publication No. 115, London, UK, 165-179) and applied to Pleistocene flood-formed dunes in Siberia. Transverse periodic dune-like bedforms in Athabasca Valles, Mars, have previously been classified both as flood-formed dunes and as antidunes. Either interpretation is important, as both imply substantial quantities of water, but each has different hydraulic implications. We undertook photoclinometric measurements of these forms and compared them with data from flood-formed dunes in Siberia. Our analysis of those data shows their morphology to be more consistent with dunes than antidunes, thus providing the first documentation of flood-formed dunes on Mars. Other reasoning based on context and likely hydraulics also supports the bedforms' classification as dunes. Evidence does not support the dunes being aeolian, although a conclusive determination cannot be made with present data. Given the preponderance of evidence that the features are flood-formed instead of aeolian, we applied Carling's (1996b) dune-flow model to derive the peak discharge of the flood flow that formed them. The resultant estimate is approximately 2×10⁶ m³/s, similar to previous estimates. The size of the Athabascan dunes in comparison with that of terrestrial dunes suggests that these martian dunes took at least 1-2 days to grow. Their flattened morphology implies that they were formed at high subcritical flow and that the flood flow that formed them receded very quickly. © 2004 Elsevier Inc. All rights reserved.
Body mass and stature estimation based on the first metatarsal in humans.
De Groote, Isabelle; Humphrey, Louise T
2011-04-01
Archaeological assemblages often lack the complete long bones needed to estimate stature and body mass. The most accurate estimates of body mass and stature are produced using femoral head diameter and femur length. Foot bones including the first metatarsal preserve relatively well in a range of archaeological contexts. In this article we present regression equations using the first metatarsal to estimate femoral head diameter, femoral length, and body mass in a diverse human sample. The skeletal sample comprised 87 individuals (Andamanese, Australasians, Africans, Native Americans, and British). Results show that all first metatarsal measurements correlate moderately to highly (r = 0.62-0.91) with femoral head diameter and length. The proximal articular dorsoplantar diameter is the best single measurement to predict both femoral dimensions. Percent standard errors of the estimate are below 5%. Equations using two metatarsal measurements show a small increase in accuracy. Direct estimations of body mass (calculated from measured femoral head diameter using previously published equations) have an error of just over 7%. No direct stature estimation equations were derived due to the varied linear body proportions represented in the sample. The equations were tested on a sample of 35 individuals from Christ Church Spitalfields. Percentage differences in estimated and measured femoral head diameter and length were less than 1%. This study demonstrates that it is feasible to use the first metatarsal in the estimation of body mass and stature. The equations presented here are particularly useful for assemblages where the long bones are either missing or fragmented, and enable estimation of these fundamental population parameters in poorly preserved assemblages. Copyright © 2011 Wiley-Liss, Inc.
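A sketch of how such two-step estimation equations are applied in practice; the regression coefficients below are invented placeholders, not the published values.

    # Two-step logic: metatarsal measurement -> femoral head diameter -> body mass.
    # All slopes, intercepts, and constants are hypothetical illustrations.
    def femoral_head_diameter(mt1_dorsoplantar_mm,
                              slope=1.9, intercept=8.0):   # hypothetical values
        """Predict femoral head diameter (mm) from the proximal articular
        dorsoplantar diameter of the first metatarsal (mm)."""
        return slope * mt1_dorsoplantar_mm + intercept

    def body_mass_from_fhd(fhd_mm, a=2.2, b=-60.0):        # hypothetical values
        """Predict body mass (kg) from femoral head diameter (mm)."""
        return a * fhd_mm + b

    fhd = femoral_head_diameter(18.5)
    print(round(fhd, 1), "mm; estimated mass:",
          round(body_mass_from_fhd(fhd), 1), "kg")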
Nakanishi, Allen S.; Lilly, Michael R.
1998-01-01
MODFLOW, a finite-difference model of ground-water flow, was used to simulate the flow of water between the aquifer and the Chena River at Fort Wainwright, Alaska. The model was calibrated by comparing simulated ground-water hydrographs to those recorded in wells during periods of fluctuating river levels. The best fit between simulated and observed hydrographs occurred for the following: 20 feet per day for vertical hydraulic conductivity, 400 feet per day for horizontal hydraulic conductivity, 1:20 for anisotropy (vertical to horizontal hydraulic conductivity), and 350 per foot for riverbed conductance. These values include a 30 percent adjustment for geometry effects. The estimated values for hydraulic conductivities of the alluvium are based on assumed values of 0.25 for specific yield and 0.000001 per foot for specific storage of the alluvium; the values assumed for bedrock are 0.1 foot per day horizontal hydraulic conductivity, 0.005 foot per day vertical hydraulic conductivity, and 0.0000001 per foot for specific storage. The resulting diffusivity for the alluvial aquifer is 1,600 feet per day. The estimated values of these hydraulic properties are nearly proportional to the assumed value of specific yield. These values were not found to be sensitive to the assumed values for bedrock. The hydrologic parameters estimated using the cross-sectional model are only valid when taken in context with the other values (both estimated and assumed) used in this study. The model simulates horizontal and vertical flow directions near the river during periods of varying river stage. This information is useful for interpreting bank-storage effects, including the flow of contaminants in the aquifer near the river.
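The hydrograph-matching calibration can be sketched as a simple grid search; simulate_hydrograph stands in for a MODFLOW run (e.g., via a scripting interface), and its functional form and the candidate values are illustrative assumptions.

    import itertools
    import numpy as np

    def simulate_hydrograph(kh, kv, times):
        # placeholder for a MODFLOW cross-sectional simulation
        return np.exp(-times / kh) * kv / 20.0

    t = np.linspace(0.0, 10.0, 50)
    observed = simulate_hydrograph(400.0, 20.0, t)  # synthetic "observations"

    best = None
    for kh, kv in itertools.product([200.0, 400.0, 600.0], [10.0, 20.0, 40.0]):
        rmse = np.sqrt(np.mean((simulate_hydrograph(kh, kv, t) - observed) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, kh, kv)
    print("best fit: Kh=%.0f ft/d, Kv=%.0f ft/d (RMSE %.4f)"
          % (best[1], best[2], best[0]))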
A time-and-motion approach to micro-costing of high-throughput genomic assays
Costa, S.; Regier, D.A.; Meissner, B.; Cromwell, I.; Ben-Neriah, S.; Chavez, E.; Hung, S.; Steidl, C.; Scott, D.W.; Marra, M.A.; Peacock, S.J.; Connors, J.M.
2016-01-01
Background Genomic technologies are increasingly used to guide clinical decision-making in cancer control. Economic evidence about the cost-effectiveness of genomic technologies is limited, in part because of a lack of published comprehensive cost estimates. In the present micro-costing study, we used a time-and-motion approach to derive cost estimates for 3 genomic assays and processes—digital gene expression profiling (gep), fluorescence in situ hybridization (fish), and targeted capture sequencing, including bioinformatics analysis—in the context of lymphoma patient management. Methods The setting for the study was the Department of Lymphoid Cancer Research laboratory at the BC Cancer Agency in Vancouver, British Columbia. Mean per-case hands-on time and resource measurements were determined from a series of direct observations of each assay. Per-case cost estimates were calculated using a bottom-up costing approach, with labour, capital and equipment, supplies and reagents, and overhead costs included. Results The most labour-intensive assay was found to be fish at 258.2 minutes per case, followed by targeted capture sequencing (124.1 minutes per case) and digital gep (14.9 minutes per case). Based on a historical case throughput of 180 cases annually, the mean per-case cost (2014 Canadian dollars) was estimated to be $1,029.16 for targeted capture sequencing and bioinformatics analysis, $596.60 for fish, and $898.35 for digital gep with an 807-gene code set. Conclusions With the growing emphasis on personalized approaches to cancer management, the need for economic evaluations of high-throughput genomic assays is increasing. Through economic modelling and budget-impact analyses, the cost estimates presented here can be used to inform priority-setting decisions about the implementation of such assays in clinical practice. PMID:27803594
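A minimal bottom-up costing sketch in the spirit of the study; the hands-on minutes are taken from the abstract, but the wage, equipment, supply, and overhead figures are assumed for illustration.

    def per_case_cost(hands_on_min, wage_per_hour, equipment, supplies,
                      overhead_rate=0.20):
        labour = hands_on_min / 60.0 * wage_per_hour
        direct = labour + equipment + supplies
        return direct * (1.0 + overhead_rate)  # overhead as a share of direct cost

    assays = {  # hands-on minutes per case from the abstract; other costs assumed
        "FISH": (258.2, 40.0, 55.0, 180.0),
        "targeted capture sequencing": (124.1, 45.0, 120.0, 420.0),
        "digital GEP": (14.9, 40.0, 90.0, 500.0),
    }
    for name, (minutes, wage, equip, supp) in assays.items():
        print(name, round(per_case_cost(minutes, wage, equip, supp), 2))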
Multilevel context of depression in two American Indian tribes.
Kaufman, Carol E; Beals, Janette; Croy, Calvin; Jiang, Luohua; Novins, Douglas K
2013-12-01
Depression is a major debilitating disease. For American Indians living in tribal reservations, who endure disproportionately high levels of stress and poverty often associated with depression, determining the patterns and correlates is key to appropriate clinical assessment and intervention development. Yet little attention has been given to the cultural context of correlates for depression, including the influence of family, cultural traditions or practices, or community conditions. We used data from a large representative psychiatric epidemiological study among American Indians in 2 reservation communities to estimate nested individual and multilevel models of past-year major depressive episode (MDE) accounting for family, cultural, and community conditions. We found that models including culturally informed individual-level measures significantly improved the model fit over demographics alone. We found significant community-level variation in the probability of past-year MDE diagnosis in 1 tribe even after accounting for individual-level characteristics. Accounting for culture, family, and community context will facilitate research, clinician assessment, and treatment of depression in diverse settings.
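A sketch of the nested-model comparison reported above: a demographics-only logistic model versus one adding a culturally informed measure, compared by a likelihood-ratio test (simulated data; the variable names are assumptions).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 1500
    df = pd.DataFrame({
        "age": rng.uniform(18, 65, n),
        "female": rng.integers(0, 2, n),
        "cultural_participation": rng.normal(0, 1, n),  # assumed cultural measure
    })
    logit_p = (-2.0 + 0.01 * df.age + 0.3 * df.female
               - 0.4 * df.cultural_participation)
    df["mde"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    base = sm.Logit(df.mde, sm.add_constant(df[["age", "female"]])).fit(disp=0)
    full = sm.Logit(df.mde, sm.add_constant(
        df[["age", "female", "cultural_participation"]])).fit(disp=0)
    lr = 2 * (full.llf - base.llf)
    print("LR chi2 = %.2f, p = %.4f" % (lr, stats.chi2.sf(lr, df=1)))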
Semiparametric Item Response Functions in the Context of Guessing
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2016-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
UQ for Decision Making: How (at least five) Kinds of Probability Might Come Into Play
NASA Astrophysics Data System (ADS)
Smith, L. A.
2013-12-01
In 1959 I. J. Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "Dynamic probability," a probability one expects to change without any additional empirical evidence; the probabilities assigned by a chess-playing program when it is only halfway through its analysis are an example. This case is contrasted with the case of "Mature probabilities," where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision making? Mature probability is suggested as a useful sixth kind; although Good would doubtless argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like contexts and climate-like contexts. In the former, one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter, that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative. This subjective "probability of a big surprise" is one way to communicate the probability that the information on which a model-based probability is conditioned actually holds in practice. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support, and can shape uncertainty quantification more widely.
Approximation of epidemic models by diffusion processes and their statistical inference.
Guy, Romain; Larédo, Catherine; Vergu, Elisabeta
2015-02-01
Multidimensional continuous-time Markov jump processes Z(t) on ℤᵖ form a usual set-up for modeling SIR-like epidemics. However, when facing incomplete epidemic data, inference based on Z(t) is not easily achieved. Here, we start building a new framework for the estimation of key parameters of epidemic models based on statistics of diffusion processes approximating Z(t). First, previous results on the approximation of density-dependent SIR-like models by diffusion processes with small diffusion coefficient 1/√N, where N is the population size, are generalized to non-autonomous systems. Second, our previous inference results on discretely observed diffusion processes with small diffusion coefficient are extended to time-dependent diffusions. Consistent and asymptotically Gaussian estimates are obtained for a fixed number n of observations, which corresponds to the epidemic context, and for N → ∞. A correction term, which yields better estimates non-asymptotically, is also included. Finally, the performance and robustness of our estimators with respect to various parameters such as R₀ (the basic reproduction number), N, and n are investigated on simulations. Two models, SIR and SIRS, corresponding to single and recurrent outbreaks, respectively, are used to simulate data. The findings indicate that our estimators have good asymptotic properties and behave noticeably well for realistic numbers of observations and population sizes. This study lays the foundations of a generic inference method currently under extension to incompletely observed epidemic data. Indeed, contrary to the majority of current inference techniques for partially observed processes, which necessitate computer-intensive simulations, our method, being mostly an analytical approach, requires only classical optimization steps.
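A minimal Gillespie simulation of the SIR jump process that the diffusion approximation targets; the parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    N, beta, gamma = 1000, 0.5, 0.1      # population size, infection, recovery
    S, I, R, t = N - 5, 5, 0, 0.0
    times, infected = [t], [I]
    while I > 0 and t < 200.0:
        rate_inf = beta * S * I / N       # S + I -> 2I
        rate_rec = gamma * I              # I -> R
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        times.append(t)
        infected.append(I)
    print("final size:", R, "of", N, "; R0 =", beta / gamma)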
ERIC Educational Resources Information Center
van der Wende, Marijk
2015-01-01
The global competition and related international academic mobility in science and research is rising. Within this context, Europe faces quantitative skills shortages, including an estimate of between 800,000 and one million researchers. Within Europe skills imbalances and mismatches increase, with a growing divergence between countries and…
ERIC Educational Resources Information Center
Soares, Julia S.; Polack, Cody W.; Miller, Ralph R.
2016-01-01
Retrieval-induced forgetting (RIF) is the observation that retrieval of target information causes forgetting of related nontarget information. A number of accounts of this phenomenon have been proposed, including a context-shift-based account (Jonker, Seli, & Macleod, 2013). This account proposes that RIF occurs as a result of the context…
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
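A sketch of GP smoothing of a two-dimensional rate surface using scikit-learn; this generic Gaussian regressor on binned spike counts is a simplified stand-in for the paper's Poisson-likelihood method, shown here only to illustrate the smoothing-with-error-bars idea.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 1, size=(400, 2))                   # visited positions
    true_rate = 20 * np.exp(-((xy[:, 0] - .5)**2 + (xy[:, 1] - .5)**2) / .02)
    counts = rng.poisson(true_rate * 0.1)                   # spikes per 100 ms bin

    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1.0),
                                  normalize_y=True)
    gp.fit(xy, counts)
    grid = np.array([[x, y] for x in np.linspace(0, 1, 25)
                             for y in np.linspace(0, 1, 25)])
    mean, std = gp.predict(grid, return_std=True)           # natural error bars
    print(mean.max(), std.mean())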
NASA Astrophysics Data System (ADS)
Michell, Herman Jeremiah
This study was guided by the following research questions: What do the stories of teachers in Nihithewak (Woodlands Cree) school contexts reveal about their experiences and tendencies towards cultural and linguistic-based pedagogical practices and actions in K-12 classrooms? How did these teachers come to teach this way? How do their beliefs and values from their experiences in science education and cultural heritage influence their teaching? Why do these teachers do what they do in their science classroom and instructional practices? The research explores Indigenous-based science education from the perspectives and experiences of science teachers in Nihithewak school contexts. Narrative methodology (Clandinin & Connelly, 2000) was used as a basis for collecting and analyzing data emerging from the research process. The results included thematic portraits and stories of science teaching that is connected to Nihithewak and Nihithewatisiwin (Woodlands Cree Way of Life). Major data sources included conversational interviews, out-of-class observations and occasional in-class observations, field notes, and a research journal. An interview guide with a set of open-ended and semi-structured questions was used to direct the interviews. My role as researcher included participation in storied conversations with ten selected volunteer teachers to document the underlying meanings behind the ways they teach science in Nihithewak contexts. This research is grounded in socio-cultural theory commonly used to support the examination and development of school science in Indigenous cultural contexts (Lemke, 2001; O'Loughlin, 1992). Socio-cultural theory is a framework that links education, language, literacy, and culture (Nieto, 2002). The research encapsulates a literature review that includes the history of Aboriginal education in Canada (Battiste & Barman, 1995; Kirkness, 1992; Perley, 1993), Indigenous-based science education (Cajete, 2000; Aikenhead, 2006a), multi-cultural science education (Hines, 2003), worldview theory (Cobern, 1996), personal practical knowledge (Clandinin, 1986), and narrative discourse as a way of knowing (Bruner, 1996) as the basis for examining the nature of science education in Nihithewak cultural contexts. Analysis of the data was compared to the literature under the rubric of Indigenous-based science education. The experiences of teachers and their patterns of responses in the interviews indicate teaching approaches used in Nihithewak cultural contexts are congruent with Indigenous-based science education discourse. In this study, teaching science revolves around connecting students with Nihithewatisiwin , Nihithewak Ithiniwak and their worldview, ways of knowing, culture, values, language, and traditional practices. Teachers shared the importance of connecting school science with the everyday world of students including with Khitiyak, the land, natural seasonal cycles/activities, the animals, and plants, and traditional technologies used for survival. This study is significant because it is the first to explore teacher stories in relation to Indigenous-based science education with a specific focus on the experiences of teachers in Nihithewak contexts. The findings have implications for (pre)(post) service teacher education as well as those who play a supportive role in the development of Indigenous-based science curriculum from place. 
Although the study revealed patterns of Indigenous based science education in Nihithewak contexts, the goal of narrative research is not to seek generalizations, nor to analyze teachers or the approaches they use.
Mansour, Hussam; Fuhrmann, Andreas; Paradowski, Ioana; van Well, Eilin Jopp; Püschel, Klaus
2017-03-01
Age estimation represents one of the primary responsibilities of forensic medicine and forensic dentistry. It is an integral procedure aiming to estimate the chronological age of an individual, whose age is either unknown or doubtful, by means of assessing the stage of dental, skeletal, and physical development. The present publication reviews the methods and procedures used in estimating the age of young living individuals as well as the experiences of the Institute of Legal Medicine in Hamburg-Eppendorf, Germany, during the last 25 years. From 1990 to 2015, 4223 age estimations were carried out in Hamburg. During this time, forensic age estimation was requested by different concerned authorities including courts, the foreigners' registration office (Zentrale Ausländerbehörde), and the state office of education and consultation (Landesbetrieb Erziehung und Beratung). In the context of judicial proceedings, orthopantomograms, as well as X-ray examinations of both the left hand and the medial clavicular epiphyses were carried out in accordance with AGFAD recommendations. For investigations not associated with judicial proceedings, orthopantomogram examinations play a key role in the process of age estimation, due to their high diagnostic value and low radiation exposure. Since 2009, mainly unaccompanied young refugees were examined for age estimation. Orthopantomograms and clinical-physical examinations have been used as essential steps in this context to determine whether an individual is 18 years or less. Additional X-ray examinations of the left hand and the medial clavicular epiphyses have been used less frequently.
An Emerging Integrated Middle-Range Theory on Asian Women's Leadership in Nursing.
Im, Eun-Ok; Broome, Marion E; Inouye, Jillian; Kunaviktikul, Wipada; Oh, Eui Geum; Sakashita, Reiko; Yi, Myungsun; Huang, Lian-Hua; Tsai, Hsiu-Min; Wang, Hsiu-Hung
2018-02-01
Asian cultures reflect patriarchal cultural values and attitudes, which likely have influenced women leaders in their countries differently from women in Western cultures. However, virtually no leadership theories have been developed to reflect the experiences and development of nursing leaders from Asian cultures. The purpose of this article is to present an emerging integrated middle-range theory on Asian women's leadership in nursing. Using an integrative approach, the theory was developed based on three major sources: the leadership frames of Bolman and Deal, literature reviews, and exemplars/cases from five different countries. The theory includes two main domains (leadership frames and leadership contexts). The domain of leadership frames includes human resources/networks, structure/organization, national/international politics, and symbols. The domain of leadership contexts includes cultural contexts, sociopolitical contexts, and gendered contexts. This theory will help understand nursing leadership in Asian cultures and provide directions for future nurse leaders in this ever-changing globalized world.
Gay male attraction toward muscular men: does mating context matter?
Varangis, Eleanna; Lanzieri, Nicholas; Hildebrandt, Tom; Feldman, Matthew
2012-03-01
The purpose of this study was to examine gay men's perceived attractiveness of male figures based on short-term and long-term partner contexts. A sample of 190 gay adult men rated the attractiveness of line drawings depicting male figures varying systematically in muscularity and body fat percentage in both short-term and long-term dating contexts. Mixed effects modeling was used to estimate the effects of figure (muscularity and body fat), dating context (short-term vs. long-term), and individual rater characteristics on attractiveness ratings. Results indicated that figure muscularity and body fat had significant non-linear (i.e., quadratic) relationships with attractiveness ratings, and the short-term dating context was associated with more discriminating ratings of attractiveness. Interactions between individual and figure characteristics indicated that the more available the individual and the lower their body fat, the more discriminating they were in ratings of attractiveness. The implications for future investigations considering both object and observer characteristics of attractiveness preferences are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
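A sketch of the mixed-effects specification described above, with quadratic terms for muscularity and body fat, a dating-context effect, and a random intercept per rater (simulated data; the variable names are assumptions).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_raters, n_figs = 60, 12
    rows = []
    for r in range(n_raters):
        bias = rng.normal(0, 0.5)                 # rater-specific intercept
        for _ in range(n_figs):
            musc, fat = rng.uniform(-2, 2), rng.uniform(-2, 2)
            short_term = rng.integers(0, 2)
            rating = (5 + musc - 0.6 * musc**2 - 0.4 * fat**2
                      - 0.3 * short_term + bias + rng.normal(0, 0.8))
            rows.append((r, musc, fat, short_term, rating))
    df = pd.DataFrame(rows, columns=["rater", "musc", "fat",
                                     "short_term", "rating"])

    model = smf.mixedlm("rating ~ musc + I(musc**2) + fat + I(fat**2) + short_term",
                        df, groups=df["rater"])
    print(model.fit().summary())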
Alcohol demand and risk preference
Dave, Dhaval; Saffer, Henry
2008-01-01
Both economists and psychologists have studied the concept of risk preference. Economists categorize individuals as more or less risk-tolerant based on the marginal utility of income. Psychologists categorize individuals' propensity towards risk based on harm avoidance, novelty seeking and reward dependence traits. The two concepts of risk are related, although the instruments used for empirical measurement are quite different. Psychologists have found risk preference to be an important determinant of alcohol consumption; however, economists have not included risk preference in studies of alcohol demand. This is the first study to examine the effect of risk preference on alcohol consumption in the context of a demand function. The specifications employ multiple waves from the Panel Study of Income Dynamics (PSID) and the Health and Retirement Study (HRS), which permit the estimation of age-specific models based on nationally representative samples. Both of these data sets include a unique and consistent survey instrument designed to directly measure risk preference in accordance with the economist's definition. This study estimates the direct impact of risk preference on alcohol demand and also explores how risk preference affects the price elasticity of demand. The empirical results indicate that risk preference has a significant negative effect on alcohol consumption, with the prevalence and consumption among risk-tolerant individuals being 6-8% higher. Furthermore, the tax elasticity is similar across both risk-averse and risk-tolerant individuals. This suggests that tax policies are equally effective in deterring alcohol consumption among those who have a higher versus a lower propensity for alcohol use. PMID:19956353
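A sketch of how a directly measured risk-preference indicator enters a demand equation, with a tax-by-risk interaction to compare elasticities across groups (simulated data; the variable names and coefficients are assumptions).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    n = 5000
    df = pd.DataFrame({
        "log_tax": np.log(rng.uniform(0.5, 3.0, n)),
        "risk_tolerant": rng.integers(0, 2, n),
        "income": rng.normal(50, 15, n),
    })
    df["log_drinks"] = (1.0 - 0.4 * df.log_tax + 0.07 * df.risk_tolerant
                        + 0.004 * df.income
                        + 0.0 * df.log_tax * df.risk_tolerant  # equal elasticities
                        + rng.normal(0, 0.5, n))

    fit = smf.ols("log_drinks ~ log_tax * risk_tolerant + income", df).fit()
    # log_tax coefficient ~ tax elasticity for the risk-averse group; the
    # interaction term tests whether risk-tolerant individuals differ.
    print(fit.params[["log_tax", "log_tax:risk_tolerant"]])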
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
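The context-clustering step can be sketched as merging consecutive shots whose description similarity exceeds a threshold, yielding a two-level (scene/shot) hierarchy; the similarity scores and threshold are illustrative assumptions.

    def cluster_shots(similarities, threshold=0.7):
        """similarities[i] compares shot i and shot i+1 (values in [0, 1])."""
        scenes, current = [], [0]
        for i, sim in enumerate(similarities):
            if sim >= threshold:
                current.append(i + 1)        # same scene: extend the cluster
            else:
                scenes.append(current)       # context change: close the scene
                current = [i + 1]
        scenes.append(current)
        return scenes

    print(cluster_shots([0.9, 0.8, 0.3, 0.95, 0.2, 0.75]))
    # -> [[0, 1, 2], [3, 4], [5, 6]]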
Family context and the physical activity of adolescents: comparing differences.
Ramos, Cynthia Graciane Carvalho; Andrade, Roseli Gomes de; Andrade, Amanda Cristina de Souza; Fernandes, Amanda Paula; Costa, Dário Alves da Silva; Xavier, César Coelho; Proietti, Fernando Augusto; Caiaffa, Waleska Teixeira
2017-01-01
Family context plays an important role with regard to the physical activity (PA) of adolescents. Intense changes in family composition, including an increase in single-parent structures, can affect behavior. The objectives were to estimate the prevalence of PA among boys and girls aged 11-17 years and to investigate its association with family context variables. A cross-sectional population-based study, "The BH Health Study," was conducted in two health districts of Belo Horizonte. The outcome was PA (≥ 300 minutes/week), derived from a score that combined the time and frequency of cycling and walking to school and leisure-time activity. The independent variables were family context, sociodemographic characteristics, and nutritional status. Poisson regression with robust variance was used, stratified by gender. A total of 1,015 adolescents participated, 52.8% of whom were male, with a mean age of 14 (± 1.9) years. The prevalence of PA was 38.8% for girls and 54.5% for boys. Among girls, the family context variables were not significantly associated with PA. Boys were more active when an adult in the household was reported to do PA (PR = 1.26; 95%CI 1.02-1.55) and when living with a single mother (PR = 1.63; 95%CI 1.01-2.63). It was also observed that boys living with both their mother and father (PR = 1.90; 95%CI 1.06-3.41) or only with their mother (PR = 1.82; 95%CI 1.01-3.27) did PA more frequently in their free time. The presence of an active adult in the household, mainly the mother, appears to be an important factor associated with boys' PA.
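A sketch of prevalence-ratio estimation via Poisson regression with a robust (sandwich) variance, as used in the study (simulated data; the variable names are assumptions).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(21)
    n = 1000
    df = pd.DataFrame({
        "active_adult": rng.integers(0, 2, n),
        "age": rng.integers(11, 18, n),
    })
    p = 0.40 * np.where(df.active_adult == 1, 1.25, 1.0)   # true PR ~ 1.25
    df["active"] = (rng.random(n) < p).astype(int)

    fit = smf.glm("active ~ active_adult + age", df,
                  family=sm.families.Poisson()).fit(cov_type="HC1")
    pr = np.exp(fit.params["active_adult"])
    ci = np.exp(fit.conf_int().loc["active_adult"])
    print("PR = %.2f (95%% CI %.2f-%.2f)" % (pr, ci[0], ci[1]))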
Boehler, Christian E. H.; Lord, Joanne
2016-01-01
Background. Published cost-effectiveness estimates can vary considerably, both within and between countries. Despite extensive discussion, little is known empirically about factors relating to these variations. Objectives. To use multilevel statistical modeling to integrate cost-effectiveness estimates from published economic evaluations to investigate potential causes of variation. Methods. Cost-effectiveness studies of statins for cardiovascular disease prevention were identified by systematic review. Estimates of incremental costs and effects were extracted from reported base case, sensitivity, and subgroup analyses, with estimates grouped in studies and in countries. Three bivariate models were developed: a cross-classified model to accommodate data from multinational studies, a hierarchical model with multinational data allocated to a single category at country level, and a hierarchical model excluding multinational data. Covariates at different levels were drawn from a long list of factors suggested in the literature. Results. We found 67 studies reporting 2094 cost-effectiveness estimates relating to 23 countries (6 studies reporting for more than 1 country). Data and study-level covariates included patient characteristics, intervention and comparator cost, and some study methods (e.g., discount rates and time horizon). After adjusting for these factors, the proportion of variation attributable to countries was negligible in the cross-classified model but moderate in the hierarchical models (14%−19% of total variance). Country-level variables that improved the fit of the hierarchical models included measures of income and health care finance, health care resources, and population risks. Conclusions. Our analysis suggested that variability in published cost-effectiveness estimates is related more to differences in study methods than to differences in national context. Multinational studies were associated with much lower country-level variation than single-country studies. These findings are for a single clinical question and may be atypical. PMID:25878194
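The share of variance attributable to countries can be sketched with a two-level random-intercept model, a univariate simplification of the bivariate hierarchical models described above (simulated data; for illustration only).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(9)
    countries = np.repeat(np.arange(20), 100)
    country_eff = rng.normal(0, 0.5, 20)[countries]      # country-level variation
    icer = 10.0 + country_eff + rng.normal(0, 1.0, countries.size)
    df = pd.DataFrame({"icer": icer, "country": countries})

    fit = smf.mixedlm("icer ~ 1", df, groups=df["country"]).fit()
    var_country = fit.cov_re.iloc[0, 0]                  # between-country variance
    var_resid = fit.scale                                # residual variance
    print("country share of variance: %.1f%%"
          % (100 * var_country / (var_country + var_resid)))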
Overdiagnosis across medical disciplines: a scoping review.
Jenniskens, Kevin; de Groot, Joris A H; Reitsma, Johannes B; Moons, Karel G M; Hooft, Lotty; Naaktgeboren, Christiana A
2017-12-27
Objectives: To provide insight into how and in what clinical fields overdiagnosis is studied and to give directions for further applied and methodological research. Design: Scoping review. Data sources: Medline up to August 2017. Eligibility criteria: All English studies on humans in which overdiagnosis was discussed as a dominant theme. Studies were assessed on clinical field, study aim (ie, methodological or non-methodological), article type (eg, primary study, review), the type and role of diagnostic test(s) studied and the context in which these studies discussed overdiagnosis. Results: From 4896 studies, 1851 were included for analysis. Half of all studies on overdiagnosis were performed in the field of oncology (50%). Other prevalent clinical fields included mental disorders, infectious diseases and cardiovascular diseases, accounting for 9%, 8% and 6% of studies, respectively. Overdiagnosis was addressed from a methodological perspective in 20% of studies. Primary studies were the most common article type (58%). The types of diagnostic test most commonly studied were imaging tests (32%), although these were predominantly seen in oncology and cardiovascular disease (84%). Diagnostic tests were studied in a screening setting in 43% of all studies, but in as many as 75% of all oncological studies. The context in which studies addressed overdiagnosis related most frequently to its estimation, accounting for 53%. Methodology on overdiagnosis estimation and definition provided a source for extensive discussion. Other contexts of discussion included definition of disease, overdiagnosis communication, trends in increasing disease prevalence, drivers and consequences of overdiagnosis, incidental findings and genomics. Conclusions: Overdiagnosis is discussed across virtually all clinical fields and in different contexts. The variability in characteristics between studies and the lack of consensus on the definition of overdiagnosis indicate the need for a uniform typology to improve the coherence and comparability of studies on overdiagnosis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Fuady, Ahmad; Houweling, Tanja A; Mansyur, Muchtaruddin; Richardus, Jan H
2018-01-01
Indonesia has the second-highest tuberculosis (TB) incidence worldwide. Hence, it urgently requires improvements and innovations beyond the strategies currently being implemented throughout the country. One fundamental step in monitoring progress is preparing a validated tool to measure total patient costs and catastrophic total costs. The World Health Organization (WHO) recommends using a version of its generic questionnaire adapted to the local cultural context in order to interpret findings correctly. Objective: This study aimed to adapt the Tool to Estimate Patient Costs questionnaire, which measures total costs and catastrophic total costs for tuberculosis-affected households, to the Indonesian context. Methods: The tool was adapted using best-practice guidelines. On the basis of a pre-test performed in a previous study (referred to as the Phase 1 Study), we refined the adaptation process by comparing it with the generic tool introduced by the WHO. We also held an expert committee review and performed pre-testing by interviewing 30 TB patients. After pre-testing, the tool was provided with complete explanation sheets for finalization. Results: Seventy-two major changes were made during the adaptation process, including changing the answer choices to match the Indonesian context, refining the flow of questions, deleting questions, changing some words, and restoring original questions that had been changed in the Phase 1 Study. Participants indicated that most questions were clear and easy to understand. To address recall difficulties, we made some adaptations to obtain data that might otherwise be missing, such as tracking data to medical records, developing a proxy of costs, and guiding interviewers to ask for a specific value when participants were uncertain about the estimated market value of property they had sold. Conclusions: The adapted Tool to Estimate Patient Costs in Bahasa Indonesia is comprehensive and ready for use in future studies on TB-related catastrophic costs, and is suitable for monitoring progress toward the target of the End TB Strategy.
Nilsson, Lena Maria; Berner, James; Dudarev, Alexey A.; Mulvad, Gert; Odland, Jon Øyvind; Parkinson, Alan; Rautio, Arja; Tikhonov, Constantine; Evengård, Birgitta
2013-01-01
In August 2012, a literature search with the aim of describing indicators on food and water security in an Arctic health context was initialized in collaboration between the Arctic Human Health Expert Group, SDWG/AHHEG, and the AMAP (Arctic Monitoring and Assessment Programme within the Arctic Council) Human Health Assessment Group, AMAP/HHAG. In December 2012, workshop discussions were held with representatives from both of these organizations, including 7 Arctic countries. The aim of this article is to describe the workshop discussions and the rationale for the 12 indicators selected and the 9 rejected, and to discuss their potential feasibility. Advantages and disadvantages of candidate indicators were listed. Informative value and costs for collecting were estimated separately on a 3-level scale: low, medium and high. Based on these reviews, the final selection of promoted and rejected indicators was performed and summarized in tables. Among 10 suggested indicators of food security, 6 were promoted: healthy weight, traditional food proportion in diet, monetary food costs, non-monetary food accessibility, food-borne diseases and food-related contaminants. Four were rejected: per-person dietary energy supply, food security modules, self-estimated food safety and healthy eating. Among 10 suggested indicators of water security, 6 were promoted: per-capita renewable water, accessibility of running water, waterborne diseases, drinking-water-related contaminants, authorized water quality assurance and water safety plans. Four were rejected: water consumption, types of water sources, periodic water shortages and household water costs. PMID:23940840
Petterson, S R
2016-02-01
The aim of this study was to develop a modified quantitative microbial risk assessment (QMRA) framework that could be applied as a decision support tool to choose between alternative drinking water interventions in the developing context. The impact of different household water treatment (HWT) interventions on the overall incidence of diarrheal disease and disability adjusted life years (DALYs) was estimated, without relying on source water pathogen concentration as the starting point for the analysis. A framework was developed and a software tool constructed and then implemented for an illustrative case study for Nepal based on published scientific data. Coagulation combined with free chlorine disinfection provided the greatest estimated health gains in the short term; however, when long-term compliance was incorporated into the calculations, the preferred intervention was porous ceramic filtration. The model demonstrates how the QMRA framework can be used to integrate evidence from different studies to inform management decisions, and in particular to prioritize the next best intervention with respect to estimated reduction in diarrheal incidence. This study only considered HWT interventions; it is recognized that a systematic consideration of sanitation, recreation, and drinking water pathways is important for effective management of waterborne transmission of pathogens, and the approach could be expanded to consider the broader water-related context. © 2015 Society for Risk Analysis.
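A sketch of the decision logic described above: expected DALYs averted under each HWT option, with long-term compliance discounting the short-term efficacy. All numbers are illustrative assumptions, not the study's values.

    def dalys_averted(base_incidence, log10_reduction, compliance,
                      pop=100000, dalys_per_case=0.02):
        eff = 1.0 - 10.0 ** (-log10_reduction)   # pathogen reduction -> risk drop
        cases_averted = base_incidence * pop * eff * compliance
        return cases_averted * dalys_per_case

    interventions = {  # (log10 pathogen reduction, long-term compliance), assumed
        "chlorine + coagulant": (4.0, 0.3),
        "ceramic filter": (2.5, 0.8),
    }
    for name, (lrv, comp) in interventions.items():
        print(name, round(dalys_averted(0.5, lrv, comp), 1), "DALYs averted/yr")

With these assumed inputs, the higher-compliance filter averts more burden despite its lower short-term efficacy, mirroring the study's conclusion.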
1981-12-01
Gisler; Jewell, William S.
1. Motivation: In Ratemaking and in Experience Rating one is often confronted with the dilemma of whether or not to fully... are greatly indebted to R. Schnieper, who did all the numerical work on the ETE computer.
2. The Basic Model: Throughout the paper we work... f(x) = (1 - ε) p₀(x) + ε p_e(x).
3. The Basic Problem: As always in the credibility context our aim is to estimate μ(θ) based on the observations of
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (ORION)
NASA Technical Reports Server (NTRS)
Mott, Diana L.; Bigler, Mark A.
2017-01-01
NASA uses two HRA assessment methodologies. The first is a simplified method based on how much time is available to complete the action, with consideration given to environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. The preliminary estimates are used to determine which placeholders need a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment of the performance of critical human actions. This assessment needs to consider more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists, and internal human stresses. The more detailed assessment is still expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a PRA model that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more problematic. To determine what is expected of future operational parameters, the experience of individuals who were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.
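A sketch of a time-available screening estimate of human error probability (HEP): a base probability derived from the time margin, scaled by performance-shaping-factor multipliers. The functional form and the multiplier values are generic illustrations, not NASA's actual methodology.

    import math

    def screening_hep(time_required_s, time_available_s, psf_multipliers):
        margin = time_available_s / time_required_s
        base = min(1.0, math.exp(-(margin - 1.0)))  # less margin -> higher HEP
        hep = base
        for m in psf_multipliers:                   # e.g., stress, training
            hep *= m
        return min(1.0, hep)

    # crew action: 60 s needed, 180 s available; high stress (x2), good training (x0.5)
    print("screening HEP: %.3g" % screening_hep(60, 180, [2.0, 0.5]))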
Design of a digital phantom population for myocardial perfusion SPECT imaging research.
Ghaly, Michael; Du, Yong; Fung, George S K; Tsui, Benjamin M W; Links, Jonathan M; Frey, Eric
2014-06-21
Digital phantoms and Monte Carlo (MC) simulations have become important tools for optimizing and evaluating instrumentation, acquisition and processing methods for myocardial perfusion SPECT (MPS). In this work, we designed a new adult digital phantom population and generated corresponding Tc-99m and Tl-201 projections for use in MPS research. The population is based on the three-dimensional XCAT phantom with organ parameters sampled from the Emory PET Torso Model Database. Phantoms included three variations each in body size, heart size, and subcutaneous adipose tissue level, for a total of 27 phantoms of each gender. The SimSET MC code and angular response functions were used to model interactions in the body and the collimator-detector system, respectively. We divided each phantom into seven organs, each simulated separately, allowing use of post-simulation summing to efficiently model uptake variations. Also, we adapted and used a criterion based on the relative Poisson effective count level to determine the required number of simulated photons for each simulated organ. This technique provided a quantitative estimate of the true noise in the simulated projection data, including residual MC simulation noise. Projections were generated in 1 keV wide energy windows from 48-184 keV assuming perfect energy resolution to permit study of the effects of window width, energy resolution, and crosstalk in the context of dual isotope MPS. We have developed a comprehensive method for efficiently simulating realistic projections for a realistic population of phantoms in the context of MPS imaging. The new phantom population and realistic database of simulated projections will be useful in performing mathematical and human observer studies to evaluate various acquisition and processing methods such as optimizing the energy window width, investigating the effect of energy resolution on image quality and evaluating compensation methods for degrading factors such as crosstalk in the context of single and dual isotope MPS.
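A minimal sketch of the post-simulation organ-summing idea described above: each organ's noise-free projection is simulated once, then scaled by an assumed uptake and summed before Poisson noise is applied. Organ names, uptake values, and the random stand-ins for Monte Carlo output are hypothetical.

```python
import numpy as np

# Post-simulation organ summing: scale per-organ noise-free projections by
# assumed uptakes, sum, then apply Poisson noise. All values are placeholders.
rng = np.random.default_rng(0)
shape = (64, 64)  # one projection view

# Stand-ins for per-organ MC projection estimates (counts per unit activity).
organ_projections = {o: rng.random(shape) for o in
                     ["myocardium", "liver", "lungs", "body"]}
uptake = {"myocardium": 3.0, "liver": 2.0, "lungs": 0.5, "body": 1.0}

noise_free = sum(uptake[o] * organ_projections[o] for o in organ_projections)
noisy = rng.poisson(noise_free)  # one clinical-count-level noise realization
```

This is why each organ is simulated separately: any uptake pattern in the population can be modeled by rescaling and resumming without rerunning the Monte Carlo.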
Kim, Haesook T; Armand, Philippe
2013-06-01
When designing a study for allogeneic hematopoietic stem cell transplantation (HSCT), many choices must be made, including conditioning regimen, stem cell source, and graft-versus-host disease (GVHD) prevention method. For each of these, there are a growing number of options, which can be combined into a bewildering number of possible HSCT protocols. To properly interpret the results of a given strategy and compare them with others, it is essential that there be agreement on the definitions and estimation methods of HSCT endpoints. We report a survey of the recent HSCT literature that confirms the heterogeneity of endpoint definitions and estimation methods used. Unfortunately, this heterogeneity may lead to significant biases in the estimates of key endpoints, including nonrelapse mortality, relapse, GVHD, or engraftment. This can preclude adequate comparisons among studies, even though such comparisons are the major tool with which to improve HSCT outcome. In the context of our survey, we discuss some of the statistical issues that arise when dealing with HSCT endpoints and the ramifications of the choice of endpoint definition, when the endpoint occurs in the context of competing risks. Our hope is to generate discussion and motivate a search for consensus among those who perform transplantations and statisticians. Copyright © 2013 American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
A New Approach To Secure Federated Information Bases Using Agent Technology.
ERIC Educational Resources Information Center
Weippl, Edgar; Klug, Ludwig; Essmayr, Wolfgang
2003-01-01
Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…
Perak, Amanda M; Opotowsky, Alexander R; Walsh, Brian K; Esch, Jesse J; DiNardo, James A; Kussman, Barry D; Porras, Diego; Rhodes, Jonathan
2016-10-01
To assess the feasibility and accuracy of inert gas rebreathing (IGR) pulmonary blood flow (Qp) estimation in mechanically ventilated pediatric patients, potentially providing real-time noninvasive estimates of cardiac output. In mechanically ventilated patients in the pediatric catheterization laboratory, we compared IGR Qp with Qp estimates based upon the Fick equation using measured oxygen consumption (VO2) (FickTrue); for context, we compared FickTrue with a standard clinical short-cut, replacing measured with assumed VO2 in the Fick equation (FickLaFarge, FickLundell, FickSeckeler). IGR Qp and breath-by-breath VO2 were measured using the Innocor device. Sampled pulmonary arterial and venous saturations and hemoglobin concentration were used for Fick calculations. Qp estimates were compared using Bland-Altman agreement and Spearman correlation. The final analysis included 18 patients aged 4-23 years with weight >15 kg. Compared with the reference FickTrue, IGR Qp estimates correlated best and had the least systematic bias and narrowest 95% limits of agreement (results presented as mean bias ±95% limits of agreement): IGR -0.2 ± 1.1 L/min, r = 0.90; FickLaFarge +0.7 ± 2.2 L/min, r = 0.80; FickLundell +1.6 ± 2.9 L/min, r = 0.83; FickSeckeler +0.8 ± 2.5 L/min, r = 0.83. IGR estimation of Qp is feasible in mechanically ventilated patients weighing >15 kg, and agreement with FickTrue Qp estimates is better for IGR than for other Fick Qp estimates commonly used in pediatric catheterization. IGR is an attractive option for bedside monitoring of Qp in mechanically ventilated children. Copyright © 2016 Elsevier Inc. All rights reserved.
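For readers unfamiliar with the reference standard, a minimal sketch of the Fick computation follows. The 1.36 mL O2 per g Hb binding constant and the omission of dissolved oxygen are common textbook simplifications, and the example values are hypothetical rather than taken from the study.

```python
def fick_qp_l_min(vo2_ml_min, hb_g_dl, pv_sat, pa_sat):
    # O2 carrying capacity: 1.36 mL O2 per g Hb; factor 10 converts g/dL to g/L.
    o2_capacity_ml_l = 1.36 * hb_g_dl * 10.0
    # Pulmonary venous minus pulmonary arterial O2 content, mL O2 per L blood.
    avdo2_ml_l = o2_capacity_ml_l * (pv_sat - pa_sat)
    return vo2_ml_min / avdo2_ml_l  # pulmonary blood flow, L/min

# Hypothetical values: VO2 150 mL/min, Hb 13 g/dL, saturations 0.99 and 0.75.
print(fick_qp_l_min(150.0, 13.0, 0.99, 0.75))  # ~3.5 L/min
```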
Andreuccetti, Gabriel; Leyton, Vilma; Lemos, Nikolas P; Miziara, Ivan Dieb; Ye, Yu; Takitane, Juliana; Munoz, Daniel Romero; Reingold, Arthur L; Cherpitel, Cheryl J; de Carvalho, Heraclito Barbosa
2017-04-01
Most studies reporting alcohol use among fatally injured victims are subject to bias, particularly those related to sample selection and to absence of injury context data. We developed a research method to estimate the prevalence of alcohol consumption and test correlates of alcohol use prior to fatal injuries. Cross-sectional study based on a probability sample of fatally injured adult victims (n = 365) autopsied in São Paulo, Brazil. Victims were sampled within systematically selected 8-hour sampling blocks, generating a representative sample of fatal injuries occurring during all hours of the day for each day of the week between June 2014 and December 2015. The presence of alcohol and blood alcohol concentration (BAC) were the primary outcomes evaluated according to victims' socio-demographic, injury context data (type, day, time and injury place) and criminal history characteristics. Alcohol was detected in 30.1% [95% confidence interval (CI) = 25.6-35.1)] of the victims, with a mean blood alcohol level (BAC) level of 0.11% w/v (95% CI = 0.09-0.13) among alcohol-positive cases. Black and mixed race victims presented a higher mean BAC than white victims (P = 0.03). Fewer than one in every six suicides tested positive for alcohol, while almost half of traffic-related casualties were alcohol-positive. Having suffered traffic-related injuries, particularly those involving vehicle crashes, and injuries occurring during weekends and at night were associated significantly with alcohol use before injury (P < 0.05). Nearly one-third of fatal injuries in São Paulo between June 2014 and December 2015 were alcohol-related, with traffic accidents showing a greater association with alcohol use than other injuries. The sampling methodology tested here, including the possibility of adding injury context data to improve population-based estimates of alcohol use before fatal injury, appears to be a reliable and lower-cost strategy for avoiding biases common in death investigations. © 2016 Society for the Study of Addiction.
Temporal context processing within hippocampal subfields.
Wang, Fang; Diana, Rachel A
2016-07-01
The episodic memory system can differentiate similar events based on the temporal information associated with the events. Temporal context, which is at least partially determined by the events that precede or follow the critical event, may be a cue to differentiate events. The purpose of the present study is to investigate whether the hippocampal dentate gyrus (DG)/CA3 and CA1 subfields are sensitive to changes in temporal context and, if so, whether the subregions show a linear or threshold-like response to similar temporal contexts. Participants incidentally encoded a series of object picture triplets; twenty participants were included in the final analyses. The third picture in each triplet was operationally defined as the target, and the first two pictures served as temporal context for the target picture. Each target picture was presented twice, with temporal context manipulated to be either repeated, high similarity, low similarity, or new on the second presentation. We extracted beta parameters for the repeated target as a function of the type of temporal context. We expected to see repetition suppression, a reduction in the beta values, in response to repetition of the target. If temporal context information is included in the representation of the target within a given region, this repetition suppression should be greater for target images preceded by their original context than for target images preceded by a new context. Neuroimaging results showed that CA1, but not DG/CA3, modifies the target's representation based on its temporal context. Right CA1 did not distinguish high similarity temporal context from repeated context but did distinguish low similarity temporal context from repeated context. These results indicate that CA1 is sensitive to temporal context and suggest that it does not differentiate between a substantially similar temporal context and an identical temporal context. In contrast, DG/CA3 does not appear to process temporal context as defined in the current experiment. Copyright © 2016 Elsevier Inc. All rights reserved.
Zioupos, P; Williams, A; Christodoulou, G; Giles, R
2014-05-01
Determination of age-at-death (AAD) is an important and frequent requirement in contemporary forensic science and in the reconstruction of past populations and societies from their remains. Its estimation is relatively straightforward and accurate (±3yr) for immature skeletons by using morphological features and reference tables within the context of forensic anthropology. However, after skeletal maturity (>35yr) estimates become inaccurate, particularly in the legal context. In line with the general migration of all the forensic sciences from reliance upon empirical criteria to those which are more evidence-based, AAD determination should rely increasingly upon quantitative methods. We explore here whether well-known changes in the biomechanical properties of bone and the properties of bone matrix, which have been seen to change with age even after skeletal maturity in a traceable manner, can be used to provide a reliable estimate of AAD. This method charts a combination of physical characteristics, some of which are measured at a macroscopic level (wet & dry apparent density, porosity, organic/mineral/water fractions, collagen thermal degradation properties, ash content) and others at the microscopic level (Ca/P ratios, osteonal and matrix microhardness, image analysis of sections). This method produced successful age estimates on a cohort of 12 donors of age 53-85yr (7 male, 5 female), where the age of the individual could be approximated to within less than ±1yr. This represents a vastly improved level of accuracy over currently extant age estimation techniques. It also presents: (1) a greater level of reliability and objectivity, as the results are not dependent on the experience and expertise of the observer, as is so often the case in forensic skeletal age estimation methods; (2) a purely laboratory-based analytical technique which can be carried out by someone with technical skills rather than specialised forensic anthropology experience; (3) a protocol that can be applied worldwide following stringent laboratory procedures. As such, this technique contributes significantly to improving age estimation and therefore identification methods for forensic and other purposes. © 2013 Elsevier Ltd. All rights reserved.
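The multivariate flavor of the approach can be sketched as a regression of age on the measured bone properties. Everything below (feature names, synthetic data, the choice of ordinary least squares) is a placeholder, not the authors' actual model; a real analysis on a 12-donor cohort would also need careful cross-validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder sketch: combine macroscopic and microscopic bone measurements
# into a multivariate age-at-death predictor. Data are synthetic.
rng = np.random.default_rng(1)
n = 12
X = rng.normal(size=(n, 4))    # e.g. porosity, ash fraction, Ca/P ratio, microhardness
age = 53 + 32 * rng.random(n)  # donors aged 53-85 yr, as in the cohort

model = LinearRegression().fit(X, age)
print(model.predict(X[:3]))    # fitted ages for the first three donors
```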
Porto, Paolo; Walling, Des E
2012-04-01
Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and the estimates provided by the (137)Cs and (210)Pb(ex) measurements and can therefore be seen as validating the use of these fallout radionuclides to document soil erosion rates in that environment. Further studies are clearly required to exploit other opportunities for validation in contrasting environments and under different land use conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Stillbirth With Group B Streptococcus Disease Worldwide: Systematic Review and Meta-analyses
Seale, Anna C; Blencowe, Hannah; Bianchi-Jassir, Fiorella; Embleton, Nicholas; Bassat, Quique; Ordi, Jaume; Menéndez, Clara; Cutland, Clare; Briner, Carmen; Berkley, James A; Lawn, Joy E; Baker, Carol J; Bartlett, Linda; Gravett, Michael G; Heath, Paul T; Ip, Margaret; Le Doare, Kirsty; Rubens, Craig E; Saha, Samir K; Schrag, Stephanie; Meulen, Ajoke Sobanjo-ter; Vekemans, Johan; Madhi, Shabir A
2017-01-01
Background: There are an estimated 2.6 million stillbirths each year, many of which are due to infections, especially in low- and middle-income contexts. This paper, the eighth in a series on the burden of group B streptococcal (GBS) disease, aims to estimate the percentage of stillbirths associated with GBS disease. Methods: We conducted systematic literature reviews (PubMed/Medline, Embase, Literatura Latino-Americana e do Caribe em Ciências da Saúde, World Health Organization Library Information System, and Scopus) and sought unpublished data from investigator groups. Studies were included if they reported original data on stillbirths (predominantly ≥28 weeks' gestation or ≥1000 g, with GBS isolated from a sterile site) as a percentage of total stillbirths. We did meta-analyses to derive pooled estimates of the percentage of GBS-associated stillbirths, regionally and worldwide for recent datasets. Results: We included 14 studies from any period, 5 with recent data (after 2000). There were no data from Asia. We estimated that 1% (95% confidence interval [CI], 0-2%) of all stillbirths in developed countries and 4% (95% CI, 2%-6%) in Africa were associated with GBS. Conclusions: GBS is likely an important cause of stillbirth, especially in Africa. However, data are limited in terms of geographic spread, with no data from Asia, and cases worldwide are probably underestimated due to incomplete case ascertainment. More data, using standardized, systematic methods, are critical, particularly from low- and middle-income contexts where the highest burden of stillbirths occurs. These data are essential to inform interventions, such as maternal GBS vaccination. PMID:29117322
Instant Messaging in a Context of Virtual Schooling: Balancing the Affordances and Challenges
ERIC Educational Resources Information Center
Murphy, Elizabeth; Manzanares, Maria A. Rodriguez
2008-01-01
This article reports on a case study of Instant Messaging (IM) in a context of high-school virtual schooling. Data collection relied on interviews conducted with 20 participants in a context of high-school web-based instruction in Newfoundland and Labrador, Canada. The participants included e-teachers as well as other distance education personnel.…
A review of methods to estimate cause-specific mortality in presence of competing risks
Heisey, Dennis M.; Patterson, Brent R.
2006-01-01
Estimating cause-specific mortality is often of central importance for understanding the dynamics of wildlife populations. Despite such importance, methodology for estimating and analyzing cause-specific mortality has received little attention in wildlife ecology during the past 20 years. The issue of analyzing cause-specific, mutually exclusive events in time is not unique to wildlife. In fact, this general problem has received substantial attention in human biomedical applications within the context of biostatistical survival analysis. Here, we consider cause-specific mortality from a modern biostatistical perspective. This requires carefully defining what we mean by cause-specific mortality and then providing an appropriate hazard-based representation as a competing risks problem. This leads to the general solution of cause-specific mortality as the cumulative incidence function (CIF). We describe the appropriate generalization of the fully nonparametric staggered-entry Kaplan–Meier survival estimator to cause-specific mortality via the nonparametric CIF estimator (NPCIFE), which in many situations offers an attractive alternative to the Heisey–Fuller estimator. An advantage of the NPCIFE is that it lends itself readily to risk factors analysis with standard software for Cox proportional hazards model. The competing risks–based approach also clarifies issues regarding another intuitive but erroneous "cause-specific mortality" estimator based on the Kaplan–Meier survival estimator and commonly seen in the life sciences literature.
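A minimal numpy sketch of the nonparametric CIF estimator (NPCIFE) idea: at each event time the cause-specific increment d_k/n is weighted by the all-cause Kaplan-Meier survival just before that time, and censored observations are coded as cause 0. This is a generic implementation, not the authors' code.

```python
import numpy as np

def cumulative_incidence(times, causes, cause_of_interest):
    """Nonparametric CIF for competing risks; causes == 0 denotes censoring."""
    order = np.argsort(times)
    times, causes = np.asarray(times)[order], np.asarray(causes)[order]
    n_at_risk = len(times)
    surv, cif = 1.0, 0.0          # all-cause KM survival and running CIF
    out_t, out_cif = [], []
    for t in np.unique(times):
        at_t = times == t
        d_all = np.sum(at_t & (causes != 0))              # all-cause events at t
        d_k = np.sum(at_t & (causes == cause_of_interest))
        cif += surv * d_k / n_at_risk                     # weight by S(t-)
        surv *= 1.0 - d_all / n_at_risk
        n_at_risk -= np.sum(at_t)   # events and censorings leave the risk set
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# Toy data: times with causes 1, 2, or 0 (censored).
t, cif = cumulative_incidence([2, 3, 3, 5, 8, 9], [1, 2, 0, 1, 0, 1], 1)
print(dict(zip(t.tolist(), np.round(cif, 3).tolist())))
```

Note how this differs from one minus a cause-specific Kaplan-Meier curve, which treats competing events as censorings and overstates the incidence; that is precisely the erroneous estimator the review warns against.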
NASA Astrophysics Data System (ADS)
Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal
2017-10-01
Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness, and heat. A modeling study is reported here on cross-station assessment between hyper-arid and humid conditions. The derived equations estimate ET0 values based on temperature-, radiation-, and mass-transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. The comparison of the gene expression programming (GEP)-derived equations with corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, when relying on proper input configurations, cross-station application might be a promising alternative to locally trained models for stations with data scarcity.
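For context, the temperature-based configurations mentioned above are benchmarked against standard empirical equations. The sketch below implements one well-known example (the Hargreaves equation); it is not one of the paper's GEP-derived formulas, and the inputs are hypothetical.

```python
import math

def hargreaves_et0(tmin_c, tmax_c, ra_mj_m2_day):
    """Reference ET0 in mm/day; Ra is extraterrestrial radiation in MJ/m^2/day.

    Standard Hargreaves form; the 0.408 factor converts MJ/m^2/day to mm/day
    of evaporated water.
    """
    tmean = (tmin_c + tmax_c) / 2.0
    return 0.0023 * 0.408 * ra_mj_m2_day * (tmean + 17.8) * math.sqrt(tmax_c - tmin_c)

print(hargreaves_et0(18.0, 36.0, 40.0))  # ~7.1 mm/day, a hypothetical hyper-arid day
```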
Peixoto, Henry Maia; Brito, Marcelo Augusto Mota; Romero, Gustavo Adolfo Sierra; Monteiro, Wuelton Marcelo; Lacerda, Marcus Vinícius Guimarães de; Oliveira, Maria Regina Fernandes de
2017-10-05
The aim of this study was to assess whether the top-down method, based on the average value identified in the Brazilian Hospitalization System (SIH/SUS), is a good estimator of the cost of health professionals per patient, using the bottom-up method for comparison. The study was developed in the context of hospital care offered to patients with glucose-6-phosphate dehydrogenase (G6PD) deficiency suffering a severe adverse event caused by the use of primaquine, in the Brazilian Amazon. The top-down method, based on spending with SIH/SUS professional services as a proxy for this cost, corresponded to R$60.71, and the bottom-up method, based on the salaries of the physician (R$30.43), nurse (R$16.33), and nursing technician (R$5.93), estimated a total cost of R$52.68. The difference was only R$8.03, which shows that the amounts paid by the Hospital Inpatient Authorization (AIH) are estimates close to those obtained by the bottom-up technique for the professionals directly involved in the care.
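The comparison reduces to simple arithmetic, reproduced below from the figures in the abstract; cent-level discrepancies reflect rounding in the source.

```python
# Bottom-up: sum of per-patient salary costs; top-down: SIH/SUS average.
bottom_up = {"physician": 30.43, "nurse": 16.33, "nursing_technician": 5.93}
top_down = 60.71

total_bottom_up = sum(bottom_up.values())    # R$ 52.69 (reported as R$ 52.68)
print(round(top_down - total_bottom_up, 2))  # difference of about R$ 8.03
```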
Trajectories of Delinquency among Puerto Rican Children and Adolescents at Two Sites
ERIC Educational Resources Information Center
Maldonado-Molina, Mildred M.; Piquero, Alex R.; Jennings, Wesley G.; Bird, Hector; Canino, Glorisa
2009-01-01
This study examined the trajectories of delinquency among Puerto Rican children and adolescents in two cultural contexts. Relying on data from the Boricua Youth Study, a longitudinal study of children and youth from Bronx, New York, and San Juan, Puerto Rico, a group-based trajectory procedure estimated the number of delinquency trajectories,…
Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844
ERIC Educational Resources Information Center
Falk, Carl F.; Cai, Li
2015-01-01
We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…
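A minimal sketch of such an item response function: a logistic of a monotonic polynomial with a lower asymptote c. Restricting the odd-power coefficients to be nonnegative is a simplification used here only to guarantee monotonicity; the report's actual parameterization is more general, and all parameter values are hypothetical.

```python
import math

def irf(theta, c, b0, a1, a3):
    """P(correct | theta) = c + (1 - c) * logistic(m(theta))."""
    assert a1 >= 0 and a3 >= 0, "nonnegative odd coefficients keep m() increasing"
    m = b0 + a1 * theta + a3 * theta ** 3   # monotonic polynomial in theta
    return c + (1.0 - c) / (1.0 + math.exp(-m))

for theta in (-2, 0, 2):
    print(theta, round(irf(theta, c=0.2, b0=-0.5, a1=1.2, a3=0.1), 3))
```

With the cubic coefficient set to zero this reduces to the familiar three-parameter logistic model, which is the sense in which the polynomial adds flexibility "beyond" 3PL.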
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
models exploit the immediate past only. To extract information from further back in the document's history, I use trigger pairs as the basic information... [table-of-contents fragments:] 2.2 Context-Free Estimation (Unigram); 2.3 Short-Term History (Conventional N-gram); 2.4 Short-Term Class History (Class-Based N-gram); 2.5 Intermediate Distance
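A toy sketch of the trigger-pair idea: score candidate word pairs by how strongly one word's earlier appearance in a document predicts another's, here via pointwise mutual information over document co-occurrence. The thesis itself uses average mutual information over long histories, so this is an approximation on made-up data.

```python
from collections import Counter
from itertools import combinations
import math

# Toy corpus: each document is a list of tokens.
docs = [["stock", "market", "shares", "market"],
        ["stock", "prices", "shares"],
        ["weather", "rain", "cold"]]

doc_sets = [set(d) for d in docs]
n = len(doc_sets)
word_df = Counter(w for s in doc_sets for w in s)                      # document freq.
pair_df = Counter(p for s in doc_sets for p in combinations(sorted(s), 2))

def pmi(a, b):
    """Pointwise mutual information of co-occurring in the same document."""
    p_ab = pair_df[tuple(sorted((a, b)))] / n
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / ((word_df[a] / n) * (word_df[b] / n)))

print(pmi("stock", "shares"), pmi("stock", "rain"))  # strong pair vs. no pair
```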
Long-term morbidity, mortality, and economics of rheumatoid arthritis.
Wong, J B; Ramey, D R; Singh, G
2001-12-01
To estimate the morbidity, mortality, and lifetime costs of care for rheumatoid arthritis (RA). We developed a Markov model based on the Arthritis, Rheumatism, and Aging Medical Information System Post-Marketing Surveillance Program cohort, involving 4,258 consecutively enrolled RA patients who were followed up for 17,085 patient-years. Markov states of health were based on drug treatment and Health Assessment Questionnaire scores. Costs were based on resource utilization, and utilities were based on visual analog scale-based general health scores. The cohort had a mean age of 57 years, 76.4% were women, and the mean duration of disease was 11.8 years. Compared with a life expectancy of 22.0 years for the general population, this cohort had a life expectancy of 18.6 years and 11.3 quality-adjusted life years. Lifetime direct medical care costs were estimated to be $93,296. Higher costs were associated with higher disability scores. A Markov model can be used to estimate lifelong morbidity, mortality, and costs associated with RA, providing a context in which to consider the potential value of new therapies for the disease.
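A minimal Markov cohort sketch of the kind of model described: disability-based health states plus death, a yearly transition matrix, and per-state costs and utilities accumulated over the horizon. All transition probabilities, costs, and utilities are placeholders, not the ARAMIS-based estimates.

```python
import numpy as np

# Placeholder model: three disability states plus death.
states = ["mild", "moderate", "severe", "dead"]
P = np.array([[0.85, 0.10, 0.02, 0.03],    # yearly transition probabilities;
              [0.05, 0.80, 0.10, 0.05],    # each row sums to 1
              [0.01, 0.05, 0.86, 0.08],
              [0.00, 0.00, 0.00, 1.00]])
cost = np.array([2000.0, 4000.0, 9000.0, 0.0])   # $/year by state (assumed)
utility = np.array([0.80, 0.60, 0.40, 0.0])      # QALY weights (assumed)

dist = np.array([1.0, 0.0, 0.0, 0.0])            # cohort starts in 'mild'
life_years = qalys = dollars = 0.0
for _ in range(40):                              # 40-year horizon
    life_years += dist[:3].sum()                 # expected years alive this cycle
    qalys += dist @ utility
    dollars += dist @ cost
    dist = dist @ P                              # advance the cohort one year

print(round(life_years, 1), round(qalys, 1), round(dollars))
```

Discounting and half-cycle corrections, which a publishable model would include, are omitted for brevity.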
Bernstein, Leslie R; Trahiotis, Constantine
2017-02-01
Interaural cross-correlation-based models of binaural processing have accounted successfully for a wide variety of binaural phenomena, including binaural detection, binaural discrimination, and measures of extents of laterality based on interaural temporal disparities, interaural intensitive disparities, and their combination. This report focuses on quantitative accounts of data obtained from binaural detection experiments published over five decades. Particular emphasis is placed on stimulus contexts for which commonly used correlation-based approaches fail to provide adequate explanations of the data. One such context concerns binaural detection of signals masked by certain noises that are narrow-band and/or interaurally partially correlated. It is shown that a cross-correlation-based model that includes stages of peripheral auditory processing can, when coupled with an appropriate decision variable, account well for a wide variety of classic and recently published binaural detection data including those that have, heretofore, proven to be problematic.
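At the core of these models is the normalized interaural cross-correlation as a function of internal delay. The sketch below computes it directly on waveforms, omitting the stages of peripheral auditory filtering that the cited model adds.

```python
import numpy as np

def interaural_correlation(left, right, max_lag):
    """Normalized cross-correlation rho(k) = sum_i left[i]*right[i+k] / norm.

    Computed circularly for simplicity; a real model would window the signals.
    """
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    lags = np.arange(-max_lag, max_lag + 1)
    rho = np.array([np.sum(left * np.roll(right, -k)) / denom for k in lags])
    return lags, rho

rng = np.random.default_rng(0)
left = rng.normal(size=2000)
right = np.roll(left, 5)                # right channel delayed by 5 samples
lags, rho = interaural_correlation(left, right, 10)
print(lags[np.argmax(rho)])             # recovers the 5-sample interaural delay
```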
A Pattern-Based Definition of Urban Context Using Remote Sensing and GIS
Benza, Magdalena; Weeks, John R.; Stow, Douglas A.; López-Carr, David; Clarke, Keith C.
2016-01-01
In Sub-Saharan Africa rapid urban growth combined with rising poverty is creating diverse urban environments, the nature of which are not adequately captured by a simple urban-rural dichotomy. This paper proposes an alternative classification scheme for urban mapping based on a gradient approach for the southern portion of the West African country of Ghana. Landsat Enhanced Thematic Mapper Plus (ETM+) and European Remote Sensing Satellite-2 (ERS-2) synthetic aperture radar (SAR) imagery are used to generate a pattern based definition of the urban context. Spectral mixture analysis (SMA) is used to classify a Landsat scene into Built, Vegetation and Other land covers. Landscape metrics are estimated for Built and Vegetation land covers for a 450 meter uniform grid covering the study area. A measure of texture is extracted from the SAR imagery and classified as Built/Non-built. SMA based measures of Built and Vegetation fragmentation are combined with SAR texture based Built/Non-built maps through a decision tree classifier to generate a nine class urban context map capturing the transition from unsettled land at one end of the gradient to the compact urban core at the other end. Training and testing of the decision tree classifier was done using very high spatial resolution reference imagery from Google Earth. An overall classification agreement of 77% was determined for the nine-class urban context map, with user’s accuracy (commission errors) being lower than producer’s accuracy (omission errors). Nine urban contexts were classified and then compared with data from the 2000 Census of Ghana. Results suggest that the urban classes appropriately differentiate areas along the urban gradient. PMID:27867227
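The final classification step can be sketched as a decision tree mapping per-grid-cell inputs (SMA-derived fragmentation metrics and a SAR-texture built fraction) to urban-context classes. Features, labels, and training data below are synthetic placeholders, not the study's 450 m grid.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-cell features: built fragmentation, vegetation
# fragmentation, and SAR-texture built fraction, each scaled to [0, 1].
rng = np.random.default_rng(0)
X = rng.random((300, 3))
# Fabricate 9 ordered classes from a weighted combination of the features,
# mimicking a gradient from unsettled land to compact urban core.
y = np.digitize(X @ np.array([0.5, -0.3, 0.8]), np.linspace(-0.2, 1.1, 8))

tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(tree.score(X, y))   # training agreement on the synthetic grid cells
```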
A Tale of Two “Forests”: Random Forest Machine Learning Aids Tropical Forest Carbon Mapping
Mascaro, Joseph; Asner, Gregory P.; Knapp, David E.; Kennedy-Bowdoin, Ty; Martin, Roberta E.; Anderson, Christopher; Higgins, Mark; Chadwick, K. Dana
2014-01-01
Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, with and without spatial contextual modeling; in the spatial-context run, x and y position entered the model directly as predictors. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (the so-called "out-of-bag" validation), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% for Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha−1 when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation. PMID:24489686
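A minimal sketch of the spatial-context variant on synthetic data: x and y coordinates enter the Random Forest as ordinary predictors, and validation uses a spatially held-out half of the area rather than the internal out-of-bag estimate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic placeholders: map coordinates, covariates, and a carbon field with
# spatial structure plus covariate signal and noise.
rng = np.random.default_rng(0)
n = 2000
xy = rng.random((n, 2))                       # map coordinates in [0, 1]
covs = rng.random((n, 4))                     # e.g. elevation, reflectance bands
carbon = 80 + 30 * np.sin(3 * xy[:, 0]) + 10 * covs[:, 0] + rng.normal(0, 5, n)

train = xy[:, 0] < 0.5                        # spatial (not random) hold-out split
X = np.hstack([covs, xy])                     # coordinates as ordinary predictors
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], carbon[train])
print(rf.score(X[~train], carbon[~train]))    # R^2 on the withheld half
```

The spatial hold-out matters: out-of-bag samples sit interspersed among training points, so they flatter a model that merely interpolates coordinates, which is the pitfall the abstract notes.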
Adolescent judgments and reasoning about the failure to include peers with social disabilities.
Bottema-Beutel, Kristen; Li, Zhushan
2015-06-01
Adolescents with autism spectrum disorder often do not have access to crucial peer social activities. This study examines how typically developing adolescents evaluate decisions not to include a peer based on disability status, and the justifications they apply to these decisions. A clinical interview methodology was used to elicit judgments and justifications across four contexts. We found adolescents are more likely to judge the failure to include as acceptable in personal as compared to public contexts. Using logistic regression, we found that adolescents are more likely to provide moral justifications as to why failure to include is acceptable in a classroom as compared to home, lab group, and soccer practice contexts. Implications for intervention are also discussed.
Psychometric Properties of IRT Proficiency Estimates
ERIC Educational Resources Information Center
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
Lyon, Aaron R; Whitaker, Kelly; Locke, Jill; Cook, Clayton R; King, Kevin M; Duong, Mylien; Davis, Chayna; Weist, Mark D; Ehrhart, Mark G; Aarons, Gregory A
2018-02-07
Integrated healthcare delivered by work groups in nontraditional service settings is increasingly common, yet contemporary implementation frameworks typically assume a single organization-or organizational unit-within which system-level processes influence service quality and implementation success. Recent implementation frameworks predict that inter-organizational alignment (i.e., similarity in values, characteristics, activities related to implementation across organizations) may facilitate the implementation of evidence-based practices (EBP), but few studies have evaluated this premise. This study's aims address the impact of overlapping organizational contexts by evaluating the implementation contexts of externally employed mental health clinicians working in schools, the most common integrated service delivery setting for children and adolescents. Aim 1 is to estimate the effects of unique intra-organizational implementation contexts and combined inter-organizational alignment on implementation outcomes. Aim 2 is to examine the underlying mechanisms through which inter-organizational alignment facilitates or hinders EBP implementation. This study will conduct sequential, exploratory mixed-methods research to evaluate the intra- and inter-organizational implementation contexts of schools and the external community-based organizations that most often employ school-based mental health clinicians, as they relate to mental health EBP implementation. Aim 1 will involve quantitative surveys with school-based, externally-employed mental health clinicians, their supervisors, and proximal school-employed staff (total n = 120 participants) to estimate the effects of each organization's general and implementation-specific organizational factors (e.g., climate, leadership) on implementation outcomes (fidelity, acceptability, appropriateness) and assess the moderating role of the degree of clinician embeddedness in the school setting. Aim 2 will explore the mechanisms through which inter-organizational alignment influences implementation outcomes by presenting the results of Aim 1 surveys to school-based clinicians (n = 30) and conducting semi-structured qualitative interviews. Qualitative data will be evaluated using an integrative inductive and deductive approach. The study aims are expected to identify intra- and inter-organizational constructs that are most instrumental to EBP implementation success in school-based integrated care settings and illuminate mechanisms that may account for the influence of inter-organizational alignment. In addition to improving school-based mental health, these findings will spur future implementation science that considers the relationships across organizations and optimize the capacity of implementation science to guide practice in increasingly complex systems of care.
2013-01-01
Background: Organizational context is recognized as an important influence on the successful implementation of research by healthcare professionals. However, there is relatively little empirical evidence to support this widely held view. Methods: The objective of this study was to identify dimensions of organizational context and individual (nurse) characteristics that influence pediatric nurses' self-reported use of research. Data on research use, individual, and contextual variables were collected from registered nurses (N = 735) working on 32 medical, surgical, and critical care units in eight Canadian pediatric hospitals using an online survey. We used Generalized Estimating Equation modeling to account for the correlated structure of the data and to identify which contextual dimensions and individual characteristics predict two kinds of self-reported research use: instrumental (direct) and conceptual (indirect). Results: Significant predictors of instrumental research use included, at the individual level, belief suspension-implement and research use in the past, and, at the hospital unit (context) level, culture and the proportion of nurses possessing a baccalaureate degree or higher. Significant predictors of conceptual research use included, at the individual nurse level, belief suspension-implement, problem-solving ability, and use of research in the past, and, at the hospital unit (context) level, leadership, culture, evaluation, formal interactions, informal interactions, organizational slack-space, and unit specialty. Conclusions: Hospitals, by focusing attention on modifiable elements of unit context, may positively influence nurses' reported use of research. This influence of context may extend to the adoption of best practices in general and other innovative or quality interventions. PMID:24034149
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
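A minimal sketch of the kind of correction involved, in the spirit of Hausman-style misclassification-robust binary choice estimation: with known misclassification rates a0 and a1, the observation likelihood is written in terms of the contaminated probability. Data are synthetic and the rates are assumed known from an accuracy assessment; the dissertation's actual estimators may differ.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, beta_true = 5000, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_true = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))   # latent land use

a0, a1 = 0.05, 0.10   # P(obs=1 | true=0) and P(obs=0 | true=1), assumed known
y_obs = np.where(y_true, rng.random(n) >= a1, rng.random(n) < a0)

def negll(beta, correct):
    p = 1 / (1 + np.exp(-X @ beta))
    if correct:
        # Contaminated probability: P(obs=1) = (1 - a1) * p + a0 * (1 - p)
        p = (1 - a1) * p + a0 * (1 - p)
    return -np.sum(np.where(y_obs, np.log(p), np.log(1 - p)))

naive = minimize(negll, [0.0, 0.0], args=(False,)).x
corrected = minimize(negll, [0.0, 0.0], args=(True,)).x
print(naive, corrected)   # the corrected slope lands much closer to 1.0
```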
Quantifying the life-history response to increased male exposure in female Drosophila melanogaster.
Edward, Dominic A; Fricke, Claudia; Gerrard, Dave T; Chapman, Tracey
2011-02-01
Precise estimates of the costs and benefits, the fitness economics, of mating are of key importance in understanding how selection shapes the coevolution of male and female mating traits. However, fitness is difficult to define and quantify. Here, we used a novel application of an established analytical technique to calculate individual- and population-based estimates of fitness, including those sensitive to the timing of reproduction, to measure the effects on females of increased exposure to males. Drosophila melanogaster females were exposed to high and low frequencies of contact with males, and life-history traits for each individual female were recorded. We then compared different fitness estimates to determine which of them best described the changes in life histories. We predicted that rate-sensitive estimates would be more accurate, as mating influences the rate of offspring production in this species. The results supported this prediction. Increased exposure to males led to significantly decreased fitness within declining but not stable or increasing populations. There was a net benefit of increased male exposure in expanding populations, despite a significant decrease in lifespan. The study shows how a more accurate description of fitness, and new insights, can be achieved by considering individual life-history strategies within the context of population growth. © 2010 The Author(s). Evolution © 2010 The Society for the Study of Evolution.
Duintjer Tebbens, Radboud J; Thompson, Kimberly M
2017-07-05
Recognizing that infectious agents readily cross international borders, the International Health Regulations Emergency Committee issues Temporary Recommendations (TRs) that include vaccination of travelers from countries affected by public health emergencies, including serotype 1 wild polioviruses (WPV1s). This analysis estimates the costs and benefits of TRs implemented by countries with reported WPV1 during 2014-2016 while accounting for numerous uncertainties. We estimate the TR costs based on programmatic data and prior economic analyses, and the TR benefits by simulating potential WPV1 outbreaks in the absence of the TRs, using the rate and extent of WPV1 importation outbreaks per reported WPV1 case during 2004-2013 and the number of reported WPV1 cases that occurred in countries with active TRs. The benefits of TRs outweigh the costs in 77% of model iterations, resulting in expected incremental net economic benefits of $210 million. Inclusion of indirect costs increases the costs by 13%, the expected savings from prevented outbreaks by 4%, and the expected incremental net benefits by 3%. Despite the considerable costs of implementing TRs, this study provides health and economic justification for these investments in the context of managing a disease in advanced stages of its global eradication. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
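Structurally, the probabilistic comparison can be sketched as a Monte Carlo draw over uncertain costs and benefits. The lognormal parameters below are placeholders chosen only to illustrate reporting P(net > 0) and the expected net benefit; they are not the study's distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Assumed uncertainty distributions (placeholders, in dollars).
costs = rng.lognormal(mean=np.log(60e6), sigma=0.4, size=n)
benefits = rng.lognormal(mean=np.log(250e6), sigma=0.9, size=n)

net = benefits - costs
print(f"P(net > 0) = {np.mean(net > 0):.2f}, "
      f"E[net] = ${np.mean(net) / 1e6:.0f}M")
```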
ERIC Educational Resources Information Center
Iordanou, Kalypso; Constantinou, Costas P.
2015-01-01
The aim of this study was to examine how students used evidence in argumentation while they engaged in argumentive and reflective activities in the context of a designed learning environment. A Web-based learning environment, SOCRATES, was developed, which included a rich data base on the topic of climate change. Sixteen 11th graders, working with…
The value of value-based insurance design: savings from eliminating drug co-payments.
Maeng, Daniel D; Pitcavage, James M; Snyder, Susan R; Davis, Duane E
2016-02-01
To estimate the cost impact of a $0 co-pay prescription drug program implemented by a large healthcare employer as a part of its employee wellness program. A $0 co-pay program that included approximately 200 antihypertensive, antidiabetic, and antilipid medications was offered to Geisinger Health System (GHS) employees covered by Geisinger Health Plan (GHP) in 2007. Claims data from GHP for the years 2005 to 2011 were obtained. The sample was restricted to continuously enrolled members with Geisinger primary care providers throughout the study period. The intervention group, defined as 2251 GHS employees receiving any of the drugs eligible for $0 co-pay, was propensity score matched based on 2 years of pre-intervention claims data to a comparison group, which was defined as 3857 non-GHS employees receiving the same eligible drugs at the same time. Generalized linear models were used to estimate differences in terms of per-member-per-month (PMPM) claims amounts related to prescription drugs and medical care. Total healthcare spending (medical plus prescription drug spending) among the GHS employees was lower by $144 PMPM (13%; 95% CI, $38-$250) during the months when they were taking any of the eligible drugs. Considering the drug acquisition cost and the forgone co-pay, the estimated return on investment over a 5-year period was 1.8. This finding suggests that VBID implementation within the context of a wider employee wellness program targeting the appropriate population can potentially lead to positive cost savings.
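The design step can be sketched as 1:1 nearest-neighbor matching on an estimated propensity score. Covariates, sample sizes, and the matching-with-replacement choice below are illustrative, not the study's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic placeholders: pre-period covariates and treatment assignment.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))     # e.g. age, pre-period spending, risk score
treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 1]))

# Propensity score: P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 1:1 nearest-neighbor matching (with replacement) on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = np.flatnonzero(~treated)[idx.ravel()]
print(len(matched_controls), "controls matched to treated members")
```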
Alonso, Ariel; Laenen, Annouschka
2013-05-01
Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
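One common way to operationalize model averaging in this setting is Akaike weighting of the per-model reliability coefficients, sketched below with placeholder AIC values; the cited work develops the approach in far more detail.

```python
import numpy as np

# Placeholder inputs: AIC for competing longitudinal covariance models and
# the reliability coefficient each model implies.
aic = np.array([1012.4, 1013.1, 1018.9])
reliability = np.array([0.82, 0.78, 0.70])

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                       # Akaike weights sum to 1
print(float(w @ reliability))      # model-averaged reliability estimate
```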
Guiding resource allocations based on terrorism risk.
Willis, Henry H
2007-06-01
Establishing tolerable levels of risk is one of the most contentious and important risk management decisions. With every regulatory or funding decision for a risk management program, society decides whether or not risk is tolerable. The Urban Area Security Initiative (UASI) is a Department of Homeland Security (DHS) grant program designed to enhance security and overall preparedness to prevent, respond to, and recover from acts of terrorism by providing financial assistance for planning, equipment, training, and exercise needs of large urban areas. After briefly reviewing definitions of terrorism risk and rationales for risk-based resource allocation, this article compares estimates of terrorism risk in urban areas that received UASI funding in 2004 to other federal risk management decisions. This comparison suggests that UASI allocations are generally consistent with other federal risk management decisions. However, terrorism risk in several cities that received funding is below levels that are often tolerated in other risk management contexts. There are several reasons why the conclusions about terrorism risk being de minimis in specific cities should be challenged. Some of these surround the means used to estimate terrorism risk for this study. Others involve the comparison that is made to other risk management decisions. However, many of the observations reported are valid even if reported terrorism risk estimates are several orders of magnitude too low. Discussion of resource allocation should be extended to address risk tolerance and include explicit comparisons, like those presented here, to other risk management decisions.
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
Prediction of Water Binding to Protein Hydration Sites with a Discrete, Semiexplicit Solvent Model.
Setny, Piotr
2015-12-08
Buried water molecules are ubiquitous in protein structures and are found at the interface of most protein-ligand complexes. Determining their distribution and thermodynamic effect is a challenging yet important task, of great practical value for the modeling of biomolecular structures and their interactions. In this study, we present a novel method aimed at the prediction of buried water molecules in protein structures and estimation of their binding free energies. It is based on a semiexplicit, discrete solvation model, which we previously introduced in the context of small molecule hydration. The method is applicable to all macromolecular structures described by a standard all-atom force field, and predicts complete solvent distribution within a single run with modest computational cost. We demonstrate that it indicates positions of buried hydration sites, including those filled by more than one water molecule, and accurately differentiates them from regions that are sterically accessible to water but void. The obtained estimates of water binding free energies are in fair agreement with reference results determined with the double decoupling method.
Uhrich, Mark A.; Kolasinac, Jasna; Booth, Pamela L.; Fountain, Robert L.; Spicer, Kurt R.; Mosbrucker, Adam R.
2014-01-01
Researchers at the U.S. Geological Survey, Cascades Volcano Observatory, investigated alternative methods to the traditional sample-based sediment record procedure for determining suspended-sediment concentration (SSC) and discharge. One such sediment-surrogate technique was developed using turbidity and discharge to estimate SSC for two gaging stations in the Toutle River Basin near Mount St. Helens, Washington. To provide context for the study, methods for collecting sediment data and monitoring turbidity are discussed. Statistical methods used include the development of ordinary least squares regression models for each gaging station. Issues of time-related autocorrelation also are evaluated. Addition of lagged explanatory variables was used to account for autocorrelation in the turbidity, discharge, and SSC data. Final regression model equations and plots are presented for the two gaging stations. The regression models support near-real-time estimates of SSC and improved suspended-sediment discharge records by incorporating continuous instream turbidity. Future use of such models may potentially lower the costs of sediment monitoring by reducing the time it takes to collect and process samples and to derive a sediment-discharge record.
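A minimal sketch of the model form on synthetic data: log-SSC regressed on log turbidity and log discharge, with lagged terms added to absorb autocorrelation, as the record describes. Variable names and the synthetic series are generic, not station-specific, and a real record would also apply a bias correction when back-transforming from log space.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic smooth, positive turbidity and discharge series.
rng = np.random.default_rng(0)
n = 500
turb = np.exp(rng.normal(3, 1, n)).cumsum() / np.arange(1, n + 1)  # running mean
q = np.exp(rng.normal(4, 0.5, n))
ssc = np.exp(0.5 + 0.8 * np.log(turb) + 0.3 * np.log(q) + rng.normal(0, 0.2, n))

df = pd.DataFrame({"lssc": np.log(ssc), "lturb": np.log(turb), "lq": np.log(q)})
df["lturb_lag1"] = df["lturb"].shift(1)   # lagged explanatory variables
df["lssc_lag1"] = df["lssc"].shift(1)
df = df.dropna()

X = sm.add_constant(df[["lturb", "lq", "lturb_lag1", "lssc_lag1"]])
fit = sm.OLS(df["lssc"], X).fit()
print(fit.params)
```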
The costs and cost-efficiency of providing food through schools in areas of high food insecurity.
Gelli, Aulo; Al-Shaiba, Najeeb; Espejo, Francisco
2009-03-01
The provision of food in and through schools has been used to support the education, health, and nutrition of school-aged children. The monitoring of financial inputs into school health and nutrition programs is critical for a number of reasons, including accountability, transparency, and equity. Furthermore, there is a gap in the evidence on the costs, cost-efficiency, and cost-effectiveness of providing food through schools, particularly in areas of high food insecurity. To estimate the programmatic costs and cost-efficiency associated with providing food through schools in food-insecure, developing-country contexts, by analyzing global project data from the World Food Programme (WFP). Project data, including expenditures and number of schoolchildren covered, were collected through project reports and validated through WFP Country Office records. Yearly project costs per schoolchild were standardized over a set number of feeding days and the amount of energy provided by the average ration. Output metrics, such as tonnage, calories, and micronutrient content, were used to assess the cost-efficiency of the different delivery mechanisms. The average yearly expenditure per child, standardized over a 200-day on-site feeding period and an average ration, excluding school-level costs, was US$21.59. The costs varied substantially according to choice of food modality, with fortified biscuits providing the least costly option of about US$11 per year and take-home rations providing the most expensive option at approximately US$52 per year. Comparisons across the different food modalities suggested that fortified biscuits provide the most cost-efficient option in terms of micronutrient delivery (particularly vitamin A and iodine), whereas on-site meals appear to be more efficient in terms of calories delivered. Transportation and logistics costs were the main drivers of the high costs. The choice of program objectives will to a large degree dictate the food modality (biscuits, cooked meals, or take-home rations) and associated implementation costs. Fortified biscuits can provide substantial nutritional inputs at a fraction of the cost of school meals, making them an appealing option for service delivery in food-insecure contexts. Both costs and effects should be considered carefully when designing the appropriate school-based intervention. The cost estimates in this analysis do not include all school-level costs and are therefore lower-bound estimates of full implementation costs.
NASA Technical Reports Server (NTRS)
Toll, David; Doorn, Brad; Lawford, Rick; Anderson, Martha; Allen, Rick; Martin, Timothy; Wood, Eric; Ferguson, Craig
2010-01-01
The amount of evapotranspiration (ET) to the atmosphere can account for 60% or more of the water loss in many semi-arid locations, and can critically affect local economies tied to agriculture, recreation, hydroelectric power, ecosystems, and numerous other water-related areas. NASA supports many activities using satellite and Earth science data to more accurately and cost effectively estimate ET. NASA ET-related work includes the research, development, and application of techniques. The free and open access of NASA satellite data and products now permits a much wider application of ET mapping. Typically, the NASA-supported approaches range from large regional and continental ET mapping using MODIS (also with AIRS and CERES), GRACE (gravimetric water balance), geostationary satellites (e.g., GOES and Meteosat at near-continental scale), and land surface modeling (i.e., Land Data Assimilation Systems) to fine-scale mapping such as that provided by Landsat data (<100 m). Usually, satellite or airborne thermal imagery is used as input to a surface-energy-balance-based approach for estimating ET. Several of these ET approaches are currently under development and implementation, including 'METRIC', 'SEBS', and 'ALEXI/DisALEXI'. One exception is an approach using GRACE satellite data that estimates terrestrial water storage using gravimetric data over large areas and estimates ET indirectly. Also, land surface modeling within the context of data assimilation and integration schemes provides the capability to integrate in situ, ancillary, and satellite data to produce spatially distributed, synoptic estimates of ET, which can also be used for short-term ET predictions. We will summarize NASA-related activities contributing to the improved estimation of ET for water management and agriculture, with an emphasis on the western U.S. This summary includes a description of ET projects in the Middle Rio Grande, Yakima, North Platte and other selected basins in the western US. We will also discuss plans to further address ET applications through working with the USDA and the Group on Earth Observations (GEO) to extend and evaluate western U.S. ET mapping to other parts of the U.S. and internationally.
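For readers unfamiliar with the surface-energy-balance approaches named above, the following minimal sketch shows the common residual idea: latent heat flux is what remains of net radiation after subtracting soil and sensible heat fluxes, and is then converted to an ET rate. The scalar inputs and the constant latent heat of vaporization are simplifying assumptions; operational methods such as METRIC or SEBS derive these terms per pixel from thermal imagery.

```python
LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (approx. near 20 C)

def latent_heat_flux(rn: float, g: float, h: float) -> float:
    """Energy balance residual: LE = Rn - G - H, all fluxes in W/m^2."""
    return rn - g - h

def et_mm_per_hour(le: float) -> float:
    """Convert latent heat flux (W/m^2) to an evapotranspiration rate (mm/h)."""
    return le / LAMBDA * 3600.0  # kg/m^2/s is numerically mm/s of water

le = latent_heat_flux(rn=500.0, g=50.0, h=150.0)   # -> 300 W/m^2
print(round(et_mm_per_hour(le), 2))                # ~0.44 mm/h
```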
What drives patient mobility across Italian regions? Evidence from hospital discharge data.
Balia, Silvia; Brau, Rinaldo; Marrocu, Emanuela
2014-01-01
This chapter examines patient mobility across Italian regions using data on hospital discharges that occurred in 2008. The econometric analysis is based on Origin-Destination (OD) flow data. Since patient mobility is a crucial phenomenon in contexts of hospital competition based on quality and driven by patient choice, as is the case in Italy, it is important to understand its determinants. What makes the Italian case more interesting is the decentralization of the National Health Service that yields large regional variation in patient flows in favor of Centre-Northern regions, which typically are 'net exporters' of hospital treatments. We present results from gravity models estimated using count data estimators, for total and specific types of flows (ordinary admissions, surgical DRGs and medical DRGs). We model cross-section dependence by specifically including features other than geographical distance for OD pairs, such as past migration flows and the share of surgical DRGs. Most of the explanatory variables exhibit the expected effect, with distance and GDP per capita at origin showing a negative impact on patient outflows. Past migrations and indicators of performance at destination are effective determinants of patient mobility. Moreover, we find evidence of regional externalities due to spatial proximity effects at both origin and destination.
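A hedged sketch of a gravity model with a count-data estimator of the kind described here: origin-destination patient counts regressed, via a Poisson GLM, on log distance, log GDP per capita at origin, and past migration. The column names and exact specification are illustrative assumptions, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gravity(flows: pd.DataFrame):
    """flows columns (assumed): 'n_patients', 'distance_km', 'gdp_pc_origin', 'past_migration'."""
    model = smf.glm(
        "n_patients ~ np.log(distance_km) + np.log(gdp_pc_origin) + np.log1p(past_migration)",
        data=flows,
        family=sm.families.Poisson(),  # count-data estimator for OD flows
    )
    return model.fit()
```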
Health by Design: Interweaving Health Promotion into Environments and Settings
Springer, Andrew E.; Evans, Alexandra E.; Ortuño, Jaquelin; Salvo, Deborah; Varela Arévalo, Maria Teresa
2017-01-01
The important influence of the environmental context on health and health behavior—which includes place, settings, and the multiple environments within place and settings—has directed health promotion planners from a focus solely on changing individuals, toward a focus on harnessing and changing context for individual and community health promotion. Health promotion planning frameworks such as Intervention Mapping provide helpful guidance in addressing various facets of the environmental context in health intervention design, including the environmental factors that influence a given health condition or behavior, environmental agents that can influence a population’s health, and environmental change methods. In further exploring how to harness the environmental context for health promotion, we examine in this paper the concept of interweaving of health promotion into context, defined as weaving or blending together health promotion strategies, practices, programs, and policies to fit within, complement, and build from existing settings and environments. Health promotion interweaving stems from current perspectives in health intervention planning, improvement science and complex systems thinking by guiding practitioners from a conceptualization of context as a backdrop to intervention, to one that recognizes context as integral to the intervention design and to the potential to directly influence health outcomes. In exploring the general approach of health promotion interweaving, we examine selected theoretical and practice-based interweaving concepts in relation to four key environments (the policy environment, the information environment, the social/cultural/organizational environment, and the physical environment), followed by evidence-based and practice-based examples of health promotion interweaving from the literature. Interweaving of health promotion into context is a common practice for health planners in designing health promotion interventions, yet one which merits further intentionality as a specific health promotion planning design approach. PMID:29043248
Estimating the relative utility of screening mammography.
Abbey, Craig K; Eckstein, Miguel P; Boone, John M
2013-05-01
The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trial (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
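A minimal sketch of the slope-prevalence relation this estimation rests on: at an expected-utility-optimal operating point, the ROC slope m satisfies m = (1 - p) / (p * U_rel), so relative utility can be recovered as U_rel = (1 - p) / (p * m). With the DMIST prevalence of 0.59% and a slope near unity, this reproduces values close to those reported.

```python
def relative_utility(slope: float, prevalence: float) -> float:
    """Invert the optimal-operating-point relation: U_rel = (1 - p) / (p * m)."""
    return (1.0 - prevalence) / (prevalence * slope)

# ROC slope near unity at the 365-day DMIST prevalence of 0.59%:
print(round(relative_utility(slope=1.0, prevalence=0.0059)))  # ~168
```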
NASA Astrophysics Data System (ADS)
Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.
2017-12-01
Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maxima, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for threshold selection have been suggested to guide this important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions, as well as to investigate the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
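As a generic illustration (not the specific procedures evaluated in the study), one common automatic threshold-selection idea fits a Generalized Pareto Distribution to the exceedances over each candidate threshold and keeps the lowest threshold at which a goodness-of-fit test no longer rejects the GPD:

```python
import numpy as np
from scipy import stats

def select_threshold(peaks: np.ndarray, candidates: np.ndarray, alpha: float = 0.05):
    """Return the lowest candidate threshold whose exceedances pass a GPD fit test."""
    for u in np.sort(candidates):
        exceed = peaks[peaks > u] - u
        if exceed.size < 30:              # too few peaks left to test reliably
            break
        c, _, scale = stats.genpareto.fit(exceed, floc=0.0)
        # An Anderson-Darling variant would be typical; KS is used here for brevity.
        _, pval = stats.kstest(exceed, "genpareto", args=(c, 0.0, scale))
        if pval > alpha:
            return u
    return None
```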
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a Bayesian mixed model approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
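A hedged sketch of the "traditional" calibration that the Bayesian mixed model is compared against: a per-assay linear fit of the QMM signal against log10 standard densities, inverted to estimate densities of unknowns. It deliberately ignores inter-assay dependence and density-dependent precision, which is precisely what the Bayesian approach addresses.

```python
import numpy as np

def calibrate(signal_std: np.ndarray, density_std: np.ndarray):
    """Fit signal = a + b * log10(density) on the standards of one assay."""
    b, a = np.polyfit(np.log10(density_std), signal_std, 1)  # slope, intercept
    return a, b

def estimate_density(signal: np.ndarray, a: float, b: float) -> np.ndarray:
    """Invert the standard curve to estimate pathogen densities of unknowns."""
    return 10.0 ** ((signal - a) / b)
```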
Toth, Peter P; Danese, Mark; Villa, Guillermo; Qian, Yi; Beaubrun, Anne; Lira, Armando; Jansen, Jeroen P
2017-06-01
To estimate real-world cardiovascular disease (CVD) burden and value-based price range of evolocumab for a US-context, high-risk, secondary-prevention population. Burden of CVD was assessed using the UK-based Clinical Practice Research Datalink (CPRD) in order to capture complete CV burden including CV mortality. Patients on standard of care (SOC; high-intensity statins) in CPRD were selected based on eligibility criteria of FOURIER, a phase 3 CV outcomes trial of evolocumab, and categorized into four cohorts: high-risk prevalent atherosclerotic CVD (ASCVD) cohort (n = 1448), acute coronary syndrome (ACS) (n = 602), ischemic stroke (IS) (n = 151), and heart failure (HF) (n = 291) incident cohorts. The value-based price range for evolocumab was assessed using a previously published economic model. The model incorporated CPRD CV event rates and considered CV event reduction rate ratios per 1 mmol/L reduction in low-density lipoprotein-cholesterol (LDL-C) from a meta-analysis of statin trials by the Cholesterol Treatment Trialists Collaboration (CTTC), i.e. CTTC relationship. Multiple-event rates of composite CV events (ACS, IS, or coronary revascularization) per 100 patient-years were 12.3 for the high-risk prevalent ASCVD cohort, and 25.7, 13.3, and 23.3, respectively, for incident ACS, IS, and HF cohorts. Approximately one-half (42%) of the high-risk ASCVD patients with a new CV event during follow-up had a subsequent CV event. Combining these real-world event rates and the CTTC relationship in the economic model, the value-based price range (credible interval) under a willingness-to-pay threshold of $150,000/quality-adjusted life-year gained for evolocumab was $11,990 ($9,341-$14,833) to $16,856 ($12,903-$20,678) in ASCVD patients with baseline LDL-C levels ≥70 mg/dL and ≥100 mg/dL, respectively. Real-world CVD burden is substantial. Using the observed CVD burden in CPRD and the CTTC relationship, the cost-effectiveness analysis showed that, accounting for uncertainties, the expected value-based price for evolocumab is higher than its current annual cost, as long as the payer discount off list price is greater than 20%.
Cipoli, Daniel E; Martinez, Edson Z; Castro, Margaret de; Moreira, Ayrton C
2012-12-01
To estimate the pretest probability of Cushing's syndrome (CS) diagnosis by a Bayesian approach using intuitive clinical judgment. Physicians were requested, in seven endocrinology meetings, to answer three questions: "Based on your personal expertise, after obtaining clinical history and physical examination, without using laboratorial tests, what is your probability of diagnosing Cushing's Syndrome?"; "For how long have you been practicing Endocrinology?"; and "Where do you work?". A Bayesian beta regression, using the WinBUGS software, was employed. We obtained 294 questionnaires. The mean pretest probability of CS diagnosis was 51.6% (95%CI: 48.7-54.3). The probability was directly related to experience in endocrinology, but not to the place of work. Pretest probability of CS diagnosis was estimated using a Bayesian methodology. Although pretest likelihood can be context-dependent, experience based on years of practice may help the practitioner to diagnose CS.
NASA Astrophysics Data System (ADS)
Lobuglio, Joseph N.; Characklis, Gregory W.; Serre, Marc L.
2007-03-01
Sparse monitoring data and error inherent in water quality models make the identification of waters not meeting regulatory standards uncertain. Additional monitoring can be implemented to reduce this uncertainty, but it is often expensive. These costs are currently a major concern, since developing total maximum daily loads, as mandated by the Clean Water Act, will require assessing tens of thousands of water bodies across the United States. This work uses the Bayesian maximum entropy (BME) method of modern geostatistics to integrate water quality monitoring data together with model predictions to provide improved estimates of water quality in a cost-effective manner. This information includes estimates of uncertainty and can be used to aid probabilistic-based decisions concerning the status of a water (i.e., impaired or not impaired) and the level of monitoring needed to characterize the water for regulatory purposes. This approach is applied to the Catawba River reservoir system in western North Carolina as a means of estimating seasonal chlorophyll a concentration. Mean concentration and confidence intervals for chlorophyll a are estimated for 66 reservoir segments over an 11-year period (726 values) based on 219 measured seasonal averages and 54 model predictions. Although the model predictions had a high degree of uncertainty, integration of modeling results via BME methods reduced the uncertainty associated with chlorophyll estimates compared with estimates made solely with information from monitoring efforts. Probabilistic predictions of future chlorophyll levels on one reservoir are used to illustrate the cost savings that can be achieved by less extensive and rigorous monitoring methods within the BME framework. While BME methods have been applied in several environmental contexts, employing these methods as a means of integrating monitoring and modeling results, as well as application of this approach to the assessment of surface water monitoring networks, represent unexplored areas of research.
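Full BME handles space/time covariance and non-Gaussian "soft" data, none of which is reproduced here; but as a drastically simplified, hedged sketch of the underlying idea of fusing uncertain model output with monitoring data, a precision-weighted combination of two Gaussian estimates already shows why the fused uncertainty is smaller than either input's:

```python
def fuse(mean_obs: float, var_obs: float, mean_model: float, var_model: float):
    """Precision-weighted combination of a monitoring estimate and a model prediction."""
    w_obs, w_mod = 1.0 / var_obs, 1.0 / var_model
    mean = (w_obs * mean_obs + w_mod * mean_model) / (w_obs + w_mod)
    var = 1.0 / (w_obs + w_mod)  # always below either input variance
    return mean, var

# Illustrative chlorophyll-a values (ug/L): a noisy model prediction still
# tightens the estimate obtained from monitoring alone.
print(fuse(mean_obs=12.0, var_obs=4.0, mean_model=18.0, var_model=16.0))  # (13.2, 3.2)
```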
Use of Context in Video Processing
NASA Astrophysics Data System (ADS)
Wu, Chen; Aghajan, Hamid
Interpreting an event or a scene based on visual data often requires additional contextual information. Contextual information may be obtained from different sources. In this chapter, we discuss two broad categories of contextual sources: environmental context and user-centric context. Environmental context refers to information derived from domain knowledge or from concurrently sensed effects in the area of operation. User-centric context refers to information obtained and accumulated from the user. Both types of context can include static or dynamic contextual elements. Examples from a smart home environment are presented to illustrate how different types of contextual data can be applied to aid the decision-making process.
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
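A hedged sketch of one comparator mentioned above, an inverse probability of treatment weighted (IPTW) estimator of a treatment-specific mean: estimate propensity scores, then weight treated outcomes by their inverse. This is illustrative only; the paper's TMLE adds targeting, local efficiency, and double robustness on top of such ingredients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_treated_mean(X: np.ndarray, a: np.ndarray, y: np.ndarray) -> float:
    """Estimate E[Y(1)] with IPTW: X covariates, a binary treatment, y outcome."""
    e = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]  # propensity scores
    w = a / np.clip(e, 1e-3, 1.0)                              # weights for treated units
    return float(np.sum(w * y) / np.sum(w))                    # Hajek-style ratio estimator
```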
Carolan, Stephany; Harris, Peter R; Greenwood, Kathryn; Cavanagh, Kate
2016-12-15
The evidence for the benefits of online cognitive behaviour therapy (CBT)-based programmes delivered in a clinical context is clear, but this evidence does not translate to online CBT-based stress management programmes delivered within a workplace context. One of the challenges to the delivery of online interventions is programme engagement; this challenge is even more acute for interventions delivered in real-world settings such as the workplace. The purpose of this pilot study is to explore the effect of an online facilitated discussion group on engagement, and to estimate the potential effectiveness of an online CBT-based stress management programme. This study is a three-arm randomised controlled trial (RCT) comparing a minimally guided, online, CBT-based stress management intervention delivered with and without an online facilitated bulletin board, and a wait list control group. Up to 90 employees from six UK-based organisations will be recruited to the study. Inclusion criteria will include age 18 years or over, elevated levels of stress (as measured on the PSS-10 scale), access to a computer or a tablet and the Internet. The primary outcome measure will be engagement, as defined by the number of logins to the site; secondary outcome measures will include further measures of engagement (the number of pages visited, the number of modules completed and self-report engagement) and measures of effectiveness (psychological distress and subjective wellbeing). Possible moderators will include measures of intervention quality (satisfaction, acceptability, credibility, system usability), time pressure, goal conflict, levels of distress at baseline and job autonomy. Measures will be taken at baseline, 2 weeks (credibility and expectancy measures only), 8 weeks (completion of intervention) and 16 weeks (follow-up). Primary analysis will be conducted on intention-to-treat principles. To our knowledge this is the first study to explore the effect of an online discussion group on the engagement and effectiveness of an online CBT-based stress management intervention. This study could provide a solution to the growing problem of poor employee psychological health and begin to address the challenge of increasing engagement with Internet-delivered health interventions. ClinicalTrials.gov Identifier: NCT02729987 . Registered on 18 Mar 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongbin; Szilard, Ronaldo; Epiney, Aaron
Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both the South Texas Project (STP) and Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model including reactor core, clad/fuel design, and systems thermal hydraulics based on the South Texas Project (STP) nuclear power plant, a 4-loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use in this generic PWR plant model to assess safety margins for the proposed NRC 10 CFR 50.46c rule, Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics, and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.
A critical review of the ESCAPE project for estimating long-term health effects of air pollution.
Lipfert, Frederick W
2017-02-01
The European Study of Cohorts for Air Pollution Effects (ESCAPE) is a 13-nation study of long-term health effects of air pollution based on subjects pooled from up to 22 cohorts that were intended for other purposes. Twenty-five papers have been published on associations of various health endpoints with long-term exposures to NOx, NO2, traffic indicators, PM10, PM2.5 and PM constituents including absorbance (elemental carbon). Seven additional ESCAPE papers found moderate correlations (R2=0.3-0.8) between measured air quality and the land-use regression estimates that were used; personal exposures were not considered. I found no project summaries or comparisons across papers; here I conflate the 25 ESCAPE findings in the context of other recent European epidemiology studies. Because one ESCAPE cohort contributed about half of the subjects, I consider it and the other 18 cohorts separately to compare their contributions to the combined risk estimates. I emphasize PM2.5 and confirm the published hazard ratio of 1.14 (1.04-1.26) per 10μg/m3 for all-cause mortality. The ESCAPE papers found 16 statistically significant (p<0.05) risks among the 125 pollutant-endpoint combinations: 4 each for PM2.5 and PM10, 1 for PM absorbance, 5 for NO2, and 2 for traffic. No PM constituent was consistently significant. No significant associations were reported for cardiovascular mortality; low birth weight was significant for all pollutants except PM absorbance. Based on associations with PM2.5, I find large differences between all-cause death estimates and the sum of specific-cause death estimates. Scatterplots of PM2.5 mortality risks by cause show no consistency across the 18 cohorts, ostensibly because of the relatively few subjects. Overall, I find the ESCAPE project inconclusive, and I question whether the efforts required to estimate exposures for small cohorts were worthwhile. I suggest that detailed studies of the large cohort using historical exposures and additional cardiovascular risk factors might be productive. Copyright © 2016 Elsevier Ltd. All rights reserved.
Island emergence/subsidence histories and their bearing upon biological speciation in the Galápagos
NASA Astrophysics Data System (ADS)
Orellana Rovirosa, F.
2017-12-01
In the context of plate motion reconstructions for the Nazca, Cocos and South American plates in relation to the Galápagos hotspot, it is found that the age-depth dependence of bathymetry, dynamic topography due to the Galápagos plume, crustal relaxation, and magmatic production allow us to estimate the subsidence of islands and seamounts along the Carnegie Ridge. Our estimates are partially based on geodynamic theory (fluid mechanics and elasticity), but also on detailed bathymetric observations and analysis. For the Carnegie Ridge saddle, we estimate subsidence of about 2 km occurring during the past 13 Ma. Because the present-day depths of the region are in the range 2-2.5 km, this assessment shows that the deepest region of the present-day Carnegie Ridge may have been above sea level when it was closer to the active hotspot, therefore providing habitat for land species for a few (<5) million years. Moreover, the migrating hotspot swell may have caused different portions of the Carnegie Ridge to emerge in a spatio-temporal progression. A more sophisticated 3D numerical model including an asthenosphere and plume interacting with the overlying Nazca plate may provide an improved understanding of geological-biological co-evolution in the Galápagos-Carnegie Ridge.
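A hedged sketch of the age-depth (plate-cooling) component of such subsidence estimates, using the classic half-space scaling d(t) = d0 + c*sqrt(t) with generic textbook coefficients rather than the study's values; the dynamic topography, flexure, and crustal relaxation terms the abstract mentions are omitted.

```python
import math

def seafloor_depth_m(age_ma: float, d0: float = 2500.0, c: float = 350.0) -> float:
    """Half-space cooling depth-age curve; d0 (ridge depth) and c are generic values."""
    return d0 + c * math.sqrt(age_ma)

# Cooling-only subsidence over 13 Ma; plume-related dynamic topography decay
# would add to this, toward the ~2 km total estimated for the saddle.
print(round(seafloor_depth_m(13.0) - seafloor_depth_m(0.0)))  # ~1262 m
```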
Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortel, Pierre-François; Liu, Jiaen
2014-01-01
Elevated Specific Absorption Rate (SAR) associated with increased main magnetic field strength remains a major safety concern in ultra-high-field (UHF) Magnetic Resonance Imaging (MRI) applications. The calculation of local SAR requires knowledge of the electric field induced by radiofrequency (RF) excitation, and the local electrical properties of tissues. Since the electric field distribution cannot be directly mapped in conventional MR measurements, SAR estimation is usually performed using numerical model-based electromagnetic simulations which, however, are highly time consuming and cannot account for the specific anatomy and tissue properties of the subject undergoing a scan. In the present study, starting from the measurable RF magnetic fields (B1) in MRI, we conducted a series of mathematical deductions to estimate the local, voxel-wise and subject-specific SAR for each single coil element using a multi-channel transceiver array coil. We first evaluated the feasibility of this approach in numerical simulations including two different human head models. We further conducted an experimental study in a physical phantom and in two human subjects at 7T using a multi-channel transceiver head coil. The accuracy of the results is discussed in the context of predicting local SAR in the human brain at UHF MRI using multi-channel RF transmission. PMID:23508259
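For reference, the voxel-wise quantity being estimated is the familiar local SAR expression, sigma*|E|^2/(2*rho). The sketch below computes it from given fields and tissue properties; the B1-to-E reconstruction that constitutes the actual contribution of the paper is not shown.

```python
import numpy as np

def local_sar(e_field: np.ndarray, sigma: np.ndarray, rho: np.ndarray) -> np.ndarray:
    """SAR = sigma * |E|^2 / (2 * rho), in W/kg, for peak (not RMS) field amplitude.

    e_field: complex E-field components, shape (..., 3);
    sigma: conductivity (S/m); rho: mass density (kg/m^3).
    """
    e2 = np.sum(np.abs(e_field) ** 2, axis=-1)  # |E|^2 summed over x, y, z components
    return sigma * e2 / (2.0 * rho)
```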
Radiation environment and shielding for early manned Mars missions
NASA Technical Reports Server (NTRS)
Hall, Stephen B.; Mccann, Michael E.
1986-01-01
The problem of shielding a crew during early manned Mars missions is discussed. Requirements for shielding are presented in the context of current astronaut exposure limits, natural ionizing radiation sources, and shielding inherent in a particular Mars vehicle configuration. An estimated range for shielding weight is presented based on the worst solar flare dose, mission duration, and inherent vehicle shielding.
Bilayer segmentation of webcam videos using tree-based classifiers.
Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan
2011-01-01
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
Weight training in youth-growth, maturation, and safety: an evidence-based review.
Malina, Robert M
2006-11-01
To review the effects of resistance training programs on pre- and early-pubertal youth in the context of response, potential influence on growth and maturation, and occurrence of injury. Evidence-based review. Twenty-two reports dealing with experimental resistance training protocols, excluding isometric programs, in pre- and early-pubertal youth, were reviewed in the context of subject characteristics, training protocol, responses, and occurrence of injury. Experimental programs most often used isotonic machines and free weights, 2- and 3-day protocols, and 8- and 12-week durations, with significant improvements in muscular strength during childhood and early adolescence. Strength gains were lost during detraining. Experimental resistance training programs did not influence growth in height and weight of pre- and early-adolescent youth, and changes in estimates of body composition were variable and quite small. Only 10 studies systematically monitored injuries, and only three injuries were reported. Estimated injury rates were 0.176, 0.053, and 0.055 per 100 participant-hours in the respective programs. Experimental training protocols with weights and resistance machines and with supervision and low instructor/participant ratios are relatively safe and do not negatively impact growth and maturation of pre- and early-pubertal youth.
ERIC Educational Resources Information Center
Fenwick, Lisl
2017-01-01
This study analyses how discourses in regional contexts affect the development of curriculum-based literacy standards for adolescents in schooling. A comparative case-study research design enabled the influences of discourses at the regional level to be analysed. The case studies include the development of curricula to define a minimum literacy…
Schneider, Bruna Celestino; Motta, Janaína Vieira Dos Santos; Muniz, Ludmila Correa; Bielemann, Renata Moraes; Madruga, Samanta Winck; Orlandi, Silvana Paiva; Gigante, Denise Petrucci; Assunção, Maria Cecília Formoso
2016-01-01
This methodological paper describes the development of a digital, self-reported food frequency questionnaire (FFQ) created for the 1982 and 1993 Pelotas Birth Cohorts. The instrument was created based on FFQs previously applied to subjects belonging to both cohorts in the 2004 and 2008 follow-ups. The FFQ was developed to include 88 foods and/or meals, with frequencies clustered from a minimum of never or once/month to a maximum of greater than or equal to 5 times/day. The closed options related to portions were based on a 24-hour recall previously administered to a subsample from the 1993 cohort. Three options for portions were created: equal to, less than, or greater than. The "equal to" portion was defined as the 50th percentile of reported consumption of each food in the 24-hour recall. Photographs of the portions corresponding to the 50th percentile for each food were also included in the software. This digital FFQ included foods and meals based on the needs of current research. The layout of the software was attractive to the staff members as well as to the cohort members. The response time was 12 minutes, and the software allowed several individuals to use it at the same time. Moreover, this instrument eliminated the need for interviewers and for double data entry. The use of the same strategy, adapted to different contexts and situations, is recommended in other studies.
Design Considerations for Creating a Chemical Information Workstation.
ERIC Educational Resources Information Center
Mess, John A.
1995-01-01
Discusses what a functional chemical information workstation should provide to support the users in an academic library and examines how it can be implemented. Highlights include basic design considerations; natural language interface, including grammar-based, context-based, and statistical methodologies; expert system interface; and programming…
The Efficacy of Injury Prevention Programs in Adolescent Team Sports: A Meta-analysis.
Soomro, Najeebullah; Sanders, Ross; Hackett, Daniel; Hubka, Tate; Ebrahimi, Saahil; Freeston, Jonathan; Cobley, Stephen
2016-09-01
Intensive sport participation in childhood and adolescence is an established cause of acute and overuse injury. Interventions and programs designed to prevent such injuries are important in reducing individual and societal costs associated with treatment and recovery. Likewise, they help to maintain the accrual of positive outcomes from participation, such as cardiovascular health and skill development. To date, several studies have individually tested the effectiveness of injury prevention programs (IPPs). To determine the overall efficacy of structured multifaceted IPPs containing a combination of warm-up, neuromuscular strength, or proprioception training, targeting injury reduction rates according to risk exposure time in adolescent team sport contexts. Systematic review and meta-analysis. With established inclusion criteria, studies were searched in the following databases: Cochrane Central Register of Controlled Trials, MEDLINE, SPORTDiscus, Web of Science, EMBASE, CINAHL, and AusSportMed. The keyword search terms (including derivations) included the following: adolescents, sports, athletic injuries, prevention/warm-up programs. Eligible studies were then pooled for meta-analysis with an inverse-variance random-effects model, with injury rate ratio (IRR) as the primary outcome. Heterogeneity among studies and publication bias were tested, and subgroup analysis examined heterogeneity sources. Across 10 studies, including 9 randomized controlled trials, a pooled overall point estimate yielded an IRR of 0.60 (95% CI = 0.48-0.75; a 40% reduction) while accounting for hours of risk exposure. Publication bias assessment suggested an 8% reduction in the estimate (IRR = 0.68, 95% CI = 0.54-0.84), and the prediction interval intimated that any study estimate could still fall between 0.33 and 1.48. Subgroup analyses identified no significant moderators, although possible influences may have been masked because of data constraints. Compared with normative practices or control, IPPs significantly reduced IRRs in adolescent team sport contexts. The underlying explanations for IPP efficacy remain to be accurately identified, although they potentially relate to IPP content and improvements in muscular strength, proprioceptive balance, and flexibility. Clinical practitioners (eg, orthopaedics, physical therapists) and sports practitioners (eg, strength and conditioners, coaches) can respectively recommend and implement IPPs similar to those examined to help reduce injury rates in adolescent team sports contexts. © 2015 The Author(s).
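A hedged sketch of the generic machinery behind such a pooled IRR: DerSimonian-Laird inverse-variance random-effects pooling of per-study log rate ratios, returning the pooled IRR and its 95% confidence interval. Inputs are assumed study-level estimates and standard errors, not the review's extracted data.

```python
import numpy as np

def pool_irr(log_irr: np.ndarray, se: np.ndarray):
    """DerSimonian-Laird random-effects pooling of log injury rate ratios."""
    w = 1.0 / se**2                                   # fixed-effect weights
    mu_fe = np.sum(w * log_irr) / np.sum(w)
    q = np.sum(w * (log_irr - mu_fe) ** 2)            # Cochran's Q
    df = len(log_irr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    mu = np.sum(w_re * log_irr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([mu - 1.96 * se_mu, mu + 1.96 * se_mu])
    return np.exp(mu), ci                             # pooled IRR, 95% CI
```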
NASA Astrophysics Data System (ADS)
Wang, Danshi; Zhang, Min; Li, Ze; Song, Chuang; Fu, Meixia; Li, Jin; Chen, Xue
2017-09-01
A bio-inspired detector based on the artificial neural network (ANN) and genetic algorithm is proposed in the context of a coherent optical transmission system. The ANN is designed to mitigate 16-quadrature amplitude modulation system impairments, including linear impairment: Gaussian white noise, laser phase noise, in-phase/quadrature component imbalance, and nonlinear impairment: nonlinear phase. Without prior information or heuristic assumptions, the ANN, functioning as a machine learning algorithm, can learn and capture the characteristics of impairments from observed data. Numerical simulations were performed, and dispersion-shifted, dispersion-managed, and dispersion-unmanaged fiber links were investigated. The launch power dynamic range and maximum transmission distance for the bio-inspired method were 2.7 dBm and 240 km greater, respectively, than those of the maximum likelihood estimation algorithm. Moreover, the linewidth tolerance of the bio-inspired technique was 170 kHz greater than that of the k-means method, demonstrating its usability for digital signal processing in coherent systems.
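As a simplified, hedged illustration of the detector concept (not the paper's genetic-algorithm-tuned network, and with no fiber channel model), the sketch below trains a plain scikit-learn MLP to classify received (I, Q) samples of a 16-QAM constellation under additive Gaussian noise, letting the learned decision regions absorb channel impairments.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
const = np.array([(i, q) for i in levels for q in levels])    # 16-QAM constellation

labels = rng.integers(0, 16, size=20000)                      # transmitted symbols
rx = const[labels] + rng.normal(0.0, 0.4, size=(20000, 2))    # AWGN-only channel

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(rx, labels)
print(f"symbol accuracy: {clf.score(rx, labels):.3f}")
```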
Visibility of healthcare research institutes through the Web of Science database.
González-Albo, B; Moreno-Solano, L; Aparicio, J; Bordons, M
2017-12-01
The strategic importance of healthcare research institutes (HRIs) in health sciences research in Spain has motivated this analysis of the feasibility of studying their contribution to the Spanish scientific output through their presence as a signatory institution in the publications. We identified the output of the HRIs in the Web of Science database, comparing their observed output (the institutes are explicitly listed in the authors' workplace) and potential output (estimated based on the linked hospitals). Studies based on scientific publications do not reliably identify the contribution of the HRIs because their observed production is much lower than the potential output, although their visibility tends to increase over time. This article highlights the importance of HRI members including the institute among their work addresses to increase the visibility of these organisations and to facilitate studies aimed at assessing their activity in the national and international context. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Medicina Interna (SEMI). All rights reserved.
The natural resources inventory system ASVT project
NASA Technical Reports Server (NTRS)
Joyce, A. T.
1979-01-01
The hardware/software and the associated procedures for a natural resource inventory and information system based on the use of LANDSAT-acquired multispectral scanner digital data is described. The system is designed to derive land cover/vegetation information from LANDSAT data and geographically reference this information for the production of various types of maps and for the compilation of acreage by land cover/vegetation category. The system also provides for data base building so that the LANDSAT-derived information can be related to information digitized from other sources (e.g., soils maps) in a geographic context in order to address specific applications. These applications include agricultural crop production estimation, erosion hazard-reforestation need assessment, whitetail deer habitat assessment, and site selection. The system is tested in demonstration areas located in the state of Mississippi, and the results of these application demonstrations are presented. A cost-efficiency comparison of producing land cover/vegetation maps and statistics with this system versus the use of small-scale aerial photography is made.
Estimating Independent Locally Shifted Random Utility Models for Ranking Data
ERIC Educational Resources Information Center
Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans
2011-01-01
We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…
NASA Astrophysics Data System (ADS)
Aziz, Omer; Hussain, Tahir; Ullah, Matee; Bhatti, Asher Samuel; Ali, Aamir
2018-02-01
The exploration and production of unconventional resources has increased significantly over the past few years around the globe to fulfill growing energy demands. The hydrocarbon potential of these unconventional petroleum systems depends on the presence of significant organic matter, its thermal maturity, and the quality of the hydrocarbons present, i.e., gas or oil shale. In this work, we present a workflow for estimating Total Organic Content (TOC) from seismic reflection data. To achieve the objective of this study, we have chosen a classic potential candidate for exploration of unconventional reserves, the shale of the Sembar Formation, Lower Indus Basin, Pakistan. Our method includes the estimation of TOC from the well data using Passey's ΔlogR and Schwarzkopf's methods. From seismic data, maps of Relative Acoustic Impedance (RAI) are extracted at maximum and minimum TOC zones within the Sembar Formation. A geostatistical trend with a good correlation coefficient (R2) for cross-plots between TOC and RAI at well locations is used for estimation of seismic-based TOC at the reservoir scale. Our results suggest a good calibration of TOC values from seismic at well locations. The estimated TOC values range from 1 to 4%, showing that the shale of the Sembar Formation lies in the range of good to excellent unconventional oil/gas plays within the context of TOC. This methodology of source rock evaluation provides a spatial distribution of TOC at the reservoir scale, as compared to the conventional distribution generated from samples collected over sparse wells. The approach presented in this work has wider applications for source rock evaluation in other similar petroliferous basins worldwide.
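A hedged sketch of Passey's ΔlogR method as commonly published (Passey et al., 1990): the resistivity/sonic curve separation is ΔlogR = log10(R / R_baseline) + 0.02 * (Δt - Δt_baseline), scaled by maturity as TOC = ΔlogR * 10^(2.297 - 0.1688 * LOM). The baseline values and the level of organic maturity (LOM) below are illustrative calibration inputs, not the study's.

```python
import math

def delta_log_r(resistivity: float, r_baseline: float,
                sonic: float, dt_baseline: float) -> float:
    """Curve separation from resistivity (ohm-m) and sonic transit time (us/ft)."""
    return math.log10(resistivity / r_baseline) + 0.02 * (sonic - dt_baseline)

def toc_percent(dlr: float, lom: float) -> float:
    """Passey (1990) maturity scaling of DeltaLogR to TOC in weight percent."""
    return dlr * 10.0 ** (2.297 - 0.1688 * lom)

# Illustrative inputs only; yields ~3.0%, within the 1-4% range reported above.
print(round(toc_percent(delta_log_r(20.0, 10.0, 90.0, 80.0), lom=9.0), 2))
```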
NASA Astrophysics Data System (ADS)
Omar, Artur; Bujila, Robert; Fransson, Annette; Andreo, Pedro; Poludniowski, Gavin
2016-04-01
Although interventional x-ray angiography (XA) procedures involve relatively high radiation doses that can lead to deterministic tissue reactions in addition to stochastic effects, convenient and accurate estimation of absorbed organ doses has traditionally been out of reach. This has mainly been due to the absence of practical means to access dose-related data that describe the physical context of the numerous exposures during an XA procedure. The present work provides a comprehensive and general framework for the determination of absorbed organ dose, based on non-proprietary access to dose-related data by utilizing widely available DICOM radiation dose structured reports. The framework comprises a straightforward calculation workflow to determine the incident kerma and reconstruction of the geometrical relation between the projected x-ray beam and the patient’s anatomy. The latter is difficult in practice, as the position of the patient on the table top is unknown. A novel patient-specific approach for reconstruction of the patient position on the table is presented. The proposed approach was evaluated for 150 patients by comparing the estimated position of the primary irradiated organs (the target organs) with their position in clinical DICOM images. The approach is shown to locate the target organ position with a mean (max) deviation of 1.3 (4.3), 1.8 (3.6) and 1.4 (2.9) cm for neurovascular, adult and paediatric cardiovascular procedures, respectively. To illustrate the utility of the framework for systematic and automated organ dose estimation in routine clinical practice, a prototype implementation of the framework with Monte Carlo simulations is included.
Assessing the impact of Syrian refugees on earthquake fatality estimations in southeast Turkey
NASA Astrophysics Data System (ADS)
Wilson, Bradley; Paradise, Thomas
2018-01-01
The influx of millions of Syrian refugees into Turkey has rapidly changed the population distribution along the Dead Sea Rift and East Anatolian fault zones. In contrast to other countries in the Middle East where refugees are accommodated in camp environments, the majority of displaced individuals in Turkey are integrated into local cities, towns, and villages - placing stress on urban settings and increasing potential exposure to strong earthquake shaking. Yet displaced populations are often unaccounted for in the census-based population models used in earthquake fatality estimations. This study creates a minimally modeled refugee gridded population model and analyzes its impact on semi-empirical fatality estimations across southeast Turkey. Daytime and nighttime fatality estimates were produced for five fault segments at earthquake magnitudes 5.8, 6.4, and 7.0. Baseline fatality estimates calculated from census-based population estimates for the study area varied in scale from tens to thousands of fatalities, with higher death totals in nighttime scenarios. Refugee fatality estimations were analyzed across 500 semi-random building occupancy distributions. Median fatality estimates for refugee populations added non-negligible contributions to earthquake fatalities at four of five fault locations, increasing total fatality estimates by 7-27 %. These findings communicate the necessity of incorporating refugee statistics into earthquake fatality estimations in southeast Turkey and the ongoing importance of placing environmental hazards in their appropriate regional and temporal context.
Estimating the safety benefits of context sensitive solutions.
DOT National Transportation Integrated Search
2011-11-01
Context Sensitive Solutions (CSS), also commonly known by the original name Context Sensitive Design (CSD), is an alternative approach to the conventional transportation-oriented decision-making and design processes. The CSS approach can be used ...
Stillbirth With Group B Streptococcus Disease Worldwide: Systematic Review and Meta-analyses.
Seale, Anna C; Blencowe, Hannah; Bianchi-Jassir, Fiorella; Embleton, Nicholas; Bassat, Quique; Ordi, Jaume; Menéndez, Clara; Cutland, Clare; Briner, Carmen; Berkley, James A; Lawn, Joy E; Baker, Carol J; Bartlett, Linda; Gravett, Michael G; Heath, Paul T; Ip, Margaret; Le Doare, Kirsty; Rubens, Craig E; Saha, Samir K; Schrag, Stephanie; Meulen, Ajoke Sobanjo-Ter; Vekemans, Johan; Madhi, Shabir A
2017-11-06
There are an estimated 2.6 million stillbirths each year, many of which are due to infections, especially in low- and middle-income contexts. This paper, the eighth in a series on the burden of group B streptococcal (GBS) disease, aims to estimate the percentage of stillbirths associated with GBS disease. We conducted systematic literature reviews (PubMed/Medline, Embase, Literatura Latino-Americana e do Caribe em Ciências da Saúde, World Health Organization Library Information System, and Scopus) and sought unpublished data from investigator groups. Studies were included if they reported original data on stillbirths (predominantly ≥28 weeks' gestation or ≥1000 g, with GBS isolated from a sterile site) as a percentage of total stillbirths. We did meta-analyses to derive pooled estimates of the percentage of GBS-associated stillbirths, regionally and worldwide for recent datasets. We included 14 studies from any period, 5 with recent data (after 2000). There were no data from Asia. We estimated that 1% (95% confidence interval [CI], 0-2%) of all stillbirths in developed countries and 4% (95% CI, 2%-6%) in Africa were associated with GBS. GBS is likely an important cause of stillbirth, especially in Africa. However, data are limited in terms of geographic spread, with no data from Asia, and cases worldwide are probably underestimated due to incomplete case ascertainment. More data, using standardized, systematic methods, are critical, particularly from low- and middle-income contexts where the highest burden of stillbirths occurs. These data are essential to inform interventions, such as maternal GBS vaccination. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Hooper, R.P.; Aulenbach, Brent T.; Kelly, V.J.
2001-01-01
Estimating the annual mass flux at a network of fixed stations is one approach to characterizing water quality of large rivers. The interpretive context provided by annual flux includes identifying source and sink areas for constituents and estimating the loadings to receiving waters, such as reservoirs or the ocean. Since 1995, the US Geological Survey's National Stream Quality Accounting Network (NASQAN) has employed this approach at a network of 39 stations in four of the largest river basins of the USA: The Mississippi, the Columbia, the Colorado and the Rio Grande. In this paper, the design of NASQAN is described and its effectiveness at characterizing the water quality of these rivers is evaluated using data from the first 3 years of operation. A broad range of constituents was measured by NASQAN, including trace organic and inorganic chemicals, major ions, sediment and nutrients. Where possible, a regression model relating concentration to discharge and season was used to interpolate between chemical observations for flux estimation. For water-quality network design, the most important finding from NASQAN was the importance of having a specific objective (that is, estimating annual mass flux) and, from that, an explicitly stated data analysis strategy, namely the use of regression models to interpolate between observations. The use of such models aided in the design of sampling strategy and provided a context for data review. The regression models essentially form null hypotheses for concentration variation that can be evaluated by the observed data. The feedback between network operation and data collection established by the hypothesis tests places the water-quality network on a firm scientific footing.
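A hedged sketch of the kind of concentration-discharge-season regression used to interpolate between observations for flux estimation (a LOADEST-style rating curve): log concentration regressed on log discharge plus annual sine and cosine terms. Column names and the exact specification are illustrative assumptions, not the NASQAN models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_rating_curve(df: pd.DataFrame):
    """df columns (assumed): 'conc' (mg/L), 'q' (discharge), 'decyear' (decimal year)."""
    d = df.assign(
        log_q=np.log(df["q"]),
        sin_t=np.sin(2 * np.pi * df["decyear"]),  # annual seasonal cycle
        cos_t=np.cos(2 * np.pi * df["decyear"]),
    )
    return smf.ols("np.log(conc) ~ log_q + sin_t + cos_t", data=d).fit()
```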
Prioritising sewerage maintenance using inferred sewer age: a case study for Edinburgh.
Arthur, S; Burkhard, R
2010-01-01
The reported research project focuses on using a database containing details of customer contacts and CCTV data for a key Scottish catchment to construct a GIS-based sewer condition model. Given the nature of the asset registry, a key research challenge was estimating the age of individual lengths of pipe. Within this context, asset age was inferred using the estimated age of surface developments; this involved overlaying the network in a GIS with historical digital maps. The paper illustrates that inferred asset age can reliably be used to highlight assets which are more likely to fail.
NASA Astrophysics Data System (ADS)
Ducklow, H. W.; Stukel, M. R.; Bowman, J. S.; Kim, H.; Cassar, N.; Eveleth, R.; Li, Z.; Doney, S. C.; Sailley, S. F.; Jickells, T. D.; Baker, A. R.; Chance, R.
2016-12-01
In this presentation, we will compare different estimates of net community production (NCP) and export production (EP), including both traditional (changes in nutrient inventories and biological incubations) and newer measurements (Oxygen-Argon ratio, Thorium-234 disequilibrium, Iodide accumulation). Palmer Long Term Ecological Research (PAL-LTER) has been conducting observations of core biogeochemical (nutrient and carbon inventories, sediment trap flux) and ecological (standing stocks, production and grazing rates) processes along the Western Antarctic Peninsula (WAP) since 1993. Datasets include both temporally-intensive (semiweekly, Oct-April) observations in two nearshore locations at Palmer Station, and regionally-extensive observations over a 200 x 700 km grid of stations extending across the shelf into deep ocean water (>3000 m) each January. These observations provide a long term temporal and spatial context for more recent and focused measurements of net NCP and EP from the euphotic zone. For example, long-term net drawdown of nitrate averaged 415 mmol N m-2 season-1 (33 gC m-2 season-1) at Palmer Station and 557 mmol N m-2 season-1 (45 gC m-2 season-1) over the regional grid. In comparison, discrete bottle-based O2/Ar estimates of NCP averaged 44 mmol O2 m-2 d-1 (0.37 gC m-2 d-1) regionally in January 2008-11. Th234 export was 684 dpm m-2 d-1 (0.15 gC m-2 d-1) in January 2012, sourced from 15NO3 uptake-based new production of 4.1 mmol N m-2 d-1 (0.37 gC m-2 d-1). Intercomparison of these estimates is not straightforward. Measurements are based on several elemental currencies (C, N, O2, Th). We do not fully understand the processes each method claims to address. Is NCP the same as new production? Different processes and their measurements proceed over timescales of hours (new and net PP) to weeks (O2/Ar, 234Th) to months (inventory drawdowns). As implied above, assignment of time duration of net drawdown processes is uncertain for changes in water column inventories. Models provide additional insights, as modeled processes can be exactly defined. Inverse foodweb models of foodwebs in the PAL-WAP region yield NCP and EP estimates ranging 0.14 - 0.48 gC m-2 d-1. NCP and EP are equivalent in these steady-state foodweb models. We will synthesize these and other estimates and place them in emergent objective schemes.
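Because the abstract mixes N-, O2-, and C-based units, a small conversion check is useful. The sketch below reproduces the parenthetical gC values using a Redfield C:N of 106:16 and a photosynthetic quotient of about 1.4, both standard but assumed here rather than taken from the presentation:

```python
# Convert N- and O2-based production rates to carbon units, reproducing
# the parenthetical gC figures quoted above. Redfield C:N = 106/16 and a
# photosynthetic quotient (PQ) of ~1.4 are assumed conversion factors.
C_PER_N = 106.0 / 16.0      # mol C per mol N (Redfield)
PQ = 1.4                    # mol O2 produced per mol C fixed
M_C = 12.01                 # g C per mol

def gc_from_mmol_n(mmol_n):
    return mmol_n * C_PER_N * M_C / 1000.0

def gc_from_mmol_o2(mmol_o2):
    return mmol_o2 / PQ * M_C / 1000.0

print(gc_from_mmol_n(415))   # ~33 gC m-2 season-1 (nitrate drawdown)
print(gc_from_mmol_o2(44))   # ~0.38 gC m-2 d-1 (O2/Ar-based NCP)
```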
Evolutionary approaches to cultural and linguistic diversity.
Steele, James; Jordan, Peter; Cochrane, Ethan
2010-12-12
Evolutionary approaches to cultural change are increasingly influential, and many scientists believe that a 'grand synthesis' is now in sight. The papers in this Theme Issue, which derives from a symposium held by the AHRC Centre for the Evolution of Cultural Diversity (University College London) in December 2008, focus on how the phylogenetic tree-building and network-based techniques used to estimate descent relationships in biology can be adapted to reconstruct cultural histories, where some degree of inter-societal diffusion will almost inevitably be superimposed on any deeper signal of a historical branching process. The disciplines represented include the three most purely 'cultural' fields from the four-field model of anthropology (cultural anthropology, archaeology and linguistic anthropology). In this short introduction, some context is provided from the history of anthropology, and key issues raised by the papers are highlighted.
United States benefits of improved worldwide wheat crop information from a LANDSAT system
NASA Technical Reports Server (NTRS)
Heiss, K. P.; Sand, F.; Seidel, A.; Warner, D.; Sheflin, N.; Bhattacharyya, R.; Andrews, J.
1975-01-01
The value of worldwide information improvements on wheat crops, promised by LANDSAT, is measured in the context of world wheat markets. These benefits are based on current LANDSAT technical goals and assume that information is made available to all (United States and other countries) at the same time. A detailed empirical sample demonstration of the effect of improved information is given; the history of wheat commodity prices for 1971-72 is reconstructed and the price changes from improved vs. historical information are compared. The improved crop forecasting assumed for a LANDSAT system includes wheat crop estimates of 90 percent accuracy for each major wheat-producing region. Accurate, objective worldwide wheat crop information using space systems may have a very stabilizing influence on world commodity markets, in part making possible the establishment of long-term, stable trade relationships.
Elderly Care and Intrafamily Resource Allocation when Children Migrate.
Antman, Francisca M
2012-01-01
This paper considers the intrafamily allocation of elderly care in the context of international migration where migrant children may be able to provide financial assistance to their parents, but are unable to offer physical care. To investigate the interaction between siblings, I take a non-cooperative view of family decision-making and estimate best response functions for individual physical and financial contributions as a function of siblings' contributions. I address the endogeneity of siblings' contributions and individual migration decisions by using siblings' characteristics as instrumental variables as well as models including family fixed effects. For both migrants and non-migrants, I find evidence that financial contributions function as strategic complements while siblings' time contributions operate as strategic substitutes. This suggests that children's contributions toward elderly care may be based on both strategic bequest and public good motivations.
NASA Astrophysics Data System (ADS)
Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer
2017-03-01
Computed tomography is the primary modality for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacities) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities of nonsolid (-800 HU and -630 HU) and solid (-10 HU) nodules, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of measurements; however, measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.
NASA Astrophysics Data System (ADS)
Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew
2008-10-01
The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry is a significant source of uncertainty in estimating discharge from WSE measurements, however. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric depth and slope can be estimated to within 50 cm and 3.0 microradians, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
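The ensemble analysis step such a DA framework relies on can be sketched as a perturbed-observation ensemble Kalman update of bathymetry parameters from predicted WSE. Ensemble sizes, shapes, and error levels below are placeholders; the paper's actual assimilation machinery around LISFLOOD-FP is richer:

```python
# Minimal stochastic (perturbed-observation) EnKF analysis step for
# updating bathymetry parameters from WSE measurements. Illustrative only.
import numpy as np

def enkf_update(X, Y, y_obs, obs_err_std):
    """X: (n_state, n_ens) ensemble of bathymetry parameters (depth, slope).
    Y: (n_obs, n_ens) predicted WSE per member; y_obs: (n_obs,) observations."""
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)           # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)           # predicted-obs anomalies
    Pxy = Xp @ Yp.T / (n_ens - 1)                    # state-obs covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + obs_err_std**2 * np.eye(len(y_obs))
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    rng = np.random.default_rng(0)
    y_pert = y_obs[:, None] + rng.normal(0, obs_err_std, size=Y.shape)
    return X + K @ (y_pert - Y)                      # analysis ensemble

# Toy demo: 2 bathymetry parameters, 3 WSE observations, 20 members.
rng = np.random.default_rng(2)
X = rng.normal(5.0, 1.0, (2, 20))
Y = rng.normal(100.0, 0.5, (3, 20)) + 0.1 * X[:1]
Xa = enkf_update(X, Y, np.array([100.2, 99.9, 100.1]), 0.1)
print(Xa.mean(axis=1))
```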
Cost-effectiveness of dabigatran for stroke prevention in atrial fibrillation in Switzerland.
Pletscher, Mark; Plessow, Rafael; Eichler, Klaus; Wieser, Simon
2013-01-08
Atrial fibrillation is a major risk factor for ischemic stroke and anticoagulation therapy is indicated to reduce risk. Dabigatran is a new oral anticoagulant that does not require INR monitoring. This study evaluated the cost-effectiveness of dabigatran versus vitamin K antagonists for stroke prevention in atrial fibrillation in Switzerland. A Markov model simulating the course of treatment and occurrence of clinical events in two treatment arms over the lifetime of patients was adapted to the Swiss context. The adaptation included the cost of anticoagulation therapy and clinical events in Switzerland. The cost of inpatient care was estimated on data of all inpatient hospital stays in 2008. The calculation of outpatient care costs was based on peer reviewed studies, expert interviews and local tariffs. Patients treated with dabigatran had a higher life expectancy and experienced more quality adjusted life years (QALY) while incurring higher costs than patients treated with vitamin K antagonists. The estimated incremental cost-effectiveness ratio (ICER) was CHF 25,108 per QALY with 110 mg and CHF 9,702 per QALY with 150 mg of dabigatran. A sequential dosage scheme, in which 150 mg are administered up to the age of 80 years and 110 mg thereafter, resulted in an ICER of CHF 10,215 per QALY. A sensitivity analysis confirmed that these results are robust. Dabigatran can be considered cost-effective in comparison with vitamin K antagonists in the Swiss context. The higher drug cost of dabigatran is compensated by savings in INR monitoring, lower cost of clinical events and QALY-gains.
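The headline ICERs are simply ratios of incremental lifetime costs to incremental QALYs once the Markov model has produced per-arm totals. A one-line sketch with placeholder figures (not the study's inputs):

```python
# Incremental cost-effectiveness ratio: extra cost per QALY gained.
# Numbers in the example are hypothetical placeholders.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# e.g. an arm costing CHF 5,000 more while gaining 0.2 QALYs:
print(icer(45_000, 8.2, 40_000, 8.0))   # -> 25000.0 CHF per QALY
```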
Samantra, Chitrasen; Datta, Saurav; Mahapatra, Siba Sankar
2017-03-01
In the underground coal mining industry, the increased economic issues surrounding implementation of additional safety measure systems, along with growing public awareness of the need to ensure a high level of worker safety, have put great pressure on managers to find the best solution for selecting a safe as well as economically viable alternative. Risk-based decision support systems play an important role in finding such solutions amongst candidate alternatives with respect to multiple decision criteria. Therefore, in this paper, a unified risk-based decision-making methodology has been proposed for selecting an appropriate safety measure system for an underground coal mining operation with respect to multiple risk criteria such as financial risk, operating risk, and maintenance risk. The proposed methodology uses interval-valued fuzzy set theory for modelling vagueness and subjectivity in the estimates of fuzzy risk ratings to support appropriate decisions. The methodology is based on aggregative fuzzy risk analysis and multi-criteria decision making. The selection decisions are made within the context of understanding the total integrated risk that is likely to be incurred while adopting a particular safety system alternative. The effectiveness of the proposed methodology has been validated through a real-time case study, and the resulting final priority ranking appears fairly consistent.
Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists
NASA Astrophysics Data System (ADS)
Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.
2017-12-01
Modeling seasonal snow pack in alpine environments includes a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments where extreme gradients in topography exist, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members to participate in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalence (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application where participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over the span of four months in the 2017 water year. This snow depth observation dataset is converted into a SWE dataset by employing an empirically-based, bulk density and SWE estimation method. We then assimilate this data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output by the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal pre-assimilation SWE estimations are consistently greater than the post-assimilation estimations, and the magnitude of divergence increases throughout the snow pack evolution period. This research has implications beyond the Alaskan context because it increases our ability to constrain snow modeling outputs by making use of snow measurements collected by non-expert, citizen scientists.
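The depth-to-SWE conversion step can be sketched as follows, assuming a generic empirical bulk-density model in which density ramps from a fresh-snow value toward a seasonal maximum; the constants are illustrative and not the CSO/SnowModel parameterization:

```python
# Convert a citizen-science snow depth to SWE via an assumed bulk-density
# curve. rho_new, rho_max, and k are hypothetical tuning constants.
import numpy as np

def swe_from_depth(depth_m, doy, rho_new=150.0, rho_max=450.0, k=0.01):
    """Estimate SWE (m water equivalent) from depth (m) and day of water year."""
    rho = rho_max - (rho_max - rho_new) * np.exp(-k * doy)   # kg/m^3
    return depth_m * rho / 1000.0

print(swe_from_depth(1.5, 120))   # hypothetical 1.5 m depth in late winter
```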
Reform in Undergraduate Science, Technology, Engineering, and Mathematics: The Classroom Context
ERIC Educational Resources Information Center
Stage, Frances K.; Kinzie, Jillian
2009-01-01
This article reports the results of a series of site visits examining modifications to science, technology, engineering, and mathematics (STEM) teaching and learning based on reform on three differing campuses. Innovations in stem classrooms included collaborative approaches to learning; incorporation of active learning, authentic contexts, peer…
Supervising an International Teaching Practicum: Building Partnerships in Postcolonial Contexts
ERIC Educational Resources Information Center
Major, Jae; Santoro, Ninetta
2016-01-01
Teaching practicum experiences, including those in international contexts, are based on partnerships between institutions and host schools, and the partnership between the pre-service teacher, the cooperating teacher and the university supervisor. This article explores the relationship between pre-service teachers and cooperating teachers in an…
Brito, Rita S; Pinheiro, Helena M; Ferreira, Filipa; Matos, José S; Pinheiro, Alexandre; Lourenço, Nídia D
2016-03-01
Online monitoring programs based on spectroscopy have a high application potential for the detection of hazardous wastewater discharges in sewer systems. Wastewater hydraulics poses a challenge for in situ spectroscopy, especially when the system includes storm water connections leading to rapid changes in water depth, velocity, and in the water quality matrix. Thus, there is a need to optimize and fix the location of in situ instruments, limiting their availability for calibration. In this context, the development of calibration models on bench spectrophotometers to estimate wastewater quality parameters from spectra acquired with in situ instruments could be very useful. However, spectra contain information not only from the samples, but also from the spectrophotometer, generally invalidating this approach. The use of calibration transfer methods is a promising solution to this problem. In this study, calibration models were developed using interval partial least squares (iPLS), for the estimation of total suspended solids (TSS) and chemical oxygen demand (COD) in sewage from ultraviolet-visible spectra acquired in a bench scanning spectrophotometer. The feasibility of calibration transfer to a submersible, diode-array instrument, to be subsequently operated in situ, was assessed using three procedures: slope and bias correction (SBC); single wavelength standardization (SWS) on mean spectra; and local centering (LC). The results showed that SBC was the most adequate for the available data, adding insignificant error to the base model estimates. Single wavelength standardization was a close second best, potentially more robust, and independent of the base iPLS model. Local centering was shown to be inadequate for the samples and instruments used.
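Slope and bias correction, the transfer method found most adequate here, amounts to a univariate linear recalibration of the base model's predictions on the target instrument. A minimal sketch with illustrative numbers:

```python
# Slope-and-bias correction (SBC) for calibration transfer: fit a linear
# correction between predictions on the target instrument and reference
# values using a few transfer samples. Data below are illustrative.
import numpy as np

def fit_sbc(y_pred_target, y_ref):
    """Fit y_ref ~ slope * y_pred_target + bias."""
    A = np.column_stack([y_pred_target, np.ones_like(y_pred_target)])
    (slope, bias), *_ = np.linalg.lstsq(A, y_ref, rcond=None)
    return slope, bias

def apply_sbc(y_pred_target, slope, bias):
    return slope * y_pred_target + bias

# e.g. correcting hypothetical COD predictions from the submersible probe:
slope, bias = fit_sbc(np.array([210., 180., 350.]), np.array([200., 170., 340.]))
print(apply_sbc(np.array([250.0]), slope, bias))
```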
Performance of a large building rainwater harvesting system.
Ward, S; Memon, F A; Butler, D
2012-10-15
Rainwater harvesting is increasingly becoming an integral part of the sustainable water management toolkit. Despite a plethora of studies modelling the feasibility of rainwater harvesting (RWH) systems in particular contexts, there remains a significant gap in knowledge in relation to detailed empirical assessments of performance. Domestic systems have been investigated to a limited degree in the literature, including in the UK, but there are few recent longitudinal studies of larger non-domestic systems. Additionally, there are few studies comparing estimated and actual performance. This paper presents the results of a longitudinal empirical performance assessment of a non-domestic RWH system located in an office building in the UK. Furthermore, it compares actual performance with the estimated performance based on two methods recommended by the British Standards Institute - the Intermediate (simple calculations) and Detailed (simulation-based) Approaches. Results highlight that the average measured water saving efficiency (amount of mains water saved) of the office-based RWH system was 87% across an 8-month period, due to the system being over-sized for the actual occupancy level. Consequently, a similar level of performance could have been achieved using a smaller-sized tank. Estimated cost savings resulted in capital payback periods of 11 and 6 years for the actual over-sized tank and the smaller optimised tank, respectively. However, more detailed cost data on maintenance and operation is required to perform whole life cost analyses. These findings indicate that office-scale RWH systems potentially offer significant water and cost savings. They also emphasise the importance of monitoring data and that a transition to the use of Detailed Approaches (particularly in the UK) is required to (a) minimise over-sizing of storage tanks and (b) build confidence in RWH system performance.
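For intuition, water saving efficiency can be computed with a simple daily yield-after-spillage tank balance of the kind underlying simulation-based sizing approaches; all parameters below are illustrative, not the monitored building's values:

```python
# Daily mass-balance ("yield-after-spillage") tank simulation returning
# water saving efficiency: rainwater supplied as a fraction of demand.
def water_saving_efficiency(rain_mm, roof_m2, runoff_coef, tank_m3, demand_m3):
    storage, supplied, demanded = 0.0, 0.0, 0.0
    for r in rain_mm:
        inflow = r / 1000.0 * roof_m2 * runoff_coef   # m^3 captured today
        storage = min(storage + inflow, tank_m3)      # spill above capacity
        yield_today = min(storage, demand_m3)         # meet demand if possible
        storage -= yield_today
        supplied += yield_today
        demanded += demand_m3
    return supplied / demanded

# Hypothetical: generous tank relative to demand gives high efficiency.
print(water_saving_efficiency([4, 0, 10, 2, 0, 7] * 40, 500, 0.85, 25.0, 1.2))
```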
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.
Kelly, Valerie J.; Hooper, Richard P.; Aulenbach, Brent T.; Janet, Mary
2001-01-01
This report contains concentrations and annual mass fluxes (loadings) for a broad range of water-quality constituents measured during 1996-2000 as part of the U.S. Geological Survey National Stream Quality Accounting Network (NASQAN). During this period, NASQAN operated a network of 40-42 stations in four of the largest river basins of the USA: the Colorado, the Columbia, the Mississippi (including the Missouri and Ohio), and the Rio Grande. The report contains surface-water quality data, streamflow data, field measurements (e.g. water temperature and pH), sediment-chemistry data, and quality-assurance data; interpretive products include annual and average loads, regression parameters for models used to estimate loads, sub-basin yield maps, maps depicting percent detections for censored constituents, and diagrams depicting flow-weighted average concentrations. Where possible, a regression model relating concentration to discharge and season was used for flux estimation. The interpretive context provided by annual loads includes identifying source and sink areas for constituents and estimating the loadings to receiving waters, such as reservoirs or the ocean.
Burr, Tom; Hamada, Michael S.; Howell, John; ...
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
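As an illustration of the basic task, an alarm threshold for a target false alarm rate can be set from a fitted parametric tail or from an empirical quantile of the residuals. The paper's point is that richer model selection (e.g. mixtures) matters, but the mechanics look like this sketch with synthetic data:

```python
# Alarm-threshold estimation for PM residuals at a small false alarm rate
# (FAR): parametric (normal tail) vs. empirical quantile. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 1.0, size=5000)   # residual = data - prediction
far = 1e-3                                    # target false alarm rate

t_param = stats.norm.ppf(1 - far, *stats.norm.fit(residuals))
t_empir = np.quantile(residuals, 1 - far)
print(t_param, t_empir)   # alarm when a residual exceeds the threshold
```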
FunTree: advances in a resource for exploring and contextualising protein function evolution.
Sillitoe, Ian; Furnham, Nicholas
2016-01-04
FunTree is a resource that brings together protein sequence, structure and functional information, including overall chemical reaction and mechanistic data, for structurally defined domain superfamilies. Developed in tandem with the CATH database, the original FunTree contained just 276 superfamilies focused on enzymes. Here, we present an update of FunTree that has expanded to include 2340 superfamilies including both enzymes and proteins with non-enzymatic functions annotated by Gene Ontology (GO) terms. This allows the investigation of how novel functions have evolved within a structurally defined superfamily and provides a means to analyse trends across many superfamilies. This is done not only within the context of a protein's sequence and structure but also the relationships of their functions. New measures of functional similarity have been integrated, including for enzymes comparisons of overall reactions based on overall bond changes, reaction centres (the local environment atoms involved in the reaction) and the sub-structure similarities of the metabolites involved in the reaction and for non-enzymes semantic similarities based on the GO. To identify and highlight changes in function through evolution, ancestral character estimations are made and presented. All this is accessible through a new re-designed web interface that can be found at http://www.funtree.info.
Cousins, Matthew M.; Konikoff, Jacob; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron
2014-01-01
Multiassay algorithms (MAAs) can be used to estimate cross-sectional HIV incidence. We previously identified a robust MAA that includes the BED capture enzyme immunoassay (BED-CEIA), the Bio-Rad Avidity assay, viral load, and CD4 cell count. In this report, we evaluated MAAs that include a high-resolution melting (HRM) diversity assay that does not require sequencing. HRM scores were determined for eight regions of the HIV genome (2 in gag, 1 in pol, and 5 in env). The MAAs that were evaluated included the BED-CEIA, the Bio-Rad Avidity assay, viral load, and the HRM diversity assay, using HRM scores from different regions and a range of region-specific HRM diversity assay cutoffs. The performance characteristics based on the proportion of samples that were classified as MAA positive by duration of infection were determined for each MAA, including the mean window period. The cross-sectional incidence estimates obtained using optimized MAAs were compared to longitudinal incidence estimates for three cohorts in the United States. The performance of the HRM-based MAA was nearly identical to that of the MAA that included CD4 cell count. The HRM-based MAA had a mean window period of 154 days and provided cross-sectional incidence estimates that were similar to those based on cohort follow-up. HIV diversity is a useful biomarker for estimating HIV incidence. MAAs that include the HRM diversity assay can provide accurate HIV incidence estimates using stored blood plasma or serum samples without a requirement for CD4 cell count data. PMID:24153134
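The incidence arithmetic behind an MAA is compact: the count of MAA-positive results is divided by the number of HIV-negative (susceptible) persons times the mean window period. A sketch using the 154-day window reported above, with hypothetical survey counts:

```python
# Cross-sectional incidence estimator for MAA-based surveillance.
# Survey counts are hypothetical; the 154-day window is from the abstract.
def cross_sectional_incidence(n_maa_pos, n_hiv_neg, window_days=154.0):
    """Annual incidence per susceptible person-year."""
    return n_maa_pos / (n_hiv_neg * window_days / 365.0)

# e.g. 12 MAA-positive results in a survey with 4,000 HIV-negative persons:
print(cross_sectional_incidence(12, 4000))   # ~0.0071 per person-year
```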
Sweeney, Sedona; Vassall, Anna; Foster, Nicola; Simms, Victoria; Ilboudo, Patrick; Kimaro, Godfather; Mudzengi, Don; Guinness, Lorna
2016-02-01
Out-of-pocket spending is increasingly recognized as an important barrier to accessing health care, particularly in low-income and middle-income countries (LMICs) where a large portion of health expenditure comes from out-of-pocket payments. Emerging universal healthcare policies prioritize reduction of poverty impact such as catastrophic and impoverishing healthcare expenditure. Poverty impact is therefore increasingly evaluated alongside and within economic evaluations to estimate the impact of specific health interventions on poverty. However, data collection for these metrics can be challenging in intervention-based contexts in LMICs because of study design and practical limitations. Using a set of case studies, this letter identifies methodological challenges in collecting patient cost data in LMIC contexts. These components are presented in a framework to encourage researchers to consider the implications of differing approaches in data collection and to report their approach in a standardized and transparent way.
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrate that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
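The Sun-position step can be sketched with a standard low-accuracy declination/hour-angle approximation. The implementation below is illustrative (approximate declination, no equation of time) and is not the paper's skylight model:

```python
# Approximate solar elevation and azimuth from latitude, longitude,
# day of year, and UTC time. Low-accuracy sketch for illustration.
import math

def sun_position(lat_deg, lon_deg, day_of_year, utc_hours):
    """Return (elevation_deg, azimuth_deg from north, clockwise)."""
    decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)
    solar_time = utc_hours + lon_deg / 15.0        # crude: ignores equation of time
    H = math.radians(15.0 * (solar_time - 12.0))   # hour angle
    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(H))
    az_south = math.atan2(math.sin(H),
                          math.cos(H) * math.sin(lat) - math.tan(decl) * math.cos(lat))
    return math.degrees(elev), (math.degrees(az_south) + 180.0) % 360.0

print(sun_position(41.1, -8.6, 172, 12.0))  # hypothetical: Porto, near solstice noon
```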
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the global Earth's gravity field parametrized as a finite spherical harmonic series is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (several million, e.g., for observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclic distributed on a processor grid comprising a large number of (distributed-memory) computers. Using this set of standard HPC libraries has the benefit that once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
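The block-cyclic layout these libraries require is easy to illustrate: each matrix dimension is cut into blocks of size nb and dealt round-robin across the process grid. A minimal sketch of the standard global-to-local index mapping (per dimension):

```python
# Block-cyclic distribution mapping (the ScaLAPACK INDXG2P/INDXG2L idea,
# 0-based, per dimension). nb is the block size, n_procs the grid extent.
def owner(global_index, nb, n_procs):
    """Process coordinate owning a given global row/column index."""
    return (global_index // nb) % n_procs

def local_index(global_index, nb, n_procs):
    """Index within the owning process's local storage."""
    block, offset = divmod(global_index, nb)
    return (block // n_procs) * nb + offset

# e.g. row 1234 of a matrix with 64x64 blocks on a 4-row process grid:
print(owner(1234, 64, 4), local_index(1234, 64, 4))   # -> 3 274
```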
Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D
2004-09-01
We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
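The genic-versus-nongenic comparison is a standard nested likelihood-ratio test: twice the log-likelihood gain of the dominance model over the genic (additive, h = 1/2) model is referred to a chi-square. A minimal sketch with placeholder log-likelihoods from hypothetical model fits:

```python
# Likelihood-ratio test sketch: dominance model vs. nested genic model.
# Log-likelihood values below are hypothetical.
from scipy import stats

def lrt_pvalue(loglik_full, loglik_nested, df=1):
    stat = 2.0 * (loglik_full - loglik_nested)
    return stats.chi2.sf(stat, df)

print(lrt_pvalue(-1023.4, -1027.9))   # p ~ 0.003 -> reject genic selection
```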
The advantages and limitations of guideline adaptation frameworks.
Wang, Zhicheng; Norris, Susan L; Bero, Lisa
2018-05-29
The implementation of evidence-based guidelines can improve clinical and public health outcomes by helping health professionals practice in the most effective manner, as well as assisting policy-makers in designing optimal programs. Adaptation of a guideline to suit the context in which it is intended to be applied can be a key step in the implementation process. Without taking the local context into account, certain interventions recommended in evidence-based guidelines may be infeasible under local conditions. Guideline adaptation frameworks provide a systematic way of approaching adaptation, and their use may increase transparency, methodological rigor, and the quality of the adapted guideline. This paper presents a number of adaptation frameworks that are currently available. We aim to compare the advantages and limitations of their processes, methods, and resource implications. These insights into adaptation frameworks can inform the future development of guidelines and systematic methods to optimize their adaptation. Recent adaptation frameworks show an evolution from adapting entire existing guidelines, to adapting specific recommendations extracted from an existing guideline, to constructing evidence tables for each recommendation that needs to be adapted. This is a move towards more recommendation-focused, context-specific processes and considerations. There are still many gaps in knowledge about guideline adaptation. Most of the frameworks reviewed lack any evaluation of the adaptation process and outcomes, including user satisfaction and resources expended. The validity, usability, and health impact of guidelines developed via an adaptation process have not been studied. Lastly, adaptation frameworks have not been evaluated for use in low-income countries. Despite the limitations in frameworks, a more systematic approach to adaptation based on a framework is valuable, as it helps to ensure that the recommendations stay true to the evidence while taking local needs into account. The utilization of frameworks in the guideline implementation process can be optimized by increasing the understanding and upfront estimation of resource and time needed, capacity building in adaptation methods, and increasing the adaptability of the source recommendation document.
Context-Enabled Business Intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troy Hiltbrand
To truly understand context and apply it in business intelligence, it is vital to understand what context is and how it can be applied in addressing organizational needs. Context describes the facets of the environment that impact the way that end users interact with the system. Context includes aspects of location, chronology, access method, demographics, social influence/relationships, end-user attitude/emotional state, behavior/past behavior, and presence. To be successful in making business intelligence context-enabled, it is important to be able to capture the context of the user. With advances in technology, there are a number of ways in which this user-based information can be gathered and exposed to enhance the overall end user experience.
Hagger, Martin S; Chan, Derwin K C; Protogerou, Cleo; Chatzisarantis, Nikos L D
2016-08-01
Synthesizing research on social cognitive theories applied to health behavior is an important step in the development of an evidence base of psychological factors as targets for effective behavioral interventions. However, few meta-analyses of research on social cognitive theories in health contexts have conducted simultaneous tests of theoretically-stipulated pattern effects using path analysis. We argue that conducting path analyses of meta-analytic effects among constructs from social cognitive theories is important to test nomological validity, account for mediation effects, and evaluate unique effects of theory constructs independent of past behavior. We illustrate our points by conducting new analyses of two meta-analyses of a popular theory applied to health behaviors, the theory of planned behavior. We conducted meta-analytic path analyses of the theory in two behavioral contexts (alcohol and dietary behaviors) using data from the primary studies included in the original meta-analyses augmented to include intercorrelations among constructs and relations with past behavior missing from the original analysis. Findings supported the nomological validity of the theory and its hypotheses for both behaviors, confirmed important model processes through mediation analysis, demonstrated the attenuating effect of past behavior on theory relations, and provided estimates of the unique effects of theory constructs independent of past behavior. Our analysis illustrates the importance of conducting a simultaneous test of theory-stipulated effects in meta-analyses of social cognitive theories applied to health behavior. We recommend researchers adopt this analytic procedure when synthesizing evidence across primary tests of social cognitive theories in health.
Ultrasound visual feedback treatment and practice variability for residual speech sound errors
Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin
2014-01-01
Purpose: The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938
An u-Service Model Based on a Smart Phone for Urban Computing Environments
NASA Astrophysics Data System (ADS)
Cho, Yongyun; Yoe, Hyun
In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through a context model based on ontology. The service platform deals with the service scenario according to contexts. With the suggested service model, a user in urban computing environments can quickly and easily compose u-services or new services using smart devices.
Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.
Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves
2016-09-01
This paper develops a localization method to estimate the depth of a target in the context of active sonar, at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position as range and depth are derived for a bilinear profile. The influence of sonar parameters on the standard deviations of the target range and depth are studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. This method is finally validated on data in an experimental tank.
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
Stochastic Estimation of Cost Frontier: Evidence from Bangladesh
ERIC Educational Resources Information Center
Mamun, Shamsul Arifeen Khan
2012-01-01
In the literature of higher education cost function study, enough knowledge is created in the area of economies of scale in the context of developed countries but the knowledge of input demand is lacking. On the other hand, empirical knowledge in the context of developing countries is very meagre. The paper fills this knowledge gap, estimating a…
Evaluating Specification Tests in the Context of Value-Added Estimation
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2015-01-01
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between student-level random and fixed effects, and a test for feedback (sometimes called a "falsification test"). We discuss…
Evaluating Specification Tests in the Context of Value-Added Estimation. Working Paper #38
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2014-01-01
We study the properties of two specification tests that have been applied to a variety of estimators in the context of value-added measures (VAMs) of teacher and school quality: the Hausman test for choosing between random and fixed effects and a test for feedback (sometimes called a "falsification test"). We discuss theoretical…
Multitaper scan-free spectrum estimation using a rotational shear interferometer.
Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J
2006-05-01
Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with a SNR of 70.1, with a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with a SNR of 1.6 near a large spectral feature. The SNR of 1.6 spectral feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
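The detection machinery referred to here is Thomson's multitaper harmonic F-test, which can be sketched compactly. The code below uses synthetic data and SciPy's DPSS tapers, and is not the RSI processing chain itself:

```python
# Thomson multitaper harmonic F-test for line detection: regress the
# DPSS eigencoefficients on the tapers' DC components and form an F
# statistic per frequency, ~F(2, 2K-2) under the noise-only null.
import numpy as np
from scipy.signal.windows import dpss

def harmonic_f_test(x, NW=4, K=7):
    N = len(x)
    tapers = dpss(N, NW, K)                  # (K, N) Slepian tapers
    yk = np.fft.rfft(tapers * x, axis=1)     # eigencoefficients, (K, nfreq)
    U0 = tapers.sum(axis=1)                  # DC component of each taper
    mu = (U0[:, None] * yk).sum(axis=0) / (U0 ** 2).sum()   # line amplitude
    resid = (np.abs(yk - U0[:, None] * mu[None, :]) ** 2).sum(axis=0)
    return (K - 1) * (U0 ** 2).sum() * np.abs(mu) ** 2 / resid

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.cos(2 * np.pi * 0.2 * t) + rng.normal(0, 1, 1024)
print(harmonic_f_test(x).argmax() / 1024)    # ~0.2, the embedded line frequency
```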
Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model
NASA Astrophysics Data System (ADS)
Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.
2017-12-01
Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue of improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. Interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to estimate SWE better than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and in four out of five physiographic regions.
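The core fusion step, an ANN regressing station SWE on gridded products plus covariates, can be sketched with a generic regressor. Features, synthetic data, and the sklearn stand-in below are all illustrative assumptions, not the study's configuration:

```python
# Fusion sketch: map gridded SWE products plus location/time covariates
# to observed station SWE with a small neural network. Synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 100.0, n),   # ERA-Interim/Land SWE (mm)
    rng.gamma(2.0, 110.0, n),   # MERRA SWE (mm)
    rng.gamma(2.0, 90.0, n),    # GLDAS-2 SWE (mm)
    rng.uniform(48, 60, n),     # station latitude
    rng.uniform(0, 365, n),     # day of water year
])
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 20, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0))
model.fit(X[:400], y[:400])
print(np.abs(model.predict(X[400:]) - y[400:]).mean())   # station-style MAE
```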
Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation.
Martinez-Berti, Enrique; Sánchez-Salmerón, Antonio-José; Ricolfe-Viala, Carlos
2017-08-19
The goal of this research work is to improve the accuracy of human pose estimation using the Deformable Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red-green-blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity can be controlled by reducing the number of joints considered in a reduced 4D-DPM. Finally, complete solutions are obtained by solving for the omitted joints using inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost when using dual quaternions to solve the inverse kinematics.
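For readers unfamiliar with the representation, a dual quaternion packs a rigid rotation and translation into eight numbers, and chained transforms compose by multiplication. A minimal self-contained sketch (generic construction, not the paper's solver):

```python
# Dual-quaternion rigid transforms: (real, dual) quaternion pairs whose
# product composes rotation + translation in one operation.
import numpy as np

def qmul(a, b):   # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dual_quat(q_rot, t_vec):
    """Unit dual quaternion for rotation q_rot followed by translation t_vec."""
    t = np.array([0.0, *t_vec])
    return q_rot, 0.5 * qmul(t, q_rot)

def dq_mul(dq1, dq2):   # compose two rigid transforms
    r1, d1 = dq1; r2, d2 = dq2
    return qmul(r1, r2), qmul(r1, d2) + qmul(d1, r2)

# Identity rotation + translation (1, 2, 3), composed with itself:
dq = dual_quat(np.array([1.0, 0, 0, 0]), [1, 2, 3])
print(dq_mul(dq, dq))   # dual part (0,1,2,3) encodes t=(2,4,6) via t = 2*d*conj(r)
```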
ERIC Educational Resources Information Center
Niemiec, Richard P.; Walberg, Herbert J.
1989-01-01
Examines the history of computer-based education within the context of psychological theorists of instruction, including Pressey, Thorndike, Skinner, and Crowder. Topics discussed include computer-managed instruction; computer-assisted instruction; the Computer Curriculum Corporation; PLATO; TICCIT; microcomputers; effects on students; and cost…
Context-based Strategies for Engaging Consumers with Public Reports about Health Care Providers
Shaller, Dale; Kanouse, David E.; Schlesinger, Mark
2017-01-01
Efforts to engage consumers in the use of public reports on health care provider performance have met with limited success. Fostering greater engagement will require new approaches that provide consumers with relevant content at the time and in the context they need to make a decision of consequence. To this end, we identify three key factors influencing consumer engagement and show how they manifest in different ways and combinations for four particular choice contexts that appear to offer realistic opportunities for engagement. We analyze how these engagement factors play out differently in each choice context and suggest specific strategies that sponsors of public reports can use in each context. Cross-cutting lessons for report sponsors and policy makers include new media strategies such as a commitment to adaptive web-based reporting, new metrics with richer emotional content, and the use of navigators or advocates to assist consumers with interpreting reports. PMID:23819945
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
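While the paper's methodology is specific to SLS abort triggers, the flavor of the calculation can be shown with a generic k-of-n sensor-voting trigger whose per-sensor false-alarm probability is known. This binomial-tail sketch is illustrative only, not the paper's method:

```python
# FP probability for a k-of-n voting abort trigger, assuming independent
# per-sensor false alarms with probability p. All values hypothetical.
from math import comb

def k_of_n_prob(p, k, n):
    """P(at least k of n independent events, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_fp_sensor = 1e-4                        # hypothetical per-sensor FP rate
print(k_of_n_prob(p_fp_sensor, 2, 3))     # 2-of-3 vote: ~3e-8
```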
The Social and Scientific Temporal Correlates of Genotypic Intelligence and the Flynn Effect
ERIC Educational Resources Information Center
Woodley, Michael A.
2012-01-01
In this study the pattern of temporal variation in innovation rates is examined in the context of Western IQ measures in which historical genotypic gains and losses along with the Flynn effect are considered. It is found that two alternative genotypic IQ estimates based on an increase in IQ from 1455 to 1850 followed by a decrease from 1850 to the…
A k-nearest neighbor approach for estimation of single-tree biomass
Lutz Fehrmann; Christoph Kleinn
2007-01-01
Allometric biomass models are typically site and species specific. They are mostly based on a low number of independent variables such as diameter at breast height and tree height. Because of relatively small datasets, their validity is limited to the set of conditions of the study, such as site conditions and diameter range. One challenge in the context of the current...
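The k-NN idea is straightforward to sketch: a target tree's biomass is predicted from the distance-weighted biomass of its k most similar reference trees in a feature space such as DBH and height. Data below are hypothetical:

```python
# k-nearest-neighbor biomass estimation sketch with hypothetical trees.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

dbh_height = np.array([[12, 9], [20, 15], [35, 22], [28, 19], [15, 12]])  # cm, m
biomass_kg = np.array([40, 160, 620, 380, 75])      # hypothetical reference trees

knn = KNeighborsRegressor(n_neighbors=3, weights="distance")
knn.fit(dbh_height, biomass_kg)
print(knn.predict([[25, 18]]))    # biomass estimate for a new tree
```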
The applicability of dental wear in age estimation for a modern American population.
Faillace, Katie E; Bethard, Jonathan D; Marks, Murray K
2017-12-01
Though applied in bioarchaeology, dental wear is an underexplored age indicator in the biological anthropology of contemporary populations, although research has been conducted on dental attrition in forensic contexts (Kim et al., Journal of Forensic Sciences, 45, 303; Prince et al., Journal of Forensic Sciences, 53, 588; Yun et al., Journal of Forensic Sciences, 52, 678). The purpose of this study is to apply and adapt existing techniques for age estimation based on dental wear to a modern American population, with the aim of producing accurate age range estimates for individuals from an industrialized context. Methodologies following Yun and Prince were applied to a random sample from the University of New Mexico (n = 583) and Universidade de Coimbra (n = 50) cast and skeletal collections. Analysis of variance (ANOVA) and linear regression analyses were conducted to examine the relationship between tooth wear scores and age. Application of both Yun et al. and Prince et al. methodologies resulted in inaccurate age estimates. Recalibrated sectioning points correctly classified individuals as over or under 50 years for 88% of the sample. Linear regression demonstrated 60% of age estimates fell within ±10 years of the actual age, and accuracy improved for individuals under 45 years, with 74% of predictions within ±10 years. This study demonstrates age estimation from dental wear is possible for modern populations, with comparable age intervals to other established methods. It provides a quantifiable method of seriation into "older" and "younger" adult categories, and provides more reliable age interval estimates than cranial sutures in instances where only the skull is available.
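The two quantitative steps reported, a sectioning point for over/under-50 classification and a linear regression with ±10-year intervals, can be sketched as follows. Coefficients and the cut score are placeholders, not the study's calibration:

```python
# Dental-wear age estimation sketch: sectioning-point classification plus
# linear regression with a ±10-year interval. Hypothetical calibration.
coef, intercept = 2.8, 18.0          # assumed age ~ wear-score fit
sectioning_point = 11.4              # assumed recalibrated cut score

def estimate_age(wear_score):
    age = intercept + coef * wear_score
    category = "over 50" if wear_score >= sectioning_point else "under 50"
    return age, (age - 10, age + 10), category

print(estimate_age(9.0))             # -> (43.2, (33.2, 53.2), 'under 50')
```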
NASA Astrophysics Data System (ADS)
Caracciolo, D.; Deidda, R.; Viola, F.
2017-11-01
The assessment of the mean annual runoff and its interannual variability in a basin is the first and fundamental task for several activities related to water resources management and water quality analysis. The scarcity of observed runoff data is a common problem worldwide, so that runoff estimation in ungauged basins is still an open question. In this context, the main aim of this work is to propose and test a simple tool able to estimate the probability distribution of the annual surface runoff in ungauged river basins in arid and semi-arid areas, using a simplified Fu parameterization of the Budyko curve at regional scale. Starting from a method recently developed to derive the distribution of annual runoff, under the assumption of negligible inter-annual change in basin water storage, we here generalize the application to any catchment where the parameter of Fu's curve is known. Specifically, we provide a closed-form expression of the annual runoff distribution as a function of the mean and standard deviation of annual rainfall and potential evapotranspiration, and of Fu's parameter. The proposed method is based on a first-order Taylor expansion of Fu's equation and allows calculating the probability density function of annual runoff in seasonally dry arid and semi-arid contexts around the world, taking advantage of simple, easy-to-find climatic data and the many studies with estimates of Fu's parameter worldwide. The computational simplicity of the proposed tool makes it a valuable supporting tool in the field of water resources assessment for practitioners, regional agencies and authorities.
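A minimal sketch of the approach, assuming Fu's form of the Budyko curve (E/P = 1 + PET/P - (1 + (PET/P)^w)^(1/w)) and, for simplicity, independence of annual rainfall and potential evapotranspiration, is given below; the climatic numbers and Fu parameter are illustrative.

    import numpy as np

    def annual_runoff(P, PET, w):
        # Q = P - E, with E from Fu's curve
        phi = PET / P
        return P * ((1.0 + phi ** w) ** (1.0 / w) - phi)

    def runoff_mean_std(mu_P, sd_P, mu_PET, sd_PET, w, h=1e-3):
        # first-order Taylor propagation of rainfall/PET variability
        mu_Q = annual_runoff(mu_P, mu_PET, w)
        dP = (annual_runoff(mu_P + h, mu_PET, w)
              - annual_runoff(mu_P - h, mu_PET, w)) / (2 * h)
        dE = (annual_runoff(mu_P, mu_PET + h, w)
              - annual_runoff(mu_P, mu_PET - h, w)) / (2 * h)
        return mu_Q, np.sqrt((dP * sd_P) ** 2 + (dE * sd_PET) ** 2)

    # illustrative semi-arid values in mm/yr and a plausible Fu parameter
    print(runoff_mean_std(450.0, 90.0, 1400.0, 100.0, w=2.6))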
French, David P; Cameron, Elaine; Benton, Jack S; Deaton, Christi; Harvie, Michelle
2017-10-01
The assessment and communication of disease risk that is personalised to the individual is widespread in healthcare contexts. Despite several systematic reviews of RCTs, it is unclear under what circumstances personalised risk estimates promote change in four key health-related behaviours: smoking, physical activity, diet and alcohol consumption. The present research aims to systematically identify, evaluate and synthesise the findings of existing systematic reviews. This systematic review of systematic reviews followed published guidance. A search of four databases and a two-stage screening procedure with good reliability identified nine eligible systematic reviews. The nine reviews each included between three and 15 primary studies, containing 36 unique studies. Methods of personalising risk feedback included imaging/visual feedback, genetic testing, and numerical estimation from risk algorithms. The reviews were generally high quality. For a broad range of methods of estimating and communicating risk, the reviews found no evidence that risk information had strong or consistent effects on health-related behaviours. The most promising effects came from interventions using visual or imaging techniques and with smoking cessation and dietary behaviour as outcomes, but results were inconsistent. Few interventions explicitly used theory, few targeted self-efficacy or response efficacy, and a limited range of Behaviour Change Techniques were used. Presenting risk information on its own, even when highly personalised, does not produce strong effects on health-related behaviours or changes that are sustained. Future research in this area should build on the existing knowledge base about increasing the effects of risk communication on behaviour.
Olfactory deposition of inhaled nanoparticles in humans
Garcia, Guilherme J. M.; Schroeter, Jeffry D.; Kimbell, Julia S.
2016-01-01
Context Inhaled nanoparticles can migrate to the brain via the olfactory bulb, as demonstrated in experiments in several animal species. This route of exposure may be the mechanism behind the correlation between air pollution and human neurodegenerative diseases, including Alzheimer’s disease and Parkinson’s disease. Objectives This manuscript aims to (1) estimate the dose of inhaled nanoparticles that deposit in the human olfactory epithelium during nasal breathing at rest and (2) compare the olfactory dose in humans with our earlier dose estimates for rats. Materials and methods An anatomically-accurate model of the human nasal cavity was developed based on computed tomography scans. The deposition of 1–100 nm particles in the whole nasal cavity and its olfactory region were estimated via computational fluid dynamics (CFD) simulations. Our CFD methods were validated by comparing our numerical predictions for whole-nose deposition with experimental data and previous CFD studies in the literature. Results In humans, olfactory dose of inhaled nanoparticles is highest for 1–2 nm particles with approximately 1% of inhaled particles depositing in the olfactory region. As particle size grows to 100 nm, olfactory deposition decreases to 0.01% of inhaled particles. Discussion and conclusion Our results suggest that the percentage of inhaled particles that deposit in the olfactory region is lower in humans than in rats. However, olfactory dose per unit surface area is estimated to be higher in humans due to their larger minute volume. These dose estimates are important for risk assessment and dose-response studies investigating the neurotoxicity of inhaled nanoparticles. PMID:26194036
NASA Astrophysics Data System (ADS)
Graven, H. D.; Gruber, N.
2011-12-01
The 14C-free fossil carbon added to atmospheric CO2 by combustion dilutes the atmospheric 14C/C ratio (Δ14C), potentially providing a means to verify fossil CO2 emissions calculated using economic inventories. However, sources of 14C from nuclear power generation and spent fuel reprocessing can counteract this dilution and may bias 14C/C-based estimates of fossil fuel-derived CO2 if these nuclear influences are not correctly accounted for. Previous studies have examined nuclear influences on local scales, but the potential for continental-scale influences on Δ14C has not yet been explored. We estimate annual 14C emissions from each nuclear site in the world and conduct an Eulerian transport modeling study to investigate the continental-scale, steady-state gradients of Δ14C caused by nuclear activities and fossil fuel combustion. Over large regions of Europe, North America and East Asia, nuclear enrichment may offset at least 20% of the fossil fuel dilution in Δ14C, corresponding to potential biases of more than -0.25 ppm in the CO2 attributed to fossil fuel emissions, larger than the bias from plant and soil respiration in some areas. Model grid cells including high 14C-release reactors or fuel reprocessing sites showed much larger nuclear enrichment, despite the coarse model resolution of 1.8°×1.8°. The recent growth of nuclear 14C emissions increased the potential nuclear bias over 1985-2005, suggesting that changing nuclear activities may complicate the use of Δ14C observations to identify trends in fossil fuel emissions. The magnitude of the potential nuclear bias is largely independent of the choice of reference station in the context of continental-scale Eulerian transport and inversion studies, but could potentially be reduced by an appropriate choice of reference station in the context of local-scale assessments.
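The attribution that nuclear 14C emissions can bias is, in its simplest form, a two-member mass balance. The hedged sketch below uses the standard formulation with illustrative numbers (not the study's model output) to show how a small nuclear enrichment of the observed ratio masks part of the fossil-fuel signal.

    def fossil_co2(co2_obs, d14c_obs, d14c_bg, d14c_ff=-1000.0):
        # fossil-fuel CO2 (ppm) inferred from Delta14C dilution
        return co2_obs * (d14c_bg - d14c_obs) / (d14c_bg - d14c_ff)

    co2_obs, d14c_bg = 400.0, 45.0             # illustrative values (permil)
    base = fossil_co2(co2_obs, 42.0, d14c_bg)
    # a hypothetical +0.25 permil nuclear enrichment of the observed ratio
    biased = fossil_co2(co2_obs, 42.25, d14c_bg)
    print(base, biased, biased - base)         # negative bias, as described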
Robust Tracking of Small Displacements with a Bayesian Estimator
Dumont, Douglas M.; Byram, Brett C.
2016-01-01
Radiation-force-based elasticity imaging describes a group of techniques that use acoustic radiation force (ARF) to displace tissue in order to obtain qualitative or quantitative measurements of tissue properties. Because ARF-induced displacements are on the order of micrometers, tracking these displacements in vivo can be challenging. Previously, it has been shown that Bayesian-based estimation can overcome some of the limitations of a traditional displacement estimator like normalized cross-correlation (NCC). In this work, we describe a Bayesian framework that combines a generalized Gaussian-Markov random field (GGMRF) prior with an automated method for selecting the prior’s width. We then evaluate its performance in the context of tracking the micrometer-order displacements encountered in an ARF-based method like acoustic radiation force impulse (ARFI) imaging. The results show that bias, variance, and mean-square error performance vary with prior shape and width, and that an almost one order-of-magnitude reduction in mean-square error can be achieved by the estimator at the automatically-selected prior width. Lesion simulations show that the proposed estimator has a higher contrast-to-noise ratio but lower contrast than NCC, median-filtered NCC, and the previous Bayesian estimator, with a non-Gaussian prior shape having better lesion-edge resolution than a Gaussian prior. In vivo results from a cardiac, radiofrequency ablation ARFI imaging dataset show quantitative improvements in lesion contrast-to-noise ratio over NCC as well as the previous Bayesian estimator. PMID:26529761
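For orientation, the baseline the paper improves on can be sketched in a few lines: normalized cross-correlation over candidate lags with parabolic subsample refinement. The synthetic signals and the circular-shift shortcut below are illustrative only.

    import numpy as np

    def ncc_displacement(ref, track, max_lag):
        lags = np.arange(-max_lag, max_lag + 1)
        r = np.array([np.corrcoef(ref, np.roll(track, -l))[0, 1] for l in lags])
        i = int(np.argmax(r))
        if 0 < i < len(r) - 1:                 # parabolic peak refinement
            d = (r[i - 1] - r[i + 1]) / (2 * (r[i - 1] - 2 * r[i] + r[i + 1]))
            return lags[i] + d
        return float(lags[i])

    rng = np.random.default_rng(2)
    ref = rng.standard_normal(256)
    track = np.roll(ref, 3) + 0.1 * rng.standard_normal(256)
    print(ncc_displacement(ref, track, max_lag=10))   # close to 3 samples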
Suicidal behaviour across the African continent: a review of the literature.
Mars, Becky; Burrows, Stephanie; Hjelmeland, Heidi; Gunnell, David
2014-06-14
Suicide is a major cause of premature mortality worldwide, but data on its epidemiology in Africa, the world's second most populous continent, are limited. We systematically reviewed published literature on suicidal behaviour in African countries. We searched PubMed, Web of Knowledge, PsycINFO, African Index Medicus, Eastern Mediterranean Index Medicus and African Journals OnLine and carried out citation searches of key articles. We crudely estimated the incidence of suicide and suicide attempts in Africa based on country-specific data and compared these with published estimates. We also describe common features of suicide and suicide attempts across the studies, including information related to age, sex, methods used and risk factors. Regional or national suicide incidence data were available for less than one third (16/53) of African countries containing approximately 60% of Africa's population; suicide attempt data were available for <20% of countries (7/53). Crude estimates suggest there are over 34,000 (inter-quartile range 13,141 to 63,757) suicides per year in Africa, with an overall incidence rate of 3.2 per 100,000 population. The recent Global Burden of Disease (GBD) estimate of 49,558 deaths is somewhat higher, but falls within the inter-quartile range of our estimate. Suicide rates in men are typically at least three times higher than in women. The most frequently used methods of suicide are hanging and pesticide poisoning. Reported risk factors are similar for suicide and suicide attempts and include interpersonal difficulties, mental and physical health problems, socioeconomic problems and drug and alcohol use/abuse. Qualitative studies are needed to identify additional culturally relevant risk factors and to understand how risk factors may be connected to suicidal behaviour in different socio-cultural contexts. Our estimate is somewhat lower than GBD, but still clearly indicates suicidal behaviour is an important public health problem in Africa. More regional studies, in both urban and rural areas, are needed to more accurately estimate the burden of suicidal behaviour across the continent. Qualitative studies are required in addition to quantitative studies.
Health economics of screening for gynaecological cancers.
Kulasingam, Shalini; Havrilesky, Laura
2012-04-01
In this chapter, we summarise findings from recent cost-effectiveness analyses of screening for cervical cancer and ovarian cancer. We begin with a brief summary of key issues that affect the cost-effectiveness of screening, including disease burden and the availability and type of screening tests. For cervical cancer, we discuss the potential effect of human papilloma virus vaccines on screening. Outstanding epidemiological and cost-effectiveness issues are included. For cervical cancer, this includes incorporating the long-term effects of treatment (including adverse birth outcomes in treated women who are of reproductive age) into cost-effectiveness models, using newly available trial data to identify the best strategy for incorporating human papilloma virus tests. A second issue is the need for additional data on human papilloma virus vaccines, such as effectiveness in reducing cancer incidence and mortality, effectiveness in previously exposed women, and coverage. Definitive data on these parameters will allow us to update model-based analyses to include more realistic estimates, and could also dramatically alter our approach to screening. For ovarian cancer, outstanding issues include confirming within the context of a trial that screening is effective for reducing mortality, and incorporating tests with high specificity into screening algorithms for ovarian cancer. Copyright © 2011 Elsevier Ltd. All rights reserved.
Contour detection improved by context-adaptive surround suppression.
Sang, Qiang; Cai, Biao; Chen, Hao
2017-01-01
Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism called "surround suppression" to extract object contours from natural scenes. However, traditional methods often adopt a single suppression model and a fixed input parameter called the "inhibition level", which needs to be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression", which can automatically control the effect of surround suppression according to local contextual image features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated through a broad range of experiments.
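A toy, non-adaptive version of surround suppression conveys the core operation: a gradient-energy contour response is inhibited by its ring-shaped surround average. The fixed weight alpha below stands in for the paper's context-adaptive control, and all parameters are assumptions.

    import numpy as np
    from scipy import ndimage

    def suppressed_contours(image, sigma=2.0, alpha=1.0):
        gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
        gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
        energy = np.hypot(gx, gy)                      # contour response
        ring = (ndimage.gaussian_filter(energy, 4 * sigma)
                - ndimage.gaussian_filter(energy, sigma))  # annular surround
        return np.clip(energy - alpha * np.maximum(ring, 0.0), 0.0, None)

    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0                            # a bright square
    print(suppressed_contours(img).max())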
ERIC Educational Resources Information Center
Wankel, Charles, Ed.; DeFillippi, Robert, Ed.
This volume demonstrates how technology is impacting management education and learning in a variety of educational contexts. Some of the issues and trends in management education addressed include: technotrends; web-based management learning; the changing nature of the web as a context for learning; online simulations; web-format case studies;…
The organizational context of children's mental health services.
Glisson, Charles
2002-12-01
This paper reviews what is known about the organizational context of children's mental health services and describes organizational constructs, conceptual models, research methods, and intervention strategies that can be used to link organizational context to service quality and outcomes. Using evidence from studies of business and industrial organizations as well as studies of children's service organizations, the paper presents a model of organizational effectiveness that depends on several contextual characteristics that include organizational culture, structure, climate, and work attitudes. These characteristics are believed to affect the adoption of efficacious treatments (EBPs [evidence-based practices]), adherence to treatment protocols, therapeutic alliance, and the availability, responsiveness, and continuity of services. Finally, 10 components of the ARC (Availability, Responsiveness and Continuity) organizational intervention are described as examples of strategies that can be used to develop organizational contexts with the prescribed characteristics. Mental health researchers are encouraged to consider including these constructs, conceptual models, research methods, and intervention strategies in dissemination, effectiveness, and implementation studies that address the gap between research-based knowledge about mental health treatment and what is actually offered in the community.
Giordano, Bruno L.; Kayser, Christoph; Rousselet, Guillaume A.; Gross, Joachim; Schyns, Philippe G.
2016-01-01
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541–1573, 2017. © 2016 Wiley Periodicals, Inc. PMID:27860095
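For two scalar variables, the copula-based estimate reduces to a few lines: rank-transform each variable to a standard normal and apply the Gaussian closed form for mutual information. This hedged mini-version (the authors' toolbox generalises to multivariate and discrete cases) illustrates the idea.

    import numpy as np
    from scipy.stats import norm, rankdata

    def copnorm(x):
        # empirical copula transform followed by Gaussian inverse CDF
        return norm.ppf(rankdata(x) / (len(x) + 1.0))

    def gcmi_1d(x, y):
        rho = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
        return -0.5 * np.log1p(-rho ** 2) / np.log(2.0)   # bits

    rng = np.random.default_rng(3)
    x = rng.standard_normal(2000)
    y = x + rng.standard_normal(2000)
    print(gcmi_1d(x, y))        # about 0.5 bit at this signal-to-noise ratio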
Bonaccorso, Elisa; Guayasamin, Juan M.
2013-01-01
To understand the origin of Pantepui montane biotas, we studied the biogeography of toucanets in the genus Aulacorhynchus. These birds are ideal for analyzing historical relationships among Neotropical montane regions, given their geographic distribution from Mexico south to Bolivia, including northern Venezuela (Cordillera de la Costa), and the Pantepui. Analyses were based on molecular phylogenies using mitochondrial and nuclear DNA sequences. Topology tests were applied to compare alternative hypotheses that may explain the current distribution of Aulacorhynchus toucanets, in the context of previous hypotheses of the origin of Pantepui montane biotas. Biogeographic reconstructions in RASP and Lagrange were used to estimate the ancestral area of the genus, and an analysis in BEAST was used to estimate a time framework for its diversification. A sister relationship between the Pantepui and Andes+Cordillera de la Costa was significantly more likely than topologies indicating other hypotheses for the origin of Pantepui populations. The Andes was inferred as the ancestral area for Aulacorhynchus, and the group has diversified since the late Miocene. The biogeographic patterns found herein, in which the Andes are the source for biotas of other regions, are consistent with those found for flowerpiercers and tanagers, and do not support the hypothesis of the geologically old Pantepui as a source of Neotropical montane diversity. Based on the high potential for cryptic speciation and isolation of Pantepui populations, we consider that phylogenetic studies of additional taxa are important from a conservation perspective. PMID:23840663
Approaches for estimating minimal clinically important differences in systemic lupus erythematosus.
Rai, Sharan K; Yazdany, Jinoos; Fortin, Paul R; Aviña-Zubieta, J Antonio
2015-06-03
A minimal clinically important difference (MCID) is an important concept used to determine whether a medical intervention improves perceived outcomes in patients. Prior to the introduction of the concept in 1989, studies focused primarily on statistical significance. As most recent clinical trials in systemic lupus erythematosus (SLE) have failed to show significant effects, determining a clinically relevant threshold for outcome scores (that is, the MCID) of existing instruments may be critical for conducting and interpreting meaningful clinical trials as well as for facilitating the establishment of treatment recommendations for patients. To that effect, methods to determine the MCID can be divided into two well-defined categories: distribution-based and anchor-based approaches. Distribution-based approaches are based on statistical characteristics of the obtained samples. There are various methods within the distribution-based approach, including the standard error of measurement, the standard deviation, the effect size, the minimal detectable change, the reliable change index, and the standardized response mean. Anchor-based approaches compare the change in a patient-reported outcome to a second, external measure of change (that is, one that is more clearly understood, such as a global assessment), which serves as the anchor. Finally, the Delphi technique can be applied as an adjunct to defining a clinically important difference. Despite an abundance of methods reported in the literature, little work in MCID estimation has been done in the context of SLE. As the MCID can help determine the effect of a given therapy on a patient and add meaning to statistical inferences made in clinical research, we believe there ought to be renewed focus on this area. Here, we provide an update on the use of MCIDs in clinical research, review some of the work done in this area in SLE, and propose an agenda for future research.
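Two of the distribution-based quantities named above have simple closed forms, sketched below with illustrative inputs; the thresholds (1.96 for 95% confidence) are the conventional choices, not SLE-specific values.

    import numpy as np

    def sem(sd_baseline, reliability):
        # standard error of measurement
        return sd_baseline * np.sqrt(1.0 - reliability)

    def mdc95(sd_baseline, reliability):
        # minimal detectable change at 95% confidence
        return 1.96 * np.sqrt(2.0) * sem(sd_baseline, reliability)

    sd, r = 12.0, 0.85                    # e.g., a 0-100 PRO scale (assumed)
    print(sem(sd, r), mdc95(sd, r))
    # An anchor-based estimate would instead take the mean change score of
    # patients who rate themselves "slightly improved" on an external anchor.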
NASA Technical Reports Server (NTRS)
Rosello, Anthony David
1995-01-01
A general two-tier framework for vehicle health monitoring of Guidance, Navigation and Control (GN&C) system actuators, effectors, and propulsion devices is presented. In this context, a top-level monitor that estimates jet thrust is designed for the Space Shuttle Reaction Control System (RCS) during the reentry phase of flight. Issues of importance for the use of estimation technologies in vehicle health monitoring are investigated and quantified for the Shuttle RCS demonstration application. These issues include rate of convergence, robustness to unmodeled dynamics, sensor quality, sensor data rates, and information recording objectives. Closed-loop simulations indicate that a Kalman filter design is sensitive to modeling error and that robust estimators may reduce this sensitivity. Jet plume interaction with the aerodynamic flowfield is shown to be a significant effect adversely impacting the ability to accurately estimate thrust.
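The top-level idea of estimating a slowly varying thrust from noisy measurements can be conveyed with a scalar Kalman filter; the random-walk model, noise levels and thrust value below are fictitious stand-ins, not Shuttle RCS parameters.

    import numpy as np

    rng = np.random.default_rng(4)
    z = 870.0 + 40.0 * rng.standard_normal(100)   # noisy thrust measurements [N]

    x, P = 0.0, 1e6                 # initial state estimate and variance
    Q, R = 1.0, 40.0 ** 2           # process and measurement noise (assumed)
    for zk in z:
        P += Q                      # predict: thrust modeled as a random walk
        K = P / (P + R)             # Kalman gain
        x += K * (zk - x)           # measurement update
        P *= 1.0 - K
    print(x)                        # converges near the true 870 N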
Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing
Jung, Jaewook; Sohn, Gunho; Bang, Kiin; Wichmann, Andreas; Armenakis, Costas; Kada, Martin
2016-01-01
A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process. PMID:27338410
Paulsen, Aksel
2014-01-01
Background and purpose The increased use of patient-reported outcomes (PROs) in orthopedics requires data on estimated minimal clinically important improvements (MCIIs) and patient-acceptable symptom states (PASSs). We wanted to find cut-points corresponding to minimal clinically important PRO change score and the acceptable postoperative PRO score, by estimating MCII and PASS 1 year after total hip arthroplasty (THA) for the Hip Dysfunction and Osteoarthritis Outcome Score (HOOS) and the EQ-5D. Patients and methods THA patients from 16 different departments received 2 PROs and additional questions preoperatively and 1 year postoperatively. The PROs included were the HOOS subscales pain (HOOS Pain), physical function short form (HOOS-PS), and hip-related quality of life (HOOS QoL), and the EQ-5D. MCII and PASS were estimated using multiple anchor-based approaches. Results Of 1,837 patients available, 1,335 answered the preoperative PROs, and 1,288 of them answered the 1-year follow-up. The MCIIs and PASSs were estimated to be: 24 and 91 (HOOS Pain), 23 and 88 (HOOS-PS), 17 and 83 (HOOS QoL), 0.31 and 0.92 (EQ-5D Index), and 23 and 85 (EQ-VAS), respectively. MCIIs corresponded to a 38–55% improvement from mean baseline PRO score and PASSs corresponded to absolute follow-up scores of 57–91% of the maximum score in THA patients 1 year after surgery. Interpretation This study improves the interpretability of PRO scores. The different estimation approaches presented may serve as a guide for future MCII and PASS estimations in other contexts. The cutoff points may serve as reference values in registry settings. PMID:24286564
NASA Astrophysics Data System (ADS)
Barragán, Rosa María; Núñez, José; Arellano, Víctor Manuel; Nieva, David
2016-03-01
Exploration and exploitation of geothermal resources require the estimation of important physical characteristics of reservoirs, including temperatures, pressures and in situ two-phase conditions, in order to evaluate possible uses and/or investigate changes due to exploitation. As reservoir fluids at relatively high temperatures (>150 °C) usually attain chemical equilibrium in contact with hot rocks, different models based on the chemistry of fluids have been developed that allow deep conditions to be estimated. In both water-dominated and steam-dominated reservoirs, the chemistry of steam has proved useful for working out reservoir conditions. In this context, three methods based on the Fischer-Tropsch (FT) and combined H2S-H2 (HSH) mineral-gas reactions have been developed for estimating temperatures and the quality of the in situ two-phase mixture prevailing in the reservoir. For these methods, the mineral buffers considered to be controlling the H2S-H2 composition of fluids are as follows: the pyrite-magnetite buffer (FT-HSH1), the pyrite-hematite buffer (FT-HSH2) and the pyrite-pyrrhotite buffer (FT-HSH3). With such models, estimates of both temperature and steam fraction in the two-phase fluid are currently obtained graphically, using a blank diagram with a background theoretical solution as reference. Large errors are thus involved, since the isotherms are highly nonlinear functions while reservoir steam fractions are read from a logarithmic scale. In order to facilitate the use of the three FT-HSH methods and minimize visual interpolation errors, the EQUILGAS program, which numerically solves the equations of the FT-HSH methods, was developed. In this work the FT-HSH methods and the EQUILGAS program are described. Illustrative examples for Mexican fields are also given, in order to help users decide which method could be more suitable for each specific data set.
NASA Astrophysics Data System (ADS)
Chahbani, Samia
The masses, centers of gravity and moments of inertia are key parameters in the three phases of aircraft design. They are of extreme importance in studies of the stability and proper functioning of the aircraft by modeling and simulation methods. Unfortunately, these data are not always available, given the confidentiality of the aerospace field. A question arises naturally: how can the mass, center of gravity and moments of inertia of an aircraft be estimated based only on its geometry? In the context in which this thesis was carried out, the masses are estimated by Raymer's methods. Procedures based on mechanical engineering techniques are used to determine the centers of gravity. The DATCOM method is applied to obtain the moments of inertia. Finally, the results obtained are validated using the flight simulator at the LARCASE corresponding to the Cessna Citation X. We conclude with an analytical model that summarizes the steps to follow for estimating the masses, centers of gravity and moments of inertia of any commercial aircraft.
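The center-of-gravity step reduces to a mass-weighted mean of component positions, as the short sketch below shows; the component masses and arms are invented, not Cessna Citation X data.

    import numpy as np

    masses = np.array([5200.0, 1800.0, 950.0, 2400.0])   # component masses [kg]
    arms = np.array([12.0, 14.5, 22.0, 13.8])            # distances from datum [m]

    x_cg = np.sum(masses * arms) / np.sum(masses)        # longitudinal CG [m]
    print(x_cg)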
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Ollenschläger, Malte; Roth, Nils; Klucken, Jochen
2017-01-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
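One double-integration scheme of the kind benchmarked here can be sketched briefly: integrate stride acceleration twice, linearly removing the velocity drift between the two zero-velocity (stance) instants. The toy data and the linear-dedrifting choice are assumptions for illustration.

    import numpy as np

    def stride_trajectory(acc, fs):
        dt = 1.0 / fs
        vel = np.cumsum(acc, axis=0) * dt
        # force velocity back to zero at stride end (linear dedrifting)
        vel -= np.outer(np.linspace(0.0, 1.0, len(vel)), vel[-1])
        return np.cumsum(vel, axis=0) * dt       # position per sample [m]

    fs = 200.0
    t = np.arange(0.0, 1.0, 1.0 / fs)
    acc = np.column_stack([np.sin(2 * np.pi * t), np.zeros_like(t)])  # toy input
    print(stride_trajectory(acc, fs)[-1])        # stride-end position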
Torija, Antonio J; Ruiz, Diego P
2012-10-01
Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on the use of Multilayer Perceptron Neural Networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, the use of the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables in multilayer perceptron-based models successfully estimated urban road traffic flow with an average percentage of explained variance equal to 86%, while the classification of the urban road traffic flow gave an average success rate of 96.1%. Copyright © 2012 Elsevier B.V. All rights reserved.
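A hedged sketch of the estimation model is shown below: band levels as features, flow as target, fitted with a small multilayer perceptron. The synthetic "spectra" only mimic the two informative bands; the architecture and data are not the authors'.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    flow = rng.uniform(100, 2000, 400)                       # vehicles per hour
    low = 30 + 10 * np.log10(flow) + rng.normal(0, 1, 400)   # 50-400 Hz level
    mid = 25 + 8 * np.log10(flow) + rng.normal(0, 1, 400)    # 1-2.5 kHz level

    X = np.column_stack([low, mid])
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, flow)
    print(mlp.score(X, flow))                                # in-sample R^2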
NASA Astrophysics Data System (ADS)
Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge
2003-09-01
Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support to either context-free decision models (e.g., utility theory) or detailed models of the external (e.g., battlespace) environment. An architecture for cognitively based, work-centered decision support, called the Work-centered Informediary Layer (WIL), is presented. WIL separates decision support into three overall processes: building and dynamically maintaining an explicit context model; using the context model to identify opportunities for decision support; and tailoring generic decision-support strategies to the current context and offering them to the system user/decision-maker. The generic decision-support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.
Natural Forest Biomass Estimation Based on Plantation Information Using PALSAR Data
Avtar, Ram; Suzuki, Rikie; Sawada, Haruo
2014-01-01
Forests play a vital role in terrestrial carbon cycling; therefore, monitoring forest biomass at local to global scales has become a challenging issue in the context of climate change. In this study, we investigated the backscattering properties of Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) data in cashew and rubber plantation areas of Cambodia. The PALSAR backscattering coefficient (σ0) had different responses in the two plantation types because of differences in biophysical parameters. The PALSAR σ0 showed a higher correlation with field-based measurements and lower saturation in cashew plants compared with rubber plants. Multiple linear regression (MLR) models based on field-based biomass of cashew (C-MLR) and rubber (R-MLR) plants with PALSAR σ0 were created. These MLR models were used to estimate natural forest biomass in Cambodia. The cashew plant-based MLR model (C-MLR) produced better results than the rubber plant-based MLR model (R-MLR). The C-MLR-estimated natural forest biomass was validated using forest inventory data for natural forests in Cambodia. The validation results showed a strong correlation (R2 = 0.64) between C-MLR-estimated natural forest biomass and field-based biomass, with RMSE = 23.2 Mg/ha in deciduous forests. In high-biomass regions, such as dense evergreen forests, this model had a weaker correlation because of the high biomass and the multiple-story tree structure of evergreen forests, which caused saturation of the PALSAR signal. PMID:24465908
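The MLR step amounts to an ordinary least-squares fit of field biomass on backscatter, sketched below with synthetic stand-ins for the plantation plots; the two-polarisation design and coefficients are assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    hv = rng.uniform(-18, -10, 60)                 # sigma0 HV [dB], synthetic
    hh = rng.uniform(-12, -5, 60)                  # sigma0 HH [dB], synthetic
    biomass = 250 + 14 * hv + 6 * hh + rng.normal(0, 10, 60)   # [Mg/ha]

    A = np.column_stack([np.ones_like(hv), hv, hh])
    coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
    print(coef)                                    # intercept, b_HV, b_HH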
Longitudinal studies of anger and attention span: context and informant effects.
Kim, Jungmeen; Mullineaux, Paula Y; Allen, Ben; Deater-Deckard, Kirby
2010-04-01
This study examined stabilities of informant and context (home vs. classroom) latent factors regarding anger and attention. Participants included children from the National Institute of Child Health and Development Study of Early Child Care and Youth Development who were measured at 54 months, first grade, and third grade. Latent factors of anger and attention span were structured using different indicators based on mothers', fathers', caregivers', teachers', and observers' reports. We used structural equation modeling to examine the autoregressive effects within a context (stability), the concurrent associations between home and classroom contexts, and informant effects. The results indicated that for both anger and attention (1) there were significant informant effects that influenced stability in a context, (2) there was higher stability in home context than nonhome context, and (3) stability within a context increased over time. The findings suggested that anger was more prone to context effects and informant effects than attention.
Bourgkard, Eve; Wild, Pascal; Gonzalez, Maria; Févotte, Joëlle; Penven, Emmanuelle; Paris, Christophe
2013-12-01
To describe the performance of a lifelong task-based questionnaire (TBQ) in estimating exposures compared with other approaches in the context of a case-control study. A sample of 93 subjects was randomly selected from a lung cancer case-control study, corresponding to 497 jobs. For each job, exposure assessments for asbestos and polycyclic aromatic hydrocarbons (PAHs) were obtained by expertise (TBQ expertise) and by algorithm using the TBQ (TBQ algorithm), as well as by expert appraisals based on all available occupational data (REFERENCE expertise), considered to be the gold standard. Additionally, a Job Exposure Matrix (JEM)-based evaluation for asbestos was also obtained. On the 497 jobs, the various evaluations were contrasted using Cohen's κ coefficient of agreement. Additionally, on the total case-control population, the asbestos dose-response relationship based on the TBQ algorithm was compared with the JEM-based assessment. Regarding asbestos, the TBQ exposure estimates agreed well with the REFERENCE estimate (TBQ expertise: level-weighted κ (lwk)=0.68; TBQ algorithm: lwk=0.61) but less so with the JEM estimate (TBQ expertise: lwk=0.31; TBQ algorithm: lwk=0.26). Regarding PAHs, agreement between the REFERENCE expertise and the TBQ was weaker (TBQ expertise: lwk=0.43; TBQ algorithm: lwk=0.36). In the case-control study analysis, the dose-response relationship between lung cancer and cumulative asbestos based on the JEM is less steep than with the TBQ-algorithm exposure assessment and statistically non-significant. Asbestos exposure estimates based on the TBQ were consistent with the REFERENCE expertise and yielded a steeper dose-response relationship than the JEM. For PAHs, results were less clear.
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in the off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as the inclusion of propagation-specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
A 3-dimensional DTI MRI-based model of GBM growth and response to radiation therapy.
Hathout, Leith; Patel, Vishal; Wen, Patrick
2016-09-01
Glioblastoma (GBM) is both the most common and the most aggressive intra-axial brain tumor, with a notoriously poor prognosis. To improve this prognosis, it is necessary to understand the dynamics of GBM growth, response to treatment and recurrence. The present study develops a mathematical diffusion-proliferation model of GBM growth and response to radiation therapy based on diffusion tensor imaging (DTI) MRI. This represents an important advance because it allows 3-dimensional tumor modeling in the anatomical context of the brain. Specifically, tumor infiltration is guided by the direction of the white matter tracts along which glioma cells infiltrate. This provides the potential to model different tumor growth patterns based on location within the brain, and to simulate the tumor's response to different radiation therapy regimens. Tumor infiltration across the corpus callosum is simulated in biologically accurate time frames. The response to radiation therapy, including changes in cell density gradients and how these compare across different radiation fractionation protocols, can be rendered. Also, the model can estimate the amount of subthreshold tumor that has extended beyond the visible MR imaging margins. When combined with the ability to estimate the biological parameters of invasiveness and proliferation of a particular GBM from serial MRI scans, it is shown that the model has the potential to simulate realistic tumor growth, response and recurrence patterns in individual patients. To the best of our knowledge, this is the first presentation of a DTI-based GBM growth and radiation therapy treatment model.
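Underlying such models is the proliferation-diffusion equation dc/dt = div(D grad c) + rho c (1 - c). The hedged 1-D toy below uses an isotropic constant D; the paper's model is 3-D with a DTI-derived anisotropic tensor and a radiotherapy cell-kill term, and the rates here are merely plausible values.

    import numpy as np

    nx, dx, dt = 200, 0.1, 0.001            # grid [cm] and time step [days]
    D, rho = 0.05, 0.012                    # cm^2/day and 1/day (assumed)
    c = np.zeros(nx)
    c[nx // 2] = 1.0                        # seed tumour cell density

    for _ in range(5000):                   # five simulated days
        lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2
        c = np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)
    print(c.max(), np.sum(c > 0.16) * dx)   # peak density, "visible" extent [cm]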
Mensah, Keitly; Maire, Aurélia; Oppert, Jean-Michel; Dugas, Julien; Charreire, Hélène; Weber, Christiane; Simon, Chantal; Nazare, Julie-Anne
2016-08-09
Comprehensive assessment of sedentary behavior (SB) and physical activity (PA), including transport-related activities (TRA), is required to design innovative PA promotion strategies. There are few validated instruments that simultaneously assess the different components of human movement according to their context of practice (e.g. work, transport, leisure). We examined test-retest reliability and validity of the Sedentary, Transportation and Activity Questionnaire (STAQ), a newly developed questionnaire dedicated to assessing context-specific SB, TRA and PA. Ninety-six subjects (51 women) kept a contextualized activity logbook and wore a hip accelerometer (ActiGraph GT3X+) for a 7-day or 14-day period, at the end of which they completed the STAQ. Activity energy expenditure was measured in a subgroup of 45 subjects using the doubly labeled water (DLW) method. Test-retest reliability was assessed using intra-class correlation coefficients (ICC) in a subgroup of 32 subjects who completed the questionnaire twice, one month apart. Accelerometry was annotated using the logbook to obtain total and context-specific objective estimates of SB. Spearman correlations, Bland-Altman plots and ICC were used to analyze validity against the logbook, accelerometry and DLW validity criteria. Test-retest reliability was fair for total sitting time (ICC = 0.52), good to excellent for work sitting time (ICC = 0.71), transport-related walking (ICC = 0.61) and car use (ICC = 0.67), and leisure screen-related SB (ICC = 0.64-0.79), but poor for total sitting time in leisure and transport-related contexts. For validity, compared to accelerometry, significant correlations were found for STAQ estimates of total (r = 0.54) and context-specific sitting times, with stronger correlations for work sitting time (r = 0.88) and screen times (TV/DVD viewing: r = 0.46; other screens: r = 0.42) than for transport-related (r = 0.35) or leisure-related sitting times (r = 0.19). Compared to the contextualized logbook, correlations for STAQ estimates of TRA were higher for car use (r = 0.65) than for active transport (r = 0.41). The questionnaire generally overestimated work- and leisure-related SB and sitting times, while it underestimated total and transport-related sitting times. The STAQ showed acceptable reliability and good ranking validity for assessment of context-specific SB and TRA. This instrument appears to be a useful tool to study SB, TRA and PA in context in adults.
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
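The shrinkage described above can be sketched with a beta-binomial model: treat species-level rates as draws from Beta(a, b), estimate the hyperparameters by moment matching, and shrink each observed proportion toward the overall mean. The counts below are invented, not the cowbird data.

    import numpy as np

    k = np.array([1, 9, 3, 14, 0, 6])       # parasitised nests per species
    n = np.array([12, 15, 30, 20, 5, 40])   # nests examined per species
    p = k / n

    m, v = p.mean(), p.var()                # crude moment estimates
    s = m * (1 - m) / v - 1                 # implied a + b
    a, b = m * s, (1 - m) * s               # hyperparameters of the prior

    print(np.round((k + a) / (n + a + b), 3))   # shrunken estimates;
    # small-sample species are pulled hardest toward the overall mean.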
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
The prioritisation of paediatrics and palliative care in cancer control plans in Africa.
Weaver, M S; Yao, A J J; Renner, L A; Harif, M; Lam, C G
2015-06-09
Given the burden of childhood cancer and palliative care need in Africa, this paper investigated the paediatric and palliative care elements in cancer control plans. We conducted a comparative content analysis of accessible national cancer control plans in Africa, using a health systems perspective attentive to context, development, scope, and monitoring/evaluation. Burden estimates were derived from World Bank, World Health Organisation, and Worldwide Palliative Care Alliance. Eighteen national plans and one Africa-wide plan (10 English, 9 French) were accessible, representing 9 low-, 4 lower-middle-, and 5 upper-middle-income settings. Ten plans discussed cancer control in the context of noncommunicable diseases. Paediatric cancer was mentioned in 7 national plans, representing 5127 children, or 13% of the estimated continental burden for children aged 0-14 years. Palliative care needs were recognised in 11 national plans, representing 157 490 children, or 24% of the estimated Africa-wide burden for children aged 0-14 years; four plans specified paediatric palliative needs. Palliative care was itemised in four budgets. Sample indicators and equity measures were identified, including those highlighting contextual needs for treatment access and completion. Recognising explicit strategies and funding for paediatric and palliative services may guide prioritised cancer control efforts in resource-limited settings.
Robust Arm and Hand Tracking by Unsupervised Context Learning
Spruyt, Vincent; Ledda, Alessandro; Philips, Wilfried
2014-01-01
Hand tracking in video is an increasingly popular research field due to the rise of novel human-computer interaction methods. However, robust and real-time hand tracking in unconstrained environments remains a challenging task due to the high number of degrees of freedom and the non-rigid character of the human hand. In this paper, we propose an unsupervised method to automatically learn the context in which a hand is embedded. This context includes the arm and any other object that coherently moves along with the hand. We introduce two novel methods to incorporate this context information into a probabilistic tracking framework, and introduce a simple yet effective solution to estimate the position of the arm. Finally, we show that our method greatly increases robustness against occlusion and cluttered background, without degrading tracking performance if no contextual information is available. The proposed real-time algorithm is shown to outperform the current state-of-the-art by evaluating it on three publicly available video datasets. Furthermore, a novel dataset is created and made publicly available for the research community. PMID:25004155
International comparison of experience-based health state values at the population level.
Heijink, Richard; Reitmeir, Peter; Leidl, Reiner
2017-07-07
Decision makers need to know whether health state values, an important component of summary measures of health, are valid for their target population. A key outcome is the individuals' valuation of their current health. This experience-based perspective is increasingly used to derive health state values. This study is the first to compare such experience-based valuations at the population level across countries. We examined the relationship between respondents' self-rated health as measured by the EQ-VAS, and the different dimensions and levels of the EQ-5D-3L. The dataset included almost 32,000 survey respondents from 15 countries. We estimated generalized linear models with logit link function, including country-specific models and pooled-data models with country effects. The results showed significant and meaningful differences in the valuation of health states and individual health dimensions between countries, even though similarities were present too. Between countries, coefficients correlated positively for the values of mobility, self-care and usual activities, but not for the values of pain and anxiety, thus underlining structural differences. The findings indicate that, ideally, population-specific experience-based value sets are developed and used for the calculation of health outcomes. Otherwise, sensitivity analyses are needed. Furthermore, transferring the results of foreign studies into the national context should be performed with caution. We recommend future studies to investigate the causes of differences in experience-based health state values through a single international study, possibly complemented with qualitative research on the determinants of valuation.
NASA Astrophysics Data System (ADS)
Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.
2016-03-01
Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging both data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine-scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classification methods, both pixel-based and object-based, including manual classification, the Maximum Likelihood Classifier, Jenks optimization clustering, textural analysis and Object-Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User and Producer Accuracies and the Kappa statistic) by cross-tabulating predicted and reference instances. Overall, pixel-based segmentations produced the highest accuracies, and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution, particularly within the context of the new tidal barrier being built. However, they had remained largely unexplored until now because of the surveying challenges. The application of this remote sensing approach, combined with targeted sampling, opens a new perspective on the monitoring of benthic habitats in view of a knowledge-based management of natural resources in shallow coastal areas.
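The accuracy measures named above follow standard formulas from a cross-tabulation of predicted versus reference classes; a minimal sketch with illustrative counts:

```python
# Overall, User's and Producer's Accuracies and Cohen's Kappa from a
# confusion matrix. Counts below are toy values, not from the paper.
import numpy as np

def accuracy_metrics(confusion):
    """confusion[i, j] = count of reference class i predicted as class j."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    diag = np.diag(c)
    overall = diag.sum() / total
    producers = diag / c.sum(axis=1)   # per reference class (omission errors)
    users = diag / c.sum(axis=0)       # per predicted class (commission errors)
    # Cohen's Kappa: agreement beyond what chance would predict.
    expected = (c.sum(axis=1) * c.sum(axis=0)).sum() / total**2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, users, producers, kappa

conf = [[50, 3, 2], [4, 40, 6], [1, 5, 45]]   # 3-class toy example
oa, ua, pa, k = accuracy_metrics(conf)
print(f"OA={oa:.3f}, kappa={k:.3f}")
```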
[How has social status been measured in health research? A review of the international literature].
Cabieses, Báltica; Zitko, Pedro; Pinedo, Rafael; Espinoza, Manuel; Albor, Christo
2011-06-01
Social status (SS) is a multidimensional variable that is used widely in health research. There is no single optimal method for estimating social status. Rather, in each case the measurement may vary depending on the research subject, the base theory considered, the population of interest, the event of interest and, in some cases, the available information. This literature review develops the following topics related to SS measurement, based on the international scientific sources available electronically: i) identification of the role of SS in the context of social epidemiology research, ii) description of the principal indicators and methodological approaches used to measure SS in health research, and iii) analysis of the distinct difficulties of SS measurement in specific populations such as ethnic groups, women, children, the elderly, and in rural vs. urban contexts. The review finally makes it possible to describe some of the implications of SS measurement in Latin American countries.
The Overestimation Phenomenon in a Skill-Based Gaming Context: The Case of March Madness Pools.
Kwak, Dae Hee
2016-03-01
Over 100 million people are estimated to take part in the NCAA Men's Basketball Tournament Championship bracket contests. However, relatively little is known about consumer behavior in skill-based gaming situations (e.g., sports betting). In two studies, we investigated the overestimation phenomenon in the "March Madness" context. In Study 1 (N = 81), we found that individuals who were allowed to make their own predictions were significantly more optimistic about their performance than individuals who did not make their own selections. In Study 2 (N = 197), all subjects participated in a mock competitive bracket pool. In line with the illusion of control theory, results showed that higher self-ratings of probability of winning significantly increased maximum willingness to wager but did not improve actual performance. Lastly, perceptions of high probability of winning significantly contributed to consumers' enjoyment and willingness to participate in a bracket pool in the future.
Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms
NASA Astrophysics Data System (ADS)
Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong
2012-03-01
As hyperspectral imaging (HSI) sees increased implementation in the biological and medical fields, it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to see a transition from benchtop to bedside, ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues, work has been done in generating a collection of datasets which could act as a test bed for algorithm validation. Using a microarray spot printer, a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG), are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, an algorithm should in principle, based on HSI estimates of abundances, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements, as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene, the so-called "sparse" representations provided by the LASSO are appropriate, as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement over the standard LASSO using the dye mixture data as the test bed.
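A minimal sketch of the core idea, sparse non-negative abundance estimation against a known endmember library via the LASSO; the spectra below are synthetic stand-ins, and the paper's spatial extension is omitted:

```python
# Per-pixel LASSO unmixing against a 3-endmember library (AR, BBR, EG in
# the paper; random stand-in spectra here).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_bands = 60
E = np.abs(rng.normal(size=(n_bands, 3)))       # stand-in endmember spectra
true_abund = np.array([0.7, 0.0, 0.3])          # sparse mixture
pixel = E @ true_abund + 0.01 * rng.normal(size=n_bands)

# positive=True enforces physically meaningful non-negative abundances;
# alpha controls the sparsity of the solution.
lasso = Lasso(alpha=1e-3, positive=True, fit_intercept=False)
lasso.fit(E, pixel)
print("estimated abundances:", lasso.coef_)
```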
Andersson, K G; Roed, J
2006-01-01
In nuclear preparedness, an essential requirement is the ability to adequately predict the likely consequences of a major accident situation. In this context it is very important to evaluate which contributions to dose are important, and which are not likely to have significance. As an example of this type of evaluation, a case study has been conducted to estimate the doses received over the first 17 years after the Chernobyl accident in a dry-contaminated residential area in the Bryansk region in Russia. Methodologies for estimation of doses received through nine different pathways, including contamination of streets, roofs, exterior walls, and landscape, are established, and best estimates are given for each of the dose contributions. Generally, contaminated soil areas were estimated to have given the highest dose contribution, but a number of other contributions to dose, e.g., from contaminated roofs and inhalation of contaminants during the passage of the contaminated plume, were of the same order of magnitude.
2014-01-01
Background Ukraine has one of the most severe HIV epidemics in Eastern Europe, with an estimated 1.6% of the adult population living with the virus. Injection drug use accounts for 36% of new HIV cases. Nongovernmental organizations in Ukraine have little experience with effective, theory-based behavioral risk reduction interventions necessary to reduce the scope of the HIV epidemic among Ukrainians who inject drugs. This study seeks to promote the use of evidence-based HIV prevention strategies among Ukrainian organizations working with drug users. Methods/design This study combines qualitative and quantitative methods to explore a model of HIV prevention intervention development and implementation that disseminates common factors of effective behavioral risk reduction interventions and enables service providers to develop programs that reflect their specific organizational contexts. Eight agencies, located in regions of Ukraine with the highest HIV and drug use rates and selected to represent key organizational context criteria (e.g., agency size, target population, experience with HIV prevention), will be taught common factors as the basis for intervention development. We will use qualitative methods, including interviews and observations, to document the process of intervention development and implementation at each agency. Using risk assessments with intervention participants, we will also assess intervention effectiveness. The primary outcome analyses will determine the extent to which agencies develop and implement an intervention for drug users that incorporates common factors of effective behavioral interventions. Effectiveness analyses will be conducted, and effect size of each intervention will be compared to that of published HIV prevention interventions for drug users with demonstrated effectiveness. This study will explore the role of organizational context on intervention development and implementation, including resource allocation decisions, problem-solving around intervention development, and barriers and facilitators to inclusion of common factors and delivery of a high quality intervention. Discussion This innovative approach to HIV prevention science dissemination and intervention development draws on providers’ ability to quickly develop innovative programs and reach populations in greatest need of services. It has the potential to enhance providers’ ability to use HIV prevention science to develop sustainable interventions in response to a rapidly changing epidemic. PMID:24491185
Issues in assessing the contribution of research and development to productivity growth
NASA Technical Reports Server (NTRS)
Griliches, Z.
1979-01-01
The article outlines the production function approach to the estimation of the returns to R&D and then proceeds to discuss in turn two very difficult problems: the measurement of output in R&D intensive industries and the definition and measurement of the stock of R&D 'capital'. Multicollinearity and simultaneity are taken up in the next section and another section is devoted to estimation and inference problems arising more specifically in the R&D context. Several recent studies of returns to R&D are then surveyed, and the paper concludes with suggestions for ways of expanding the current data base in this field.
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is beginning to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to the data from the clinical study, the MTU-PF shows a performance similar to the standard particle filter with respect to the quality of the estimated parameters, but it also proves to be less prone to degeneracy than the standard particle filter. PMID:23331521
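As a toy illustration of the key modelling idea, letting each particle carry its own sampled measurement time so that timing uncertainty enters the importance weights, consider the sketch below (a 1-D decay model with made-up numbers; the actual MTU-PF, with its adaptive stepsize, is considerably richer):

```python
# Single weighting step of a particle filter with uncertain measurement
# times: each particle draws its own time around the nominal one.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
k = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)  # particles: decay rate
x0 = 10.0
t_nominal, y_obs, sigma_y, sigma_t = 2.0, 3.9, 0.2, 0.1

# Sample per-particle measurement times (the MTU idea) and weight by the
# observation likelihood under each particle's state AND sampled time.
t = t_nominal + sigma_t * rng.normal(size=n)
y_pred = x0 * np.exp(-k * t)
w = np.exp(-0.5 * ((y_obs - y_pred) / sigma_y) ** 2)
w /= w.sum()

print("posterior mean decay rate:", np.sum(w * k))
# A full filter would follow this with systematic resampling.
```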
A spline-based non-linear diffeomorphism for multimodal prostate registration.
Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice
2012-08-01
This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively.
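The correspondence measure named above, the Bhattacharyya distance between shape-context histograms, can be sketched generically as follows (not the authors' code):

```python
# Bhattacharyya distance between two normalised histograms; smaller
# distance means a better candidate point correspondence.
import numpy as np

def bhattacharyya_distance(h1, h2, eps=1e-12):
    p = np.asarray(h1, float); p = p / (p.sum() + eps)
    q = np.asarray(h2, float); q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))     # Bhattacharyya coefficient, 1 = identical
    return -np.log(bc + eps)        # 0 for identical histograms

# Two toy shape-context histograms:
print(bhattacharyya_distance([2, 5, 3, 0], [1, 6, 3, 1]))
```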
Espirito Santo, Anelise; Choquette, Anne
2013-06-01
Diaper dermatitis is one of the most common skin problems in children, often caused by irritants that promote skin breakdown, such as moisture and faecal enzymes. It has been estimated that the incidence of diaper dermatitis is as high as 50% in children receiving chemotherapy. The scientific literature suggests a variety of preventative measures, but only a minority are systematically tested and supported by clinical evidence. The purpose of this paper is to adapt and implement a skincare guideline to better prevent diaper dermatitis in the paediatric oncology population. The Knowledge to Action process was used to guide the adaptation and implementation of the new guideline. As part of this process, different tools were used to identify and review selected knowledge (Appraisal of Guidelines Research Evaluation instrument), to tailor and adapt knowledge to the local context (ADAPTE process), to implement interventions (Registered Nurses' Association of Ontario toolkit) and to evaluate outcomes (qualitative analysis). The main outcomes measured included implementation of the guideline and nursing practice change. The guideline was successfully implemented, as reported by nurses in focus group sessions and as measured by changes in nursing documentation. The implementation of the guideline was successful on account of the interplay of three core elements: the level and nature of the evidence, the context in which the research was placed, and the manner in which the process was facilitated.
Human Reliability Assessments: Using the Past (Shuttle) to Predict the Future (Orion)
NASA Technical Reports Server (NTRS)
DeMott, Diana L.; Bigler, Mark A.
2017-01-01
NASA (National Aeronautics and Space Administration) Johnson Space Center (JSC) Safety and Mission Assurance (S&MA) uses two human reliability analysis (HRA) methodologies. The first is a simplified method which is based on how much time is available to complete the action, with consideration included for environmental and personal factors that could influence the human's reliability. This method is expected to provide a conservative value or placeholder as a preliminary estimate. This preliminary estimate or screening value is used to determine which placeholder needs a more detailed assessment. The second methodology is used to develop a more detailed human reliability assessment on the performance of critical human actions. This assessment considers more than the time available; it includes factors such as the importance of the action, the context, environmental factors, potential human stresses, previous experience, training, physical design interfaces, available procedures/checklists and internal human stresses. The more detailed assessment is expected to be more realistic than one based primarily on time available. When performing an HRA on a system or process that has an operational history, we have information specific to the task based on this history and experience. In the case of a Probabilistic Risk Assessment (PRA) that is based on a new design and has no operational history, providing a "reasonable" assessment of potential crew actions becomes more challenging. To determine what is expected of future operational parameters, input from individuals who had relevant experience and were familiar with the systems and processes previously implemented by NASA was used to provide the "best" available data. Personnel from Flight Operations, Flight Directors, Launch Test Directors, Control Room Console Operators, and Astronauts were all interviewed to provide a comprehensive picture of previous NASA operations. Verification of the assumptions and expectations expressed in the assessments will be needed when the procedures, flight rules, and operational requirements are developed and then finalized.
A Theory of Conditional Information with Applications.
1994-03-01
[The abstract of this record is garbled in the source by interleaved report-documentation boilerplate; only fragments are recoverable: conditional information is described as residing at the nexus of many intellectual subtleties that have come into scientific use; in the final analysis all information has a common context; the theory is useful in managing data bases and combining data; and there is all too little explicit distinction made between absolutely true statements and conditional ones.]
García Bengoechea, Enrique; Sabiston, Catherine M; Wilson, Philip M
2017-01-01
The aim of this study was to provide initial evidence of validity and reliability of scores derived from the Activity Context in Youth Sport Questionnaire (ACYSQ), an instrument designed to offer a comprehensive assessment of the activities adolescents take part in during sport practices. Two studies were designed for the purposes of item development and selection, and to provide evidence of structural and criterion validity of ACYSQ scores, respectively (N = 334; M age = 14.93, SD = 1.76 years). Confirmatory factor analysis (CFA) supported the adequacy of a 20-item ACYSQ measurement model, which was invariant across gender, and comprised the following dimensions: (1) stimulation; (2) usefulness-value; (3) authenticity; (4) repetition-boredom; and (5) ineffectiveness. Internal consistency reliability estimates and composite reliability estimates for ACYSQ subscale scores ranged from 0.72 to 0.91. In regression analyses, stimulation predicted enjoyment and perceived competence, ineffectiveness was significantly associated with perceived competence and authenticity emerged as a predictor of commitment in sport. These findings indicate that the ACYSQ displays adequate psychometric properties and the use of the instrument may be useful for studying selected activity-based features of the practice environment and their motivational consequences in youth sport.
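For orientation, one common composite-reliability estimator (McDonald's omega from standardised CFA loadings) is sketched below; the paper does not state which estimator it used, so treating omega as the measure here is an assumption.

```python
# McDonald's omega from standardised loadings. Residual variances
# theta = 1 - lambda^2 assume standardised indicators with no correlated
# errors; loadings below are illustrative, not from the ACYSQ.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, float)
    theta = 1.0 - lam**2                 # residual variances
    return lam.sum()**2 / (lam.sum()**2 + theta.sum())

print(round(composite_reliability([0.70, 0.80, 0.65, 0.75]), 2))  # ~0.82
```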
Brembs, Björn; Hempel de Ibarra, Natalie
2006-01-01
We have used a genetically tractable model system, the fruit fly Drosophila melanogaster to study the interdependence between sensory processing and associative processing on learning performance. We investigated the influence of variations in the physical and predictive properties of color stimuli in several different operant-conditioning procedures on the subsequent learning performance. These procedures included context and stimulus generalization as well as color, compound, and conditional discrimination (colors and patterns). A surprisingly complex dependence of the learning performance on the colors' physical and predictive properties emerged, which was clarified by taking into account the fly-subjective perception of the color stimuli. Based on estimates of the stimuli's color and brightness values, we propose that the different tasks are supported by different parameters of the color stimuli; generalization occurs only if the chromaticity is sufficiently similar, whereas discrimination learning relies on brightness differences.
Evidence and diagnostic reporting in the IHE context.
Loef, Cor; Truyen, Roel
2005-05-01
Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.
Atmospheric Aerosol Properties and Climate Impacts
NASA Technical Reports Server (NTRS)
Chin, Mian; Kahn, Ralph A.; Remer, Lorraine A.; Yu, Hongbin; Rind, David; Feingold, Graham; Quinn, Patricia K.; Schwartz, Stephen E.; Streets, David G.; DeCola, Phillip;
2009-01-01
This report critically reviews current knowledge about global distributions and properties of atmospheric aerosols, as they relate to aerosol impacts on climate. It assesses possible next steps aimed at substantially reducing uncertainties in aerosol radiative forcing estimates. Current measurement techniques and modeling approaches are summarized, providing context. As a part of the Synthesis and Assessment Product in the Climate Change Science Program, this assessment builds upon recent related assessments, including the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4, 2007) and other Climate Change Science Program reports. The objectives of this report are (1) to promote a consensus about the knowledge base for climate change decision support, and (2) to provide a synthesis and integration of the current knowledge of the climate-relevant impacts of anthropogenic aerosols for policy makers, policy analysts, and general public, both within and outside the U.S government and worldwide.
Cumulative effects of exposure to violence on posttraumatic stress in Palestinian and Israeli youth.
Dubow, Eric F; Boxer, Paul; Huesmann, L Rowell; Landau, Simha; Dvir, Shira; Shikaki, Khalil; Ginges, Jeremy
2012-01-01
We examine cumulative and prospective effects of exposure to conflict and violence across four contexts (ethnic-political, community, family, school) on posttraumatic stress (PTS) symptoms in Palestinian and Israeli youth. Interviews were conducted with 600 Palestinian and 901 Israeli (Jewish and Arab) children (ages 8, 11, and 14) and their parents once a year for 3 consecutive years. Palestinian children, males, and older youth were generally at greatest risk for exposure to conflict/violence across contexts. Regression analysis found unique effects of exposure to ethnic-political (Palestinian sample), school (Palestinian and Israeli Jewish samples), and family conflict/violence (Israeli Arab sample) during the first 2 years on PTS symptoms in Year 3, controlling for prior PTS symptoms. Cumulative exposure to violence in more contexts during the first 2 years predicted higher subsequent PTS symptoms than did exposure to violence in fewer contexts, and this was true regardless of the youth's level of prior PTS symptoms. These results highlight the risk that ongoing exposure to violence across multiple contexts in the social ecology poses for the mental health of children in contexts of ethnic-political violence. Researchers and mental health professionals working with war-exposed youth in a given cultural context must assess both war- and non-war-related stressors affecting youth. Based on this assessment, interventions may not be limited to individual-based, war-trauma-focused approaches but also may include school-based, community-based, and family-level interventions.
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Estimating effective data density in a satellite retrieval or an objective analysis
NASA Technical Reports Server (NTRS)
Purser, R. J.; Huang, H.-L.
1993-01-01
An attempt is made to formulate consistent objective definitions of the concept of 'effective data density' applicable both in the context of satellite soundings and more generally in objective data analysis. The definitions based upon various forms of Backus-Gilbert 'spread' functions are found to be seriously misleading in satellite soundings where the model resolution function (expressing the sensitivity of retrieval or analysis to changes in the background error) features sidelobes. Instead, estimates derived by smoothing the trace components of the model resolution function are proposed. The new estimates are found to be more reliable and informative in simulated satellite retrieval problems and, for the special case of uniformly spaced perfect observations, agree exactly with their actual density. The new estimates integrate to the 'degrees of freedom for signal', a diagnostic that is invariant to changes of units or coordinates used.
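For a linear, Tikhonov-regularised retrieval, the model resolution matrix and the 'degrees of freedom for signal' diagnostic mentioned above take a simple form; a toy sketch (the forward operator K below is random, purely for illustration):

```python
# Model resolution matrix A (x_hat = A x for y = Kx + e) and its trace,
# the 'degrees of freedom for signal'. Toy matrices only.
import numpy as np

rng = np.random.default_rng(2)
K = rng.normal(size=(15, 40))            # forward operator (toy)
lam = 1.0                                # regularisation strength

# A = (K^T K + lam I)^{-1} K^T K
A = np.linalg.solve(K.T @ K + lam * np.eye(40), K.T @ K)
dfs = np.trace(A)                        # degrees of freedom for signal
print("DFS:", dfs)
# Effective-data-density estimates of the kind proposed in the paper can
# be built by smoothing the trace (diagonal) components of A.
```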
Estimating Genetic Ancestry Proportions from Faces
Klimentidis, Yann C.; Shriver, Mark D.
2009-01-01
Ethnicity can be a means by which people identify themselves and others. This type of identification mediates many kinds of social interactions and may reflect adaptations to a long history of group living in humans. Recent admixture in the US between groups from different continents, and the historically strong emphasis on phenotypic differences between members of these groups, presents an opportunity to examine the degree of concordance between estimates of group membership based on genetic markers and on visually-based estimates of facial features. We first measured the degree of Native American, European, African and East Asian genetic admixture in a sample of 14 self-identified Hispanic individuals, chosen to cover a broad range of Native American and European genetic admixture proportions. We showed frontal and side-view photographs of the 14 individuals to 241 subjects living in New Mexico, and asked them to estimate the degree of NA admixture for each individual. We assess the overall concordance for each observer based on an aggregated measure of the difference between the observer and the genetic estimates. We find that observers reach a significantly higher degree of concordance than expected by chance, and that the degree of concordance as well as the direction of the discrepancy in estimates differs based on the ethnicity of the observer, but not on the observers' age or sex. This study highlights the potentially high degree of discordance between physical appearance and genetic measures of ethnicity, as well as how perceptions of ethnic affiliation are context-specific. We compare our findings to those of previous studies and discuss their implications. PMID:19223962
ERIC Educational Resources Information Center
Kulik, Anastasia; Neyaskina, Yuliya; Frizen, Marina; Shiryaeva, Olga; Surikova, Yana
2016-01-01
This article presents the results of a detailed empirical research, aimed at studying the quality of life in the context of extreme climatic, geographical and specific sociocultural living conditions. Our research is based on the methodological approach including social, economical, ecological and psychological characteristics and reflecting…
ERIC Educational Resources Information Center
Dupont, Serge; Galand, Benoit; Nils, Frédéric; Hospel, Virginie
2014-01-01
Introduction: The present study aimed to test a theoretically-based model (the self-system model of motivational development) including at the same time the extent to which the social context provides structure, warmth and autonomy support, the students' perceived autonomy, relatedness and competence, and behavioral, cognitive and emotional…
ERIC Educational Resources Information Center
Karns, Gary L.
2005-01-01
Many changes have occurred in the context of marketing education during the past decade, including the increased use of new technology-based and experiential pedagogies. To update the understanding of how students in advanced marketing courses perceive marketing pedagogies in this new context, a replication and extension of Karns's study of…
Lykes, M Brinton; Scheib, Holly
2016-01-01
Recovery from disaster and displacement involves multiple challenges including accompanying survivors, documenting effects, and rethreading community. This paper demonstrates how African-American and Latina community health promoters and white university-based researchers engaged visual methodologies and participatory action research (photoPAR) as resources in cross-community praxis in the wake of Hurricane Katrina and the flooding of New Orleans. Visual techniques, including but not limited to photonarratives, facilitated the health promoters': (1) care for themselves and each other as survivors of and responders to the post-disaster context; (2) critical interrogation of New Orleans' entrenched pre- and post-Katrina structural racism as contributing to the racialised effects of and responses to Katrina; and (3) meaning-making and performances of women's community-based, cross-community health promotion within this post-disaster context. This feminist antiracist participatory action research project demonstrates how visual methodologies contributed to the co-researchers' cross-community self- and other caring, critical bifocality, and collaborative construction of a contextually and culturally responsive model for women's community-based health promotion post 'unnatural disaster'. Selected limitations as well as the potential for future cross-community antiracist feminist photoPAR in post-disaster contexts are discussed.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
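A bare-bones version of the error-reduction iteration the method builds on, alternating between an imposed Fourier magnitude and the known pixels, might look like the following; patch selection and the magnitude estimation from similar patches are simplified away:

```python
# Error-reduction (Gerchberg-Saxton-style) inpainting of a patch given an
# estimated Fourier magnitude. Illustrative only; the demo cheats by using
# the true magnitude as the "estimate".
import numpy as np

def er_inpaint(patch, mask, target_mag, n_iter=200):
    """mask: True where intensities are known; target_mag: estimated |FFT|."""
    x = np.where(mask, patch, patch[mask].mean())   # init missing pixels
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = target_mag * np.exp(1j * np.angle(X))   # impose magnitude, keep phase
        x = np.real(np.fft.ifft2(X))
        x[mask] = patch[mask]                       # re-impose known pixels
    return x

patch = np.random.default_rng(7).random((16, 16))
mask = np.ones((16, 16), bool); mask[4:8, 4:8] = False
filled = er_inpaint(patch, mask, np.abs(np.fft.fft2(patch)))
```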
NASA Astrophysics Data System (ADS)
Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz
2010-09-01
This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
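One plausible reading of the 'weighted combination of the extracted pose parameters' is inverse-covariance fusion of the MOCCD and ICP estimates; the sketch below illustrates that reading only and is not the authors' exact iterative scheme:

```python
# Inverse-covariance (information-form) fusion of two pose estimates.
# Values are toy; in the paper the combination is applied iteratively.
import numpy as np

def fuse_poses(p1, C1, p2, C2):
    """p1, p2: pose parameter vectors; C1, C2: their covariance estimates."""
    W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)
    C = np.linalg.inv(W1 + W2)
    return C @ (W1 @ p1 + W2 @ p2), C

p_moccd = np.array([0.10, 0.25, 1.40])       # hypothetical pose parameters
p_icp = np.array([0.12, 0.20, 1.55])
C_moccd = np.diag([0.01, 0.02, 0.05])
C_icp = np.diag([0.02, 0.01, 0.02])
p_fused, C_fused = fuse_poses(p_moccd, C_moccd, p_icp, C_icp)
print(p_fused)
```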
Rolden, Herbert J A; van der Wilt, Gert Jan; Maas, Angela H E M; Grutters, Janneke P C
2018-06-18
As model-based economic evaluations (MBEEs) are widely used to make decisions in the context of policy, it is imperative that they represent clinical practice. Here, we assess the relevance of MBEEs on dabigatran for the prevention of stroke in patients with atrial fibrillation (AF). We performed a systematic review on the basis of a developed questionnaire, tailored to oral anticoagulation in patients with AF. Included studies had a full body text in English, compared dabigatran with a vitamin K antagonist, were not dedicated to one or more subgroup(s), and yielded an incremental cost-effectiveness ratio. The relevance of all MBEEs was assessed on the basis of ten context-independent factors, which encompassed clinical outcomes and treatment duration. The MBEEs performed for the United States were assessed on the basis of seventeen context-dependent factors, which were related to the country's target population and clinical environment. The search yielded twenty-nine MBEEs, of which six were performed for the United States. On average, 54 percent of the context-independent factors were included per study, and 37 percent of the seventeen context-dependent factors in the U.S. The share of relevant factors per study did not increase over time. MBEEs on dabigatran leave out several relevant factors, limiting their usefulness to decision makers. We strongly urge health economic researchers to improve the relevance of their MBEEs by including context-independent relevance factors, and modeling context-dependent factors befitting the decision context concerned.
Sensor data security level estimation scheme for wireless sensor networks.
Ramos, Alex; Filho, Raimir Holanda
2015-01-19
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.
Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.
2013-01-01
RSVP Keyboard™ is an electroencephalography (EEG) based brain computer interface (BCI) typing system, designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye gaze control. Existing BCI typing systems that use event-related potentials (ERPs) in EEG suffer from low accuracy due to a low signal-to-noise ratio. Hence, RSVP Keyboard™ uses context-based decision making, incorporating a language model, to improve the accuracy of letter decisions. To further improve the contributions of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis comparing it with the existing naïve Bayesian fusion approach. The results indicate the superiority of the recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
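The recursive Bayesian fusion idea, a language-model prior over symbols repeatedly updated with EEG-evidence likelihoods and no commitment until the posterior concentrates, reduces to a few lines; all numbers below are toy values:

```python
# Recursive Bayesian letter selection: language-model prior x EEG evidence.
import numpy as np

symbols = list("ABCD")                       # toy alphabet
posterior = np.array([0.4, 0.3, 0.2, 0.1])   # language-model prior P(symbol)

def update(posterior, likelihoods):
    post = posterior * likelihoods           # Bayes: prior x evidence
    return post / post.sum()

# EEG classifier scores for two successive RSVP sequences (toy values).
for like in ([1.2, 0.8, 2.5, 0.9], [1.1, 0.7, 3.0, 0.8]):
    posterior = update(posterior, np.array(like))

best = symbols[int(np.argmax(posterior))]
print(best, posterior.round(3))              # commit once the posterior peaks
```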
Knowledge-based processing for aircraft flight control
NASA Technical Reports Server (NTRS)
Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul
1994-01-01
This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called declarative and hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.
Context-based user grouping for multi-casting in heterogeneous radio networks
NASA Astrophysics Data System (ADS)
Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.
2011-08-01
Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
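A minimal sketch of the grouping step, under the assumption of k-means clustering over numeric context features (the feature set below is hypothetical, not the service's actual schema):

```python
# Cluster users on context features, then multi-cast per group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Rows: users; columns e.g. [latitude, longitude, speed, link quality].
X = rng.normal(size=(200, 4))

X_scaled = StandardScaler().fit_transform(X)   # comparable feature scales
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(groups))                     # group sizes for multi-casting
```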
NASA Astrophysics Data System (ADS)
Näsi, R.; Viljanen, N.; Kaivosoja, J.; Hakala, T.; Pandžić, M.; Markelin, L.; Honkavaara, E.
2017-10-01
Multispectral and hyperspectral imagery is usually acquired from satellite and aircraft platforms. Recently, miniaturized hyperspectral 2D frame cameras have shown great potential for precision agriculture estimation, and they can be combined with lightweight platforms such as drones. The drone platform is a flexible tool for remote sensing applications in environmental and agricultural contexts. Assessing and comparing different platforms (satellite, aircraft and drone) carrying different sensors, such as hyperspectral and RGB cameras, is an important task for understanding the potential of the data they provide and for selecting the most appropriate combination for a user's applications and requirements. In this context, open and permanent test fields are a significant and helpful experimental environment, since they provide comparative data for different platforms, sensors and users, and allow multi-temporal analyses as well. The objective of this work was to investigate the feasibility of an open permanent test field in the context of precision agriculture. Satellite (Sentinel-2), aircraft and drone datasets with hyperspectral and RGB cameras were assessed in this study to estimate biomass, using linear regression models and in-situ samples. Spectral data and 3D information were used and compared in different combinations to investigate the quality of the models. The biomass estimation accuracies using linear regression models were better than 90 % for the drone-based datasets. The results showed that using spectral and 3D features together improved the estimation model. However, estimation of nitrogen content was less accurate with the evaluated remote sensing sensors. The open and permanent test field proved suitable for providing accurate and reliable reference data to commercial users and farmers.
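The estimation setup described, a linear regression from combined spectral and 3D features to in-situ biomass samples, can be sketched as follows, with synthetic data standing in for the test-field measurements:

```python
# Linear biomass regression on spectral + 3D (canopy height) features.
# All data below are synthetic placeholders for the in-situ samples.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_plots = 40
spectral = rng.uniform(0.2, 0.9, size=(n_plots, 3))  # e.g. NDVI-like indices
height = rng.uniform(0.1, 0.8, size=(n_plots, 1))    # e.g. CHM mean per plot
X = np.hstack([spectral, height])                    # spectral + 3D features
y = 2.0 * X[:, 0] + 3.0 * X[:, 3] + 0.05 * rng.normal(size=n_plots)

model = LinearRegression()
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```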
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, all uncertainty estimates are model dependent, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
Context-based retrieval of functional modules in protein-protein interaction networks.
Dobay, Maria Pamela; Stertz, Silke; Delorenzi, Mauro
2017-03-27
Various techniques have been developed for identifying the most probable interactants of a protein under a given biological context. In this article, we dissect the effects of the choice of the protein-protein interaction network (PPI) and the manipulation of PPI settings on the network neighborhood of the influenza A virus (IAV) network, as well as hits in genome-wide small interfering RNA screen results for IAV host factors. We investigate the potential of context filtering, which uses text mining evidence linked to PPI edges, as a complement to the edge confidence scores typically provided in PPIs for filtering, for obtaining more biologically relevant network neighborhoods. Here, we estimate the maximum performance of context filtering to isolate a Kyoto Encyclopedia of Genes and Genomes (KEGG) network K_i from a union of KEGG networks and its network neighborhood. The work gives insights on the use of human PPIs in network neighborhood approaches for functional inference.
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method for estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in the vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
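The EGI construction at the heart of the method is a histogram of surface-normal directions on the unit sphere; a generic sketch follows (normal estimation from the raw point cloud is omitted and normals are assumed given):

```python
# Extended Gaussian image as an azimuth/elevation histogram of normals.
import numpy as np

def egi_histogram(normals, n_az=18, n_el=9):
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    az = np.arctan2(n[:, 1], n[:, 0])           # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(n[:, 2], -1, 1))     # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el, bins=[n_az, n_el],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist / hist.sum()

normals = np.random.default_rng(5).normal(size=(1000, 3))  # toy normals
print(egi_histogram(normals).shape)             # (18, 9) direction histogram
```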
NASA Astrophysics Data System (ADS)
Drummond, J. D.; Conrad, A.; Merline, W. J.; Carry, B.; Chapman, C. R.; Weaver, H. A.; Tamblyn, P. M.; Christou, J. C.; Dumas, C.
2010-11-01
Context. Asteroid (21) Lutetia was the target of the ESA Rosetta mission flyby in 2010 July. Aims: We seek the best size estimates of the asteroid, the direction of its spin axis, and its bulk density, assuming its shape is well described by a smooth featureless triaxial ellipsoid. We also aim to evaluate the deviations from this assumption. Methods: We derive these quantities from the outlines of the asteroid in 307 images of its resolved apparent disk obtained with adaptive optics (AO) at Keck II and VLT, and combine these with recent mass determinations to estimate a bulk density. Results: Our best triaxial ellipsoid diameters for Lutetia, based on our AO images alone, are a × b × c = 132 × 101 × 93 km, with uncertainties of 4 × 3 × 13 km including estimated systematics, with a rotational pole within 5° of ECJ2000 (λ, β) = (45°, −7°), or EQJ2000 (RA, Dec) = (44°, +9°). The AO model fit itself has internal precisions of 1 × 1 × 8 km, but it is evident, both from this model derived from limited viewing aspects and from the radius vector model given in a companion paper, that Lutetia departs significantly from an idealized ellipsoid. In particular, the long axis may be overestimated from the AO images alone by about 10 km. Therefore, we combine the best aspects of the radius vector and ellipsoid models into a hybrid ellipsoid model, as our final result, with diameters of 124 ± 5 × 101 ± 4 × 93 ± 13 km, which can be used to estimate volumes, sizes, and projected areas. The adopted pole position is within 5° of (λ, β) = (52°, −6°) or (RA, Dec) = (52°, +12°). Conclusions: Using two separately determined masses and the volume of our hybrid model, we estimate a density of 3.5 ± 1.1 or 4.3 ± 0.8 g cm⁻³. From the density evidence alone, we argue that this favors an enstatite-chondrite composition, although other compositions are formally allowed at the extremes (low-porosity CV/CO carbonaceous chondrite or high-porosity metallic). We discuss this in the context of other evidence. Based on observations collected at the W. M. Keck Observatory and the European Southern Observatory Very Large Telescope (program ID: 079.C-0493, PI: E. Dotto). The W. M. Keck Observatory is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
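The density arithmetic is easy to reproduce: the ellipsoid volume follows from the hybrid-model diameters, and density from a mass estimate. The mass below is an illustrative placeholder, not a value taken from the paper:

```python
# Triaxial-ellipsoid volume from the hybrid-model diameters, then density
# for a given mass. The mass is a hypothetical placeholder.
import numpy as np

d = np.array([124.0, 101.0, 93.0])                # hybrid diameters, km
volume = (4.0 / 3.0) * np.pi * np.prod(d / 2.0)   # km^3

mass = 2.1e18                             # kg, illustrative only
density = mass / (volume * 1e9)           # km^3 -> m^3; result in kg m^-3
print(f"V = {volume:.3e} km^3, rho = {density / 1000:.2f} g cm^-3")
```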
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables; the redundant parameters are also set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
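The least-squares adaptive elastic net that the paper extends can be sketched as a two-step procedure, a pilot fit that supplies data-dependent weights followed by a weighted refit; the GMM extension to endogenous nonlinear systems is not reproduced here:

```python
# Two-step adaptive elastic net (least-squares version, as in Zou and
# Zhang 2009). Weighted L1 penalties are implemented by column rescaling.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])  # sparse truth
y = X @ beta + 0.5 * rng.normal(size=n)

# Step 1: pilot elastic-net estimate.
pilot = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y).coef_

# Step 2: adaptive weights w_j = |pilot_j|^(-gamma); rescaling columns by
# 1/w_j makes the uniform penalty act as a weighted one.
gamma, eps = 1.0, 1e-4
w = 1.0 / (np.abs(pilot) + eps) ** gamma
X_tilde = X / w
fit = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X_tilde, y)
beta_hat = fit.coef_ / w                 # undo the rescaling
print(beta_hat.round(2))                 # redundant coefficients go to zero
```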
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Kim, Tae-Goun
2009-10-01
This article develops a dynamic model of efficient use of exhaustible marine sand resources in the context of marine mining externalities. The classical Hotelling extraction model is applied to sand mining in Ongjin, Korea and extended to include the estimated marginal external costs that mining imposes on marine fisheries. The socially efficient sand extraction plan is compared with the extraction paths suggested by scientific research. If marginal environmental costs are correctly estimated, the developed efficient extraction plan considering the resource rent may increase the social welfare and reduce the conflicts among the marine sand resource users. The empirical results are interpreted with an emphasis on guidelines for coastal resource management policy.
Art critic: Multisignal vision and speech interaction system in a gaming context.
Reale, Michael J; Liu, Peng; Yin, Lijun; Canavan, Shaun
2013-12-01
True immersion of a player within a game can only occur when the world simulated looks and behaves as close to reality as possible. This implies that the game must correctly read and understand, among other things, the player's focus, attitude toward the objects/persons in focus, gestures, and speech. In this paper, we proposed a novel system that integrates eye gaze estimation, head pose estimation, facial expression recognition, speech recognition, and text-to-speech components for use in real-time games. Both the eye gaze and head pose components utilize underlying 3-D models, and our novel head pose estimation algorithm uniquely combines scene flow with a generic head model. The facial expression recognition module uses the local binary patterns with three orthogonal planes approach on the 2-D shape index domain rather than the pixel domain, resulting in improved classification. Our system has also been extended to use a pan-tilt-zoom camera driven by the Kinect, allowing us to track a moving player. A test game, Art Critic, is also presented, which not only demonstrates the utility of our system but also provides a template for player/non-player character (NPC) interaction in a gaming context. The player alters his/her view of the 3-D world using head pose, looks at paintings/NPCs using eye gaze, and makes an evaluation based on the player's expression and speech. The NPC artist will respond with facial expression and synthetic speech based on its personality. Both qualitative and quantitative evaluations of the system are performed to illustrate the system's effectiveness.
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9, averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations. We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher-quality models, which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g., CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
ERIC Educational Resources Information Center
Jackson, Dan; Bowden, Jack; Baker, Rose
2015-01-01
Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
Representing Plant Hydraulics in a Global Model: Updates to the Community Land Model
NASA Astrophysics Data System (ADS)
Kennedy, D.; Swenson, S. C.; Oleson, K. W.; Lawrence, D. M.; Fisher, R.; Gentine, P.
2017-12-01
In previous versions, the Community Land Model has used soil moisture to stand in for plant water status, with transpiration and photosynthesis driven directly by soil water potential. This eschews significant literature demonstrating the importance of plant hydraulic traits in the dynamics of water flow through the soil-plant-atmosphere continuum and in the regulation of stomatal aperture. In this study we install a simplified hydraulic framework to represent vegetation water potential and to regulate root water uptake and turbulent fluxes. Plant hydraulics allow for a more explicit representation of plant water status, which improves the physical basis for many processes represented in CLM. This includes root water uptake and the attenuation of photosynthesis and transpiration with drought. Model description is accompanied by results from a point simulation based at the Caxiuanã flux tower site in Eastern Amazonia, covering a throughfall exclusion experiment from 2001-2003. Including plant hydraulics improves the response to drought forcing compared to previous versions of CLM. Parameter sensitivity is examined at the same site and presented in the context of estimating hydraulic parameters in a global model.
Serious Mental Illness and Nursing Home Quality of Care
Rahman, Momotazur; Grabowski, David C; Intrator, Orna; Cai, Shubing; Mor, Vincent
2013-01-01
Objective To estimate the effect of a nursing home's share of residents with a serious mental illness (SMI) on the quality of care. Data Sources Secondary nursing home level data over the period 2000 through 2008 obtained from the Minimum Data Set, OSCAR, and Medicare claims. Study Design We employ an instrumental variables approach to address the potential endogeneity of the share of SMI residents in nursing homes in a model including nursing home and year fixed effects. Principal Findings An increase in the share of SMI nursing home residents positively affected the hospitalization rate among non-SMI residents and negatively affected staffing skill mix and level. We did not observe a statistically significant effect on inspection-based health deficiencies or the hospitalization rate for SMI residents. Conclusions Across the majority of indicators, a greater SMI share resulted in lower nursing home quality. Given the increased prevalence of nursing home residents with SMI, policy makers and providers will need to adjust practices in the context of this new patient population. Reforms may include more stringent preadmission screening, new regulations, reimbursement changes, and increased reporting and oversight. PMID:23278400
NASA Astrophysics Data System (ADS)
Levesque, M.
Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update object orbital parameters. The real satellite positions are provided by the detection of satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology comprises several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
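For illustration, the streak-detection chain above can be sketched compactly. This is a minimal, single-pass sketch rather than the paper's iterative implementation, and it omits star removal and the false-alarm rejection stage; the kernel length, orientation sampling, and threshold are assumed placeholder values.

```python
import numpy as np
from scipy.ndimage import convolve, median_filter, rotate

def detect_streaks(image, length=31, n_angles=18, k_sigma=5.0):
    """Single-pass matched filter for linear streaks.

    Mirrors the pipeline described above: background estimation/removal,
    correlation with a thin line kernel at several orientations, then
    thresholding against a robust noise estimate.
    """
    # 1. Background estimation and removal (coarse median filter).
    residual = image - median_filter(image, size=65)

    # 2. Matched filter: a normalized line kernel rotated over orientations.
    kernel = np.zeros((length, length))
    kernel[length // 2, :] = 1.0 / length
    response = np.full(image.shape, -np.inf)
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        k = rotate(kernel, angle, reshape=False, order=1)
        response = np.maximum(response, convolve(residual, k, mode="nearest"))

    # 3. Threshold: averaging `length` pixels shrinks noise by sqrt(length).
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    return response > k_sigma * sigma / np.sqrt(length)  # detection mask
```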
Xiang, Yezi; Huang, Chien-Hsun; Hu, Yi; Wen, Jun; Li, Shisheng; Yi, Tingshuang; Chen, Hongyi; Xiang, Jun; Ma, Hong
2017-02-01
Fruits are the defining feature of angiosperms; they have likely contributed to angiosperm success by protecting and dispersing seeds, provide food to humans and other animals, and occur in many morphological types with important ecological and agricultural implications. Rosaceae is a family with ∼3000 species and an extraordinary spectrum of distinct fruits, including the fleshy peach, apple, and strawberry prized by their consumers, as well as the dry achenetum and follicetum with features facilitating seed dispersal, making the family excellent for studying fruit evolution. To address Rosaceae fruit evolution and other questions, we generated 125 new transcriptomic and genomic datasets and identified hundreds of nuclear genes to reconstruct a well-resolved Rosaceae phylogeny with highly supported monophyly of all subfamilies and tribes. Molecular clock analysis revealed an estimated age of ∼101.6 Ma for crown Rosaceae and divergence times of tribes and genera, providing a geological and climate context for fruit evolution. Phylogenomic analysis yielded strong evidence for numerous whole genome duplications (WGDs), supporting the hypothesis that the apple tribe had a WGD and revealing another one shared by fleshy fruit-bearing members of this tribe, with moderate support for WGDs in the peach tribe and other groups. Ancestral character reconstruction for fruit types supports independent origins of fleshy fruits from dry-fruit ancestors, including the evolution of drupes (e.g., peach) and pomes (e.g., apple) from the follicetum, and of the drupetum (raspberry and blackberry) from the achenetum. We propose that WGDs and environmental factors, including animals, contributed to the evolution of the many fruits in Rosaceae, which provide a foundation for understanding fruit evolution. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC
2008-01-01
Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods that aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e., restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype and is then used as the reference distribution in the gap statistics. This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4], and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method was implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes towards the regulation of cell shape, it efficiently discovered meaningful new phenotypes and provided novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM, so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets; our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations in starting conditions including the number and composition of existing phenotypes, and datasets from different screens. Based on our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
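As a rough illustration of the GMM step described above, the sketch below fits one Gaussian component per known phenotype and flags low-likelihood cells in new images as candidate novel phenotypes. It is a simplified stand-in for the full method (no iterative cluster merging or gap statistics), and the feature arrays, component count, and quantile threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_candidate_novel(known_features, new_features, n_phenotypes, q=0.01):
    """Fit one Gaussian component per known phenotype, then flag cells in
    new images whose log-likelihood falls below a low quantile of the
    training scores as candidate novel phenotypes."""
    gmm = GaussianMixture(n_components=n_phenotypes,
                          covariance_type="full", random_state=0)
    gmm.fit(known_features)
    threshold = np.quantile(gmm.score_samples(known_features), q)
    return gmm.score_samples(new_features) < threshold  # True -> candidate
```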
ERIC Educational Resources Information Center
Artino, Anthony R., Jr.
2007-01-01
The present report presents an annotated bibliography of peer-reviewed articles that employed theories of self-regulation to understand how adults learn in various contexts. Seven scholarly articles, published between 2000 and 2006, were reviewed and summarized. Articles reviewed include (1) Self-regulation in a Web-based Course: A Case Study (J.…
Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data.
Dalponte, Michele; Coomes, David A
2016-10-01
Forests are a major component of the global carbon cycle, and accurate estimation of forest carbon stocks and fluxes is important in the context of anthropogenic global change. Airborne laser scanning (ALS) data sets are increasingly recognized as outstanding data sources for high-fidelity mapping of carbon stocks at regional scales. We develop a tree-centric approach to carbon mapping, based on identifying individual tree crowns (ITCs) and species from airborne remote sensing data, from which individual tree carbon stocks are calculated. We identify ITCs from the laser scanning point cloud using a region-growing algorithm and identify species from airborne hyperspectral data by machine learning. For each detected tree, we predict stem diameter from its height and crown-width estimate. From that point on, we use well-established approaches developed for field-based inventories: above-ground biomasses of trees are estimated using published allometries and summed within plots to estimate carbon density. We show this approach is highly reliable: tests in the Italian Alps demonstrated a close relationship between field- and ALS-based estimates of carbon stocks (r² = 0.98). Small trees are invisible from the air, and a correction factor is required to accommodate this effect. An advantage of the tree-centric approach over existing area-based methods is that it can produce maps at any scale and is fundamentally based on field-based inventory methods, making it intuitive and transparent. Airborne laser scanning, hyperspectral sensing and computational power are all advancing rapidly, making it increasingly feasible to use ITC approaches for effective mapping of forest carbon density, including within wider carbon mapping programs such as REDD+.
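The per-tree pipeline (height and crown width → stem diameter → allometric biomass → carbon) is simple to express in code. The sketch below is illustrative only: all coefficients are hypothetical placeholders, not the fitted values or published allometries used in the paper.

```python
import numpy as np

def tree_carbon_kg(height_m, crown_width_m,
                   a=0.8, b=0.7, c=0.3,      # hypothetical DBH model coefficients
                   beta0=-2.0, beta1=2.4,    # hypothetical allometry coefficients
                   carbon_fraction=0.47):
    """Carbon stock of one detected tree from ALS-derived height and crown width.

    Step 1: predict stem diameter (cm) from height and crown width.
    Step 2: allometry of the usual form AGB = exp(beta0) * DBH**beta1 (kg).
    Step 3: convert biomass to carbon with a fixed carbon fraction.
    """
    dbh_cm = a * height_m**b * crown_width_m**c
    agb_kg = np.exp(beta0) * dbh_cm**beta1
    return carbon_fraction * agb_kg

# Plot-level carbon density = sum over detected trees / plot area, plus a
# small-tree correction factor for trees invisible to the laser scanner.
```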
BECon: a tool for interpreting DNA methylation findings from blood in the context of brain.
Edgar, R D; Jones, M J; Meaney, M J; Turecki, G; Kobor, M S
2017-08-01
Tissue differences are one of the largest contributors to variability in the human DNA methylome. Despite the tissue-specific nature of DNA methylation, the inaccessibility of human brain samples necessitates the frequent use of surrogate tissues such as blood in studies of associations between DNA methylation and brain function and health. Results from studies of surrogate tissues in humans are difficult to interpret in this context, as the connection between blood and brain DNA methylation is tenuous and not well-documented. Here, we aimed to provide a resource to the community to aid interpretation of blood-based DNA methylation results in the context of brain tissue. We used paired samples from 16 individuals from three brain regions and whole blood, run on the Illumina 450K Human Methylation Array, to quantify the concordance of DNA methylation between tissues. From these data, we have made available metrics on: the variability of cytosine-phosphate-guanine dinucleotides (CpGs) in our blood and brain samples, the concordance of CpGs between blood and brain, and estimations of how strongly a CpG is affected by cell composition in both blood and brain, through the web application BECon (Blood-Brain Epigenetic Concordance; https://redgar598.shinyapps.io/BECon/). We anticipate that BECon will enable biological interpretation of blood-based human DNA methylation results in the context of brain.
Evaluation of Biomonitoring Data from the CDC National ...
BACKGROUND: Biomonitoring data reported in the National Report on Human Exposure to Environmental Chemicals (NER) provide information on the presence and concentrations of more than 400 chemicals in human blood and urine. Biomonitoring Equivalents (BEs) and other risk assessment-based values now allow interpretation of these biomonitoring data in a public health risk context. OBJECTIVES: Compare the measured biomarker concentrations in the NER with BEs and similar risk assessment values to provide an across-chemical risk assessment perspective on the measured levels for approximately 130 analytes in the NER. METHODS: Available risk assessment-based biomarker screening values, including BEs and Human Biomonitoring-I (HBM-I) values from the German Human Biomonitoring Commission, were identified. Geometric mean and 95th percentile population biomarker concentrations from the NER were compared to the available screening values to generate chemical-specific hazard quotients (HQ) or cancer risk estimates. CONCLUSIONS: Several analytes in the NER approach or exceed HQ values of 1 or cancer risks greater than 1 × 10⁻⁴ at the geometric mean or 95th percentile, suggesting exposure levels exceed what is considered safe in a large fraction of the population. Analytes of concern include acrylamide, dioxin-like chemicals, benzene, xylene, several metals, di(2-ethylhexyl) phthalate, and some legacy organochlorine pesticides. This analysis provides for the first time a mean
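The underlying comparison is a simple ratio; a minimal sketch with illustrative numbers (not NER data):

```python
def hazard_quotient(biomarker_concentration, screening_value):
    """HQ = measured biomarker concentration / risk-based screening value
    (e.g., a Biomonitoring Equivalent). HQ >= 1 flags a population metric
    at or above the level underpinning the risk assessment."""
    return biomarker_concentration / screening_value

# Illustrative numbers only: a 95th-percentile level of 2.0 against a BE of
# 1.5 (same units) gives HQ ~ 1.33.
print(round(hazard_quotient(2.0, 1.5), 2))
```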
NASA Astrophysics Data System (ADS)
Joe, Y. J.; Seokhoon, Y.; Nam, S. I.; Polyak, L.; Niessen, F.
2017-12-01
For regional context of the Quaternary history of Arctic marine glaciations, such as glacial events in northern North America and on the Siberian and Chukchi margins, we used CHIRP sub-bottom profiles (SBP) along with sediment cores, including a 14-m long piston core ARA06-04JPC taken from the Chukchi abyssal plain during the RV Araon expedition in 2015. Based on core correlation with earlier developed Arctic Ocean stratigraphies using distribution of various sedimentary proxies, core 04JPC is estimated to extend to at least Marine Isotope Stage 13 (>0.5 Ma). The stratigraphy developed for SBP lines from the Chukchi abyssal plain to surrounding slopes can be divided into four major seismostratigraphic units (SSU 1-4). SBP records from the abyssal plain show well preserved stratification, whereas on the surrounding slopes this pattern is disrupted by lens-shaped, acoustically transparent sedimentary bodies interpreted as glaciogenic debris flow deposits. Based on the integration of sediment physical property and SBP data, we conclude that these debris flows were generated during several ice-sheet grounding events on the Chukchi and East Siberian margins, including adjacent ridges and plateaus, during the middle to late Quaternary.
Pseudo-Boltzmann model for modeling the junctionless transistors
NASA Astrophysics Data System (ADS)
Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.
2014-05-01
Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities at high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
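The over-estimation at issue can be checked numerically by comparing the Boltzmann expression with the exact Fermi-Dirac result. The sketch below uses the standard textbook relations n ∝ exp(η) versus n ∝ (2/√π)F₁/₂(η); it is not the paper's Pseudo-Boltzmann approximation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def fermi_dirac_half(eta):
    """Complete Fermi-Dirac integral of order 1/2 by quadrature;
    expit(eta - x) = 1 / (1 + exp(x - eta)) avoids overflow."""
    value, _ = quad(lambda x: np.sqrt(x) * expit(eta - x), 0.0, np.inf)
    return value

def boltzmann_over_fd(eta):
    """Ratio of the Boltzmann carrier density exp(eta) to the Fermi-Dirac
    density (2/sqrt(pi)) * F_{1/2}(eta), with eta = (E_F - E_c)/kT."""
    return np.exp(eta) / ((2.0 / np.sqrt(np.pi)) * fermi_dirac_half(eta))

# Ratio grows well above 1 as doping pushes the material into degeneracy.
for eta in (-4.0, 0.0, 4.0):  # non-degenerate -> strongly degenerate
    print(f"eta = {eta:+.0f}: n_Boltzmann / n_FD = {boltzmann_over_fd(eta):.2f}")
```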
Setting population targets for mammals using body mass as a predictor of population persistence.
Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J
2017-04-01
Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGMs) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient, in terms of quality of the parameter estimates, than random METs composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.
The Crosstown initiative: art, community, and placemaking in Memphis.
Thomas, Elizabeth; Pate, Sarah; Ranson, Anna
2015-03-01
This case study examines an arts organization at the center of an urban neighborhood revitalization effort and its contributions to creative placemaking and inclusive community building. The study documents innovative arts practices and explores their meaning for a local context, an understudied city in the Mid-South region of the United States. It builds on the research team's ongoing work as teachers, students, and scholars in partnership with the arts organization. It includes systematic participant observation, interviews with stakeholders, and a review of historical and contemporary media coverage. We found that the organization and its practices provided a rich context for exploring an expanded sense of community including bridging social capital and place-based frameworks. Analysis suggests that the organization's intentional arts based practices bring multiple understandings of community and art into meaningful dialogue through the generation of creative and social friction. These practices illustrate one context-specific strategy addressing the tensions in a community-diversity dialectic (Townley et al. in Am J Commun Psychol 47:69-85, 2011).
Li, Miao; Anderson, James G
2016-08-01
Drawing on the life course perspective and the assumptive world theory, this paper examines whether pre-migration trauma exposure is associated with psychological distress through post-migration perceived discrimination for Asian American immigrants. The study is based on cross-sectional data from the National Latino and Asian American Study (N = 1639). A structural equation model is used to estimate the relationship between pre-migration trauma, post-migration perceived discrimination, and psychological distress. Additional models are estimated to explore possible variations across ethnic groups as well as across different types of pre-migration trauma experience. Pre-migration trauma exposure is associated with higher levels of psychological distress, both directly and indirectly through a higher level of perceived discrimination, even after controlling for demographic/acculturative factors and post-migration trauma exposure. This pattern holds for the following sub-types of pre-migration trauma: political trauma, crime victimization, physical violence, accidental trauma, and relational trauma. Multi-group analyses show that this pattern holds for all Asian immigrant subgroups except the Vietnamese. Studies of immigrant mental health primarily focus on post-migration stressors; few studies have considered the link between pre- and post-migration contexts in assessing mental health outcomes. The study illustrates the usefulness of bridging the pre- and post-migration contexts in identifying the mental health risks along the immigrant life course.
Nakagawa, Fumiyo
2017-01-28
Migrants account for a significant number of people living with HIV in Europe, and it is important to fully consider this population in national estimates. Using a novel approach with the UK as an example, we present key public health measures of the HIV epidemic, taking into account both in-country infections and infections likely to have been acquired abroad. An individual-based stochastic simulation model was calibrated to extensive, routinely collected surveillance data in the UK. Data on the number of new HIV diagnoses, the number of deaths, CD4 cell count at diagnosis, time of arrival into the UK for migrants, and the annual number of people receiving care were used. An estimated 106 400 (90% plausibility range: 88 700-124 600) people were living with HIV in the UK in 2013. Twenty-three percent of these people, 24 600 (15 000-36 200), were estimated to be undiagnosed; this number has remained stable over the last decade. An estimated 32% of the total undiagnosed population had a CD4 cell count less than 350 cells/μl in 2013. An estimated 25% of black African heterosexual men and 23% of black African heterosexual women living with HIV were undiagnosed. We have shown a working example of how to characterize the HIV population in a European context that incorporates migrants from countries with generalized epidemics. Despite all aspects of HIV care being free and widely available to anyone in need in the UK, there is still a substantial number of people who are not yet diagnosed and thus not in care.
Age Assessment in Children: A Novel Cameriere's Stratagem.
Attiguppe, Prabhakar Ramasetty; Yavagal, Chandrashekar; Maganti, Rekhamani; Mythri, P
2016-01-01
Age is one of the essential factors in establishing the identity of a person, especially in children. Age estimation plays an important part in treatment planning, forensic dentistry, legal issues, and paleodemographic research. The present study was an attempt to estimate the chronological age in children of the Davangere population by using Cameriere's India-specific formula. This was a retrospective observational study to estimate the chronological age in children of the Davangere population. A total of 150 panoramic radiographs of patients aged between 6 and 15 years, including both sexes, were selected. Age was calculated by measuring the open apices of seven right or left mandibular teeth using Adobe Photoshop software. Statistical analysis was performed to derive a regression equation for estimation of age, which showed that, of the variables X1, X2, X3, X4, X5, X6, X7, s, and N0, the variables N0 and X4 were statistically noteworthy. Hence, these two variables were used to derive the linear regression formula: Age = 10.522 + 0.712(N0) − 5.040(X4). The model was found to be statistically significant, F(2, 147) = 207.96, p < 0.001, and accounted for approximately 74% of the variance of age (R² = 0.739, adjusted R² = 0.735). Cameriere's method can be used for age assessment in children in forensic as well as legal contexts, and based on these variables a reliable age estimation equation could be proposed specifically for the Davangere population. Attiguppe PR, Yavagal C, Maganti R, Mythri P. Age Assessment in Children: A Novel Cameriere's Stratagem. Int J Clin Pediatr Dent 2016;9(4):330-334.
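The reported regression is straightforward to apply. A minimal sketch follows, with the coefficients taken from the abstract; the reading of N0 as the count of examined teeth with closed apices and X4 as the normalized open-apex measurement of tooth 4 follows Cameriere's method generally and should be treated as an assumption here.

```python
def estimated_age_years(n0, x4):
    """Davangere-specific regression from the study:
    Age = 10.522 + 0.712 * N0 - 5.040 * X4."""
    return 10.522 + 0.712 * n0 - 5.040 * x4

# Example: 5 teeth with closed apices and a normalized open-apex value of
# 0.1 for tooth 4 gives an estimated age of about 13.6 years.
print(round(estimated_age_years(5, 0.1), 1))
```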
Inverse MDS: Inferring Dissimilarity Structure from Multiple Item Arrangements
Kriegeskorte, Nikolaus; Mur, Marieke
2012-01-01
The pairwise dissimilarities of a set of items can be intuitively visualized by a 2D arrangement of the items, in which the distances reflect the dissimilarities. Such an arrangement can be obtained by multidimensional scaling (MDS). We propose a method for the inverse process: inferring the pairwise dissimilarities from multiple 2D arrangements of items. Perceptual dissimilarities are classically measured using pairwise dissimilarity judgments. However, alternative methods including free sorting and 2D arrangements have previously been proposed. The present proposal is novel (a) in that the dissimilarity matrix is estimated by “inverse MDS” based on multiple arrangements of item subsets, and (b) in that the subsets are designed by an adaptive algorithm that aims to provide optimal evidence for the dissimilarity estimates. The subject arranges the items (represented as icons on a computer screen) by means of mouse drag-and-drop operations. The multi-arrangement method can be construed as a generalization of simpler methods: It reduces to pairwise dissimilarity judgments if each arrangement contains only two items, and to free sorting if the items are categorically arranged into discrete piles. Multi-arrangement combines the advantages of these methods. It is efficient (because the subject communicates many dissimilarity judgments with each mouse drag), psychologically attractive (because dissimilarities are judged in context), and can characterize continuous high-dimensional dissimilarity structures. We present two procedures for estimating the dissimilarity matrix: a simple weighted-aligned-average of the partial dissimilarity matrices and a computationally intensive algorithm, which estimates the dissimilarity matrix by iteratively minimizing the error of MDS-predictions of the subject’s arrangements. The Matlab code for interactive arrangement and dissimilarity estimation is available from the authors upon request. PMID:22848204
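The simpler of the two estimation procedures, the weighted-aligned average, might be sketched as follows. This is an illustrative reading of the idea rather than the authors' code, with unit-RMS rescaling standing in for the alignment step.

```python
import numpy as np

def aligned_average_dissimilarity(partials, n_items):
    """Weighted-aligned average of partial dissimilarity matrices.

    `partials` is a list of (item_indices, D) pairs: the subset of items in
    one arrangement and their pairwise screen distances. Each partial matrix
    is rescaled to unit RMS distance before averaging, because absolute
    screen distances are not comparable across arrangements.
    """
    total = np.zeros((n_items, n_items))
    count = np.zeros((n_items, n_items))
    for idx, D in partials:
        D = np.asarray(D, dtype=float)
        rms = np.sqrt(np.mean(D[np.triu_indices_from(D, k=1)] ** 2))
        ix = np.ix_(idx, idx)
        total[ix] += D / rms
        count[ix] += 1
    with np.errstate(invalid="ignore"):
        return total / count  # NaN where a pair never appeared together
```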
Allen, Carrie; Zarowitz, Barbara; O'Shea, Terrence; Peterson, Edward; Yonan, Charles; Waterman, Fanta
Pseudobulbar Affect (PBA) is a neurologic condition characterized by involuntary outbursts of crying and/or laughing disproportionate to patient mood or social context. Although an estimated 9% of nursing home residents have symptoms suggestive of PBA, they are not routinely screened. Our goal was to develop an electronic screening tool based upon characteristics common to nursing home residents with PBA identified through medical record data. Nursing home residents with PBA treated with dextromethorphan hydrobromide/quinidine sulfate (n = 140) were compared to age-, gender-, and dementia-diagnosis-matched controls without PBA or treatment (n = 140). Comparative categories included diagnoses, medication use and symptom documentation. Using a multivariable regression and best decision rule analysis, we found PBA in nursing home residents was associated with chart documentation of uncontrollable crying, presence of a neurologic disorder (e.g., Parkinson's disease), or by the documented presence of at least 2 of the following: stroke, severe cognitive impairment, and schizophrenia. Based on these risk factors, an electronic screening tool was created. Copyright © 2017 Elsevier Inc. All rights reserved.
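The reported decision rule translates directly into a screening predicate; a minimal sketch, assuming boolean flags extracted from chart review:

```python
def pba_screen_positive(uncontrollable_crying, neurologic_disorder,
                        stroke, severe_cognitive_impairment, schizophrenia):
    """Decision rule from the study: flag a resident when the chart documents
    uncontrollable crying, a neurologic disorder (e.g., Parkinson's disease),
    or at least two of {stroke, severe cognitive impairment, schizophrenia}."""
    two_of_three = sum((stroke, severe_cognitive_impairment, schizophrenia)) >= 2
    return uncontrollable_crying or neurologic_disorder or two_of_three
```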
The FAOSTAT Emissions database for AFOLU: Updates for 1961-2015
NASA Astrophysics Data System (ADS)
Tubiello, F.
2017-12-01
The FAO has computed GHG emissions for agriculture and land use since 2012. Data are disseminated in FAOSTAT, with country detail and global coverage, based on IPCC 2006 Tier 1 methods and underlying FAOSTAT activity data, complemented by geospatial maps for specific land use/land cover dynamics. Methods for capacity development with countries based on the FAOSTAT approach are discussed in the context of supporting the enhanced transparency framework of the Paris Agreement. New updates to 2015 are discussed, including findings on peat emissions and initial projections of land use emissions to 2030 for Southeast Asia. It is found that considering the time dynamics of land use change in Indonesia relative to palm oil cultivation results in a doubling of the earlier estimates of emissions from drained organic soils that had informed the IPCC AR5. The 2030 projections show that, within a scenario approach, the bulk of the AFOLU emissions increase in Southeast Asia is also linked primarily to palm oil dynamics, with a diminishing impact towards 2030 due to market dynamics.
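At Tier 1, each category reduces to activity data multiplied by a default emission factor; a minimal sketch with hypothetical numbers (not FAOSTAT values):

```python
def tier1_emissions(activity_data, emission_factor):
    """IPCC 2006 Tier 1: emissions = activity data x default emission factor.
    Example category: area of drained organic soils (ha) x tCO2e/ha/yr."""
    return activity_data * emission_factor

# Hypothetical illustration: 100 000 ha of drained organic soil at an assumed
# factor of 55 tCO2e/ha/yr.
print(tier1_emissions(100_000, 55.0), "tCO2e per year")
```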
Sedentary behaviors in fifth-grade boys and girls: where, with whom, and why?
Taverno Ross, Sharon E; Byun, Wonwoo; Dowda, Marsha; McIver, Kerry L; Saunders, Ruth P; Pate, Russell R
2013-12-01
An understanding of the context surrounding screen- and non-screen-based sedentary behavior would facilitate efforts to reduce children's overall sedentary behavior. This study examined the prevalence of specific types of sedentary behavior in children, the social and physical contexts surrounding these behaviors, and differences by gender. Participants included 686 fifth graders participating in the Transitions and Activity Changes in Kids Study (TRACK). The Physical Activity Choices instrument measured child participation in seven sedentary behaviors, the social (i.e., with whom) and physical (i.e., where) contexts, and perceptions (i.e., why) of those behaviors. Analysis included mixed-model regression adjusted for race/ethnicity, BMI, and socioeconomic status. Children participated in both screen- and non-screen-based sedentary behaviors at very high frequencies. The most popular activities included watching television or videos, listening to music, playing video games (boys only), and talking on the phone or texting (girls only). Children engaged in sedentary behaviors most often at home, at school, or in their neighborhood. In general, the patterns of social context for the behaviors were similar for boys and girls, with the exception of video game playing. Girls perceived listening to music and talking on the phone or texting to be more fun than boys; children did not differ in their other perceptions (i.e., how much choice or how important) of the behaviors. Multi-level interventions that target reducing sedentary behavior in the home, neighborhood, and school context may be most effective; however, the approach needed will likely differ by gender.
Visualization and Ontology of Geospatial Intelligence
NASA Astrophysics Data System (ADS)
Chan, Yupo
Recent events have deepened our conviction that many human endeavors are best described in a geospatial context. This is evidenced in the prevalence of location-based services, as afforded by ubiquitous cell phone usage. It is also manifested by the popularity of such internet engines as Google Earth. As we commute to work or travel on business or pleasure, we make decisions based on the geospatial information provided by such location-based services. When corporations devise their business plans, they also rely heavily on such geospatial data. By definition, local, state and federal governments provide services according to geographic boundaries. One estimate suggests that 85 percent of data contain spatial attributes.
Implementing the undergraduate mini-CEX: a tailored approach at Southampton University.
Hill, Faith; Kendall, Kathleen; Galbraith, Kevin; Crossley, Jim
2009-04-01
The mini-clinical evaluation exercise (mini-CEX) is widely used in the UK to assess clinical competence, but there is little evidence regarding its implementation in the undergraduate setting. This study aimed to estimate the validity and reliability of the undergraduate mini-CEX and discuss the challenges involved in its implementation. A total of 3499 mini-CEX forms were completed. Validity was assessed by estimating associations between mini-CEX score and a number of external variables, examining the internal structure of the instrument, checking competency domain response rates and profiles against expectations, and by qualitative evaluation of stakeholder interviews. Reliability was evaluated by overall reliability coefficient (R), estimation of the standard error of measurement (SEM), and from stakeholders' perceptions. Variance component analysis examined the contribution of relevant factors to students' scores. Validity was threatened by various confounding variables, including: examiner status; case complexity; attachment specialty; patient gender, and case focus. Factor analysis suggested that competency domains reflect a single latent variable. Maximum reliability can be achieved by aggregating scores over 15 encounters (R = 0.73; 95% confidence interval [CI] +/- 0.28 based on a 6-point assessment scale). Examiner stringency contributed 29% of score variation and student attachment aptitude 13%. Stakeholder interviews revealed staff development needs but the majority perceived the mini-CEX as more reliable and valid than the previous long case. The mini-CEX has good overall utility for assessing aspects of the clinical encounter in an undergraduate setting. Strengths include fidelity, wide sampling, perceived validity, and formative observation and feedback. Reliability is limited by variable examiner stringency, and validity by confounding variables, but these should be viewed within the context of overall assessment strategies.
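The quoted reliability of R = 0.73 over 15 encounters follows the usual logic that averaging over encounters shrinks the error variance. A minimal sketch, with illustrative variance components chosen to reproduce the quoted figure (not the study's actual components):

```python
def reliability(var_student, var_error, n_encounters):
    """Reliability of a mean score over n encounters:
    R = Vs / (Vs + Ve / n), where Vs is true between-student variance
    and Ve is the residual error variance of a single encounter."""
    return var_student / (var_student + var_error / n_encounters)

# Illustrative values: with Ve/Vs = 5.5, one encounter gives R ~ 0.15,
# while averaging 15 encounters gives R ~ 0.73, in line with the abstract.
print(round(reliability(1.0, 5.5, 1), 2), round(reliability(1.0, 5.5, 15), 2))
```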
Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil
2017-05-09
The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence protecting the other from this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact these may have on the transmission of HPV. Our findings indicate that all the included models might be affected by some degree of ecological bias, which implies an inability to reflect natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of the cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination and highlight the concern that public healthcare policy might have been built upon incomplete studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Testing the Bouchet-Morton Complementary Hypothesis at Harvard Forest using Sap Flux Data
NASA Astrophysics Data System (ADS)
Pettijohn, J. C.; Salvucci, G. D.; Phillips, N. G.; Daley, M. J.
2005-12-01
The Bouchet-Morton Complementary Relationship (CR) states that at a given surface moisture availability (MA), changes in actual evapotranspiration (ETa) are reflected in changes in potential evapotranspiration (ETp) such that ETa + ETp = 2ET0, where ET0 is an assumed equilibrium evaporation condition at which ETa = ETp = ET0 at maximum MA. Whereas ETp conceptually includes a potential transpiration component, existing CR model estimates of ETp are based upon the Penman combination equation for open water evaporation (ETp,Pen). Recent CR investigations for a temperate grassland at FIFE suggest, however, that the convergence between ETa and ETp,Pen will only occur if a maximum canopy conductance is included in the estimation of ETp. The purpose of this study was to conduct a field investigation at Harvard Forest to test the hypothesis that a CR-type relationship should occur between red maple (Acer rubrum L.) actual transpiration and red maple potential transpiration, i.e., transpiration given unlimited root-zone MA via localized irrigation. Just as pan evaporation (ETp,Pen) is a physical gauge of ETp, we therefore question whether a well-irrigated maple is a potential transpirator. Daily averages of whole-tree transpiration for our co-occurring irrigated red maple network and reference network were calculated using high-frequency constant-heat sap flux sensor (i.e., Granier-type) measurements. Soil moisture, temperature and matric potential parameters were measured using Campbell Scientific sensors. Preliminary results suggest that the relationship between potential and actual transpiration differs significantly from that between ETa and ETp,Pen in the context of the CR, adding useful insight into both ETp estimation and the understanding of physiological response to MA variability.
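For reference, the complementary relationship fixes any one of the three fluxes given the other two; a trivial sketch of that bookkeeping:

```python
def actual_et(et_potential, et_equilibrium):
    """Bouchet-Morton complementarity, ETa + ETp = 2 * ET0, solved for ETa.
    In an experiment like the one above, ETp is gauged by the irrigated
    (potential) transpirators and ETa by the reference trees."""
    return 2.0 * et_equilibrium - et_potential
```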
CAREX Canada: an enhanced model for assessing occupational carcinogen exposure
Peters, Cheryl E; Ge, Calvin B; Hall, Amy L; Davies, Hugh W; Demers, Paul A
2015-01-01
Objectives To estimate the numbers of workers exposed to known and suspected occupational carcinogens in Canada, building on the methods of CARcinogen EXposure (CAREX) projects in the European Union (EU). Methods CAREX Canada consists of estimates of the prevalence and level of exposure to occupational carcinogens. CAREX Canada includes occupational agents evaluated by the International Agency for Research on Cancer as known, probable or possible human carcinogens that were present and feasible to assess in Canadian workplaces. A Canadian Workplace Exposure Database was established to identify the potential for exposure in particular industries and occupations, and to create exposure level estimates among priority agents, where possible. CAREX EU data were reviewed for relevance to the Canadian context and the proportion of workers likely to be exposed by industry and occupation in Canada was assigned using expert assessment and agreement by a minimum of two occupational hygienists. These proportions were used to generate prevalence estimates by linkage with the Census of Population for 2006, and these estimates are available by industry, occupation, sex and province. Results CAREX Canada estimated the number of workers exposed to 44 known, probable and suspected carcinogens. Estimates of levels of exposure were further developed for 18 priority agents. Common exposures included night shift work (1.9 million exposed), solar ultraviolet radiation exposure (1.5 million exposed) and diesel engine exhaust (781 000 exposed). Conclusions A substantial proportion of Canadian workers are exposed to known and suspected carcinogens at work. PMID:24969047
Mishra, Sharmistha; Mountain, Elisa; Pickles, Michael; Vickerman, Peter; Shastri, Suresh; Gilks, Charles; Dhingra, Nandini K; Washington, Reynold; Becker, Marissa L; Blanchard, James F; Alary, Michel; Boily, Marie-Claude
2014-01-01
To compare the potential population-level impact of expanding antiretroviral treatment (ART) in HIV epidemics concentrated among female sex workers (FSWs) and clients, with and without existing condom-based FSW interventions. Mathematical model of heterosexual HIV transmission in south India. We simulated HIV epidemics in three districts to assess the 10-year impact of existing ART programs (ART eligibility at CD4 cell count ≤350) beyond that achieved with high condom use, and the incremental benefit of expanding ART by either increasing ART eligibility, improving access to care, or prioritizing ART expansion to FSWs/clients. Impact was estimated in the total population (including FSWs and clients). In the presence of existing condom-based interventions, existing ART programs (medium-to-good coverage) were predicted to avert 11-28% of remaining HIV infections between 2014 and 2024. Increasing eligibility to all risk groups prevented an incremental 1-15% over existing ART programs, compared with 29-53% when maximizing access to all risk groups. If there was no condom-based intervention, and only poor ART coverage, then expanding ART prevented a larger absolute number but a smaller relative fraction of HIV infections for every additional person-year of ART. Across districts and baseline interventions, for every additional person-year of treatment, prioritizing access to FSWs was most efficient (and resource saving), followed by prioritizing access to FSWs and clients. The relative and absolute benefit of ART expansion depends on baseline condom use, ART coverage, and epidemic size. In south India, maximizing FSWs' access to care, followed by maximizing clients' access are the most efficient ways to expand ART for HIV prevention, across baseline intervention context.
Methods for measuring utilization of mental health services in two epidemiologic studies
NOVINS, DOUGLAS K.; BEALS, JANETTE; CROY, CALVIN; MANSON, SPERO M.
2015-01-01
Objectives of Study Psychiatric epidemiologic studies often include two or more sets of questions regarding service utilization, but the agreement across these different questions and the factors associated with their endorsement have not been examined. The objectives of this study were to describe the agreement of different sets of mental health service utilization questions that were included in the American Indian Service Utilization Psychiatric Epidemiology Risk and Protective Factors Project (AI-SUPERPFP), and compare the results to similar questions included in the baseline National Comorbidity Survey (NCS). Methods Responses to service utilization questions by 2878 AI-SUPERPFP and 5877 NCS participants were examined by calculating estimates of service use and agreement (κ) across the different sets of questions. Logistic regression models were developed to identify factors associated with endorsement of specific sets of questions. Results In both studies, estimates of mental health service utilization varied across the different sets of questions. Agreement across the different question sets was marginal to good (κ = 0.27–0.69). Characteristics of identified service users varied across the question sets. Limitations Neither survey included data to examine the validity of participant responses to service utilization questions. Recommendations for Further Research Question wording and placement appear to impact estimates of service utilization in psychiatric epidemiologic studies. Given the importance of these estimates for policy-making, further research into the validity of survey responses as well as impacts of question wording and context on rates of service utilization is warranted. PMID:18767205
[Neglected infectious diseases: an ongoing challenge for public health and equity in Peru].
Cabezas-Sánchez, César
2014-04-01
Neglected Infectious Diseases (NID) affect more than one billion people worldwide, and are associated with poverty, geographic isolation of populations, social stigma, lack of precise data on estimates on both the global and local burden of disease (underreporting of the diseases), inadequate financial and political resources to effective control measures, lack of lobbying on behalf of the most vulnerable population, as well as scarce drug and diagnostic methods development. In this article we describe the relationship between NID, poverty and inequality, we propose a new concept of disease in the tropics, expanding the list of diseases that share characteristics with NID in the Peruvian context, discuss the limited availability of drugs and diagnostic tests to properly deal with these diseases, as well as highlight the contributions by the Peruvian National Institute of Health, and as final thoughts, we state that the solution for the prevention and control of NID must include an integrated approach, including the social determinants of health in the context of the fight against poverty and inequality.
On Estimating End-to-End Network Path Properties
NASA Technical Reports Server (NTRS)
Allman, Mark; Paxson, Vern
1999-01-01
The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
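The estimators in question are the classic SRTT/RTTVAR exponentially-weighted moving averages, later codified in RFC 6298. A minimal sketch follows; note the last line, where the minimum clamp and timer granularity, the factors the paper finds dominant, override the EWMA arithmetic:

```python
import math

class RtoEstimator:
    """EWMA retransmission-timeout estimator (Jacobson/Karels style, as later
    codified in RFC 6298). Parameter defaults follow that RFC."""

    def __init__(self, alpha=1/8, beta=1/4, granularity=0.1, rto_min=1.0):
        self.alpha, self.beta = alpha, beta
        self.g, self.rto_min = granularity, rto_min
        self.srtt = None

    def sample(self, rtt):
        """Fold in one round-trip time measurement; return the new RTO (s)."""
        if self.srtt is None:                  # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:                                  # EWMA updates (RTTVAR first)
            self.rttvar += self.beta * (abs(self.srtt - rtt) - self.rttvar)
            self.srtt += self.alpha * (rtt - self.srtt)
        rto = self.srtt + max(self.g, 4 * self.rttvar)
        # Round up to the timer granularity and clamp to the minimum RTO.
        return max(self.rto_min, self.g * math.ceil(rto / self.g))
```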
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey: one distinguishes top-down from bottom-up methods, and the other distinguishes generative from discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003