Dahabreh, Issa J.; Sheldrick, Radley C.; Paulus, Jessica K.; Chung, Mei; Varvarigou, Vasileia; Jafri, Haseeb; Rassen, Jeremy A.; Trikalinos, Thomas A.; Kitsios, Georgios D.
2012-01-01
Aims Randomized controlled trials (RCTs) are the gold standard for assessing the efficacy of therapeutic interventions because randomization protects from biases inherent in observational studies. Propensity score (PS) methods, proposed as a potential solution to confounding of the treatment–outcome association, are widely used in observational studies of therapeutic interventions for acute coronary syndromes (ACS). We aimed to systematically assess agreement between observational studies using PS methods and RCTs on therapeutic interventions for ACS. Methods and results We searched for observational studies of interventions for ACS that used PS methods to estimate treatment effects on short- or long-term mortality. Using a standardized algorithm, we matched observational studies to RCTs based on patients’ characteristics, interventions, and outcomes (‘topics’), and we compared estimates of treatment effect between the two designs. When multiple observational studies or RCTs were identified for the same topic, we performed a meta-analysis and used the summary relative risk for comparisons. We matched 21 observational studies investigating 17 distinct clinical topics to 63 RCTs (median = 3 RCTs per observational study) for short-term (7 topics) and long-term (10 topics) mortality. Estimates from PS analyses differed statistically significantly from randomized evidence in two instances; however, observational studies reported more extreme beneficial treatment effects compared with RCTs in 13 of 17 instances (P = 0.049). Sensitivity analyses limited to large RCTs, and using alternative meta-analysis models yielded similar results. Conclusion For the treatment of ACS, observational studies using PS methods produce treatment effect estimates that are of more extreme magnitude compared with those from RCTs, although the differences are rarely statistically significant. PMID:22711757
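As background for readers less familiar with the propensity score workflow referenced in this abstract, the following is a minimal, illustrative sketch in Python (synthetic data and invented variable names, not the study's actual analysis): a logistic model estimates each patient's probability of treatment, treated patients are matched 1:1 to the nearest-propensity controls, and a crude risk ratio is computed in the matched sample.

```python
# Minimal sketch (not the authors' code): 1:1 nearest-neighbor matching on the
# propensity score with synthetic data, then a crude matched risk ratio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)
# Treatment assignment depends on covariates (confounding by indication).
p_treat = 1 / (1 + np.exp(-(-0.5 - 0.03 * (age - 65) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)
# Outcome (short-term mortality) depends on covariates and a true treatment effect.
p_death = 1 / (1 + np.exp(-(-3 + 0.05 * (age - 65) + 0.7 * severity - 0.5 * treated)))
death = rng.binomial(1, p_death)

X = np.column_stack([age, severity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy 1:1 nearest-neighbor matching without replacement on the PS.
t_idx = np.where(treated == 1)[0]
available = set(np.where(treated == 0)[0])
pairs = []
for i in t_idx:
    if not available:
        break
    j = min(available, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    available.remove(j)

t_rows = np.array([i for i, _ in pairs])
c_rows = np.array([j for _, j in pairs])
rr = death[t_rows].mean() / death[c_rows].mean()
print(f"Matched risk ratio: {rr:.2f}")
```

In practice, covariate balance would be checked after matching and the outcome analysis would account for the matched design; matching is only one of several PS methods (stratification, weighting, and covariate adjustment are others).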
Bayesian data analysis in observational comparative effectiveness research: rationale and examples.
Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M
2013-11-01
Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.
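As a generic illustration of the kind of Bayesian analysis the authors advocate (not drawn from their studies), a conjugate beta-binomial comparison of event rates in two treatment groups can be written in a few lines; all counts below are hypothetical.

```python
# Illustrative sketch only: a conjugate beta-binomial comparison of two
# treatments' event rates, one simple way Bayesian methods can be applied
# to observational outcome data (not taken from the studies described above).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed counts: events / patients in each treatment group.
events_a, n_a = 30, 200
events_b, n_b = 45, 210

# Beta(1, 1) priors; the posterior for each rate is Beta(events + 1, non-events + 1).
post_a = rng.beta(events_a + 1, n_a - events_a + 1, size=100_000)
post_b = rng.beta(events_b + 1, n_b - events_b + 1, size=100_000)

print(f"P(rate_A < rate_B | data) = {(post_a < post_b).mean():.3f}")
print(f"95% credible interval for risk difference: "
      f"{np.percentile(post_b - post_a, [2.5, 97.5]).round(3)}")
```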
Borah, Bijan J; Moriarty, James P; Crown, William H; Doshi, Jalpa A
2014-01-01
Propensity score (PS) methods have proliferated in recent years in observational studies in general and in observational comparative effectiveness research (CER) in particular. PS methods are an important set of tools for estimating treatment effects in observational studies, enabling adjustment for measured confounders in an easy-to-understand and transparent way. This article demonstrates how PS methods have been used to address specific CER questions from 2001 through to 2012 by identifying six impactful studies from this period. This article also discusses areas for improvement, including data infrastructure, and a unified set of guidelines in terms of PS implementation and reporting, which will boost confidence in evidence generated through observational CER using PS methods.
A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.
ERIC Educational Resources Information Center
Kissel, Mary Ann
The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) Independent non-uniform channel noise, b) Independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method for including internal noise in their Hotelling models.
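A minimal sketch of the second (decision-variable) noise-injection approach is shown below, assuming a simple signal-known-exactly detection task in white noise; the images, signal profile, and noise levels are invented for illustration and do not reproduce the paper's angiography backgrounds or channel models.

```python
# Minimal sketch (not the paper's implementation): a Hotelling-style linear
# observer on synthetic data, with internal noise added to the decision
# variable in proportion to its standard deviation under external noise alone.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_trials = 64, 5000

signal = np.zeros(n_pix)
signal[28:36] = 1.0                      # simple known signal profile
noise_sd = 4.0

absent = rng.normal(0, noise_sd, (n_trials, n_pix))
present = absent + signal                # signal-known-exactly, white noise

# For white noise the Hotelling template is proportional to the signal itself.
template = signal
dv_absent = absent @ template
dv_present = present @ template

def d_prime(a, p):
    return (p.mean() - a.mean()) / np.sqrt(0.5 * (a.var() + p.var()))

print(f"d' without internal noise: {d_prime(dv_absent, dv_present):.2f}")

# Internal noise: SD set to alpha times the external-noise SD of the decision variable.
alpha = 1.0
internal_sd = alpha * dv_absent.std()
dv_absent_i = dv_absent + rng.normal(0, internal_sd, n_trials)
dv_present_i = dv_present + rng.normal(0, internal_sd, n_trials)
print(f"d' with internal noise (alpha={alpha}): {d_prime(dv_absent_i, dv_present_i):.2f}")
```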
Morgan, Sonya J; Pullon, Susan R H; Macdonald, Lindsay M; McKinlay, Eileen M; Gray, Ben V
2017-06-01
Case study research is a comprehensive method that incorporates multiple sources of data to provide detailed accounts of complex research phenomena in real-life contexts. However, current models of case study research do not particularly distinguish the unique contribution observation data can make. Observation methods have the potential to reach beyond other methods that rely largely or solely on self-report. This article describes the distinctive characteristics of case study observational research, a modified form of Yin's 2014 model of case study research the authors used in a study exploring interprofessional collaboration in primary care. In this approach, observation data are positioned as the central component of the research design. Case study observational research offers a promising approach for researchers in a wide range of health care settings seeking more complete understandings of complex topics, where contextual influences are of primary concern. Future research is needed to refine and evaluate the approach.
ERIC Educational Resources Information Center
Shih, Jeffrey C.; Ing, Marsha; Tarr, James E.
2013-01-01
One method to investigate classroom quality is for a person to observe what is happening in the classroom. However, this method raises practical and technical concerns such as how many observations to collect, when to collect these observations and who should collect these observations. The purpose of this study is to provide empirical evidence to…
Asan, Onur; Montague, Enid
2015-01-01
Objective The purpose of this paper is to describe the use of video-based observation research methods in the primary care environment and highlight important methodological considerations and provide practical guidance for primary care and human factors researchers conducting video studies to understand patient-clinician interaction in primary care settings. Methods We reviewed studies in the literature which used video methods in health care research, and we also used our own experience based on the video studies we conducted in primary care settings. Results This paper highlighted the benefits of using video techniques such as multi-channel recording and video coding and compared “unmanned” video recording with the traditional observation method in primary care research. We proposed a list, which can be followed step by step to conduct an effective video study in a primary care setting for a given problem. This paper also described obstacles researchers should anticipate when using video recording methods in future studies. Conclusion With the new technological improvements, video-based observation research is becoming a promising method in primary care and HFE research. Video recording has been under-utilized as a data collection tool because of confidentiality and privacy issues. However, it has many benefits as opposed to traditional observations, and recent studies using video recording methods have introduced new research areas and approaches. PMID:25479346
NASA Astrophysics Data System (ADS)
Nuruki, Atsuo; Shimozono, Tomoyuki; Kawabata, Takuro; Yamada, Masafumi; Yunokuchi, Kazutomo; Maruyama, Atsuo
Recently, observational learning has often been said to be one means of promoting the acquisition of sports and athletic skills. We hypothesized that inexperienced people can acquire athletic skills more efficiently by using an expert's way of observing as a guide during observational learning. In the present study, we therefore compared the gaze characteristics of experts and inexperienced people and examined whether the expert's observational method can serve as a guide for inexperienced observers. The gaze characteristics analyzed were gaze transitions, total gaze movement distance, gazing duration, and the gaze movement distance and radial velocity between gaze points. Additionally, we investigated whether physical performance changed before and after observational learning in two different observational learning groups (an expert's-observational-method group and a free-observation group). The results showed that the experts concentrated their observation on a specific part of the movement, whereas the inexperienced people observed the entire movement. Moreover, the experts' total gaze movement distance was shorter, and their gazing duration longer, than those of the inexperienced people. These results indicate that the experts efficiently focused their observation on the key points of the movement. In addition, the inexperienced people improved their physical performance through observational learning, and the expert's-observational-method group improved more than the free-observation group. We therefore suggest that the expert's observational method can be used as a guide for how inexperienced people should observe.
Laenkholm, Anne-Vibeke; Grabau, Dorthe; Møller Talman, Maj-Lis; Balslev, Eva; Bak Jylling, Anne Marie; Tabor, Tomasz Piotr; Johansen, Morten; Brügmann, Anja; Lelkaitis, Giedrius; Di Caterino, Tina; Mygind, Henrik; Poulsen, Thomas; Mertz, Henrik; Søndergaard, Gorm; Bruun Rasmussen, Birgitte
2018-01-01
In 2011, the St. Gallen Consensus Conference introduced the use of pathology to define the intrinsic breast cancer subtypes by application of immunohistochemical (IHC) surrogate markers ER, PR, HER2 and Ki67 with a specified Ki67 cutoff (>14%) for luminal B-like definition. Reports concerning impaired reproducibility of Ki67 estimation and threshold inconsistency led to the initiation of this quality assurance study (2013-2015). The aim of the study was to investigate inter-observer variation for Ki67 estimation in malignant breast tumors by two different quantification methods (assessment method and count method), including a measure of agreement between methods. Fourteen experienced breast pathologists from 12 pathology departments evaluated 118 slides from a consecutive series of malignant breast tumors. The staining interpretation was performed according to both the Danish and Swedish guidelines. Reproducibility was quantified by the intra-class correlation coefficient (ICC) and Light's kappa with dichotomization of observations at the >20% threshold. The agreement between observations by the two quantification methods was evaluated by a Bland-Altman plot. For the fourteen raters the median ranged from 20% to 40% by the assessment method and from 22.5% to 36.5% by the count method. Light's kappa was 0.664 for observation by the assessment method and 0.649 by the count method. The ICC was 0.82 (95% CI: 0.77-0.86) by the assessment method vs. 0.84 (95% CI: 0.80-0.87) by the count method. Although the study in general showed moderate to good inter-observer agreement according to both the ICC and Light's kappa, major discrepancies were identified, especially in the mid-range of observations. Consequently, for now Ki67 estimation is not implemented in the DBCG treatment algorithm.
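For readers unfamiliar with Light's kappa, the sketch below (synthetic ratings, not the study's data) shows one common way to compute it: dichotomize each rater's Ki67 estimates at the >20% threshold and average Cohen's kappa over all rater pairs.

```python
# Illustrative sketch with made-up ratings (not the study's data): Light's kappa
# computed as the mean of all pairwise Cohen's kappas after dichotomizing
# Ki67 estimates at the >20% threshold.
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
n_slides, n_raters = 118, 14

# Simulated Ki67 percentages: a slide-specific "true" value plus rater scatter.
true_ki67 = rng.uniform(5, 60, n_slides)
ratings = true_ki67[:, None] + rng.normal(0, 8, (n_slides, n_raters))

binary = (ratings > 20).astype(int)      # dichotomize at the >20% threshold

kappas = [cohen_kappa_score(binary[:, i], binary[:, j])
          for i, j in combinations(range(n_raters), 2)]
print(f"Light's kappa (mean pairwise Cohen's kappa): {np.mean(kappas):.3f}")
```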
BAGLIO, MICHELLE L.; BAXTER, SUZANNE DOMEL; GUINN, CAROLINE H.; THOMPSON, WILLIAM O.; SHAFFER, NICOLE M.; FRYE, FRANCESCA H. A.
2005-01-01
This article (a) provides a general review of interobserver reliability (IOR) and (b) describes our method for assessing IOR for items and amounts consumed during school meals for a series of studies regarding the accuracy of fourth-grade children's dietary recalls validated with direct observation of school meals. A widely used validation method for dietary assessment is direct observation of meals. Although many studies utilize several people to conduct direct observations, few published studies indicate whether IOR was assessed. Assessment of IOR is necessary to determine that the information collected does not depend on who conducted the observation. Two strengths of our method for assessing IOR are that IOR was assessed regularly throughout the data collection period and that IOR was assessed for foods at the item and amount level instead of at the nutrient level. Adequate agreement among observers is essential to the reasoning behind using observation as a validation tool. Readers are encouraged to question the results of studies that fail to mention and/or to include the results for assessment of IOR when multiple people have conducted observations. PMID:15354155
ERIC Educational Resources Information Center
Ball, Anna L.; Bowling, Amanda M.; Sharpless, Justin D.
2016-01-01
School Based Agricultural Education (SBAE) teachers can use coaching behaviors, along with their agricultural content knowledge to help their Career Development Event (CDE) teams succeed. This mixed methods, collective case study observed three SBAE teachers preparing multiple CDEs throughout the CDE season. The teachers observed had a previous…
Goethe's Theory of Color and Scientific Intuition
ERIC Educational Resources Information Center
Zajonc, Arthur G.
1976-01-01
Summarizes Goethe's color studies and his methods of study. It is proposed that the act of accurate qualitative observation creates the capability in the observer for an intuitive understanding of the physical laws underlying the phenomena under observation. The use of such a method as a basis for laboratory instruction is discussed. (Author/CP)
Innovative research methods for studying treatments for rare diseases: methodological review.
Gagne, Joshua J; Thompson, Lauren; O'Keefe, Kelly; Kesselheim, Aaron S
2014-11-24
To examine methods for generating evidence on health outcomes in patients with rare diseases. Methodological review of existing literature. PubMed, Embase, and Academic Search Premier searched for articles describing innovative approaches to randomized trial design and analysis methods and methods for conducting observational research in patients with rare diseases. We assessed information related to the proposed methods, the specific rare disease being studied, and outcomes from the application of the methods. We summarize methods with respect to their advantages in studying health outcomes in rare diseases and provide examples of their application. We identified 46 articles that proposed or described methods for studying patient health outcomes in rare diseases. Articles covered a wide range of rare diseases and most (72%) were published in 2008 or later. We identified 16 research strategies for studying rare disease. Innovative clinical trial methods minimize sample size requirements (n=4) and maximize the proportion of patients who receive active treatment (n=2), strategies crucial to studying small populations of patients with limited treatment choices. No studies describing unique methods for conducting observational studies in patients with rare diseases were identified. Though numerous studies apply unique clinical trial designs and considerations to assess patient health outcomes in rare diseases, less attention has been paid to innovative methods for studying rare diseases using observational data. © Gagne et al 2014.
Evaluation of nursing faculty through observation.
Crawford, L H
1998-10-01
The purpose of this study was to assess current use and faculty perceptions of classroom observation as a method of faculty evaluation in schools of nursing. Baccalaureate schools of nursing were surveyed to determine current use of classroom observation and its worth from the perception of administrators and faculty. Although most schools used classroom observation as a method of faculty evaluation, further clarification and research is needed in the following areas: purpose of classroom observation; number of observations necessary; weight given to classroom observation in relation to other evaluation methods; and tools used.
Local Linear Observed-Score Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.
2011-01-01
Two methods of local linear observed-score equating for use with anchor-test and single-group designs are introduced. In an empirical study, the two methods were compared with the current traditional linear methods for observed-score equating. As a criterion, the bias in the equated scores relative to true equating based on Lord's (1980)…
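As context, the traditional linear observed-score equating that the local methods are compared against maps a form-X score to the form-Y scale by matching standardized scores; a small sketch with hypothetical scores follows (the local linear methods themselves are not reproduced here).

```python
# Baseline sketch only: traditional linear observed-score equating of form X
# onto form Y, using synthetic scores. The local linear methods are not shown.
import numpy as np

rng = np.random.default_rng(4)
scores_x = rng.binomial(40, 0.60, 3000)   # hypothetical scores on form X
scores_y = rng.binomial(40, 0.65, 3000)   # hypothetical scores on form Y

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Map a form-X score to the form-Y scale: same z-score on both forms."""
    return mu_y + (sd_y / sd_x) * (x - mu_x)

x = np.arange(0, 41)
y_equiv = linear_equate(x, scores_x.mean(), scores_x.std(),
                        scores_y.mean(), scores_y.std())
print(np.column_stack([x, y_equiv.round(2)])[:5])
```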
ERIC Educational Resources Information Center
Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.
2016-01-01
Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…
Interaction of depth probes and style of depiction
van Doorn, Andrea J.; Koenderink, Jan J.; Leyssen, Mieke H. R.; Wagemans, Johan
2012-01-01
We study the effect of stylistic differences on the nature of pictorial spaces as they appear to an observer when looking into a picture. Four pictures chosen from diverse styles of depiction were studied by 2 different methods. Each method addresses pictorial depth but draws on a different bouquet of depth cues. We find that the depth structures are very similar for 8 observers, apart from an idiosyncratic depth scaling (up to a factor of 3). The differences between observers generalize over (very different) pictures and (very different) methods. They are apparently characteristic of the person. The differences between depths as sampled by the 2 methods depend upon the style of the picture. This is the case for all observers except one. PMID:23145306
Comparison of Observational Methods and Their Relation to Ratings of Engagement in Young Children
ERIC Educational Resources Information Center
Wood, Brenna K.; Hojnoski, Robin L.; Laracy, Seth D.; Olson, Christopher L.
2016-01-01
Although, collectively, results of earlier direct observation studies suggest momentary time sampling (MTS) may offer certain technical advantages over whole-interval (WIR) and partial-interval (PIR) recording, no study has compared these methods for measuring engagement in young children in naturalistic environments. This study compared direct…
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such a study design are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
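The complete-observation special case mentioned above, in which the GEE-based estimate coincides with the paired t-test sample size, can be illustrated with the usual normal-approximation formula; the numbers below are hypothetical and the general incomplete-data formula is not reproduced.

```python
# Sketch of the complete-observation special case: the normal-approximation
# paired sample size, to which the GEE-based formula is said to reduce when
# every unit contributes both observations.
from scipy.stats import norm

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Number of pairs for a two-sided paired comparison of means."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ((z_a + z_b) ** 2) * (sd_diff ** 2) / (delta ** 2)

# Example: detect a mean pre-post change of 2.0 when the SD of the
# within-pair differences is 5.0 (hypothetical numbers).
print(f"Required pairs: {paired_sample_size(2.0, 5.0):.1f}")
```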
Anguera, M. Teresa; Camerino, Oleguer; Castañer, Marta; Sánchez-Algarra, Pedro; Onwuegbuzie, Anthony J.
2017-01-01
Mixed methods studies are increasingly being applied to a diversity of fields. In this paper, we discuss the growing use—and enormous potential—of mixed methods research in the field of sport and physical activity. A second aim is to contribute to strengthening the characteristics of mixed methods research by showing how systematic observation offers rigor within a flexible framework that can be applied to a wide range of situations. Observational methodology is characterized by high scientific rigor and flexibility throughout its different stages and allows the objective study of spontaneous behavior in natural settings, with no external influence. Mixed methods researchers need to take bold yet thoughtful decisions regarding both substantive and procedural issues. We present three fundamental and complementary ideas to guide researchers in this respect: we show why studies of sport and physical activity that use a mixed methods research approach should be included in the field of mixed methods research, we highlight the numerous possibilities offered by observational methodology in this field through the transformation of descriptive data into quantifiable code matrices, and we discuss possible solutions for achieving true integration of qualitative and quantitative findings. PMID:29312061
de Glas, N A; Kiderlen, M; de Craen, A J M; Hamaker, M E; Portielje, J E A; van de Velde, C J H; Liefers, G J; Bastiaannet, E
2015-03-01
Solid evidence of treatment effects in older women with breast cancer is lacking, as they are generally underrepresented in randomized clinical trials on which guideline recommendations are based. An alternative way to study treatment effects in older patients could be to use data from observational studies. However, using appropriate methods in analyzing observational data is a key condition in order to draw valid conclusions, as directly comparing treatments generally results in biased estimates due to confounding by indication. The aim of this systematic review was to investigate the methods that have been used in observational studies that assessed the effects of breast cancer treatment on survival, breast cancer survival and recurrence in older patients (aged 65 years and older). Studies were identified through systematic review of the literature published between January 1st 2009 and December 13th 2013 in the PubMed database and EMBASe. Finally, 31 studies fulfilled the inclusion criteria. Of these, 22 studies directly compared two treatments. Fifteen out of these 22 studies addressed the problem of confounding by indication, while seven studies did not. Nine studies used some form of instrumental variable analysis. In conclusion, the vast majority of observational studies that investigate treatment effects in older breast cancer patients compared treatments directly. These studies are therefore likely to be biased. Observational research will be essential to improve treatment and outcome of older breast cancer patients, but the use of accurate methods is essential to draw valid conclusions from this type of data. Copyright © 2015 Elsevier Ltd. All rights reserved.
Observational methods in comparative effectiveness research.
Concato, John; Lawler, Elizabeth V; Lew, Robert A; Gaziano, J Michael; Aslan, Mihaela; Huang, Grant D
2010-12-01
Comparative effectiveness research (CER) may be defined informally as an assessment of available options for treating specific medical conditions in selected groups of patients. In this context, the most prominent features of CER are the various patient populations, medical ailments, and treatment options involved in any particular project. Yet, each research investigation also has a corresponding study design or "architecture," and in patient-oriented research a common distinction used to describe such designs is that between randomized controlled trials (RCTs) and observational studies. The purposes of this overview, with regard to CER, are to (1) understand how observational studies can provide accurate results, comparable to RCTs; (2) recognize strategies used in selected newer methods for conducting observational studies; (3) review selected observational studies from the Veterans Health Administration; and (4) appreciate the importance of fundamental methodological principles when conducting or evaluating individual studies. Published by Elsevier Inc.
HARBO, a simple computer-aided observation method for recording work postures.
Wiktorin, C; Mortimer, M; Ekenvall, L; Kilbom, A; Hjelm, E W
1995-12-01
The aim of the study was to present an observation method focusing on the positions of the hands relative to the body and to evaluate whether this simple observation technique gives a reliable estimate of the total time spent in each of five work postures during one workday. In the first part of the study the interobserver reliability of the observation method was tested with eight blue-collar workers. In the second part the observed time spent with work above the shoulder level was tested in relation to an upper-arm position analyzer, and observed time spent in work below knuckle level was tested in relation to a trunk flexion analyzer, both with 72 blue-collar workers. The interobserver reliability for full-day registrations was high. The intraclass correlation coefficients ranged from 0.99 to 1.00. The observed duration of work with hands above shoulder level correlated well with the measured duration of pronounced arm elevation (> 75 degrees). The product moment correlation coefficient was 0.97. The observed duration of work with hands below knuckle level correlated well with the measured duration of pronounced trunk flexion angles (> 40 degrees). The product moment correlation coefficient was 0.98. The present observation method, designed to make postural observations continuously for several hours, is easy to learn and seems reliable.
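As an illustration of the reliability statistics reported above, a one-way random-effects intraclass correlation, ICC(1,1), can be computed from an observer-by-worker table of estimated durations; the data below are simulated, not the HARBO recordings.

```python
# Illustrative sketch (synthetic data, not the HARBO recordings): a one-way
# random-effects intraclass correlation, ICC(1,1), for duration estimates of
# the same workers made by several observers.
import numpy as np

def icc_oneway(x):
    """x: (n_subjects, k_raters) matrix of ratings."""
    n, k = x.shape
    subject_means = x.mean(axis=1)
    grand_mean = x.mean()
    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)   # between-subject mean square
    msw = ((x - subject_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(5)
true_minutes = rng.uniform(10, 120, 8)                        # 8 workers
ratings = true_minutes[:, None] + rng.normal(0, 3, (8, 2))    # 2 observers
print(f"ICC(1,1): {icc_oneway(ratings):.2f}")
```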
NASA Astrophysics Data System (ADS)
Tavakoli Taba, Seyedamir; Hossain, Liaquat; Heard, Robert; Brennan, Patrick; Lee, Warwick; Lewis, Sarah
2017-03-01
Rationale and objectives: Observer performance has been widely studied through examining the characteristics of individuals. From a systems perspective, however, understanding the system's output requires studying the interactions between observers. This research explains a mixed methods approach to applying social network analysis (SNA), together with the more traditional approach of examining personal/individual characteristics, in understanding observer performance in mammography. Materials and Methods: Using social network theories and measures to understand observer performance, we designed a social networks survey instrument for collecting personal and network data about observers involved in mammography performance studies. We present the results of a study by our group in which 31 Australian breast radiologists originally reviewed 60 mammographic cases (comprising 20 abnormal and 40 normal cases) and then completed an online questionnaire about their social networks and personal characteristics. A jackknife free response operating characteristic (JAFROC) method was used to measure the performance of the radiologists. JAFROC was tested against various personal and network measures to verify the theoretical model. Results: The results from this study suggest a strong association between social networks and observer performance for Australian radiologists. Network factors accounted for 48% of the variance in observer performance, compared with 15.5% for personal characteristics in this study group. Conclusion: This study suggests a strong new direction for research into improving observer performance. Future studies of observer performance should consider social networks' influence as part of their research paradigm, with equal or greater vigour than traditional constructs of personal characteristics.
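The comparison of explained variance reported above can be illustrated with a toy ordinary-least-squares computation on simulated data; the variable names and effect sizes below are invented and do not reproduce the study's dataset or its actual SNA measures.

```python
# Hedged sketch with simulated data (not the study's dataset): comparing the
# variance in a JAFROC-style performance score explained by network measures
# versus personal characteristics, using ordinary least squares R-squared.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 31
degree = rng.normal(0, 1, n)         # e.g., network degree centrality (standardized)
closeness = rng.normal(0, 1, n)      # e.g., closeness centrality
experience = rng.normal(0, 1, n)     # e.g., years reading mammograms
volume = rng.normal(0, 1, n)         # e.g., annual case volume

jafroc = (0.7 + 0.05 * degree + 0.04 * closeness
          + 0.02 * experience + rng.normal(0, 0.05, n))

X_network = np.column_stack([degree, closeness])
X_personal = np.column_stack([experience, volume])
r2_network = LinearRegression().fit(X_network, jafroc).score(X_network, jafroc)
r2_personal = LinearRegression().fit(X_personal, jafroc).score(X_personal, jafroc)
print(f"R^2 (network measures): {r2_network:.2f}")
print(f"R^2 (personal characteristics): {r2_personal:.2f}")
```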
Wood, Jonathan S; Donnell, Eric T; Porter, Richard J
2015-02-01
A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
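For orientation, the core empirical Bayes before-after calculation works roughly as sketched below: site-level crash counts are blended with a safety performance function (SPF) prediction using the SPF's overdispersion parameter, and the crash modification factor (CMF) compares observed after-period crashes with the EB-expected crashes absent treatment. All numbers are hypothetical, and the usual variance-correction term in the CMF denominator is omitted for brevity.

```python
# Minimal sketch of the empirical Bayes before-after logic (illustrative numbers,
# not the Safety Edge evaluation itself).
import numpy as np

# Hypothetical per-site data for the before and after periods.
obs_before = np.array([12, 7, 15, 9])           # observed crashes, before
spf_before = np.array([10.0, 8.0, 13.0, 9.5])   # SPF-predicted crashes, before
spf_after = np.array([11.0, 8.5, 14.0, 10.0])   # SPF-predicted crashes, after
obs_after = np.array([9, 6, 12, 8])             # observed crashes, after
k = 0.25                                        # SPF overdispersion parameter (assumed)

# EB estimate blends the site's own count with the SPF prediction.
w = 1.0 / (1.0 + k * spf_before)
eb_before = w * spf_before + (1.0 - w) * obs_before

# Expected after-period crashes without treatment: scale by the SPF ratio.
eb_after = eb_before * (spf_after / spf_before)

cmf = obs_after.sum() / eb_after.sum()
print(f"Crash modification factor (simple EB estimate): {cmf:.2f}")
```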
The ratio method: A new tool to study one-neutron halo nuclei
Capel, Pierre; Johnson, R. C.; Nunes, F. M.
2013-10-02
Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. This new observable is shown by the analysis of specific reactions to be independent of the reaction mechanism and to provide nuclear-structure information about the projectile. Here we explore the details of this ratio method, including the sensitivity to the binding energy and angular momentum of the projectile. We also study the reliability of the method as a function of breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates and further compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample size properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models showed good performance in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was similar to that of MRMs using classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRMs.
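One of the "classic imputation plus MRM" comparators described above can be sketched as follows, assuming last-observation-carried-forward imputation and a random-intercept mixed model fitted with statsmodels; the data are simulated and the SUR-based estimator itself is not shown.

```python
# Hedged sketch (synthetic data; the SUR-based estimator is not shown):
# a mixed-effects regression with last-observation-carried-forward imputation
# of an intermittently observed time-dependent covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_times = 100, 4
rows = []
for s in range(n_subj):
    b = rng.normal(0, 1)                       # subject random intercept
    x = rng.normal(0, 1, n_times)              # time-dependent covariate
    y = 1.0 + 0.5 * x + b + rng.normal(0, 1, n_times)
    observed = rng.random(n_times) > 0.3       # covariate measured intermittently
    for t in range(n_times):
        rows.append({"subject": s, "time": t, "y": y[t],
                     "x": x[t] if observed[t] else np.nan})

df = pd.DataFrame(rows)
df["x_locf"] = df.groupby("subject")["x"].ffill()   # carry last observed value forward
df = df.dropna(subset=["x_locf"])                   # drop rows with no prior observation

model = smf.mixedlm("y ~ time + x_locf", data=df, groups=df["subject"])
print(model.fit().summary())
```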
Neděla, Vilém; Tihlaříková, Eva; Hřib, Jiří
2015-01-01
The use of non-standard low-temperature conditions in environmental scanning electron microscopy might be promising for the observation of coniferous tissues in their native state. This study aims to analyse and evaluate a method based on the principle of low-temperature sample stabilization. We demonstrate that the upper mucous layer is sublimed and the microstructure of the sample surface can be observed with higher resolution at lower gas pressure conditions, thanks to the low-temperature method. The influence of the low-temperature method on sample stability was also studied. The results indicate that high-moisture conditions are not suitable for this method and often cause the collapse of samples. The potential improvement of stability against beam damage has been demonstrated by long-term observation at different operating parameters. We finally show the high applicability of the low-temperature method to different types of conifers and Oxalis acetosella. © 2014 Wiley Periodicals, Inc.
Detection of medication-related problems in hospital practice: a review
Manias, Elizabeth
2013-01-01
This review examines the effectiveness of detection methods in terms of their ability to identify and accurately determine medication-related problems in hospitals. A search was conducted of databases from inception to June 2012. The following keywords were used in combination: medication error or adverse drug event or adverse drug reaction, comparison, detection, hospital and method. Seven detection methods were considered: chart review, claims data review, computer monitoring, direct care observation, interviews, prospective data collection and incident reporting. Forty relevant studies were located. Detection methods that were better able to identify medication-related problems compared with other methods tested in the same study included chart review, computer monitoring, direct care observation and prospective data collection. However, only small numbers of studies were involved in comparisons with direct care observation (n = 5) and prospective data collection (n = 6). There was little focus on detecting medication-related problems during various stages of the medication process, and comparisons associated with the seriousness of medication-related problems were examined in 19 studies. Only 17 studies involved appropriate comparisons with a gold standard, which provided details about sensitivities and specificities. In view of the relatively low identification of medication-related problems with incident reporting, use of this method in tracking trends over time should be met with some scepticism. Greater attention should be placed on combining methods, such as chart review and computer monitoring in examining trends. More research is needed on the use of claims data, direct care observation, interviews and prospective data collection as detection methods. PMID:23194349
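Where a gold-standard review is available, the sensitivity and specificity figures referred to above come from a simple 2x2 comparison; a small sketch with hypothetical counts follows.

```python
# Simple illustration (hypothetical counts): sensitivity and specificity of a
# detection method, such as incident reporting, judged against a gold-standard
# review of the same admissions.
def sensitivity_specificity(tp, fp, fn, tn):
    """2x2 comparison of a detection method against a gold standard."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g., of 100 admissions with a true medication-related problem, incident
# reporting flagged 12; of 900 without a problem, it falsely flagged 5.
sens, spec = sensitivity_specificity(tp=12, fp=5, fn=88, tn=895)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.3f}")
```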
The Use of Social Media in Recruitment for Medical Research Studies: A Scoping Review.
Topolovec-Vranic, Jane; Natarajan, Karthik
2016-11-07
Recruiting an adequate number of participants into medical research studies is challenging for many researchers. Over the past 10 years, the use of social media websites has increased in the general population. Consequently, social media websites are a new, powerful method for recruiting participants into such studies. The objective was to answer the following questions: (1) Is the use of social media more effective at research participant recruitment than traditional methods? (2) Does social media recruit a sample of research participants comparable to that recruited via other methods? (3) Is social media more cost-effective at research participant recruitment than traditional methods? Using the MEDLINE, PsycINFO, and EMBASE databases, all medical research studies that used social media and at least one other method for recruitment were identified. These studies were then categorized as either interventional studies or observational studies. For each study, the effectiveness of recruitment, demographic characteristics of the participants, and cost-effectiveness of recruitment using social media were evaluated and compared with that of the other methods used. The social media sites used in recruitment were identified, and if a study stated that the target population was "difficult to reach" as identified by the authors of the study, this was noted. Out of 30 studies, 12 found social media to be the most effective recruitment method, 15 did not, and 3 found social media to be equally effective as another recruitment method. Of the 12 studies that found social media to be the best recruitment method, 8 were observational studies while 4 were interventional studies. Of the 15 studies that did not find social media to be the best recruitment method, 7 were interventional studies while 8 were observational studies. In total, 8 studies stated that the target population was "hard-to-reach," and 6 of these studies found social media to be the most effective recruitment method. Out of 14 studies that reported demographic data for participants, 2 studies found that social media recruited a sample comparable to that recruited via traditional methods and 12 did not. Out of 13 studies that reported cost-effectiveness, 5 studies found social media to be the most cost-effective recruitment method, 7 did not, and 1 study found social media equally cost-effective as compared with other methods. Only 12 studies out of 30 found social media to be the most effective recruitment method. There is evidence that social media can be the best recruitment method for hard-to-reach populations and observational studies. With only 30 studies having compared recruitment through social media with other methods, more studies need to be done that report the effectiveness of recruitment for each strategy, demographics of participants recruited, and cost-effectiveness of each method. ©Jane Topolovec-Vranic, Karthik Natarajan. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 07.11.2016.
Laitinen, Heleena; Kaunonen, Marja; Astedt-Kurki, Päivi
2014-11-01
To give clarity to the analysis of participant observation in nursing when implementing the grounded theory method. Participant observation (PO) is a method of collecting data that reveals the reality of daily life in a specific context. In grounded theory, interviews are the primary method of collecting data but PO gives a distinctive insight, revealing what people are really doing, instead of what they say they are doing. However, more focus is needed on the analysis of PO. An observational study carried out to gain awareness of nursing care and its electronic documentation in four acute care wards in hospitals in Finland. Discussion of using the grounded theory method and PO as a data collection tool. The following methodological tools are discussed: an observational protocol, jotting of notes, microanalysis, the use of questioning, constant comparison, and writing and illustrating. Each tool has specific significance in collecting and analysing data, working in constant interaction. Grounded theory and participant observation supplied rich data and revealed the complexity of the daily reality of acute care. In this study, the methodological tools provided a base for the study at the research sites and outside. The process as a whole was challenging. It was time-consuming and it required rigorous and simultaneous data collection and analysis, including reflective writing. Using these methodological tools helped the researcher stay focused from data collection and analysis to building theory. Using PO as a data collection method in qualitative nursing research provides insights. It is not commonly discussed in nursing research and therefore this study can provide insight, which cannot be seen or revealed by using other data collection methods. Therefore, this paper can produce a useful tool for those who intend to use PO and grounded theory in their nursing research.
Sargeant, J M; O'Connor, A M; Dohoo, I R; Erb, H N; Cevallos, M; Egger, M; Ersbøll, A K; Martin, S W; Nielsen, L R; Pearl, D L; Pfeiffer, D U; Sanchez, J; Torrence, M E; Vigre, H; Waldner, C; Ward, M P
2016-12-01
Reporting of observational studies in veterinary research presents challenges that often are not addressed in published reporting guidelines. Our objective was to develop an extension of the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement that addresses unique reporting requirements for observational studies in veterinary medicine related to health, production, welfare, and food safety. We conducted a consensus meeting with 17 experts in Mississauga, Canada. Experts completed a premeeting survey about whether items in the STROBE statement should be modified or added to address unique issues related to observational studies in animal species with health, production, welfare, or food safety outcomes. During the meeting, each STROBE item was discussed to determine whether or not rewording was recommended, and whether additions were warranted. Anonymous voting was used to determine consensus. Six items required no modifications or additions. Modifications or additions were made to the STROBE items 1 (title and abstract), 3 (objectives), 5 (setting), 6 (participants), 7 (variables), 8 (data sources and measurement), 9 (bias), 10 (study size), 12 (statistical methods), 13 (participants), 14 (descriptive data), 15 (outcome data), 16 (main results), 17 (other analyses), 19 (limitations), and 22 (funding). The methods and processes used were similar to those used for other extensions of the STROBE statement. The use of this STROBE statement extension should improve reporting of observational studies in veterinary research by recognizing unique features of observational studies involving food-producing and companion animals, products of animal origin, aquaculture, and wildlife.
Comparison of methods used for estimating pharmacist counseling behaviors.
Schommer, J C; Sullivan, D L; Wiederholt, J B
1994-01-01
To compare the rates reported for provision of types of information conveyed by pharmacists among studies for which different methods of estimation were used and different dispensing situations were studied. Empiric studies conducted in the US, reported from 1982 through 1992, were selected from International Pharmaceutical Abstracts, MEDLINE, and noncomputerized sources. Empiric studies were selected for review if they reported the provision of at least three types of counseling information. Four components of methods used for estimating pharmacist counseling behaviors were extracted and summarized in a table: (1) sample type and area, (2) sampling unit, (3) sample size, and (4) data collection method. In addition, situations that were investigated in each study were compiled. Twelve studies met our inclusion criteria. Patients were interviewed via telephone in four studies and were surveyed via mail in two studies. Pharmacists were interviewed via telephone in one study and surveyed via mail in two studies. For three studies, researchers visited pharmacy sites for data collection using the shopper method or observation method. Studies with similar methods and situations provided similar results. Data collected by using patient surveys, pharmacist surveys, and observation methods can provide useful estimations of pharmacist counseling behaviors if researchers measure counseling for specific, well-defined dispensing situations.
ERIC Educational Resources Information Center
Klemm, Janina; Neuhaus, Birgit J.
2017-01-01
Observation is one of the basic methods in science. It is not only an epistemological method itself, but also an important competence for other methods like experimenting or comparing. However, little is known about the relation between this inquiry method and affective factors. In our study, we would like to find out about the relations of…
Eliasson, Kristina; Palm, Peter; Nyman, Teresia; Forsman, Mikael
2017-07-01
A common way to conduct practical risk assessments is to observe a job and report the observed long-term risks for musculoskeletal disorders. The aim of this study was to evaluate the inter- and intra-observer reliability of ergonomists' risk assessments without the support of an explicit risk assessment method. Twenty-one experienced ergonomists assessed the risk level (low, moderate, high risk) of eight upper body regions, as well as the global risk, of 10 video-recorded work tasks. Intra-observer reliability was assessed by having nine of the ergonomists repeat the procedure at least three weeks after the first assessment. Each ergonomist made their risk assessment based on their own experience and knowledge. The statistical parameters of reliability included agreement in %, kappa, linearly weighted kappa, intraclass correlation and Kendall's coefficient of concordance. The average inter-observer agreement on the global risk was 53% and the corresponding weighted kappa (Kw) was 0.32, indicating fair reliability. The intra-observer agreement was 61% and 0.41 (Kw). This study indicates that risk assessments of the upper body, without the use of an explicit observational method, do not have acceptable reliability. It is therefore recommended to use systematic risk assessment methods to a higher degree. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
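The agreement statistics used above can be reproduced in a few lines; the sketch below uses invented ratings (not the study's data) to compute a linearly weighted kappa and raw percent agreement for two observers assigning ordinal risk levels.

```python
# Hedged sketch with invented ratings (not the study's data): linearly weighted
# kappa and raw percent agreement for two observers assigning ordinal risk
# levels (0 = low, 1 = moderate, 2 = high) to the same work tasks.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([0, 1, 2, 1, 0, 2, 1, 1, 2, 0])
rater_b = np.array([0, 2, 2, 1, 1, 2, 0, 1, 2, 0])

kappa_w = cohen_kappa_score(rater_a, rater_b, weights="linear")
agreement = (rater_a == rater_b).mean()
print(f"Weighted kappa: {kappa_w:.2f}, percent agreement: {agreement:.0%}")
```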
Observing system simulation experiments with multiple methods
NASA Astrophysics Data System (ADS)
Ishibashi, Toshiyuki
2014-11-01
An Observing System Simulation Experiment (OSSE) is a method to evaluate the impacts of hypothetical observing systems on analysis and forecast accuracy in numerical weather prediction (NWP) systems. Since an OSSE requires simulations of hypothetical observations, the uncertainty of OSSE results is generally larger than that of observing system experiments (OSEs). To reduce such uncertainty, OSSEs for existing observing systems are often carried out as calibration of the OSSE system. The purpose of this study is to achieve reliable OSSE results based on the results of OSSEs with multiple methods. There are three types of OSSE methods. The first is the sensitivity observing system experiment (SOSE)-based OSSE (SOSE-OSSE). The second is the ensemble of data assimilation cycles (ENDA)-based OSSE (ENDA-OSSE). The third is the nature-run (NR)-based OSSE (NR-OSSE). These three OSSE methods have very different properties. The NR-OSSE evaluates hypothetical observations in a virtual (hypothetical) world, the NR. The ENDA-OSSE is a very simple method but has a sampling-error problem due to its small ensemble size. The SOSE-OSSE requires a highly accurate analysis field as a pseudo truth of the real atmosphere. We construct these three types of OSSE methods in the Japan Meteorological Agency (JMA) global 4D-Var experimental system. At the conference, we will present initial results of these OSSE systems and their comparisons.
Integrating qualitative research into occupational health: a case study among hospital workers.
Gordon, Deborah R; Ames, Genevieve M; Yen, Irene H; Gillen, Marion; Aust, Birgit; Rugulies, Reiner; Frank, John W; Blanc, Paul D
2005-04-01
We sought to better use qualitative approaches in occupational health research and integrate them with quantitative methods. We systematically reviewed, selected, and adapted qualitative research methods as part of a multisite study of the predictors and outcomes of work-related musculoskeletal disorders among hospital workers in two large urban tertiary hospitals. The methods selected included participant observation; informal, open-ended, and semistructured interviews with individuals or small groups; and archival study. The nature of the work and social life of the hospitals and the foci of the study all favored using more participant observation methods in the case study than initially anticipated. Exploiting the full methodological spectrum of qualitative methods in occupational health is increasingly relevant. Although labor-intensive, these approaches may increase the yield of established quantitative approaches otherwise used in isolation.
Comparison of three methods for evaluation of work postures in a truck assembly plant.
Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves
2017-11-01
This study compared the results of three risk assessment tools (a self-reported questionnaire, an observational tool and a direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted kappa showed fair agreement between the observational tool and the direct measurement method for the arm (0.39) and back (0.47). The weighted kappa for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (Po) was 0.78 for the neck and 0.83 for the wrist. The weighted kappa between the questionnaire and direct measurement showed poor or slight agreement (0) for the different body segments at both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study provides risk exposure measurement by different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Ergonomists and practitioners should therefore apply these methods with caution, or at least be aware of their limitations and sources of error.
Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume
2014-06-28
Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be lower in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on the 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazards analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.
Topical Review: Families Coping With Child Trauma: A Naturalistic Observation Methodology
Barrett, Anna; Bowles, Peter; Conroy, Rowena; Mehl, Matthias R.
2016-01-01
Objective To introduce a novel, naturalistic observational methodology (the Electronically Activated Recorder; EAR) as an opportunity to better understand the central role of the family environment in children’s recovery from trauma. Methods Discussion of current research methods and a systematic literature review of EAR studies on health and well-being. Results Surveys, experience sampling, and the EAR method each provide different opportunities and challenges for studying family interactions. We identified 17 articles describing relevant EAR studies. These investigated questions of emotional well-being, communicative behaviors, and interpersonal relationships, predominantly in adults. 5 articles reported innovative research in children, triangulating EAR-observed behavioral data (e.g., on child conflict at home) with neuroendocrine assay, sociodemographic information, and parent report. Finally, we discussed psychometric, practical, and ethical considerations for conducting EAR research with children and families. Conclusions Naturalistic observation methods such as the EAR have potential for pediatric psychology studies regarding trauma and the family environment. PMID:25797943
Ozturk, Orgul D; McInnes, Melayne M; Blake, Christine E; Frongillo, Edward A; Jones, Sonya J
2016-01-01
The objective of this study is to develop a structured observational method for the systematic assessment of the food-choice architecture that can be used to identify key points for behavioral economic intervention intended to improve the health quality of children's diets. We use an ethnographic approach with observations at twelve elementary schools to construct our survey instrument. Elements of the structured observational method include decision environment, salience, accessibility/convenience, defaults/verbal prompts, number of choices, serving ware/method/packaging, and social/physical eating environment. Our survey reveals important "nudgeable" components of the elementary school food-choice architecture, including precommitment and default options on the lunch line.
Observational evidence and strength of evidence domains: case examples
2014-01-01
Background Systematic reviews of healthcare interventions most often focus on randomized controlled trials (RCTs). However, certain circumstances warrant consideration of observational evidence, and such studies are increasingly being included as evidence in systematic reviews. Methods To illustrate the use of observational evidence, we present case examples of systematic reviews in which observational evidence was considered as well as case examples of individual observational studies, and how they demonstrate various strength of evidence domains in accordance with current Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) methods guidance. Results In the presented examples, observational evidence is used when RCTs are infeasible or raise ethical concerns, lack generalizability, or provide insufficient data. Individual study case examples highlight how observational evidence may fulfill required strength of evidence domains, such as study limitations (reduced risk of selection, detection, performance, and attrition); directness; consistency; precision; and reporting bias (publication, selective outcome reporting, and selective analysis reporting), as well as additional domains of dose-response association, plausible confounding that would decrease the observed effect, and strength of association (magnitude of effect). Conclusions The cases highlighted in this paper demonstrate how observational studies may provide moderate to (rarely) high strength evidence in systematic reviews. PMID:24758494
Observational Studies: Cohort and Case-Control Studies
Song, Jae W.; Chung, Kevin C.
2010-01-01
Observational studies are an important category of study designs. To address some investigative questions in plastic surgery, randomized controlled trials are not always indicated or ethical to conduct. Instead, observational studies may be the next best method to address these types of questions. Well-designed observational studies have been shown to provide results similar to randomized controlled trials, challenging the belief that observational studies are second-rate. Cohort studies and case-control studies are two primary types of observational studies that aid in evaluating associations between diseases and exposures. In this review article, we describe these study designs, methodological issues, and provide examples from the plastic surgery literature. PMID:20697313
International Halley watch amateur observers' manual for scientific comet studies. Part 1: Methods
NASA Technical Reports Server (NTRS)
Edberg, S. J.
1983-01-01
The International Halley Watch is described as well as comets and observing techniques. Information on periodic Comet Halley's apparition for its 1986 perihelion passage is provided. Instructions are given for observation projects valuable to the International Halley Watch in six areas of study: (1) visual observations; (2) photography; (3) astrometry; (4) spectroscopic observations; (5) photoelectric photometry; and (6) meteor observations.
Normal values of 3 methods to determine patellar height in children from 6 to 12 years.
Vergara-Amador, E; Davalos Herrera, D; Guevara, O A
2018-03-26
The aim of the study was to compare three methods for measuring patellar height in children, the Caton-Deschamps, Blackburne-Peel and Koshino-Sugimoto indices, and to determine the normal value of each method in a group of normal children. A cross-sectional study of knee X-rays of normal children was conducted. Three orthopaedic surgeons measured the Caton-Deschamps, Blackburne-Peel and Koshino-Sugimoto indices. Concordance was assessed using the intraclass correlation coefficient. For interobserver variability, the measurements of each observer for each index were compared; for intraobserver variability, the coefficient between the two measurements made by the same observer at two different times was calculated. A total of 140 knee X-rays divided into four age groups were obtained. For the Blackburne-Peel index, the average median across the three observers was 1.07 (P5-P95: 0.76-1.60). For the Caton-Deschamps index, the average median was 1.22 (P5-P95: 0.91-1.70). For the Koshino-Sugimoto index, the average median was 1.16 (P5-P95: 0.99-1.36). This study shows that the Koshino-Sugimoto index had the highest reliability, reproducibility and similarity in the population studied, both intra-observer and inter-observer. The other methods evaluated also showed variability indices worth taking into account, but were inferior to the Koshino-Sugimoto index. Copyright © 2018 SECOT. Published by Elsevier España, S.L.U. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berdeyans, D.; Bocharov, V.I.; Lobachevskii, L.A.
Ionospheric observations by the OBS method were performed to study the ionospheric conditions under which radio waves in the decameter range propagate on Cuba–Soviet Union paths. The results of observations in the summer of 1973 are reported. The distance–frequency and distance–time characteristics of back-scattered signals in the sounding direction for each day of observation are discussed. (JFP)
NASA Technical Reports Server (NTRS)
Bolshakov, A. A.
1985-01-01
The study of Earth from space with specialized satellites, and from manned orbiting stations, has become an important part of space programs. Among the broad complex of methods used for probing Earth from space are different methods for studying ocean dynamics. The different methods of ocean observation are described.
Methods of Teaching Reading to EFL Learners: A Case Study
ERIC Educational Resources Information Center
Sanjaya, Dedi; Rahmah; Sinulingga, Johan; Lubis, Azhar Aziz; Yusuf, Muhammad
2014-01-01
Methods of teaching reading skills are not the same in different countries; they depend on the condition and situation of the learners. Observing the methods of teaching in Malaysia was the purpose of this study, and the results show that five methods are applied in classroom activities, namely the Grammar Translation Method (GTM),…
ERIC Educational Resources Information Center
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Barriers and Incentives to Computer Usage in Teaching
1988-09-29
classes with one or two computers. Research Methods The two major methods of data-gathering employed in this study were intensive and extensive classroom ... observation and repeated extended interviews with students and teachers. Administrators were also interviewed when appropriate. Classroom observers used
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Lei; Feng, Li; Liu, Siming
We present a detailed study of an Earth-directed coronal mass ejection (full-halo CME) event that happened on 2011 February 15, making use of white-light observations by three coronagraphs and radio observations by Wind/WAVES. We applied three different methods to reconstruct the propagation direction and traveling distance of the CME and its driven shock. We measured the kinematics of the CME leading edge from white-light images observed by the Solar Terrestrial Relations Observatory (STEREO) A and B, tracked the CME-driven shock using the frequency drift observed by Wind/WAVES together with an interplanetary density model, and obtained the equivalent scattering centers of the CME by the polarization ratio (PR) method. For the first time, we applied the PR method to different features distinguished from LASCO/C2 polarimetric observations and calculated their projections onto white-light images observed by STEREO-A and STEREO-B. By combining graduated cylindrical shell (GCS) forward modeling with the PR method, we proposed a new GCS-PR method to derive 3D parameters of a CME observed from a single perspective at Earth. Comparisons between the different methods show a good degree of consistency in the derived 3D results.
Primary mass discrimination of high energy cosmic rays using PNN and k-NN methods
NASA Astrophysics Data System (ADS)
Rastegarzadeh, G.; Nemati, M.
2018-02-01
Probabilistic neural network (PNN) and k-nearest neighbors (k-NN) methods are widely used data classification techniques. In this paper, these two methods have been used to classify extensive air shower (EAS) data sets that were simulated using the CORSIKA code for three primary cosmic rays. The primaries are proton, oxygen and iron nuclei at energies of 100 TeV-10 PeV. This study follows earlier investigations into observables sensitive to the primary cosmic ray mass. We propose a new approach for measuring the mass-sensitive observables of EAS in order to improve primary mass separation: the EAS observables are measured locally instead of over the whole shower. We also investigated the relationship between the number of observables included in the classification methods and the prediction accuracy. We show that local measurements and the inclusion of more mass-sensitive observables in the classification process can improve classification quality, and that the muon and electron energy densities can be considered primary-mass-sensitive observables for primary mass classification. It must be noted that this study was performed for the Tehran observation level without considering the details of any particular EAS detection array.
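A minimal k-NN classification of simulated shower events, in the spirit of the approach described above, is sketched below with scikit-learn. The two features stand in for local mass-sensitive observables such as muon and electron energy densities; the data, class structure and neighbour count are hypothetical, not the CORSIKA simulation set used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 900
# Toy stand-in data: 3 classes (0=proton, 1=oxygen, 2=iron), 2 observables
labels = rng.integers(0, 3, size=n)
muon_density = rng.normal(loc=1.0 + 0.5 * labels, scale=0.4, size=n)
electron_density = rng.normal(loc=3.0 - 0.4 * labels, scale=0.6, size=n)
X = np.column_stack([muon_density, electron_density])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
# Standardize features so the distance metric treats both observables equally
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```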
The Association between Observed Parental Emotion Socialization and Adolescent Self-Medication
ERIC Educational Resources Information Center
Hersh, Matthew A.; Hussong, Andrea M.
2009-01-01
The current study examined the moderating influence of observed parental emotion socialization (PES) on self-medication in adolescents. Strengths of the study include the use of a newly developed observational coding system further extending the study of PES to adolescence, the use of an experience sampling method to assess the daily covariation…
A Kalman Filter for SINS Self-Alignment Based on Vector Observation.
Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu
2017-01-29
In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noise of the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transformation between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for the apparent gravitation is devised, which can reduce the influence of the random noise of the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate.
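The q-method that this self-alignment scheme builds on solves Wahba's problem: find the quaternion that best rotates reference-frame vectors into the observed body-frame vectors. A minimal sketch is given below; the example vectors and weights are hypothetical and the quaternion convention (vector part first, scalar last) is an assumption.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Return the optimal quaternion [qx, qy, qz, qw] (Davenport's q-method)."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)          # symmetric eigen-decomposition
    q = vecs[:, np.argmax(vals)]            # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

# Hypothetical normalized observation vectors (e.g., gravity and its apparent
# motion direction) expressed in the body and reference frames.
body = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]
ref = [np.array([0.0, 0.1, 0.995]), np.array([0.995, 0.0, -0.1])]
ref = [v / np.linalg.norm(v) for v in ref]
print(q_method(body, ref, weights=[0.6, 0.4]))
```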
A picture's worth a thousand words: a food-selection observational method.
Carins, Julia E; Rundle-Thiele, Sharyn R; Parkinson, Joy E
2016-05-04
Issue addressed: Methods are needed to accurately measure and describe behaviour so that social marketers and other behaviour change researchers can gain consumer insights before designing behaviour change strategies and so, in time, they can measure the impact of strategies or interventions when implemented. This paper describes a photographic method developed to meet these needs. Methods: Direct observation and photographic methods were developed and used to capture food-selection behaviour and examine those selections according to their healthfulness. Four meals (two lunches and two dinners) were observed at a workplace buffet-style cafeteria over a 1-week period. The healthfulness of individual meals was assessed using a classification scheme developed for the present study and based on the Australian Dietary Guidelines. Results: Approximately 27% of meals (n = 168) were photographed. Agreement was high between raters classifying dishes using the scheme, as well as between researchers when coding photographs. The subset of photographs was representative of patterns observed in the entire dining room. Diners chose main dishes in line with the proportions presented, but in opposition to the proportions presented for side dishes. Conclusions: The present study developed a rigorous observational method to investigate food choice behaviour. The comprehensive food classification scheme produced consistent classifications of foods. The photographic data collection method was found to be robust and accurate. Combining the two observation methods allows researchers and/or practitioners to accurately measure and interpret food selections. Consumer insights gained suggest that, in this setting, increasing the availability of green (healthful) offerings for main dishes would assist in improving healthfulness, whereas other strategies (e.g. promotion) may be needed for side dishes. So what?: Visual observation methods that accurately measure and interpret food-selection behaviour provide both insight for those developing healthy eating interventions and a means to evaluate the effect of implemented interventions on food selection.
Dale, Ann Marie; Ekenga, Christine C; Buckner-Petty, Skye; Merlino, Linda; Thiese, Matthew S; Bao, Stephen; Meyers, Alysha Rose; Harris-Adamson, Carisa; Kapellusch, Jay; Eisen, Ellen A; Gerr, Fred; Hegmann, Kurt T; Silverstein, Barbara; Garg, Arun; Rempel, David; Zeringue, Angelique; Evanoff, Bradley A
2018-03-29
There is growing use of job exposure matrices (JEMs) to provide exposure estimates in studies of work-related musculoskeletal disorders; few studies have examined the validity of such estimates or compared associations obtained with a JEM with those obtained using other exposure measures. This study estimated upper extremity exposures using a JEM derived from a publicly available data set (the Occupational Information Network, O*NET), and compared exposure-disease associations for incident carpal tunnel syndrome (CTS) with those obtained using observed physical exposure measures in a large prospective study. A total of 2393 workers from several industries were followed for up to 2.8 years (5.5 person-years). Standard Occupational Classification (SOC) codes were assigned to the job at enrolment. SOC codes linked to physical exposures for forceful hand exertion and repetitive activities were extracted from O*NET. We used multivariable Cox proportional hazards regression models to describe exposure-disease associations for incident CTS for individually observed physical exposures and for JEM exposures from O*NET. Both exposure methods found associations between incident CTS and exposures to force and repetition, with evidence of dose-response. Observed associations were similar across the two methods, with somewhat wider CIs for HRs calculated using the JEM method. Exposures estimated using a JEM provided exposure-disease associations for CTS similar to those obtained using the 'gold standard' method of individual observation. While JEMs have a number of limitations, in some studies they can provide useful exposure estimates in the absence of individual-level observed exposures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
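A compact illustration of the comparison described here, fitting Cox proportional hazards models for incident CTS using either an individually observed exposure or a JEM-assigned exposure, is sketched below with the lifelines library. The data frame, column names and covariates are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "followup_years": rng.uniform(0.2, 2.8, n),   # time at risk
    "cts": rng.binomial(1, 0.08, n),              # incident CTS indicator
    "force_observed": rng.normal(0, 1, n),        # individually observed exposure
    "force_jem": rng.normal(0, 1, n),             # JEM-assigned exposure
    "age": rng.integers(20, 60, n),
})

for exposure in ["force_observed", "force_jem"]:
    cph = CoxPHFitter()
    cph.fit(df[["followup_years", "cts", exposure, "age"]],
            duration_col="followup_years", event_col="cts")
    hr = np.exp(cph.params_[exposure])            # params_ holds log hazard ratios
    print(f"{exposure}: HR per 1-unit increase = {hr:.2f}")
```

Comparing the two hazard ratios and their confidence intervals mirrors the study's design: the JEM column trades individual precision for much cheaper exposure assignment.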
Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations
NASA Astrophysics Data System (ADS)
Barton, C.; Cai, M.
2015-12-01
Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. In this study, we propose a methodology that seeks to identify individual wave modes in instantaneous fields of observations by determining their projections on PCF modes according to the equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for all waves' spatial structures, including IG waves, for a given instantaneous observed field. We have applied our method to the ERA-Interim reanalysis dataset in the tropical stratosphere where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well-understood. We have confirmed the continuous evolution of the selection mechanism for equatorial waves in the stratosphere from observations as predicted by the theory for the QBO. This also validates the proposed method for decomposition of observed tropical wave fields into non-orthogonal equatorial wave modes.
NASA Astrophysics Data System (ADS)
Klemm, Janina; Neuhaus, Birgit J.
2017-05-01
Observation is one of the basic methods in science. It is not only an epistemological method in itself, but also an important competence for other methods such as experimenting or comparing. However, little is known about the relation between this inquiry method and affective factors. In our study, we investigate how emotional well-being and involvement relate to children's observation competency. Seventy preschool children participated in our test, observing a living mouse, a snail and a fish. From their behaviour in the test situation, we coded their observation competency as well as their emotional well-being and involvement. The data show that both emotional well-being and involvement are significant predictors of children's observation competency. Further analyses confirm our hypothesis of a mediating role of involvement between well-being and performance in the observation task. In conclusion, theoretical and practical implications of these results are discussed.
Software sensors for bioprocesses.
Bogaerts, Ph; Vande Wouwer, A
2003-10-01
State estimation is a significant problem in biotechnological processes, due to the general lack of hardware sensor measurements of the variables describing the process dynamics. The objective of this paper is to review a number of software sensor design methods, including extended Kalman filters, receding-horizon observers, asymptotic observers, and hybrid observers, which can be efficiently applied to bioprocesses. These several methods are illustrated with simulation and real-life case studies.
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Ridgeway, Greg; Morral, Andrew R.
2004-01-01
Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate…
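In the spirit of this line of work, the sketch below estimates propensity scores with gradient boosting and forms inverse-probability-of-treatment weights for an ATT-style comparison. It is a hypothetical illustration on simulated data, not the authors' implementation (a GBM-based approach of this kind is implemented in the R package twang); all variable names are made up.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({"age": rng.normal(35, 10, n),
                  "severity": rng.normal(0, 1, n),
                  "prior_use": rng.binomial(1, 0.3, n)})
# Treatment assignment depends on covariates (confounding by indication)
p_treat = 1 / (1 + np.exp(-(0.03 * (X["age"] - 35) + 0.8 * X["severity"])))
treat = rng.binomial(1, p_treat)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * X["severity"] - 0.4 * treat))))

# Propensity scores from a flexible (boosted) classifier
ps_model = GradientBoostingClassifier(max_depth=3, n_estimators=300, learning_rate=0.05)
ps = ps_model.fit(X, treat).predict_proba(X)[:, 1]

# ATT-style weights: treated get weight 1, controls get ps / (1 - ps)
w = np.where(treat == 1, 1.0, ps / (1 - ps))
att = (np.average(outcome[treat == 1])
       - np.average(outcome[treat == 0], weights=w[treat == 0]))
print(f"weighted risk difference (ATT estimate): {att:.3f}")
```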
Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B
2018-06-03
There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
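As a simple baseline for the estimand used here, the conditional risk difference can be estimated with a basic "T-learner": fit separate outcome models in the treated and control groups and difference their predicted risks. The sketch below is such a baseline on simulated data; it is not one of the methods benchmarked in the paper, and the data-generating process is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n, p = 4000, 10
X = rng.normal(size=(n, p))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
# True conditional risk depends on X and on treatment through X[:, 1]
risk = 1 / (1 + np.exp(-(-2.0 + X[:, 1] - 0.8 * treat * (X[:, 1] > 0))))
y = rng.binomial(1, risk)

m1 = RandomForestClassifier(n_estimators=300, min_samples_leaf=25, random_state=0)
m0 = RandomForestClassifier(n_estimators=300, min_samples_leaf=25, random_state=0)
m1.fit(X[treat == 1], y[treat == 1])   # outcome model for treated
m0.fit(X[treat == 0], y[treat == 0])   # outcome model for controls

# Conditional risk difference: P(Y=1 | X, T=1) - P(Y=1 | X, T=0)
crd = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
print(f"mean estimated risk difference: {crd.mean():.3f}")
```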
NASA Astrophysics Data System (ADS)
Kvitle, Anne Kristin
2018-05-01
Color is one of the visual variables in maps, serving an aesthetic purpose and guiding attention. Impaired color vision affects the ability to distinguish colors, which makes the task of decoding map colors difficult. Map reading is reported to be a challenging task for these observers, especially when the stimuli are small. The aim of this study is to review existing methods of map design for users with color vision deficiencies (CVD). A systematic review of the research literature and case studies of map design for CVD observers has been conducted in order to give an overview of current knowledge and future research challenges. In addition, relevant research on simulations of CVD and color image enhancement for these observers from other fields of industry is included. The study identified two main approaches: pre-processing, using accessible colors, and post-processing, using enhancement methods. Some of the methods may be applied to maps, but this requires tailoring of test images according to map type.
The Dependability of Classroom Observations.
ERIC Educational Resources Information Center
Hiatt, Diana Buell; Keesling, J. Ward
A generalizability study of timed observations was conducted in 25 primary grade classes to observe teachers' use of time--for instruction, evaluation of instruction, and classroom management--according to the hour and day observed. Observational methods used by on-site researchers included videotape, checklists, running documentaries, frequency…
Sargeant, J M; O'Connor, A M; Dohoo, I R; Erb, H N; Cevallos, M; Egger, M; Ersbøll, A K; Martin, S W; Nielsen, L R; Pearl, D L; Pfeiffer, D U; Sanchez, J; Torrence, M E; Vigre, H; Waldner, C; Ward, M P
2016-11-01
Reporting of observational studies in veterinary research presents challenges that often are not addressed in published reporting guidelines. To develop an extension of the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement that addresses unique reporting requirements for observational studies in veterinary medicine related to health, production, welfare, and food safety. Consensus meeting of experts. Mississauga, Canada. Seventeen experts from North America, Europe, and Australia. Experts completed a pre-meeting survey about whether items in the STROBE statement should be modified or added to address unique issues related to observational studies in animal species with health, production, welfare, or food safety outcomes. During the meeting, each STROBE item was discussed to determine whether or not rewording was recommended and whether additions were warranted. Anonymous voting was used to determine consensus. Six items required no modifications or additions. Modifications or additions were made to the STROBE items 1 (title and abstract), 3 (objectives), 5 (setting), 6 (participants), 7 (variables), 8 (data sources/measurement), 9 (bias), 10 (study size), 12 (statistical methods), 13 (participants), 14 (descriptive data), 15 (outcome data), 16 (main results), 17 (other analyses), 19 (limitations), and 22 (funding). The methods and processes used were similar to those used for other extensions of the STROBE statement. The use of this STROBE statement extension should improve reporting of observational studies in veterinary research by recognizing unique features of observational studies involving food-producing and companion animals, products of animal origin, aquaculture, and wildlife. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
Thom, Howard H Z; Capkun, Gorana; Cerulli, Annamaria; Nixon, Richard M; Howard, Luke S
2015-04-12
Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of, two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons because evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows the inclusion of single-arm, before-and-after observational studies to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH). Our method uses a random effects model for placebo improvements to include single-arm observational studies in a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH, combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH and aggregate data from RCTs and single-arm observational studies, both identified by a systematic review. Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not been previously possible owing to the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide, so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice. Our method for the inclusion of single-arm observational studies allows indirect comparisons that had previously not been possible with incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restrictions to RCT evidence and where a mixture of aggregate data and IPD are available.
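The key modelling idea, a random effect on placebo improvements that lets single-arm studies join the network, can be sketched compactly. The code below is an illustrative assumption written in PyMC (not the authors' implementation), with made-up study-level data and a single hypothetical add-on treatment.

```python
import numpy as np
import pymc as pm

# Observed mean improvement and standard error in placebo/control arms of RCTs
placebo_mean = np.array([12.0, 8.5, 10.2, 9.1])
placebo_se = np.array([3.0, 2.5, 2.8, 3.2])
# Before-and-after change observed in a single-arm study of treatment X
single_arm_mean, single_arm_se = 35.0, 4.0

with pm.Model():
    mu_placebo = pm.Normal("mu_placebo", mu=0.0, sigma=50.0)
    tau = pm.HalfNormal("tau", sigma=10.0)
    # Study-specific placebo improvements share a random-effects distribution
    theta = pm.Normal("theta", mu=mu_placebo, sigma=tau, shape=len(placebo_mean))
    pm.Normal("rct_obs", mu=theta, sigma=placebo_se, observed=placebo_mean)

    # Treatment effect of X relative to placebo, identified only through the
    # random-effects placebo model in the single-arm study
    d_x = pm.Normal("d_x", mu=0.0, sigma=50.0)
    theta_new = pm.Normal("theta_new", mu=mu_placebo, sigma=tau)
    pm.Normal("single_arm_obs", mu=theta_new + d_x, sigma=single_arm_se,
              observed=single_arm_mean)

    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("posterior mean effect of X vs placebo:",
      float(idata.posterior["d_x"].mean()))
```

As the abstract notes for the real analysis, effects identified this way tend to have wide credible intervals, since the single-arm study contributes no randomized contrast of its own.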
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Paulwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even as the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
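A toy version of the direct assimilation idea (method 3D-Cm applied to a single coarse pixel) can be written in a few lines: the observation operator averages the fine-scale ensemble to the coarse scale, and the ensemble covariance distributes the correction back to the fine pixels. The grid size, error settings and SWE values below are hypothetical, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_fine, n_ens = 25, 40                      # 25 fine pixels inside one coarse pixel
truth = 100 + 10 * np.sin(np.linspace(0, np.pi, n_fine))   # "true" SWE (mm)

# Prior ensemble: a common bias plus spatially varying perturbations
base = truth - 15
pert = rng.normal(0, 8, (n_ens, 1)) + rng.normal(0, 3, (n_ens, n_fine))
X = base + pert                              # ensemble, shape (n_ens, n_fine)

H = np.full((1, n_fine), 1.0 / n_fine)       # coarse obs = average of fine pixels
R = np.array([[4.0 ** 2]])                   # observation error variance (mm^2)
y = H @ truth + rng.normal(0, 4.0)           # one coarse-scale observation

# Ensemble Kalman gain from sample covariances
A = X - X.mean(axis=0)                       # anomalies
PHt = A.T @ (A @ H.T) / (n_ens - 1)          # P H^T, shape (n_fine, 1)
S = H @ PHt + R                              # innovation covariance
K = PHt @ np.linalg.inv(S)                   # gain, shape (n_fine, 1)

# Stochastic EnKF update: perturb the observation for each member
Y = y + rng.normal(0, 4.0, (n_ens, 1))
X_upd = X + (Y - X @ H.T) @ K.T

print(f"prior RMSE:    {np.sqrt(np.mean((X.mean(axis=0) - truth) ** 2)):.2f}")
print(f"analysis RMSE: {np.sqrt(np.mean((X_upd.mean(axis=0) - truth) ** 2)):.2f}")
```

Assimilating several coarse observations jointly (the full 3D-Cm case) extends the same update by stacking rows of H and the corresponding observations, which is what lets spatial error correlations across coarse pixels be exploited.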
Topical Review: Families Coping With Child Trauma: A Naturalistic Observation Methodology.
Alisic, Eva; Barrett, Anna; Bowles, Peter; Conroy, Rowena; Mehl, Matthias R
2016-01-01
To introduce a novel, naturalistic observational methodology (the Electronically Activated Recorder; EAR) as an opportunity to better understand the central role of the family environment in children's recovery from trauma. Discussion of current research methods and a systematic literature review of EAR studies on health and well-being. Surveys, experience sampling, and the EAR method each provide different opportunities and challenges for studying family interactions. We identified 17 articles describing relevant EAR studies. These investigated questions of emotional well-being, communicative behaviors, and interpersonal relationships, predominantly in adults. 5 articles reported innovative research in children, triangulating EAR-observed behavioral data (e.g., on child conflict at home) with neuroendocrine assay, sociodemographic information, and parent report. Finally, we discussed psychometric, practical, and ethical considerations for conducting EAR research with children and families. Naturalistic observation methods such as the EAR have potential for pediatric psychology studies regarding trauma and the family environment. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Observational Studies.
Snyder, Graham M; Young, Heather; Varman, Meera; Milstone, Aaron M; Harris, Anthony D; Munoz-Price, Silvia
2016-10-01
Observational studies compare outcomes among subjects with and without an exposure of interest, without intervention from study investigators. Observational studies can be designed as a prospective or retrospective cohort study or as a case-control study. In healthcare epidemiology, these observational studies often take advantage of existing healthcare databases, making them more cost-effective than clinical trials and allowing analyses of rare outcomes. This paper addresses the importance of selecting a well-defined study population, highlights key considerations for study design, and offers potential solutions including biostatistical tools that are applicable to observational study designs. Infect Control Hosp Epidemiol 2016;1-6.
ERIC Educational Resources Information Center
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
Improving methods to evaluate the impacts of plant invasions: lessons from 40 years of research
Stricker, Kerry Bohl; Hagan, Donald; Flory, S. Luke
2015-01-01
Methods used to evaluate the ecological impacts of biological invasions vary widely from broad-scale observational studies to removal experiments in invaded communities and experimental additions in common gardens and greenhouses. Different methods provide information at diverse spatial and temporal scales with varying levels of reliability. Thus, here we provide a synthetic and critical review of the methods used to evaluate the impacts of plant invasions and provide recommendations for future research. We review the types of methods available and report patterns in methods used, including the duration and spatial scale of studies and plant functional groups examined, from 410 peer-reviewed papers published between 1971 and 2011. We found that there has been a marked increase in papers published on plant invasion impacts since 2003 and that more than half of all studies employed observational methods while <5 % included predictive modelling. Most of the studies were temporally and spatially restricted with 51 % of studies lasting <1 year and almost half of all studies conducted in plots or mesocosms <1 m2. There was also a bias in life form studied: more than 60 % of all studies evaluated impacts of invasive forbs and graminoids while <16 % focused on invasive trees. To more effectively quantify invasion impacts, we argue that longer-term experimental research and more studies that use predictive modelling and evaluate impacts of invasions on ecosystem processes and fauna are needed. Combining broad-scale observational studies with experiments and predictive modelling may provide the most insight into invasion impacts for policy makers and land managers seeking to reduce the effects of plant invasions. PMID:25829379
Zhang, Zhiyong; Yuan, Ke-Hai
2016-06-01
Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
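As a point of reference for the quantities being estimated, the sketch below computes coefficient alpha with the standard (non-robust, complete-data) formula; it is not the authors' robust procedure or their R package coefficientalpha, and the item scores are simulated.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(5)
true_score = rng.normal(0, 1, size=(200, 1))
items = true_score + rng.normal(0, 0.8, size=(200, 5))   # 5 roughly parallel items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Omega, by contrast, requires estimating a factor model for the items, which is where the robust estimation machinery described in the abstract does most of its work.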
Empirical Performance of Covariates in Education Observational Studies
ERIC Educational Resources Information Center
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Evolution of Observation: Implementing Programmatic Change
ERIC Educational Resources Information Center
Bender-Slack, Delane A.; Young, Teresa
2013-01-01
In the study reported, we examine the evolution of preservice teacher observation, focusing on the essential nature of observation to preservice teachers' learning about teaching while in the field. The study was 3 years long, and it involved 79 preservice teachers during semester-long language arts methods courses in early childhood and…
ERIC Educational Resources Information Center
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
Studying Triggers for Interest and Engagement Using Observational Methods
ERIC Educational Resources Information Center
Renninger, K. Ann; Bachrach, Jessica E.
2015-01-01
In this article, we discuss the contribution of observational methods to understanding the processes involved in triggering interest and establishing engagement. We begin by reviewing the literatures on interest and engagement, noting their similarities, differences, and the utility to each of better understanding the triggering process. We then…
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
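For the time-invariant coefficient case with an identity link, the kernel-weighted estimating equation reduces to a weighted least-squares problem over all within-subject pairings of response and covariate observation times. The sketch below illustrates that idea on simulated data; the data-generating process, bandwidth and Gaussian kernel are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(6)
beta_true = np.array([1.0, 2.0])                   # intercept, slope
h = 0.1                                            # kernel bandwidth

def gauss_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

XtWX = np.zeros((2, 2))
XtWy = np.zeros(2)
for _ in range(200):                               # subjects
    a, b = rng.normal(0.0, 1.0, 2)                 # subject-specific covariate trend
    s = np.sort(rng.uniform(0, 1, rng.integers(3, 8)))   # covariate observation times
    t = np.sort(rng.uniform(0, 1, rng.integers(3, 8)))   # response times (mismatched)
    x_obs = a + b * s                              # covariate measured at times s
    y = beta_true[0] + beta_true[1] * (a + b * t) + rng.normal(0, 0.5, len(t))
    X = np.column_stack([np.ones(len(s)), x_obs])
    for j in range(len(t)):
        w = gauss_kernel(t[j] - s, h)              # down-weight distant covariate times
        XtWX += X.T @ (w[:, None] * X)
        XtWy += X.T @ (w * y[j])

print("kernel-weighted estimate of (intercept, slope):",
      np.linalg.solve(XtWX, XtWy).round(2))
```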
How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods
Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José
2015-01-01
The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
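The two approaches compared here can be mimicked in a few lines; the sketch below is an assumption about the general procedure (thresholding a stain-specific channel versus counting hits on a grid of test points), not the authors' imaging pipeline, and the "image" is a random stand-in for a real photomicrograph.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy "image": a single channel where values above a threshold represent
# immunostained smooth muscle (hypothetical stand-in for a real RGB image).
img = rng.uniform(0, 1, size=(512, 512))

threshold = 0.7
mask = img > threshold                       # color-based segmentation

# Surface density by segmentation: stained pixels / total pixels
sd_segmentation = mask.mean()

# Surface density by point counting: hits on a coarse grid of test points
step = 32
sd_point_counting = mask[::step, ::step].mean()

print(f"segmentation estimate:   {sd_segmentation:.3f}")
print(f"point-counting estimate: {sd_point_counting:.3f}")
```

The point-counting estimate is simply a sparse sample of the same mask, which is why the two methods agree on average but can differ for any single image, echoing the study's caution against mixing results from the two methods.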
NASA Astrophysics Data System (ADS)
Saravanakkumar, D.; Sivaranjani, S.; Kaviyarasu, K.; Ayeshamariam, A.; Ravikumar, B.; Pandiarajan, S.; Veeralakshmi, C.; Jayachandran, M.; Maaza, M.
2018-03-01
Pure ZnO and ZnO–CuO nanocomposites were synthesized using a modified perfume spray pyrolysis (MSP) method. The crystallite size of the nanoparticles (NPs), estimated from the X-ray diffraction pattern, is nearly 36 nm. Morphology was analyzed using Field Emission Scanning Electron Microscopy (FESEM) and elemental composition by energy dispersive X-ray analysis (EDX); these studies confirmed that ZnO and CuO have hexagonal and monoclinic structures, respectively. Fourier Transform Infrared (FTIR) spectra revealed characteristic bands of ZnO and CuO at 443 and 616 cm−1. UV–vis spectra gave an average band gap of 3.25 eV for the composite, indicating a blue shift. The antibacterial activity against both gram-positive and gram-negative bacteria was studied by the disc diffusion method. To the best of our knowledge, this is the first report on a ZnO–CuO nanocomposite synthesized by a modified perfume spray pyrolysis method.
ERIC Educational Resources Information Center
Dekker, Vera; Nauta, Maaike H.; Mulder, Erik J.; Sytema, Sjoerd; de Bildt, Annelies
2016-01-01
The Social skills Observation Measure (SOM) is a direct observation method for social skills used in naturalistic everyday situations in school. This study describes the development of the SOM and investigates its psychometric properties in 86 children with Autism spectrum disorder, aged 9.8-13.1 years. The interrater reliability was found to be…
A Genetic Algorithm Method for Direct Estimation of Paleostress States from Heterogeneous Fault-Slip Observations
NASA Astrophysics Data System (ADS)
Srivastava, Deepak C.; Thakur, Prithvi; Gupta, Pravin K.
2016-12-01
Paleostress estimation from a group of heterogeneous fault-slip observations entails first the classification of the observations into homogeneous fault sets and then a separate inversion of each homogeneous set. This study combines these two issues into a single nonlinear inverse problem and proposes a heuristic search method that inverts the heterogeneous fault-slip observations directly. The method estimates the different paleostress states recorded in a group of heterogeneous fault-slip observations and classifies the group into homogeneous sets as a byproduct. It uses the genetic algorithm operators elitism, selection, encoding, crossover and mutation. These processes translate into a guided search that finds successively fitter solutions and operates iteratively until the termination criterion is met and the globally fittest stress tensors are obtained. We explain the basic steps of the algorithm on a working example and demonstrate the validity of the method on several synthetic groups and a natural group of heterogeneous fault-slip observations. The method is independent of any user-defined bias and avoids entrapment of the solution in a local optimum. It succeeds even in difficult situations where other classification methods fail.
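Since the abstract names the standard operators (elitism, selection, encoding, crossover, mutation) without code, the sketch below shows a generic real-coded genetic algorithm applied to a toy misfit function. It illustrates the operators only and is not the authors' stress-inversion code; the fitness function, population size and mutation settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def fitness(pop):
    # Toy misfit: negative distance to a hidden "true" parameter vector,
    # standing in for agreement between predicted and observed fault slip
    target = np.array([0.3, -0.7, 0.1, 0.9])
    return -np.linalg.norm(pop - target, axis=1)       # higher is fitter

def evolve(pop_size=60, n_genes=4, n_gen=200, n_elite=2, p_mut=0.1):
    pop = rng.uniform(-1, 1, (pop_size, n_genes))       # encoding: real-valued genes
    for _ in range(n_gen):
        fit = fitness(pop)
        order = np.argsort(fit)[::-1]
        new_pop = [pop[i].copy() for i in order[:n_elite]]   # elitism: keep the best
        while len(new_pop) < pop_size:
            # Tournament selection of two parents
            parents = []
            for _ in range(2):
                contenders = rng.choice(pop_size, size=3, replace=False)
                parents.append(pop[contenders[np.argmax(fit[contenders])]])
            cut = rng.integers(1, n_genes)               # one-point crossover
            child = np.concatenate([parents[0][:cut], parents[1][cut:]])
            mutate = rng.random(n_genes) < p_mut         # mutation
            child[mutate] += rng.normal(0, 0.2, mutate.sum())
            new_pop.append(child)
        pop = np.array(new_pop)
    return pop[np.argmax(fitness(pop))]

print("fittest parameter vector:", evolve().round(2))
```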
Hello World! - Experiencing Usability Methods without Usability Expertise
NASA Astrophysics Data System (ADS)
Eriksson, Elina; Cajander, Åsa; Gulliksen, Jan
How do you do usability work when no usability expertise is available? What happens in an organization when system developers, with no previous HCI knowledge, start applying usability methods, and particularly field studies, after a 3-day course? In order to answer these questions, qualitative data were gathered through participatory observations, a feedback survey, field study documentation and interviews with 47 system developers from a public authority. Our results suggest that field studies enhance the developers' understanding of the user perspective and provide a more holistic overview of the use situation, but that some developers were unable to interpret their observations and see solutions to the users' problems. The field study method was much appreciated and has now become standard operating procedure within the organization. However, although field studies may be useful, they do not replace the need for usability professionals, as their knowledge is essential for more complex observations and analysis, and for keeping the focus on usability.
NASA Astrophysics Data System (ADS)
Liu, J. H.; Hu, J.; Li, Z. W.
2018-04-01
Three-dimensional (3-D) deformation fields associated with the October 2016 Central Tottori earthquake are extracted in this paper from ALOS-2 Interferometric Synthetic Aperture Radar (InSAR) observations acquired in four different viewing geometries, i.e., ascending/descending and left-/right-looking. In particular, the Strain Model and Variance Component Estimation (SM-VCE) method is developed to integrate the heterogeneous InSAR observations without being affected by the inconsistent coverage of the SAR images over the earthquake focal area. Compared with the classical weighted least squares (WLS) method, the SM-VCE method is capable of retrieving a more accurate and complete deformation field for the Central Tottori earthquake, as indicated by comparison with GNSS observations. In addition, the accuracies of the heterogeneous InSAR observations and of the 3-D deformations at each point are quantitatively provided by the SM-VCE method.
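The classical WLS step that SM-VCE refines can be sketched in a few lines: stack the line-of-sight (LOS) observations from the four viewing geometries, project the unknown east/north/up displacement through the LOS unit vectors, and solve with inverse-variance weights. The projection vectors, noise levels and displacement values below are hypothetical.

```python
import numpy as np

# Rows: unit vectors projecting (east, north, up) motion onto each LOS direction
S = np.array([
    [-0.62,  0.11, 0.78],   # ascending, right-looking (hypothetical geometry)
    [ 0.60,  0.12, 0.79],   # descending, right-looking
    [ 0.63, -0.10, 0.77],   # ascending, left-looking
    [-0.59, -0.13, 0.80],   # descending, left-looking
])
d_true = np.array([0.15, -0.05, 0.30])          # true 3-D displacement (m)
sigma = np.array([0.01, 0.01, 0.02, 0.02])      # per-geometry LOS noise (m)

rng = np.random.default_rng(9)
los = S @ d_true + rng.normal(0, sigma)         # observed LOS displacements

P = np.diag(1.0 / sigma ** 2)                   # weights (inverse variances)
d_hat = np.linalg.solve(S.T @ P @ S, S.T @ P @ los)
print("estimated (east, north, up):", d_hat.round(3))
```

SM-VCE differs from this point-by-point solution by tying neighbouring points together through a strain model and by estimating the weights themselves via variance component estimation rather than assuming them.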
Observer roles that optimise learning in healthcare simulation education: a systematic review.
O'Regan, Stephanie; Molloy, Elizabeth; Watterson, Leonie; Nestel, Debra
2016-01-01
Simulation is widely used in health professional education. The convention that learners are actively involved may limit access to this educational method. The aim of this paper is to review the evidence for learning methods that employ directed observation as an alternative to hands-on participation in scenario-based simulation training. We sought studies that included either a direct comparison of the learning outcomes of observers with those of active participants or that identified factors important for the engagement of observers in simulation. We systematically searched health and education databases and reviewed journals and bibliographies for studies investigating or referring to observer roles in simulation using mannequins, simulated patients or role play simulations. A quality framework was used to rate the studies. Nine studies met the inclusion criteria. Five studies suggest learning outcomes in observer roles are as good as or better than those in hands-on roles in simulation. Four studies document learner satisfaction in observer roles. Five studies used a tool to guide observers. Eight studies involved observers in the debrief. Learning and satisfaction in observer roles are closely associated with observer tools, learner engagement, role clarity and contribution to the debrief. Learners who valued observer roles described them as affording an overarching view, examination of details from a distance, and meaningful feedback during the debrief. Learners who did not value observer roles described them as passive, or boring compared with hands-on engagement in the simulation encounter. Learning outcomes and role satisfaction for observers are improved through learner engagement and the use of observer tools. The value that students attach to observer roles appears contingent on role clarity, use of observer tools, and inclusion of observers' perspectives in the debrief.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of the knee and hip joints using a computer program. With the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for flexion of the knee joints. With the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were higher than those obtained through real-time observation. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining exact joint angles were found to be insufficient tools for improving the quality of judging.
Field test comparison of two dermal tolerance assessment methods of hand hygiene products.
Girard, R; Carré, E; Pires-Cronenberger, S; Bertin-Mandy, M; Favier-Bulit, M C; Coyault, C; Coudrais, S; Billard, M; Regard, A; Kerhoas, A; Valdeyron, M L; Cracco, B; Misslin, P
2008-06-01
This study aimed to compare the sensitivity and workload requirements of two dermal tolerance assessment methods for hand hygiene products, in order to select a suitable pilot testing method for field tests. An observer-rating method and a self-assessment method were compared in 12 volunteer hospital departments (autumn/winter of 2005-2006). Three three-week test periods were separated by two-week intervals during which the routine products were reintroduced. The observer-rating method scored dryness and irritation on four-point scales. In the self-assessment method, the user rated appearance, intactness, moisture content, and sensation on a visual analogue scale, which was converted into a 10-point numerical scale. Eleven products (soaps) were tested (223/250 complete reports for observer rating, 131/251 for self-assessment). Two products were significantly less well tolerated than the routine product according to the observers, and four products according to the self-assessments. There was no significant difference between the two methods when products were classified according to tolerance (Fisher's test: P=0.491). For the symptom common to both assessment methods (dryness), there was a good correlation between the two methods (Spearman's rho: P=0.032). The workload was higher for the observer-rating method (288 h of observer time plus 122 h of prevention team and pharmacist time, compared with 15 h of prevention team and pharmacist time for self-assessment). In conclusion, the self-assessment method was considered more suitable for pilot testing, although additional time should be allocated to educational measures, as the return rate of complete self-assessment forms was poor.
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
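The following sketch illustrates the basic IPTW construction discussed above on synthetic data, assuming a single measured confounder and, for brevity, a binary outcome contrast rather than the weighted Cox model one would fit for the time-to-event outcomes studied here; variable names and coefficients are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic observational data: confounder x drives both treatment and outcome.
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * x)))           # treatment-selection process
treat = rng.binomial(1, p_treat)
p_event = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * x - 0.3 * treat)))
event = rng.binomial(1, p_event)                      # binary stand-in outcome

# Step 1: estimate the propensity score from measured confounders.
ps = LogisticRegression().fit(x.reshape(-1, 1), treat).predict_proba(x.reshape(-1, 1))[:, 1]

# Step 2: stabilized inverse probability of treatment weights.
p_marg = treat.mean()
w = np.where(treat == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

# Step 3: weighted outcome contrast (a weighted Cox model would replace this
# step for time-to-event outcomes).
risk1 = np.average(event[treat == 1], weights=w[treat == 1])
risk0 = np.average(event[treat == 0], weights=w[treat == 0])
print("IPTW risk difference:", risk1 - risk0)
```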
Development of Creative Behavior Observation Form: A Study on Validity and Reliability
ERIC Educational Resources Information Center
Dere, Zeynep; Ömeroglu, Esra
2018-01-01
In this study, the Creative Behavior Observation Form was developed to assess the creativity of children. The reliability and validity study group comprised a total of 257 children aged 5-6, selected using a stratified sampling method. Content Validity Index (CVI) and…
Cross-comparison and evaluation of air pollution field estimation methods
NASA Astrophysics Data System (ADS)
Yu, Haofei; Russell, Armistead; Mulholland, James; Odman, Talat; Hu, Yongtao; Chang, Howard H.; Kumar, Naresh
2018-04-01
Accurate estimates of human exposure are critical for air pollution health studies, and a variety of methods are currently used to assign pollutant concentrations to populations. Results from these methods may differ substantially, which can affect the outcomes of health impact assessments. Here, we applied 14 methods for developing spatiotemporal air pollutant concentration fields of eight pollutants to the Atlanta, Georgia region. These methods include eight methods relying mostly on air quality observations (CM: central monitor; SA: spatial average; IDW: inverse distance weighting; KRIG: kriging; TESS-D: discontinuous tessellation; TESS-NN: natural neighbor tessellation with interpolation; LUR: land use regression; AOD: downscaled satellite-derived aerosol optical depth), one using the RLINE dispersion model, and five methods using a chemical transport model (CMAQ), with and without using observational data to constrain results. The derived fields were evaluated and compared. Overall, all methods generally perform better in urban than rural areas, and for secondary than primary pollutants. We found the CM and SA methods may be appropriate only for small domains and for secondary pollutants, though the SA method led to large negative spatial correlations when using data withholding for PM2.5 (spatial correlation coefficient R = -0.81). The TESS-D method was found to have major limitations. Results of the IDW, KRIG and TESS-NN methods are similar. They are found to be better suited for secondary pollutants because of their satisfactory temporal performance (e.g. average temporal R2 > 0.85 for PM2.5 but less than 0.35 for the primary pollutant NO2). In addition, they are best suited to areas with relatively dense monitoring networks, given their limited ability to capture spatial concentration variability, as indicated by the negative spatial R (lower than -0.2 for PM2.5 when assessed using data withholding). The performance of the LUR and AOD methods was similar to kriging. Using RLINE and CMAQ fields without fusing observational data led to substantial errors and biases, though the CMAQ model captured spatial gradients reasonably well (spatial R = 0.45 for PM2.5). Two unique tests conducted here included quantifying the autocorrelation of method biases (which can be important in time series analyses) and how well the methods capture the observed interspecies correlations (which would be of particular importance in multipollutant health assessments). Autocorrelation of method biases lasted longest, and interspecies correlations of primary pollutants were higher than observed, when air quality models were used without data fusion. Use of hybrid methods that combine air quality model outputs with observational data overcomes some of these limitations and is better suited for health studies. Results from this study contribute to a better understanding of the strengths and weaknesses of different methods for estimating human exposures.
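As an illustration of one of the simpler observation-based methods named above, here is a minimal inverse distance weighting (IDW) sketch; the monitor locations, concentrations, and power parameter are made-up values, not the Atlanta data.

```python
import numpy as np

def idw_field(monitor_xy, monitor_conc, grid_xy, power=2.0):
    """Inverse distance weighting: concentration at each grid point is a
    distance-weighted average of the monitor observations."""
    d = np.linalg.norm(grid_xy[:, None, :] - monitor_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                 # avoid division by zero at monitors
    w = 1.0 / d ** power
    return (w * monitor_conc).sum(axis=1) / w.sum(axis=1)

# Synthetic example: five monitors, concentrations in ug/m3.
monitor_xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
monitor_conc = np.array([12.0, 9.5, 11.2, 8.8, 10.4])
grid_xy = np.array([[2.0, 3.0], [7.5, 8.0]])
print(idw_field(monitor_xy, monitor_conc, grid_xy))
```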
Study of the Socratic method during cognitive restructuring.
Froján-Parga, María Xesús; Calero-Elvira, Ana; Montaño-Fidalgo, Montserrat
2011-01-01
Cognitive restructuring, in particular in the form of the Socratic method, is widely used by clinicians. However, little research has been published with respect to underlying processes, which has hindered well-accepted explanations of its effectiveness. The aim of this study is to present a new method of analysis of the Socratic method during cognitive restructuring based on the observation of the therapist's verbal behaviour. Using recordings from clinical sessions, 18 sequences were selected in which the Socratic method was applied by six cognitive-behavioural therapists working at a private clinical centre in Madrid. The recordings involved eight patients requiring therapy for various psychological problems. Observations were coded using a category system designed by the authors and that classifies the therapist's verbal behaviour into seven hypothesized functions based on basic behavioural operations. We used the Observer XT software to code the observed sequences. The results are summarized through a preliminary model which considers three different phases of the Socratic method and some functions of the therapist's verbal behaviour in each of these phases: discriminative and reinforcement functions in the starting phase, informative and motivational functions in the course of the debate, and instructional and reinforcement functions in the final phase. We discuss the long-term potential clinical benefits of the current proposal. Copyright © 2010 John Wiley & Sons, Ltd.
Munn, Zachary; Giles, Kristy; Aromataris, Edoardo; Deakin, Anita; Schultz, Timothy; Mandel, Catherine; Peters, Micah Dj; Maddern, Guy; Pearson, Alan; Runciman, William
2018-02-01
The use of safety checklists in interventional radiology is an intervention aimed at reducing mortality and morbidity. Currently there is little known about their practical use in Australian radiology departments. The primary aim of this mixed methods study was to evaluate how safety checklists (SC) are used and completed in radiology departments within Australian hospitals, and attitudes towards their use as described by Australian radiologists. A mixed methods approach employing both quantitative and qualitative techniques was used for this study. Direct observations of checklist use during radiological procedures were performed to determine compliance. Medical records were also audited to investigate whether there was any discrepancy between practice (actual care measured by direct observation) and documentation (documented care measured by an audit of records). A focus group with Australian radiologists was conducted to determine attitudes towards the use of checklists. Among the four participating radiology departments, overall observed mean completion of the components of the checklist was 38%. The checklist items most commonly observed to be addressed by the operating theatre staff as noted during observations were correct patient (80%) and procedure (60%). Findings from the direct observations conflicted with the medical record audit, where there was a higher percentage of completion (64% completion) in comparison to the 38% observed. The focus group participants spoke of barriers to the use of checklists, including the culture of radiology departments. This is the first study of safety checklist use in radiology within Australia. Overall completion was low across the sites included in this study. Compliance data collected from observations differed markedly from reported compliance in medical records. There remain significant barriers to the proper use of safety checklists in Australian radiology departments. © 2017 The Royal Australian and New Zealand College of Radiologists.
Dietary assessment in elderly people: experiences gained from studies in the Netherlands.
de Vries, J H M; de Groot, L C P G M; van Staveren, W A
2009-02-01
In selecting a dietary assessment method, several aspects such as the aim of the study and the characteristics of the target population should be taken into account. In elderly people, diminished functionality and cognitive decline may hamper dietary assessment and require tailored approaches to assess dietary intake. The objective of this paper is to summarize our experience in dietary assessment in a number of different studies in population groups over 65 years of age in the Netherlands, and to discuss this experience in the perspective of other nutrition surveys in the elderly. In longitudinal studies, we applied a modified dietary history; in clinical nursing home studies, trained staff observed and recorded food consumption; and in a controlled trial in healthy elderly men, we used a food frequency questionnaire (FFQ). For all methods applied in the community-dwelling elderly people, validation studies showed a similar underestimation of intake of 10-15% compared with the reference value. In the care-depending elderly, the underestimation was less: 5% according to an observational method. The methods varied widely in the resources required, including burden to the participants, field staff and finances. For effective dietary assessment in older adults, the major challenge will be to distinguish between those elderly who are able to respond correctly to the less intensive methods, such as 24-h recalls or FFQ, and those who are not able to respond to these methods and require adapted techniques, for example, observational records.
Video observation of procedural skills for assessment of trabeculectomy performed by residents.
Hassanpour, Narges; Chen, Rebecca; Baikpour, Masoud; Moghimi, Sasan
2016-06-01
The efficacy and sufficiency of a healthcare system is directly related to the knowledge and skills of the graduates working in the system. In this regard, many different assessment methods have been proposed to evaluate the various skills of learners. Video Observation of Procedural Skills (VOPS) is one newly proposed method. In this study we aimed to compare the results of the VOPS method with the more commonly used Direct Observation of Procedural Skills (DOPS). In this prospective study conducted in 2012, all 10 ophthalmology residents of postgraduate year 4 were selected for participation. Three months into training in the glaucoma ward, these residents performed trabeculectomy surgery on patients, and their procedural skills were assessed in real time by an expert via the DOPS method. All surgeries were also recorded and later evaluated via the VOPS method by an expert. A Bland-Altman plot was also used to compare the two methods by calculating the mean difference and 95% limits of agreement. Residents had performed a mean of 14.9 ± 3.5 (range 10-20) independent trabeculectomies before the assessments. DOPS grade was positively associated with the number of independent trabeculectomies performed during the glaucoma rotation (β=0.227, p = 0.004). The intra-observer reproducibility of VOPS measurements was 0.847 (95% CI: 0.634, 0.961). The mean VOPS grade was significantly lower than the mean DOPS grade (8.4 vs. 8.9, p = 0.02). However, a good correlation was observed between the grades of VOPS and DOPS (r = 0.89, p = 0.001). Bland-Altman analysis demonstrated that all data points fell within the 95% limits of agreement (-1.46, 0.46). The present study showed that VOPS might be considered a feasible, valid, and reliable assessment method for the procedural skills of medical students and residents that can be used as an alternative to the DOPS method. However, VOPS might underestimate DOPS in evaluating the surgical skills of residents.
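A minimal sketch of the Bland-Altman computation used above, applied to hypothetical paired grades (not the study data): the bias is the mean VOPS-minus-DOPS difference, and the 95% limits of agreement are bias ± 1.96 standard deviations of the differences.

```python
import numpy as np

def bland_altman(vops, dops):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(vops, float) - np.asarray(dops, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired grades for 10 residents (illustrative only).
vops = [8, 9, 7, 8, 9, 8, 10, 7, 9, 8]
dops = [9, 9, 8, 9, 9, 8, 10, 8, 10, 9]
bias, loa = bland_altman(vops, dops)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```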
A Comparison of Observed Abundances in Five Well-Studied Planetary Nebulae
NASA Astrophysics Data System (ADS)
Tanner, Jolene; Balick, B.; Kwitter, K. B.
2013-01-01
We have assembled data and derived abundances from several recent careful studies for five bright planetary nebulae (PNe) of low, moderate, and high ionization and relatively simple morphology. Each of the studies employs different apertures, aperture placement, and facilities for the observations. Various methods were used to derive total abundances. All used spectral windows that included [OII]3727 in the UV through argon lines in the red. Our ultimate goal is to determine the extent to which the derived abundances are consistent. We show that the reddening-corrected line ratios are surprisingly similar despite the different modes of observation and that the various abundance analysis methods yield generally consistent results for He/H, N/H, O/H, and Ne/H (within 50% with a few larger deviations). In addition, we processed the line ratios from the different sources using a common abundance derivation method (ELSA) to search for clues of systematic methodological inconsistencies. None were uncovered.
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for data uncertainty in parameter estimation. With this method, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the MODFLOW groundwater model and develop the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering framework while also accounting for data uncertainty. The study was conducted as a numerical experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater system periodically. The results showed that, by accounting for data uncertainty, the Bayesian maximum entropy filter provides good real-time parameter estimates.
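For context on the baseline that the abstract contrasts with Bayesian Maximum Entropy Filtering, here is a minimal scalar Kalman filter sketch for real-time parameter calibration from monitoring-well measurements, assuming a random-walk parameter model and a linear observation operator; all values come from a synthetic twin experiment, not the authors' MODFLOW setup.

```python
import numpy as np

def kalman_parameter_update(theta, P, obs, H, R, Q):
    """One assimilation step for a random-walk parameter model:
    theta_k = theta_{k-1} + w,  obs_k = H * theta_k + v."""
    # Forecast: the parameter persists; uncertainty grows by the process noise Q.
    theta_f, P_f = theta, P + Q
    # Update with the head measurement from a monitoring well.
    K = P_f * H / (H * P_f * H + R)          # Kalman gain
    theta_a = theta_f + K * (obs - H * theta_f)
    P_a = (1.0 - K * H) * P_f
    return theta_a, P_a

# Twin experiment: recover a scalar "conductivity-like" parameter.
rng = np.random.default_rng(1)
true_theta, H, R, Q = 2.5, 1.3, 0.05, 1e-3
theta, P = 1.0, 1.0                          # deliberately poor initial guess
for _ in range(50):
    obs = H * true_theta + rng.normal(scale=np.sqrt(R))
    theta, P = kalman_parameter_update(theta, P, obs, H, R, Q)
print("estimated parameter:", round(theta, 3))
```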
Qualitative versus quantitative methods in psychiatric research.
Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S
2012-01-01
Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in earlier phases of research, qualitative methods can be employed in questionnaire design, establishment of diagnostic criteria, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be confirmed statistically using quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.
Himle, Michael B; Chang, Susanna; Woods, Douglas W; Pearlman, Amanda; Buzzella, Brian; Bunaciu, Liviu; Piacentini, John C
2006-01-01
Behavior analysis has been at the forefront in establishing effective treatments for children and adults with chronic tic disorders. As is customary in behavior analysis, the efficacy of these treatments has been established using direct-observation assessment methods. Although behavior-analytic treatments have enjoyed acceptance and integration into mainstream health care practices for tic disorders (e.g., psychiatry and neurology), the use of direct observation as a primary assessment tool has been neglected in favor of less objective methods. Hesitation to use direct observation appears to stem largely from concerns about the generalizability of clinic observations to other settings (e.g., home) and a lack of consensus regarding the most appropriate and feasible techniques for conducting and scoring direct observation. The purpose of the current study was to evaluate and establish a reliable, valid, and feasible direct-observation protocol capable of being transported to research and clinical settings. A total of 43 children with tic disorders, collected from two outpatient specialty clinics, were assessed using direct (videotape samples) and indirect (Yale Global Tic Severity Scale; YGTSS) methods. Videotaped observation samples were collected across 3 consecutive weeks and two different settings (clinic and home), were scored using both exact frequency counts and partial-interval coding, and were compared to data from a common indirect measure of tic severity (the YGTSS). In addition, various lengths of videotaped segments were scored to determine the optimal observation length. Results show that (a) clinic-based observations correspond well to home-based observations, (b) brief direct-observation segments scored with time-sampling methods reliably quantified tics, and (c) indirect methods did not consistently correspond with the direct methods.
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2001-01-01
For interpreting observational data, and in particular for use in inverse methods, accurate and realistic chemical transport models are essential. Toward this end we have, in recent years, helped develop and utilize a number of three-dimensional models including the Model for Atmospheric Transport and Chemistry (MATCH).
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Radio Meteors Observations Techniques at RI NAO
NASA Astrophysics Data System (ADS)
Vovk, Vasyl; Kaliuzhnyi, Mykola
2016-07-01
The Solar System is inhabited by a large number of celestial bodies. Some of them are well studied, such as the planets and the vast majority of large asteroids and comets. One group of objects, however, has received little attention: meteoroids and the meteors associated with them. Nowadays, low-cost, high-efficiency radio-technical solutions have appeared that allow meteors to be observed daily. At RI NAO, three methodologies for meteor observation have been developed: a single-station method using an FM receiver, a correlation method using an FM receiver and Internet resources, and a single-station method using a low-cost SDR receiver.
Land Surface Model Biases and their Impacts on the Assimilation of Snow-related Observations
NASA Astrophysics Data System (ADS)
Arsenault, K. R.; Kumar, S.; Hunter, S. M.; Aman, R.; Houser, P. R.; Toll, D.; Engman, T.; Nigro, J.
2007-12-01
Some recent snow modeling studies have employed a wide range of assimilation methods to incorporate snow cover or other snow-related observations into different hydrological or land surface models. These methods often include taking both model and observation biases into account throughout the model integration. This study focuses more on diagnosing the model biases and presenting their subsequent impacts on assimilating snow observations and modeled snowmelt processes. In this study, the land surface model, the Community Land Model (CLM), is used within the Land Information System (LIS) modeling framework to show how such biases impact the assimilation of MODIS snow cover observations. Alternative in-situ and satellite-based observations are used to help guide the CLM LSM in better predicting snowpack conditions and more realistic timing of snowmelt for a western US mountainous region. Also, MODIS snow cover observation biases will be discussed, and validation results will be provided. The issues faced with inserting or assimilating MODIS snow cover at moderate spatial resolutions (like 1km or less) will be addressed, and the impacts on CLM will be presented.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
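A minimal sketch of the kind of simulation described above: events are scattered across a simulated observation period, and the three interval sampling methods are applied to the same session; the interval length, event count, and event duration are illustrative parameters, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a 600 s observation period at 1 s resolution with random events.
session = np.zeros(600, dtype=bool)
n_events, event_dur = 20, 5                       # illustrative parameters
for start in rng.integers(0, 600 - event_dur, n_events):
    session[start:start + event_dur] = True
true_prop = session.mean()

interval = 10                                     # 10 s observation intervals
chunks = session.reshape(-1, interval)

mts = chunks[:, -1].mean()                        # momentary time sampling
partial = chunks.any(axis=1).mean()               # partial-interval recording
whole = chunks.all(axis=1).mean()                 # whole-interval recording

print(f"true = {true_prop:.3f}  MTS = {mts:.3f}  "
      f"partial = {partial:.3f} (tends to overestimate)  "
      f"whole = {whole:.3f} (tends to underestimate)")
```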
WRF nested large-eddy simulations of deep convection during SEAC4RS
NASA Astrophysics Data System (ADS)
Heath, Nicholas K.; Fuelberg, Henry E.; Tanelli, Simone; Turk, F. Joseph; Lawson, R. Paul; Woods, Sarah; Freeman, Sean
2017-04-01
Large-eddy simulations (LES) and observations are often combined to increase our understanding and improve the simulation of deep convection. This study evaluates a nested LES method that uses the Weather Research and Forecasting (WRF) model and, specifically, tests whether the nested LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection that occurred during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. Mesoscale WRF output (1.35 km grid length) was used to drive a nested LES with 450 m grid spacing, which then drove a 150 m domain. Results reveal that the 450 m nested LES reasonably simulates observed reflectivity distributions and aircraft-observed in-cloud vertical velocities during the study period. However, when examining convective updrafts, reducing the grid spacing to 150 m worsened results. We find that the simulated updrafts in the 150 m run become too diluted by entrainment, thereby generating updrafts that are weaker than observed. Lastly, the 450 m simulation is combined with observations to study the processes forcing strong midlevel cloud/updraft edge downdrafts that were observed on 2 September. Results suggest that these strong downdrafts are forced by evaporative cooling due to mixing and by perturbation pressure forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested LES approach, with further development and evaluation, could potentially provide an effective method for studying deep convection in real-world cases.
Theory, Method, and Triangulation in the Study of Street Children.
ERIC Educational Resources Information Center
Lucchini, Riccardo
1996-01-01
Describes how a comparative study of street children in Montevideo (Uruguay), Rio de Janeiro, and Mexico City contributes to a synergism between theory and method. Notes how theoretical approaches of symbolic interactionism, genetic structuralism, and habitus theory complement interview, participant observation, and content analysis methods;…
Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.
2018-02-01
The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
ERIC Educational Resources Information Center
Karp, Jennifer; Serbin, Lisa A.; Stack, Dale M.; Schwartzman, Alex E.
2004-01-01
This study demonstrates the potential utility of the Behavioural Style Observational System (BSOS) as a new observational measure of children's behavioural style. The BSOS is an objective, short and easy to use measure that can be readily adapted to a variety of home and laboratory situations. In the present study, 160 mother-child dyads from the…
O'Connor, A M; Sargeant, J M; Dohoo, I R; Erb, H N; Cevallos, M; Egger, M; Ersbøll, A K; Martin, S W; Nielsen, L R; Pearl, D L; Pfeiffer, D U; Sanchez, J; Torrence, M E; Vigre, H; Waldner, C; Ward, M P
2016-11-01
The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement was first published in 2007 and again in 2014. The purpose of the original STROBE was to provide guidance for authors, reviewers, and editors to improve the comprehensiveness of reporting; however, STROBE has a unique focus on observational studies. Although much of the guidance provided by the original STROBE document is directly applicable, it was deemed useful to map those statements to veterinary concepts, provide veterinary examples, and highlight unique aspects of reporting in veterinary observational studies. Here, we present the examples and explanations for the checklist items included in the STROBE-Vet statement. Thus, this is a companion document to the STROBE-Vet statement methods and process document (JVIM_14575 "Methods and Processes of Developing the Strengthening the Reporting of Observational Studies in Epidemiology-Veterinary (STROBE-Vet) Statement" undergoing proofing), which describes the checklist and how it was developed. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
NASA Astrophysics Data System (ADS)
Canbazoglu Bilici, Sedef; Selcen Guzey, S.; Yamak, Havva
2016-05-01
Background: Technological pedagogical content knowledge (TPACK) is critical for effective teaching with technology. However, generally science teacher education programs do not help pre-service teachers develop TPACK. Purpose: The purpose of this study was to assess pre-service science teachers' TPACK over a semester-long Science Methods. Sample: Twenty-seven pre-service science teachers took the course toward the end of their four-year teacher education program. Design and method: The study employed the case study methodology. Lesson plans and microteaching observations were used as data collection tools. Technological Pedagogical Content Knowledge-based lesson plan assessment instrument (TPACK-LpAI) and Technological Pedagogical Content Knowledge Observation Protocol (TPACK-OP) were used to analyze data obtained from observations and lesson plans. Results: The results showed that the TPACK-focused Science Methods course had an impact on pre-service teachers' TPACK to varying degrees. Most importantly, the course helped teachers gain knowledge of effective usage of educational technology tools. Conclusion: Teacher education programs should provide opportunities to pre-service teachers to develop their TPACK so that they can effectively integrate technology into their teaching.
Kinematic Characteristics of Meteor Showers by Results of the Combined Radio-Television Observations
NASA Astrophysics Data System (ADS)
Narziev, Mirhusen
2016-07-01
One of the most important tasks of meteor astronomy is the study of the distribution of meteoroid matter in the solar system. The most important component in addressing this issue is the measurement of the velocities, radiants, and orbits of both shower and sporadic meteors. Radiants and orbits of meteors for different data sets obtained from photographic, television, electro-optical, video, Fireball Network and radar observations have been measured repeatedly. However, the radiants, velocities and orbits of shower meteors based on combined radar-optical observations have not been sufficiently studied. In this paper, we present methods for computing the radiants, velocities, and orbits from the combined radar-TV meteor observations carried out at HisAO in 1978-1980. As a result of the two-year cycle of simultaneous TV-radar observations, 57 simultaneous meteors were identified. Analysis of the TV images showed that some meteor trails appeared as dashed lines. Among the simultaneous meteors of the δ-Aquariids, 10 produced such dashed images, and among the Perseids there were only 7. For such fragmented images of simultaneous meteors, a known method - together with the measured radar range, trail length, and time interval between the segments - allowed meteor velocity to be determined using the combined method. In addition, the velocity of the same meteors was measured using diffraction and radar range-time methods based on the radar observations. It was determined that the mean values of meteoroid velocity based on the combined radar-TV observations are greater by 1-3 km/s than the averaged velocity values measured using radar methods alone. Orbits of the simultaneously observed meteors with segmented images were calculated on the basis of the average velocity observed using the combined radar-TV method. The measured radiants, velocities and orbital elements of individual meteors allowed us to calculate average values for shower meteors. The radiants, velocities and orbits of the meteor showers obtained by combined radar-TV observations are compared with data obtained by other authors.
Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro
2017-11-20
The quadrant method was described by Bernard et al. and has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research was to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point 1 (highest), point 2 (deepest), point 3 (lowest), and point 4 (shallowest) in the femoral tunnel position. The depth and height of each point were measured. The antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by two orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). The accuracy of the method was also evaluated against the quadrant method. Intra-observer reliability was almost perfect for both the AM and PL tunnels (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deeper, 0.58% higher and the PL tunnel position 0.01% shallower, 0.13% lower compared with the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture, and can provide measurements that can be compared across various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for evaluating femoral tunnel position after ACL reconstruction. Case series, Level IV.
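The paper reports ICC values without stating the exact form used; the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater) for two observers on hypothetical tunnel-position measurements, as one plausible way such inter-observer reliability could be obtained.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss). `ratings` is an (n targets x k raters) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)          # targets
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)          # raters
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical tunnel-depth measurements (%) by two observers for six knees.
obs1 = [28.1, 30.4, 26.7, 29.9, 31.2, 27.5]
obs2 = [28.4, 30.1, 27.0, 30.3, 31.0, 27.9]
print("inter-observer ICC:", round(icc_2_1(np.column_stack([obs1, obs2])), 3))
```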
Optimal observables for multiparameter seismic tomography
NASA Astrophysics Data System (ADS)
Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner
2014-08-01
We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameters classes, as well as to 3-D heterogeneous earth models.
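A toy sketch of the weighting idea described above, assuming each fundamental observable can be summarized by a scalar sensitivity to each parameter class (a drastic simplification of full sensitivity kernels); the kernel values and penalty factor are invented, and the global search uses SciPy's differential evolution rather than the authors' algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy sensitivities of four fundamental observables (rows) to three
# parameter classes (columns): [density, S velocity, P velocity].
K = np.array([
    [0.20, 0.90, 0.30],
    [0.35, 0.70, 0.25],
    [0.50, 0.45, 0.20],
    [0.60, 0.30, 0.15],
])
target, others = 0, [1, 2]          # enhance density, suppress vs and vp

def cost(w):
    s = w @ K                        # net sensitivity of the combined observable
    penalty = np.sum(s[others] ** 2)
    return -abs(s[target]) + 10.0 * penalty   # maximise target, penalise the rest

result = differential_evolution(cost, bounds=[(-1, 1)] * K.shape[0], seed=0)
w = result.x / np.linalg.norm(result.x)       # normalise the optimal weights
print("optimal weights:", np.round(w, 3))
print("net sensitivities (rho, vs, vp):", np.round(w @ K, 3))
```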
Statistical procedures for evaluating daily and monthly hydrologic model predictions
Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.
2004-01-01
The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression r2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the r2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
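A minimal sketch of two of the statistics discussed above, the Nash-Sutcliffe efficiency and the regression r2, applied to hypothetical monthly runoff depths rather than the SWAT results.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated values."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical monthly runoff depths (mm); not the study's data.
observed  = [12.0, 30.5, 55.1, 40.2, 22.3, 10.8, 5.4, 4.1, 9.7, 18.2, 26.0, 33.4]
predicted = [10.5, 27.9, 60.3, 35.8, 25.1, 12.2, 6.0, 3.5, 8.1, 20.4, 24.2, 30.9]

print("NSE =", round(nash_sutcliffe(observed, predicted), 2))
print("r2  =", round(r_squared(observed, predicted), 2))
```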
ERIC Educational Resources Information Center
Austin, Peter C.
2011-01-01
The propensity score is the probability of treatment assignment conditional on observed baseline characteristics. The propensity score allows one to design and analyze an observational (nonrandomized) study so that it mimics some of the particular characteristics of a randomized controlled trial. In particular, the propensity score is a balancing…
Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys
DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A
2014-01-01
Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
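A minimal sketch of the recommended combination of propensity score weighting with survey weighting, on synthetic data: the final analysis weight is the product of the IPT weight and the survey design weight. Variable names, coefficients, and the unweighted propensity score model are illustrative assumptions, not the tutorial's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000

# Synthetic survey sample: x is a confounder, svy_w are design weights.
x = rng.normal(size=n)
svy_w = rng.uniform(0.5, 3.0, size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-0.7 * x)))
spend = 2000 + 800 * x + 500 * treat + rng.normal(0, 300, size=n)

# Propensity score estimated on the sample (survey-weighted PS models are
# also used in practice; this sketch keeps the PS model unweighted).
ps = LogisticRegression().fit(x.reshape(-1, 1), treat).predict_proba(x.reshape(-1, 1))[:, 1]
iptw = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# Final analysis weight = propensity score weight x survey design weight.
w = iptw * svy_w
ate = (np.average(spend[treat == 1], weights=w[treat == 1])
       - np.average(spend[treat == 0], weights=w[treat == 0]))
print("weighted treatment effect on spending:", round(ate, 1))
```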
Baucom, Brian R W; Leo, Karena; Adamo, Colin; Georgiou, Panayiotis; Baucom, Katherine J W
2017-12-01
Observational behavioral coding methods are widely used for the study of relational phenomena. There are numerous guidelines for the development and implementation of these methods that include principles for creating new and adapting existing coding systems as well as principles for creating coding teams. While these principles have been successfully implemented in research on relational phenomena, the ever expanding array of phenomena being investigated with observational methods calls for a similar expansion of these principles. Specifically, guidelines are needed for decisions that arise in current areas of emphasis in couple research including observational investigation of related outcomes (e.g., relationship distress and psychological symptoms), the study of change in behavior over time, and the study of group similarities and differences in the enactment and perception of behavior. This article describes conceptual and statistical considerations involved in these 3 areas of research and presents principle- and empirically based rationale for design decisions related to these issues. A unifying principle underlying these guidelines is the need for careful consideration of fit between theory, research questions, selection of coding systems, and creation of coding teams. Implications of (mis)fit for the advancement of theory are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations; as a result, classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
Development of an Aerosol Surface Inoculation Method for Bacillus Spores
Lee, Sang Don; Ryan, Shawn P.; Snyder, Emily Gibb
2011-01-01
A method was developed to deposit Bacillus subtilis spores via aerosolization onto various surface materials for biological agent decontamination and detection studies. This new method uses an apparatus coupled with a metered dose inhaler to reproducibly deposit spores onto various surfaces. A metered dose inhaler was loaded with Bacillus subtilis spores, a surrogate for Bacillus anthracis. Five different material surfaces (aluminum, galvanized steel, wood, carpet, and painted wallboard paper) were tested using this spore deposition method. This aerosolization method deposited spores at a concentration of more than 10^7 CFU per coupon (18-mm diameter) with less than a 50% coefficient of variation, showing that the aerosolization method developed in this study can deposit reproducible numbers of spores onto various surface coupons. Scanning electron microscopy was used to probe the spore deposition patterns on test coupons. The deposition patterns observed following aerosol impaction were compared to those of liquid inoculation. A physical difference in the spore deposition patterns was observed to result from the two different methods. The spore deposition method developed in this study will help prepare spore coupons via aerosolization fast and reproducibly for bench top decontamination and detection studies. PMID:21193670
Development of an aerosol surface inoculation method for bacillus spores.
Lee, Sang Don; Ryan, Shawn P; Snyder, Emily Gibb
2011-03-01
A method was developed to deposit Bacillus subtilis spores via aerosolization onto various surface materials for biological agent decontamination and detection studies. This new method uses an apparatus coupled with a metered dose inhaler to reproducibly deposit spores onto various surfaces. A metered dose inhaler was loaded with Bacillus subtilis spores, a surrogate for Bacillus anthracis. Five different material surfaces (aluminum, galvanized steel, wood, carpet, and painted wallboard paper) were tested using this spore deposition method. This aerosolization method deposited spores at a concentration of more than 10^7 CFU per coupon (18-mm diameter) with less than a 50% coefficient of variation, showing that the aerosolization method developed in this study can deposit reproducible numbers of spores onto various surface coupons. Scanning electron microscopy was used to probe the spore deposition patterns on test coupons. The deposition patterns observed following aerosol impaction were compared to those of liquid inoculation. A physical difference in the spore deposition patterns was observed to result from the two different methods. The spore deposition method developed in this study will help prepare spore coupons via aerosolization fast and reproducibly for bench top decontamination and detection studies.
Embedding of multidimensional time-dependent observations.
Barnard, J P; Aldrich, C; Gerber, M
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
Embedding of multidimensional time-dependent observations
NASA Astrophysics Data System (ADS)
Barnard, Jakobus P.; Aldrich, Chris; Gerber, Marius
2001-10-01
A method is proposed to reconstruct dynamic attractors by embedding of multivariate observations of dynamic nonlinear processes. The Takens embedding theory is combined with independent component analysis to transform the embedding into a vector space of linearly independent vectors (phase variables). The method is successfully tested against prediction of the unembedded state vector in two case studies of simulated chaotic processes.
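A minimal sketch of the embedding-plus-ICA idea described above, assuming a scalar observable, a fixed embedding dimension and lag, and scikit-learn's FastICA as the independent component step; the logistic map stands in for a chaotic process, and none of the choices are tuned.

```python
import numpy as np
from sklearn.decomposition import FastICA

def delay_embed(series, dim, lag):
    """Takens delay embedding of a scalar time series into `dim` dimensions."""
    series = np.asarray(series, float)
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag:i * lag + n] for i in range(dim)])

# Scalar observable from a chaotic-looking process (logistic map as a stand-in).
x = np.empty(3000)
x[0] = 0.4
for t in range(1, len(x)):
    x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])

emb = delay_embed(x, dim=5, lag=1)          # embedding dimension and lag are
                                            # illustrative choices, not tuned
ica = FastICA(n_components=3, random_state=0)
phase_vars = ica.fit_transform(emb)         # linearly independent phase variables
print(phase_vars.shape)
```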
Westbrook, Johanna I; Ampt, Amanda
2009-04-01
Evidence regarding how health information technologies influence clinicians' patterns of work and support efficient practices is limited. Traditional paper-based data collection methods are unable to capture clinical work complexity and communication patterns. The use of electronic data collection tools for such studies is emerging yet is rarely assessed for reliability or validity. Our aim was to design, apply and test an observational method which incorporated the use of an electronic data collection tool for work measurement studies which would allow efficient, accurate and reliable data collection, and capture greater degrees of work complexity than current approaches. We developed an observational method and software for personal digital assistants (PDAs) which captures multiple dimensions of clinicians' work tasks, namely what task, with whom, and with what; tasks conducted in parallel (multi-tasking); interruptions and task duration. During field-testing over 7 months across four hospital wards, fifty-two nurses were observed for 250 h. Inter-rater reliability was tested and validity was measured by (i) assessing whether observational data reflected known differences in clinical role work tasks and (ii) by comparing observational data with participants' estimates of their task time distribution. Observers took 15-20 h of training to master the method and data collection process. Only 1% of tasks observed did not match the classification developed and were classified as 'other'. Inter-rater reliability scores of observers were maintained at over 85%. The results discriminated between the work patterns of enrolled and registered nurses consistent with differences in their roles. Survey data (n=27) revealed consistent ratings of tasks by nurses, and their rankings of most to least time-consuming tasks were significantly correlated with those derived from the observational data. Over 40% of nurses' time was spent in direct care or professional communication, with 11.8% of time spent multi-tasking. Nurses were interrupted approximately every 49 min. One quarter of interruptions occurred while nurses were preparing or administering medications. This method efficiently produces reliable and valid data. The multi-dimensional nature of the data collected provides greater insights into patterns of clinicians' work and communication than has previously been possible using other methods.
ERIC Educational Resources Information Center
Bates, Reid A.; Holton, Elwood F., III; Burnett, Michael F.
1999-01-01
A case study of learning transfer demonstrates the possible effect of influential observation on linear regression analysis. A diagnostic method that tests for violation of assumptions, multicollinearity, and individual and multiple influential observations helps determine which observation to delete to eliminate bias. (SK)
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
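A minimal sketch of the splitting approach described above, applied to scalar advection with a stiff relaxation source of the form -mu*u*(u-1)*(u-1/2), a common model problem of this type (assumed here rather than taken from the paper): an explicit upwind advection step alternates with an implicit source step.

```python
import numpy as np

def splitting_step(u, dt, dx, a, mu):
    """Godunov splitting: upwind advection step followed by an implicit
    (backward Euler + Newton) step for the stiff source s(u) = -mu*u*(u-1)*(u-0.5)."""
    # Advection sub-step (first-order upwind, a > 0, periodic boundary).
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # Stiff source sub-step, solved implicitly so dt is not limited by 1/mu.
    s = lambda v: -mu * v * (v - 1.0) * (v - 0.5)
    ds = lambda v: -mu * (3.0 * v ** 2 - 3.0 * v + 0.5)
    v = u.copy()
    for _ in range(8):                       # a few Newton iterations
        f = v - u - dt * s(v)
        v = v - f / (1.0 - dt * ds(v))
    return v

# Square-wave initial data advected with a stiff source (mu >> 1).
nx, a, mu = 200, 1.0, 1000.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.where((x > 0.1) & (x < 0.4), 1.0, 0.0)
dt = 0.8 * dx / a
for _ in range(100):
    u = splitting_step(u, dt, dx, a, mu)
# With a stiff source the discontinuity can propagate at an incorrect speed,
# which is the phenomenon studied in the paper.
print("downward front located near x =", x[np.argmax(np.diff(u) < -0.5)])
```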
Heymann, Eckhard W.; Lüttmann, Kathrin; Michalczyk, Inga M.; Saboya, Pedro Pablo Pinedo; Ziegenhagen, Birgit; Bialozyt, Ronald
2012-01-01
Background Determining the distances over which seeds are dispersed is a crucial component for examining spatial patterns of seed dispersal and their consequences for plant reproductive success and population structure. However, following the fate of individual seeds after removal from the source tree till deposition at a distant place is generally extremely difficult. Here we provide a comparison of observationally and genetically determined seed dispersal distances and dispersal curves in a Neotropical animal-plant system. Methodology/Principal Findings In a field study on the dispersal of seeds of three Parkia (Fabaceae) species by two Neotropical primate species, Saguinus fuscicollis and Saguinus mystax, in Peruvian Amazonia, we observationally determined dispersal distances. These dispersal distances were then validated through DNA fingerprinting, by matching DNA from the maternally derived seed coat to DNA from potential source trees. We found that dispersal distances are strongly right-skewed, and that distributions obtained through observational and genetic methods and fitted distributions do not differ significantly from each other. Conclusions/Significance Our study showed that seed dispersal distances can be reliably estimated through observational methods when a strict criterion for inclusion of seeds is observed. Furthermore, dispersal distances produced by the two primate species indicated that these primates fulfil one of the criteria for efficient seed dispersers. Finally, our study demonstrated that DNA extraction methods so far employed for temperate plant species can be successfully used for hard-seeded tropical plants. PMID:22514748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Earthquake observation is widely used to monitor tectonic activity, and also at local scales such as volcano-tectonic and geothermal monitoring. Precise hypocenter determination requires finding the hypocenter location that minimizes the misfit between the observed and the calculated travel times. For this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and geothermal field settings. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those obtained using Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than Geiger's results, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structures in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
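To make the approach concrete, the following is a minimal sketch (not the authors' Matlab code) of locating a hypocenter by simulated annealing: a candidate location is perturbed, accepted with the Metropolis rule, and the temperature is lowered geometrically. The homogeneous velocity model, station geometry, synthetic picks, and cooling schedule are all illustrative assumptions.

```python
# Simulated annealing for a toy hypocenter location problem (straight-ray travel times).
import numpy as np

rng = np.random.default_rng(0)
v = 5.0  # assumed P-wave velocity, km/s
stations = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 0.0], [3.0, 12.0, 0.0], [8.0, 9.0, 0.0]])
true_src = np.array([5.0, 6.0, 8.0])                               # synthetic "true" hypocenter
t_obs = np.linalg.norm(stations - true_src, axis=1) / v            # synthetic picks (origin time 0)

def rms(src):
    t_calc = np.linalg.norm(stations - src, axis=1) / v
    return np.sqrt(np.mean((t_obs - t_calc) ** 2))

x = np.array([1.0, 1.0, 1.0])        # starting model
T, cooling, step = 1.0, 0.99, 2.0
best, best_rms = x.copy(), rms(x)
for it in range(5000):
    cand = x + rng.normal(scale=step * T, size=3)     # perturbation shrinks with temperature
    d = rms(cand) - rms(x)
    if d < 0 or rng.random() < np.exp(-d / T):        # Metropolis acceptance rule
        x = cand
        if rms(x) < best_rms:
            best, best_rms = x.copy(), rms(x)
    T *= cooling                                      # geometric cooling schedule
print("estimated hypocenter:", np.round(best, 2), "RMS (s):", round(best_rms, 4))
```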
Regression-Based Estimates of Observed Functional Status in Centenarians
Mitchell, Meghan B.; Miller, L. Stephen; Woodard, John L.; Davey, Adam; Martin, Peter; Burgess, Molly; Poon, Leonard W.
2011-01-01
Purpose of the Study: There is lack of consensus on the best method of functional assessment, and there is a paucity of studies on daily functioning in centenarians. We sought to compare associations between performance-based, self-report, and proxy report of functional status in centenarians. We expected the strongest relationships between proxy reports and observed performance of basic activities of daily living (BADLs) and instrumental activities of daily living (IADLs). We hypothesized that the discrepancy between self-report and observed daily functioning would be modified by cognitive status. We additionally sought to provide clinicians with estimates of centenarians’ observed daily functioning based on their mental status in combination with subjective measures of activities of daily living (ADLs). Design and Methods: Two hundred and forty-four centenarians from the Georgia Centenarian Study were included in this cross-sectional population-based study. Measures included the Direct Assessment of Functional Status, self-report and proxy report of functional status, and the Mini-Mental State Examination (MMSE). Results: Associations between observed and proxy reports were stronger than between observed and self-report across BADL and IADL measures. A significant MMSE by type of report interaction was found, indicating that lower MMSE performance is associated with a greater discrepancy between subjective and objective ADL measures. Implications: Results demonstrate associations between 3 methods of assessing functional status and suggest proxy reports are generally more accurate than self-report measures. Cognitive status accounted for some of the discrepancy between observed and self-reports, and we provide clinicians with tables to estimate centenarians’ performance on observed functional measures based on MMSE and subjective report of functional status. PMID:20974657
Validity and inter-observer reliability of subjective hand-arm vibration assessments.
Coenen, Pieter; Formanoy, Margriet; Douwes, Marjolein; Bosch, Tim; de Kraker, Heleen
2014-07-01
Exposure to mechanical vibrations at work (e.g., due to handling powered tools) is a potential occupational risk as it may cause upper extremity complaints. However, reliable and valid assessment methods for vibration exposure at work are lacking. Measuring hand-arm vibration objectively is often difficult and expensive, while the often-used information provided by manufacturers lacks detail. Therefore, a subjective hand-arm vibration assessment method was tested for validity and inter-observer reliability. In an experimental protocol, sixteen tasks handling powered tools were executed by two workers. Hand-arm vibration was assessed subjectively by 16 observers according to the proposed subjective assessment method. As a gold-standard reference, hand-arm vibration was measured objectively using a vibration measurement device. Weighted κ's were calculated to assess validity, and intra-class correlation coefficients (ICCs) were calculated to assess inter-observer reliability. Inter-observer reliability of the subjective assessments, depicting the agreement among observers, can be expressed by an ICC of 0.708 (0.511-0.873). The validity of the subjective assessments compared with the gold-standard reference can be expressed by a weighted κ of 0.535 (0.285-0.785). In addition, the percentage of exact agreement of the subjective assessment compared with the objective measurement was relatively low (i.e., 52% of all tasks). This study shows that subjectively assessed hand-arm vibrations are fairly reliable among observers and moderately valid. This assessment method is a first attempt to use subjective risk assessments of hand-arm vibration. Although this assessment method could benefit from further improvement, it can be of use in future studies and in field-based ergonomic assessments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
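As an illustration of the two agreement statistics reported above, the sketch below computes a linearly weighted kappa (one observer versus the objective reference) and a two-way random-effects ICC(2,1) (agreement among observers). The ratings are invented, and the specific ICC form is an assumption; the study does not state which variant was used.

```python
# Weighted kappa for validity and ICC(2,1) for inter-observer reliability (invented data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

# validity: one observer's ordinal ratings vs. the objective reference (categories 0-3)
observer = [2, 1, 3, 0, 2, 1, 2, 3]
reference = [2, 2, 3, 0, 1, 1, 2, 2]
print("weighted kappa:", round(cohen_kappa_score(observer, reference, weights="linear"), 3))

# reliability: rows = tasks, columns = observers (scores on the same ordinal scale)
ratings = np.array([[2, 2, 1], [1, 1, 1], [3, 2, 3], [0, 0, 1], [2, 3, 2], [1, 1, 2]])
n, k = ratings.shape
grand = ratings.mean()
ms_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
ms_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
ss_err = np.sum((ratings - ratings.mean(axis=1, keepdims=True)
                 - ratings.mean(axis=0, keepdims=True) + grand) ** 2)
ms_err = ss_err / ((n - 1) * (k - 1))
icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print("ICC(2,1):", round(icc21, 3))
```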
NASA Astrophysics Data System (ADS)
Kim, Shin-Woo; Noh, Nam-Kyu; Lim, Gyu-Ho
2013-04-01
This study presents the introduction of retrospective optimal interpolation (ROI) and its application with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) that gradually assimilates observations over the analysis window for a variance-minimum estimate of the atmospheric state at the initial time of the analysis window. The assimilation window of the ROI algorithm is gradually increased, similar to that of quasi-static variational assimilation (QSVA; Pires et al., 1996). Unlike QSVA, however, the ROI method assimilates data at post-analysis times using a perturbation method (Verlaan and Heemink, 1997) without an adjoint model. Song and Lim (2011) improved this method by incorporating eigen-decomposition and covariance inflation. The computational cost of ROI can be reduced by eigen-decomposition of the background error covariance, which concentrates the ROI analyses on the error variances of the governing eigenmodes by transforming the control variables into eigenspace. A total energy norm is used for the normalization of each control variable. In this study, the ROI method is applied to the WRF model in an Observing System Simulation Experiment (OSSE) to validate the algorithm and to investigate its capability. Horizontal wind, pressure, potential temperature, and water vapor mixing ratio are used as control variables and observations. First, a single-profile assimilation experiment is performed. Subsequently, OSSEs are performed using a virtual observing system consisting of synop, ship, and sonde data. The difference between forecast errors with and without assimilation clearly increases with time, indicating the improvement of the forecast obtained by assimilation with ROI. The characteristics and strengths/weaknesses of the ROI method are also investigated by conducting experiments with the 3D-Var (three-dimensional variational) and 4D-Var (four-dimensional variational) methods. At the initial time, ROI produces a larger forecast error than 4D-Var. However, the difference between the two experiments decreases gradually with time, and ROI shows an apparently better result (i.e., smaller forecast error) than 4D-Var after the 9-hour forecast.
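For orientation, the sketch below shows the basic optimal-interpolation analysis update that ROI applies repeatedly as the window grows, x_a = x_b + B Hᵀ (H B Hᵀ + R)⁻¹ (y − H x_b). It is not the WRF/ROI implementation; the state size, observation operator, and covariances are illustrative assumptions.

```python
# One optimal-interpolation (OI) analysis step on a toy state vector.
import numpy as np

n, p = 6, 2                       # state size, number of observations
xb = np.zeros(n)                  # background state
B = 0.5 * np.exp(-0.3 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # correlated background errors
H = np.zeros((p, n)); H[0, 1] = 1.0; H[1, 4] = 1.0    # observe two state components
R = 0.1 * np.eye(p)               # observation-error covariance
y = np.array([1.0, -0.5])         # observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)                     # analysis state
print("analysis state:", np.round(xa, 3))
```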
Laborde-Castérot, Hervé; Agrinier, Nelly; Thilly, Nathalie
2015-10-01
Propensity score (PS) and instrumental variable (IV) analyses are techniques used to adjust for confounding in observational research. Increasingly, they seem to be used simultaneously in studies evaluating health interventions. The present review aimed to analyze the agreement between PS and IV results in medical research published to date. We reviewed all published observational studies that evaluated a clinical intervention using both PS and IV analyses, as identified in MEDLINE and Web of Science. Thirty-seven studies, most of them published during the previous 5 years, reported 55 comparisons between results from PS and IV analyses. There was a slight/fair agreement between the methods [Cohen's kappa coefficient = 0.21 (95% confidence interval: 0.00, 0.41)]. In 23 cases (42%), results were nonsignificant for one method and significant for the other, and the IV analysis results were nonsignificant in most of these situations (87%). Discrepancies are frequent between PS and IV analyses and can be interpreted in various ways. This suggests that researchers should carefully consider their analytical choices, and readers should be cautious when interpreting results, until further studies clarify the respective roles of the two methods in observational comparative effectiveness research. Copyright © 2015 Elsevier Inc. All rights reserved.
Regression analysis of sparse asynchronous longitudinal data
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.
2015-01-01
Summary We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
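As a rough illustration of the kernel-weighting idea (for a scalar covariate, identity link, and a time-invariant coefficient), the sketch below pairs every response time with every covariate time within a subject and downweights distant pairs with a Gaussian kernel. The simulated data, bandwidth, and closed-form solution are assumptions for illustration, not the paper's estimator as implemented.

```python
# Kernel-weighted least-squares estimate of a time-invariant coefficient from
# asynchronously observed responses and covariates (simulated data).
import numpy as np

rng = np.random.default_rng(0)
subjects = []
for _ in range(100):
    t_y = np.sort(rng.uniform(0, 1, rng.integers(2, 5)))     # response observation times
    t_x = np.sort(rng.uniform(0, 1, rng.integers(2, 5)))     # covariate observation times
    x = np.sin(2 * np.pi * t_x) + rng.normal(0, 0.2, t_x.size)
    y = 2.0 * np.sin(2 * np.pi * t_y) + rng.normal(0, 0.3, t_y.size)   # generating coefficient = 2
    subjects.append((t_y, y, t_x, x))

h = 0.1                                                       # kernel bandwidth
num = den = 0.0
for t_y, y, t_x, x in subjects:
    K = np.exp(-0.5 * ((t_y[:, None] - t_x[None, :]) / h) ** 2)   # Gaussian weights over time gaps
    num += np.sum(K * (y[:, None] * x[None, :]))
    den += np.sum(K * (x[None, :] ** 2))
beta_hat = num / den                                          # weighted least-squares solution
print("kernel-weighted estimate of beta:", round(beta_hat, 2))
```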
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
NASA Astrophysics Data System (ADS)
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodological variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expression is a result of true biological or methodological variation. MATERIALS & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods using identical samples. Greater variation in protein concentration readings was observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that methodological variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodological approaches to protein quantification in techniques that report quantitative differences.
King, G. J.; Macpherson, J. W.
1966-01-01
A successful method for low temperature preservation of bull semen was modified for use with boar semen. Observations were made on the effects of varying cooling rate, equilibration time, freezing rate, glycerol concentration, method of glycerol addition, packaging containers, extender pH and tonicity. Observations indicate that boar semen should be cooled and frozen at a slower rate than bull semen. Within the ranges or methods examined, the other factors had little effect on recovery of motility after freezing. PMID:4226548
A Longitudinal Examination of Agitation and Resident Characteristics in the Nursing Home
ERIC Educational Resources Information Center
Burgio, Louis D.; Park, Nan Sook; Hardin, J. Michael; Sun, Fei
2007-01-01
Purpose: Agitation frequently accompanies cognitive decline among nursing home residents. This study used cross-sectional and longitudinal (up to 18 months) methods to examine agitation among profoundly and moderately impaired residents using both staff report and direct observation methods. Design and Methods: The study included participants (N =…
The Role of Teacher Questions and the Socratic Method in EFL Classrooms in Kuwait
ERIC Educational Resources Information Center
Al-Darwish, Salwa
2012-01-01
The present study sheds light on teaching English through two ways of questioning (Socratic & Traditional) methods in Kuwaiti elementary public schools. Data were collected through a qualitative observational method. The study engaged 15 female participants, seven of whom were newly graduate English language teachers with experience in how…
ERIC Educational Resources Information Center
Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.
2014-01-01
Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…
Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images
NASA Astrophysics Data System (ADS)
Tahar, K. N.
2015-08-01
Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. There are many methods that can be used to acquire height values, such as tacheometry, leveling, and the Global Positioning System (GPS). This study investigated the effect on height accuracy of different observation approaches, namely single-based and network-based GPS methods. The GPS network corrections were acquired from a local network, namely the Iskandar network. This network has been set up to provide real-time correction data to rover GPS stations, while the single-based approach relies on a known GPS station. Nine ground control points were established evenly across the study area. Each ground control point was observed for about two and ten minutes. It was found that the height accuracy gives different results for each observation.
NASA Astrophysics Data System (ADS)
Suzuki, Kai; Iwasaki, Ryosuke; Takagi, Ryo; Yoshizawa, Shin; Umemura, Shin-ichiro
2017-07-01
Acoustic cavitation bubbles are useful for enhancing the heating effect in high-intensity focused ultrasound (HIFU) treatment. Many studies were conducted to investigate the behavior of such bubbles in tissue-mimicking materials, such as a transparent gel phantom; however, the detailed behavior in tissue was still unclear owing to the difficulty in optical observation. In this study, a new biological phantom was developed to observe cavitation bubbles generated in an optically shallow area of tissue. Two imaging methods, high-speed photography using light scattering and high-speed ultrasonic imaging, were used for detecting the behavior of the bubbles simultaneously. The results agreed well with each other for the area of bubble formation and the temporal change in the region of bubbles, suggesting that both methods are useful for visualizing the bubbles.
NASA Astrophysics Data System (ADS)
Vista Wulandari, Ayu; Rizki Pratama, Khafid; Ismail, Prayoga
2018-05-01
Accurate and real-time rainfall data with wide spatial coverage are still lacking because rainfall observations are unavailable in many regions. Weather satellites have a very wide observational range and can be used to determine rainfall variability with better resolution than limited direct observations. This study utilizes Himawari-8 satellite data to estimate rainfall using the Convective Stratiform Technique (CST). The CST method separates convective and stratiform cloud components using infrared-channel satellite data. Cloud components are classified by slope because their physical and dynamical growth processes are very different. This research was conducted for the Bali area on December 14, 2016, verifying the result of the CST process against rainfall data from the Ngurah Rai Meteorological Station, Bali. It was found that the CST result had values similar to the observations at the Ngurah Rai meteorological station, so it is assumed that the CST method can be used for rainfall estimation in the Bali region.
NASA Astrophysics Data System (ADS)
Willamo, T.; Usoskin, I. G.; Kovaltsov, G. A.
2018-04-01
The method of active-day fraction (ADF) was proposed recently to calibrate different solar observers to standard observational conditions. The result of the calibration may depend on the overall level of solar activity during the observational period. This dependency is studied quantitatively using data of the Royal Greenwich Observatory by formally calibrating synthetic pseudo-observers to the full reference dataset. It is shown that the sunspot group number is precisely estimated by the ADF method for periods of moderate activity, may be slightly underestimated by 0.5 - 1.5 groups ({≤} 10%) for strong and very strong activity, and is strongly overestimated by up to 2.5 groups ({≤} 30%) for weak-to-moderate activity. The ADF method becomes inapplicable for the periods of grand minima of activity. In general, the ADF method tends to overestimate the overall level of activity and to reduce the long-term trends.
NASA Astrophysics Data System (ADS)
Lu, F.; Liu, Z.; Liu, Y.; Zhang, S.; Jacob, R. L.
2017-12-01
The Regional Coupled Data Assimilation (RCDA) method is introduced as a tool to study coupled climate dynamics and teleconnections. The RCDA method is built on an ensemble-based coupled data assimilation (CDA) system in a coupled general circulation model (CGCM). The RCDA method limits the data assimilation to the desired model components (e.g. atmosphere) and regions (e.g. the extratropics), and studies the ensemble-mean model response (e.g. tropical response to "observed" extratropical atmospheric variability). When applied to the extratropical influence on tropical climate, the RCDA method has shown some unique advantages, namely the combination of a fully coupled model, real-world observations and an ensemble approach. Tropical variability (e.g. El Niño-Southern Oscillation or ENSO) and climatology (e.g. asymmetric Inter-Tropical Convergence Zone or ITCZ) were initially thought to be determined mostly by local forcing and ocean-atmosphere interaction in the tropics. Since late 20th century, numerous studies have showed that extratropical forcing could affect, or even largely determine some aspects of the tropical climate. Due to the coupled nature of the climate system, however, the challenge of determining and further quantifying the causality of extratropical forcing on the tropical climate remains. Using the RCDA method, we have demonstrated significant control of extratropical atmospheric forcing on ENSO variability in a CGCM, both with model-generated and real-world observation datasets. The RCDA method has also shown robust extratropical impact on the tropical double-ITCZ bias in a CGCM. The RCDA method has provided the first systematic and quantitative assessment of extratropical influence on tropical climatology and variability by incorporating real world observations in a CGCM.
Mehl, Matthias R.
2016-01-01
This article reviews the Electronically Activated Recorder or EAR as an ambulatory ecological momentary assessment tool for the real-world observation of daily behavior. Technically, the EAR is an audio recorder that intermittently records snippets of ambient sounds while participants go about their lives. Conceptually, it is a naturalistic observation method that yields an acoustic log of a person’s day as it unfolds. The power of the EAR lies in unobtrusively collecting authentic real-life observational data. In preserving a high degree of naturalism at the level of the raw recordings, it resembles ethnographic methods; through its sampling and coding, it enables larger empirical studies. The article provides an overview of the EAR method, reviews its validity, utility, and limitations, and discusses it in the context of current developments in ambulatory assessment, specifically the emerging field of mobile sensing. PMID:28529411
NASA Astrophysics Data System (ADS)
Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.
2016-12-01
In groundwater studies, many researchers have used hydraulic tomography (HT) based on field-site pumping tests to estimate, by inverse modeling, the heterogeneous spatial distribution of hydraulic properties, demonstrating that most field-site aquifers have heterogeneous hydrogeological parameter fields. Huang et al. [2011] used a non-redundant verification analysis, with changing pumping wells and fixed observation wells in both the inverse and forward analyses, to show the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer with a steady-state model. In the previous literature, non-redundant verification (changing pumping-well locations, fixed observation-well locations) has only been examined for the steady state; the various combinations of fixed or changing pumping wells with fixed observation wells (redundant verification) or changing observation wells (non-redundant verification) have not yet been explored for their influence on the hydraulic tomography method. In this study, we carried out both redundant and non-redundant verification of the forward analysis to examine their influence on the hydraulic tomography method in the transient case. We also discuss an actual field case at the NYUST campus site to demonstrate the effectiveness of hydraulic tomography methods, and the analysis results confirm the feasibility of the inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.
76 FR 72901 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-28
.... The Shipboard Observation Form for Floating Marine Debris was created based on methods used in studies of floating marine debris by established researchers, previous shipboard observational studies.... This survey will assist in carrying out activities prescribed in the Marine Debris Research, Prevention...
Fledelius, Joan; Khalil, Azza; Hjorthaug, Karin; Frøkiær, Jørgen
2016-12-01
The purpose of this study is to determine whether a qualitative approach or a semi-quantitative approach provides the most robust method for early response evaluation with 2'-deoxy-2'-[(18)F]fluoro-D-glucose (F-18-FDG) positron emission tomography combined with whole body computed tomography (PET/CT) in non-small cell lung cancer (NSCLC). In this study, eight nuclear medicine consultants analyzed F-18-FDG PET/CT scans from 35 patients with locally advanced NSCLC. Scans were performed at baseline and after 2 cycles of chemotherapy. Each observer used two different methods for evaluation: (1) PET response criteria in solid tumors (PERCIST) 1.0 and (2) a qualitative approach. Both methods allocate patients into one of four response categories (complete and partial metabolic response (CMR and PMR) and stable and progressive metabolic disease (SMD and PMD)). The inter-observer agreement was evaluated using Fleiss' kappa for multiple raters, Cohen's kappa for comparison of the two methods, and intraclass correlation coefficients (ICC) for comparison of lean body mass corrected standardized uptake value (SUL) peak measurements. The agreement between observers when determining the percentage change in SULpeak was "almost perfect", with ICC = 0.959. There was a strong agreement among observers allocating patients to the different response categories, with a Fleiss kappa of 0.76 (0.71-0.81). In 22 of the 35 patients, complete agreement was observed with PERCIST 1.0. The agreement was lower (moderate) when using the qualitative method, with a Fleiss kappa of 0.60 (0.55-0.64). Complete agreement was achieved in only 10 of the 35 patients. The difference between the two methods was statistically significant (p < 0.005) (chi-squared). Comparing the two methods for each individual observer showed Cohen's kappa values ranging from 0.64 to 0.79, translating into a strong agreement between the two methods. PERCIST 1.0 provides a higher overall agreement between observers than the qualitative approach in categorizing early treatment response in NSCLC patients. The inter-observer agreement is in fact strong when using PERCIST 1.0, even when the level of instruction is purposely kept to a minimum in order to mimic the everyday situation. The variability is largely owing to the subjective elements of the method.
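For reference, the sketch below computes Fleiss' kappa from a patients-by-categories count table of the kind produced when several observers each assign one of the four response categories (CMR, PMR, SMD, PMD). The counts are invented and the implementation is generic, not the study's analysis code.

```python
# Fleiss' kappa for multiple raters from a subjects-by-categories count table (invented data).
import numpy as np

counts = np.array([           # rows = patients, columns = CMR, PMR, SMD, PMD;
    [0, 6, 2, 0],             # entries = number of the 8 observers choosing that category
    [0, 1, 7, 0],
    [0, 0, 2, 6],
    [1, 7, 0, 0],
    [0, 3, 5, 0],
])
n_raters = counts.sum(axis=1)[0]                    # assumes the same number of raters per subject
p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
p_bar = p_i.mean()                                  # observed agreement
p_j = counts.sum(axis=0) / counts.sum()             # overall category proportions
p_e = np.sum(p_j ** 2)                              # chance agreement
kappa = (p_bar - p_e) / (1 - p_e)
print("Fleiss' kappa:", round(kappa, 3))
```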
Diego-Mas, Jose-Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana-Carolina
2015-01-01
Most observational methods for musculoskeletal disorder risk assessment have been developed by researchers to be applied in specific situations, and practitioners could find difficulties in their use in real-work conditions. The main objective of this study was to identify the factors which have an influence on how useful the observational techniques are perceived to be by practitioners and to what extent these factors influence their perception. A survey was conducted on practitioners regarding the problems normally encountered when implementing these methods, as well as the perceived overall utility of these techniques. The results show that practitioners place particular importance on the support the methods provide in making decisions regarding changes in work systems and how applicable they are to different types of jobs. The results of this study can serve as guide to researchers for the development of new assessment techniques that are more useful and applicable in real-work situations.
Bird radar validation in the field by time-referencing line-transect surveys.
Dokter, Adriaan M; Baptist, Martin J; Ens, Bruno J; Krijgsveld, Karen L; van Loon, E Emiel
2013-01-01
Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar's detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual radar tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer's accuracy in determining a bird's transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as with species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ~1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50 ± 0.06, with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms.
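A minimal sketch of the matching step described above: each visually observed transect crossing is linked to at most one radar track whose crossing time falls within a tolerance window, and the detection probability is the matched fraction. The crossing times and the tolerance are invented for illustration.

```python
# Link visual transect crossings to radar track crossings by time and compute detection probability.
import numpy as np

visual_times = np.array([12.0, 37.5, 58.2, 90.1, 121.4, 150.0])   # minutes since survey start
radar_times = np.array([11.7, 36.8, 89.5, 120.9, 200.3])          # radar transect crossings
tolerance = 1.0                                                    # allowed mismatch, minutes

matched = 0
available = radar_times.tolist()
for t in visual_times:
    if available:
        j = int(np.argmin([abs(t - r) for r in available]))
        if abs(t - available[j]) <= tolerance:
            matched += 1
            available.pop(j)          # each radar track can be matched only once
print("detection probability:", round(matched / len(visual_times), 2))
```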
ERIC Educational Resources Information Center
Anliak, Sakire; Sahin, Derya
2010-01-01
The present observational study was designed to evaluate the effectiveness of the I Can Problem Solve (ICPS) programme on behavioural change from aggression to pro-social behaviours by using the DECB rating scale. Non-participant observation method was used to collect data in pretest-training-posttest design. It was hypothesised that the ICPS…
Medication safety research by observational study design.
Lao, Kim S J; Chui, Celine S L; Man, Kenneth K C; Lau, Wallis C Y; Chan, Esther W; Wong, Ian C K
2016-06-01
Observational studies have been recognised to be essential for investigating the safety profile of medications. Numerous observational studies have been conducted on the platform of large population databases, which provide adequate sample size and follow-up length to detect infrequent and/or delayed clinical outcomes. Cohort and case-control are well-accepted traditional methodologies for hypothesis testing, while within-individual study designs are developing and evolving, addressing previous known methodological limitations to reduce confounding and bias. Respective examples of observational studies of different study designs using medical databases are shown. Methodology characteristics, study assumptions, strengths and weaknesses of each method are discussed in this review.
COR V2: teaching observational research with multimedia courseware.
Blasko, Dawn G; Kazmerski, Victoria A; Torgerson, Carla N
2004-05-01
Courseware for Observational Research (COR Version 2) is an interactive multimedia program designed to teach the foundation of the scientific method: systematic observation. COR uses digital video with interactive coding to teach basic concepts, such as creating precise operational definitions; using frequency, interval, and duration coding; developing sampling strategies; and analyzing and interpreting data. Through lessons, a case study, and laboratory exercises, it gradually scaffolds students from teacher-directed learning into self-directed learning. The newest addition to COR is a case study in which students work collaboratively, using their own observations to make recommendations about a child's disruptive behavior in an after-school program. Evaluations of the lessons showed that classes using COR received better grades on their field observations than did those using methods that are more traditional. Students' confidence and knowledge increased as they moved through each section of the program.
The value of the NDT-Bobath method in post-stroke gait training.
Mikołajewska, Emilia
2013-01-01
Stroke is perceived as a major cause of disability, including gait disorders. Looking for more effective methods of gait reeducation in post-stroke survivors is one of the most important issues in contemporary neurorehabilitation. Following a stroke, patients suffer from gait disorders. The aim of this paper is to present the outcomes of a study of post-stroke gait reeducation using the NeuroDevelopmental Treatment-Bobath (NDT-Bobath) method. The research was conducted among 60 adult patients who had undergone ischemic stroke and were treated using the NDT-Bobath method. Gait reeducation was assessed using spatio-temporal gait parameters (gait velocity, cadence and stride length). Measurements of these parameters were conducted by the same therapist twice: on admission, and after the tenth session of gait reeducation. Among the 60 patients involved in the study, the results were as follows: in terms of gait velocity, recovery was observed in 39 cases (65%); in terms of cadence, recovery was observed in 39 cases (65%); in terms of stride length, recovery was observed in 50 cases (83.33%). Benefits were observed after short-term therapy, reflected by measurable, statistically significant changes in the patients' gait parameters.
The Influence of Observation Length on the Dependability of Data
ERIC Educational Resources Information Center
David Ferguson, Tyler; Briesch, Amy M.; Volpe, Robert J.; Daniels, Brian
2012-01-01
Although direct observation is one of the most frequently used assessment methods by school psychologists, studies have shown that the number of observations needed to obtain a dependable estimate of student behavior may be impractical. Because direct observation may be used to inform important decisions about students, it is crucial that data be…
ERIC Educational Resources Information Center
Brock, Richard; Taber, Keith S.
2017-01-01
This paper examines the role of the microgenetic method in science education. The microgenetic method is a technique for exploring the progression of learning in detail through repeated, high-frequency observations of a learner's "performance" in some activity. Existing microgenetic studies in science education are analysed. This leads…
ERIC Educational Resources Information Center
von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen
2006-01-01
This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…
Continuous monitoring of seasonal phenological development by BBCH code
NASA Astrophysics Data System (ADS)
Cornelius, Christine; Estrella, Nicole; Menzel, Annette
2010-05-01
Phenology, the science of recurrent seasonal natural events, is a proxy for changes in ecosystems due to recent global climate change. Phenological studies mostly deal with data on the beginning of different development stages, e.g. budburst or the beginning of flowering. Only a few studies focus on the end of phenological stages, such as the end of flowering or seed dispersal. Information about the entire development cycle of plants, including data on the end of stages, is obtained by observing plants according to the extended BBCH scale (MEIER 1997). The scale is a standardized growth-stage key which allows a less labor-intensive, weekly observation rhythm. Every week, the frequencies of all occurring phenological stages are noted. These frequencies then constitute the basis for interpolating the development of each phenological stage, even if it was not seen during field work. Due to the lack of studies using this kind of key for observations over the entire development cycle, there is no common methodology to analyze the data. Our objective was therefore to find a method of analysis with which onset dates as well as endpoints of each development stage could be defined. Three different methods of analysis were compared. Results show that there is no significant difference in onset dates of phenological stages between all methods tested. However, the method of pooled pre/post stage development seems to be most suitable for climate change studies, followed by the method of cumulative stage development and the method of weighted plant development.
Zhang, Zhiyong; Yuan, Ke-Hai
2015-01-01
Cronbach’s coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald’s omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega. PMID:29795870
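For context, the sketch below (in Python rather than the R package mentioned above) computes the conventional, non-robust Cronbach's alpha from an invented person-by-item score matrix. Omega would additionally require fitting a one-factor model, and the robust, missing-data-aware estimators proposed in the study are not reproduced here.

```python
# Conventional Cronbach's alpha from a person-by-item score matrix (invented data).
import numpy as np

scores = np.array([      # rows = respondents, columns = items
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
])
k = scores.shape[1]
item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)        # variance of the total score
alpha = k / (k - 1) * (1 - item_var_sum / total_var)
print("Cronbach's alpha:", round(alpha, 3))
```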
Song, Ling; Zhang, Yi; Jiang, Ji; Ren, Shuang; Chen, Li; Liu, Dongyang; Chen, Xijing; Hu, Pei
2018-04-06
The objective of this study was to develop a physiologically based pharmacokinetic (PBPK) model for sinogliatin (HMS-5552, dorzagliatin) by integrating allometric scaling (AS), in vitro to in vivo extrapolation (IVIVE), and steady-state concentration-mean residence time (Css-MRT) methods, and to provide mechanistic insight into its pharmacokinetic properties in humans. The major human pharmacokinetic parameters were analyzed using the AS, IVIVE, and Css-MRT methods with available preclinical in vitro and in vivo data to understand sinogliatin drug metabolism and pharmacokinetic (DMPK) characteristics and underlying mechanisms. On this basis, an initial mechanistic PBPK model of sinogliatin was developed. The initial PBPK model was verified using observed data from a single ascending dose (SAD) study and further optimized with various strategies. The final model was validated by simulating sinogliatin pharmacokinetics under a fed condition. The validated model was applied to support a clinical drug-drug interaction (DDI) study design and to evaluate the effects of intrinsic (hepatic cirrhosis, genetic) factors on drug exposure. The two-species scaling method using rat and dog data (TS-rat,dog) was the best AS method for predicting human systemic clearance in the central compartment (CL). The IVIVE method confirmed that sinogliatin is predominantly metabolized by cytochrome P450 (CYP) 3A4. The Css-MRT method suggested that dog pharmacokinetic profiles were more similar to human pharmacokinetic profiles. The estimated CL using the AS and IVIVE approaches was within 1.5-fold of that observed. The Css-MRT method in dogs also provided an acceptable prediction of human pharmacokinetic characteristics. For the PBPK approach, the 90% confidence intervals (CIs) of the simulated maximum concentration (Cmax), CL, and area under the plasma concentration-time curve (AUC) of sinogliatin were within those observed, and the 90% CI of the simulated time to Cmax (tmax) was close to that observed for a dose range of 5-50 mg in the SAD study. The final PBPK model was validated by simulating sinogliatin pharmacokinetics with food. The 90% CIs of the simulated Cmax, CL, and AUC values for sinogliatin were within those observed, and the 90% CI of the simulated tmax was partially within that observed for the dose range of 25-200 mg in the multiple ascending dose (MAD) study. This PBPK model selected a final clinical DDI study design with itraconazole from four potential designs and also evaluated the effects of intrinsic (hepatic cirrhosis, genetic) factors on drug exposure. Sinogliatin pharmacokinetic properties were mechanistically understood by integrating all four methods, and a mechanistic PBPK model was successfully developed and validated using clinical data. This PBPK model was applied to support the development of sinogliatin.
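As a simplified illustration of the allometric-scaling step, the sketch below fits log CL = log a + b·log W to two species (rat, dog) and extrapolates to a 70-kg human. The clearance values are invented and are not the sinogliatin data.

```python
# Two-species allometric scaling of clearance (illustrative values only).
import numpy as np

weights = np.array([0.25, 10.0])        # body weight, kg (rat, dog)
cl = np.array([0.018, 0.35])            # observed clearance, L/h (invented)

b, log_a = np.polyfit(np.log(weights), np.log(cl), 1)   # slope = allometric exponent
cl_human = np.exp(log_a) * 70.0 ** b                    # extrapolate to a 70-kg human
print(f"exponent b = {b:.2f}, predicted human CL = {cl_human:.2f} L/h")
```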
Guidelines for reporting evaluations based on observational methodology.
Portell, Mariona; Anguera, M Teresa; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2015-01-01
Observational methodology is one of the most suitable research designs for evaluating fidelity of implementation, especially in complex interventions. However, the conduct and reporting of observational studies is hampered by the absence of specific guidelines, such as those that exist for other evaluation designs. This lack of specific guidance poses a threat to the quality and transparency of these studies and also constitutes a considerable publication hurdle. The aim of this study thus was to draw up a set of proposed guidelines for reporting evaluations based on observational methodology. The guidelines were developed by triangulating three sources of information: observational studies performed in different fields by experts in observational methodology, reporting guidelines for general studies and studies with similar designs to observational studies, and proposals from experts in observational methodology at scientific meetings. We produced a list of guidelines grouped into three domains: intervention and expected outcomes, methods, and results. The result is a useful, carefully crafted set of simple guidelines for conducting and reporting observational studies in the field of program evaluation.
1988-02-01
feasible. However, only the winter 80-110 km region could be studied by this approach, due to the limitation in the observing method, i.e. nightglow... epoch methods have been employed. The diurnal tide component at the mesopause, that according to the latest tidal models should dominate in polar region... temperature was calculated by Kvifte's method. ...between temperature and intensity was found by Shaqae [1974] using the intensity ratio of the P1(2) and P1(3
Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús
2014-01-01
This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102
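The regression-kriging idea can be sketched as follows: regress station GHI on the satellite-derived GHI and interpolate the regression residuals by simple kriging. In this toy version the exponential covariance is fixed rather than fitted from a variogram, and all coordinates, values, and parameters are invented.

```python
# Regression kriging of GHI using a satellite estimate as the regression covariate (toy data).
import numpy as np

xy = np.array([[0, 0], [50, 10], [20, 60], [80, 70], [60, 30]], float)   # station coords, km
ghi_sat = np.array([520.0, 480.0, 600.0, 560.0, 500.0])                  # satellite estimate, W/m2
ghi_obs = np.array([540.0, 470.0, 630.0, 585.0, 505.0])                  # station observation, W/m2

# 1) regression on the satellite covariate
beta = np.polyfit(ghi_sat, ghi_obs, 1)
resid = ghi_obs - np.polyval(beta, ghi_sat)

# 2) simple kriging of the residuals with an assumed exponential covariance model
sill, range_km = 200.0, 40.0
def cov(d):
    return sill * np.exp(-d / range_km)

D = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
C = cov(D) + 1e-6 * np.eye(len(xy))                 # small nugget for numerical stability

target = np.array([40.0, 40.0])                      # prediction location, km
c0 = cov(np.linalg.norm(xy - target, axis=1))
w = np.linalg.solve(C, c0)                           # kriging weights
sat_at_target = 550.0                                # assumed satellite GHI at the target pixel
pred = np.polyval(beta, sat_at_target) + w @ resid   # regression trend + kriged residual
print("regression-kriging GHI estimate (W/m2):", round(float(pred), 1))
```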
Application of the Newtonian nudging data assimilation method for the Biebrza River flow model
NASA Astrophysics Data System (ADS)
Miroslaw-Swiatek, Dorota
2010-05-01
Data assimilation provides a tool for integrating observations of spatially distributed environmental variables with model predictions. In this paper a simple data assimilation technique, Newtonian nudging to individual observations, has been implemented in the 1D St. Venant equations. The method involves adding a term to the prognostic equation that is proportional to the difference between the value calculated by the model at a given point in time and space and the value obtained from observations. The model is improved with the available measurement observations through suitable weighting functions, which can incorporate prior knowledge about the spatial and temporal variability of the state variables being assimilated. The article contains a description of the numerical model, which employs the finite element method (FEM) to solve the 1D St. Venant equations modified by the nudging method. The developed model was applied to the Biebrza River, situated in the north-eastern part of Poland, flowing through the last extensive, fairly undisturbed river-marginal peatland in Europe. A 41 km long reach of the Lower Biebrza River described by 68 cross-sections was selected for the study. The water stage observed by automatic sensors was the subject of the data assimilation in the Newtonian nudging to individual observations method. Water level observations were collected at four points along the river at 6-hour intervals for one year. The obtained results show the prediction without nudging and the influence of the nudging term on the water stage forecast. The developed model enables integrating water stage observations with an unsteady river flow model for improved water level prediction.
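The nudging principle itself can be illustrated independently of the St. Venant/FEM model: the prognostic equation gains a relaxation term G·w(t)·(obs − model) that pulls the state toward an observation whenever its temporal weight w(t) is non-zero. The toy dynamics, observation times, weighting function, and nudging coefficient below are assumptions for illustration.

```python
# Newtonian nudging of a toy water-stage variable toward sparse observations.
dt, nsteps = 0.1, 300
G = 0.5                                          # nudging strength (1/time)
obs_times = {50: 1.8, 150: 1.2, 250: 2.0}        # step index -> observed stage (m)
obs_window = 20                                  # steps over which an observation is felt

h = 1.0                                          # initial water stage (m)
for k in range(nsteps):
    dhdt = -0.2 * (h - 1.5)                      # stand-in "model physics": relax toward 1.5 m
    for k_obs, h_obs in obs_times.items():
        w = max(0.0, 1.0 - abs(k - k_obs) / obs_window)   # triangular temporal weighting function
        dhdt += G * w * (h_obs - h)                        # nudging term toward the observation
    h += dt * dhdt                               # explicit time step
print("final stage (m):", round(h, 3))
```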
USDA-ARS?s Scientific Manuscript database
A rapid method for extracting eriophyoid mites was adapted from previous studies to provide growers and IPM consultants with a practical, efficient, and reliable tool to monitor for rust mites in vineyards. The rinse in bag (RIB) method allows quick extraction of mites from collected plant parts (sh...
A new universality class in corpus of texts; A statistical physics study
NASA Astrophysics Data System (ADS)
Najafi, Elham; Darooneh, Amir H.
2018-05-01
Text can be regarded as a complex system, and several methods from statistical physics can be used to study it. In this work, by means of statistical physics methods, we reveal new universal behaviors of texts associated with the fractality values of words in a text. The fractality measure indicates the importance of words in a text by considering the distribution pattern of words throughout the text. We observed a power-law relation between the fractality of a text and its vocabulary size for texts and corpora. We also observed this behavior when studying biological data.
Observations in public settings
Robert G. Lee
1977-01-01
Straightforward observation of children in their everyday environments is a more appropriate method of discovering the meaning of their relationships to nature than complex methodologies or reductionist commonsense thinking. Observational study requires an explicit conceptual framework and adherence to procedures that allow scientific inference. Error may come from...
Applications of Land Surface Temperature from Microwave Observations
USDA-ARS?s Scientific Manuscript database
Land surface temperature (LST) is a key input for physically-based retrieval algorithms of hydrological states and fluxes. Yet, it remains a poorly constrained parameter for global scale studies. The main two observational methods to remotely measure T are based on thermal infrared (TIR) observation...
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of the parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.
NASA Technical Reports Server (NTRS)
Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne
2012-01-01
Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-μm infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and 16% (81%) false-alarm ratio. Case study examples show that WV-IRW BTD > 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
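The two skill scores quoted above come from a simple 2×2 contingency of detections against the CloudSat-based truth; the sketch below shows the calculation with invented counts chosen to roughly reproduce the IRW-texture scores quoted above.

```python
# Probability of detection (POD) and false-alarm ratio (FAR) from invented contingency counts.
hits, misses, false_alarms = 84, 27, 16

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false-alarm ratio
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```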
NASA Technical Reports Server (NTRS)
Fu, L.-L.; Chelton, D. B.
1985-01-01
A new method is developed for studying large-scale temporal variability of ocean currents from satellite altimetric sea level measurements at intersections (crossovers) of ascending and descending orbit ground tracks. Using this method, sea level time series can be constructed from crossover sea level differences in small sample areas where altimetric crossovers are clustered. The method is applied to Seasat altimeter data to study the temporal evolution of the Antarctic Circumpolar Current (ACC) over the 3-month Seasat mission (July-October 1978). The results reveal a generally eastward acceleration of the ACC around the Southern Ocean with meridional disturbances which appear to be associated with bottom topographic features. This is the first direct observational evidence for large-scale coherence in the temporal variability of the ACC. It demonstrates the great potential of satellite altimetry for synoptic observation of temporal variability of the world ocean circulation.
Qualitative methods in PhD theses from general practice in Scandinavia
Malterud, Kirsti; Hamberg, Katarina; Reventlow, Susanne
2017-01-01
Qualitative methodology is gaining increasing attention and esteem in medical research, with general practice research taking a lead. With these methods, human and social interaction and meaning can be explored and shared by systematic interpretation of text from talk, observation or video. Qualitative studies are often included in Ph.D. theses from general practice in Scandinavia. Still, the Ph.D. programs across nations and institutions offer only limited training in qualitative methods. In this opinion article, we draw upon our observations and experiences, unpacking and reflecting upon values and challenges at stake when qualitative studies are included in Ph.D. theses. Hypotheses to explain these observations are presented, followed by suggestions for standards of evaluation and improvement of Ph.D. programs. The authors conclude that multimethod Ph.D. theses should be encouraged in general practice research, in order to offer future researchers an appropriate toolbox. PMID:29094644
Best (but oft-forgotten) practices: propensity score methods in clinical nutrition research.
Ali, M Sanni; Groenwold, Rolf Hh; Klungel, Olaf H
2016-08-01
In observational studies, treatment assignment is a nonrandom process and treatment groups may not be comparable in their baseline characteristics, a phenomenon known as confounding. Propensity score (PS) methods can be used to achieve comparability of treated and nontreated groups in terms of their observed covariates and, as such, control for confounding in estimating treatment effects. In this article, we provide step-by-step guidance on how to use PS methods. For illustrative purposes, we used simulated data based on an observational study of the relation between oral nutritional supplementation and hospital length of stay. We focused on the key aspects of PS analysis, including covariate selection, PS estimation, covariate balance assessment, treatment effect estimation, and reporting. PS matching, stratification, covariate adjustment, and weighting are discussed. R codes and example data are provided to show the different steps in a PS analysis. © 2016 American Society for Nutrition.
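The authors supply R code with their article; purely as an illustration (not their code), here is a minimal Python sketch of PS estimation followed by greedy 1:1 nearest-neighbour matching on simulated data. All variable names and coefficients are assumed, and the balance-assessment step is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data loosely mimicking the supplementation / length-of-stay example:
# two confounders, a binary treatment, and a continuous outcome.
rng = np.random.default_rng(2)
n = 500
age = rng.normal(65, 10, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(0.03 * (age - 65) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)
los = 7 + 0.05 * (age - 65) + 1.5 * severity - 1.0 * treated + rng.normal(0, 1, n)

# Step 1: estimate the propensity score from the observed covariates.
X = np.column_stack([age, severity])
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the PS, without replacement.
treated_idx = np.where(treated == 1)[0]
available = set(np.where(treated == 0)[0])
pairs = []
for i in treated_idx:
    if not available:
        break
    j = min(available, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    available.remove(j)

# Step 3: treatment effect as the mean matched-pair difference in outcome.
effect = np.mean([los[i] - los[j] for i, j in pairs])
print(f"matched estimate of the treatment effect on length of stay: {effect:.2f}")
```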
Medium-scale traveling ionospheric disturbances by three-dimensional ionospheric GPS tomography
NASA Astrophysics Data System (ADS)
Chen, C. H.; Saito, A.; Lin, C. H.; Yamamoto, M.; Suzuki, S.; Seemala, G. K.
2016-02-01
In this study, we develop a three-dimensional ionospheric tomography with ground-based Global Positioning System (GPS) total electron content observations. Because of the geometric limitations of the GPS observation paths, it is difficult to solve the ill-posed inverse problem for the ionospheric electron density. In contrast to methods used in previous studies, we consider an algorithm combining the least-squares method with a constraint condition, in which the gradient of electron density tends to be smooth in the horizontal direction and steep in the vicinity of the ionospheric F2 peak. This algorithm is designed to be independent of any ionospheric or plasmaspheric electron density models as the initial condition. An observation system simulation experiment method is applied to evaluate the performance of the GPS ionospheric tomography in detecting ionospheric electron density perturbations at scale sizes of around 200 km in wavelength, such as the medium-scale traveling ionospheric disturbances.
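A toy sketch of a constrained (Tikhonov-style) least-squares inversion in one dimension: the geometry matrix, noise level, regularisation weight, and the simple first-difference smoothness operator are all assumptions, and the paper's actual constraint (horizontally smooth, steep near the F2 peak) is not reproduced.

```python
import numpy as np

# Recover a density profile x from line-integral-like observations b = A @ x_true,
# with a smoothness constraint encoded by a finite-difference operator L.
rng = np.random.default_rng(3)
nx = 30                                               # unknown density cells (1-D toy profile)
x_true = np.exp(-((np.arange(nx) - 18) / 5.0) ** 2)   # F2-peak-like profile

A = rng.random((20, nx))                              # hypothetical geometry matrix (20 slant paths)
b = A @ x_true + rng.normal(0, 0.05, 20)

# First-difference operator: penalises large gradients (smoothness constraint).
L = np.eye(nx, k=1)[:-1] - np.eye(nx)[:-1]
lam = 1.0                                             # assumed regularisation weight

# Solve (A^T A + lam L^T L) x = A^T b, the regularised normal equations.
x_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```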
Short-arc measurement and fitting based on the bidirectional prediction of observed data
NASA Astrophysics Data System (ADS)
Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos
2016-02-01
To measure a short arc is a notoriously difficult problem. In this study, a bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN) is applied to observed data distributed along a short arc in order to increase the effective arc length and thus improve the fitting accuracy. Firstly, the rationality of regarding observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where an interpolation method is used to enlarge the training set in order to improve the learning accuracy of the RBFNN's parameters. Finally, in the numerical simulation section, we focus on how the size of the training sample and the noise level influence the learning error and prediction error of the built RBFNN. Typically, observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method, known as the 'unbiased circle-fitting method', with different noise levels before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to those before prediction.
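As an illustration of the circle-fitting step, here is a simple algebraic (Kasa) least-squares circle fit applied to noisy points on a short arc; it is a simpler stand-in for the Hyper fit named in the abstract, and the arc length, noise level, and sample size are assumed. Short arcs make any circle fit ill-conditioned, which is exactly the difficulty the paper addresses by extending the arc through prediction.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic least-squares circle fit: solve 2*a*x + 2*b*y + c = x^2 + y^2,
    then recover the centre (a, b) and radius r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Noisy points sampled on a ~5 degree arc of a unit circle (assumed noise level).
rng = np.random.default_rng(4)
theta = np.deg2rad(np.linspace(0, 5, 50))
x = np.cos(theta) + rng.normal(0, 1e-3, theta.size)
y = np.sin(theta) + rng.normal(0, 1e-3, theta.size)

print("fitted centre and radius:", kasa_circle_fit(x, y))
```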
Waller, Rebecca; Gardner, Frances; Dishion, Thomas; Sitnick, Stephanie L.; Shaw, Daniel S.; Winter, Charlotte E.; Wilson, Melvin
2016-01-01
A large literature provides strong empirical support for the influence of parenting on child outcomes. The current study addresses enduring research questions testing the importance of early parenting behavior to children’s adjustment. Specifically, we developed and tested a novel multi-method observational measure of parental positive behavior support at age 2. Next, we tested whether early parental positive behavior support was related to child adjustment at school age, within a multi-agent and multi-method measurement approach and design. Observational and parent-reported data from mother–child dyads (N = 731; 49 percent female) were collected from a high-risk sample at age 2. Follow-up data were collected via teacher report and child assessment at age 7.5. The results supported combining three different observational methods to assess positive behavior support at age 2 within a latent factor. Further, parents’ observed positive behavior support at age 2 predicted multiple types of teacher-reported and child-assessed problem behavior and competencies at 7.5 years old. Results supported the validity and predictive capability of a multi-method observational measure of parenting and the importance of a continued focus on the early years within preventive interventions. PMID:26997757
Evaluation of methods to derive green-up dates based on daily NDVI satellite observations
NASA Astrophysics Data System (ADS)
Doktor, Daniel
2010-05-01
Bridging the gap between satellite derived green-up dates and in situ phenological observations has been the purpose of many studies over the last decades. Despite substantial advancements in satellite technology and data quality checks, there is as yet no universally accepted method for extracting phenological metrics based on satellite derived vegetation indices. Depending on the method used, derived green-up dates can vary by up to several weeks for identical data sets. Consequently, it is difficult to compare various studies and to accurately determine an increased vegetation period due to changing temperature patterns as observed by ground phenological networks. Here, I compared how the characteristic NDVI increase over temperate deciduous forests in Germany in spring relates to respective budburst events observed on the ground. MODIS Terra daily surface reflectances with a 250 m resolution (2000-2008) were gathered to compute daily NDVI values. As ground truth, observations of the extensive phenological network of the German Weather Service were used. About 1500 observations per year and species (beech, oak and birch) were available, evenly distributed over Germany. Two filtering methods were tested to reduce the noisy raw data. The first method keeps only NDVI values classified as 'ideal global quality' and applies a temporal moving window in which values differing by more than 20% from the mean are removed. The second method uses an adaptation of the BISE (Best Index Slope Extraction) algorithm. Subsequently, three functions were fitted to the selected observations: a simple linear interpolation, a sigmoidal function and a double logistic sigmoidal function allowing two temporally separated green-up signals to be approximated. The green-up date was then determined at halfway between minimum and maximum (linear interpolation) or at the inflexion point of the sigmoidal curve. A number of global threshold values (NDVI 0.4, 0.5, 0.6) and varying definitions of the NDVI baseline during dormancy were also tested. In contrast to most past studies, I did not attempt to identify matched pairs of geographically coincident ground and satellite observations. Rather than comparing on an individual grid-cell basis, I analysed and compared the statistical properties of distributions generated from ground and satellite observations. It has been noticed that remote sensing provides a statistical distribution of a random variable, not an exact representation of the state of the land surface or atmosphere at a particular pixel. The same holds true for ground observations as they sample from biological variability and landscapes with heterogeneous microclimates. First results reveal substantial differences between the applied methods. Based on the assumption that the satellite captures predominantly the greening-up of the canopy - which occurs about 2 weeks later than observed budburst dates - the double sigmoidal function combined with the BISE filtering procedure performed best.
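A minimal sketch of the curve-fitting step only: a single logistic function is fitted to synthetic daily NDVI, and green-up is read off at the inflection point and at the halfway point between minimum and maximum. The NDVI values, dates, and starting parameters are assumed; the BISE filtering and the double-logistic variant are not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(doy, base, amp, t0, k):
    """Single logistic NDVI model; the inflection point t0 marks green-up."""
    return base + amp / (1 + np.exp(-k * (doy - t0)))

# Hypothetical daily NDVI for one deciduous-forest pixel, day of year 60-180.
rng = np.random.default_rng(5)
doy = np.arange(60, 181)
ndvi = sigmoid(doy, 0.35, 0.45, 125, 0.15) + rng.normal(0, 0.03, doy.size)

popt, _ = curve_fit(sigmoid, doy, ndvi, p0=[0.3, 0.4, 120, 0.1])
base, amp, t0, k = popt
print(f"green-up at the inflection point: day {t0:.1f}")
print(f"green-up NDVI halfway between minimum and maximum: {base + amp / 2:.2f}")
```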
Large-scale dark diversity estimates: new perspectives with combined methods.
Ronk, Argo; de Bello, Francesco; Fibich, Pavel; Pärtel, Meelis
2016-09-01
Large-scale biodiversity studies can be more informative if observed diversity in a study site is accompanied by dark diversity, the set of absent although ecologically suitable species. Dark diversity methodology is still being developed and a comparison of different approaches is needed. We used plant data at two different scales (European and seven large regions) and compared dark diversity estimates from two mathematical methods: species co-occurrence (SCO) and species distribution modeling (SDM). We used plant distribution data from the Atlas Florae Europaeae (50 × 50 km grid cells) and seven different European regions (10 × 10 km grid cells). Dark diversity was estimated by SCO and SDM for both datasets. We examined the relationship between the dark diversity sizes (type II regression) and the overlap in species composition (overlap coefficient). We tested the overlap probability according to the hypergeometric distribution. We combined the estimates of the two methods to determine consensus dark diversity and composite dark diversity. We tested whether dark diversity and completeness of site diversity (log ratio of observed and dark diversity) are related to various natural and anthropogenic factors differently from simple observed diversity. Both methods provided similar dark diversity sizes and distribution patterns; dark diversity is greater in southern Europe. The regression line, however, deviated from a 1:1 relationship. The species composition overlap between the two methods was about 75%, which is much greater than expected by chance. Both consensus and composite dark diversity estimates showed similar distribution patterns. Both dark diversity and completeness measures exhibit relationships to natural and anthropogenic factors different from those exhibited by observed richness. In summary, dark diversity revealed new biodiversity patterns which were not evident when only observed diversity was examined. A new perspective in dark diversity studies can incorporate a combination of methods.
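A small sketch of how an overlap coefficient and a hypergeometric null test of the overlap between two dark-diversity estimates could be computed for one grid cell; the pool size, set sizes, and species labels are invented for illustration.

```python
import numpy as np
from scipy.stats import hypergeom

# Hypothetical dark-diversity species sets for one grid cell from the two methods,
# drawn from a regional species pool of 200 species.
pool = [f"sp{i}" for i in range(200)]
sco = set(np.random.default_rng(6).choice(pool, 60, replace=False))
sdm = set(np.random.default_rng(7).choice(pool, 70, replace=False))

k = len(sco & sdm)                              # observed overlap
overlap_coeff = k / min(len(sco), len(sdm))     # overlap relative to the smaller set

# P(overlap >= k) if the two sets were independent random draws from the pool
# (hypergeometric null, as used to test whether the overlap exceeds chance).
p_value = hypergeom.sf(k - 1, len(pool), len(sco), len(sdm))
print(f"overlap coefficient = {overlap_coeff:.2f}, P(overlap >= {k} by chance) = {p_value:.3g}")
```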
Evaluation of observational research reports published in Turkish nursing journals.
Karaçam, Z; Şen, E; Yildirim, B
2015-09-01
The aim of this literature-based descriptive study was to examine the reporting of observational research studies published in peer-reviewed nursing journals in Turkey. Eleven peer-reviewed nursing journals printed on a regular basis in Turkey between 2007 and 2012 were selected. These journals were searched for observational research studies, and 502 studies were selected and examined using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement. Of the 502 studies, 495 were cross-sectional, 3 were cohort, and 4 were case-control. The summary, introduction and aim sections were sufficient in most of the studies. Key items in the methods sections were frequently not reported: 64.3% of the reports did not indicate eligibility/inclusion criteria, 67.0% did not report the sampling method, 99.2% possible sources of bias, and 92.6% how the sample size was reached. In the results section, the number of individuals participating in each stage of the studies (44.0%) and other analyses made (39.2%) were not reported. In the discussion section, a main comment about the research findings was only partly made in 97.4% of the studies, and limitations of the studies and possible sources of bias were not described in 99.0% of the studies. This study clearly revealed that the observational research studies published in nursing journals in Turkey did not fulfil important reporting criteria and needed to be improved. Information obtained from this study can contribute to improving the quality of reporting of observational studies in nursing and thus to using the obtained findings in practice. © 2015 International Council of Nurses.
The value of including observational studies in systematic reviews was unclear: a descriptive study.
Seida, Jennifer; Dryden, Donna M; Hartling, Lisa
2014-12-01
To evaluate (1) how often observational studies are included in comparative effectiveness reviews (CERs); (2) the rationale for including observational studies; (3) how data from observational studies are appraised, analyzed, and graded; and (4) the impact of observational studies on strength of evidence (SOE) and conclusions. Descriptive study of 23 CERs published through the Effective Health Care Program of the U.S. Agency for Healthcare Research and Quality. Authors searched for observational studies in 20 CERs, of which 18 included a median of 11 (interquartile range, 2-31) studies. Sixteen CERs incorporated the observational studies in their SOE assessments. Seventy-eight comparisons from 12 CERs included evidence from both trials and observational studies; observational studies had an impact on SOE and conclusions for 19 (24%) comparisons. There was diversity across the CERs regarding decisions to include observational studies; study designs considered; and approaches used to appraise, synthesize, and grade SOE. Reporting and methods guidance are needed to ensure clarity and consistency in how observational studies are incorporated in CERs. It was not always clear that observational studies added value in light of the additional resources needed to search for, select, appraise, and analyze such studies. Copyright © 2014 Elsevier Inc. All rights reserved.
An observational assessment method for aging laboratory rats
The growth of the aging population highlights the need for laboratory animal models to study the basic biological processes of aging and susceptibility to toxic chemicals and disease. Methods to evaluate health of aging animals over time are needed, especially efficient methods for...
Recurrence of attic cholesteatoma: different methods of estimating recurrence rates.
Stangerup, S E; Drozdziewicz, D; Tos, M; Hougaard-Jensen, A
2000-09-01
One problem in cholesteatoma surgery is recurrence of cholesteatoma, which is reported to vary from 5% to 71%. This great variability can be explained by issues such as the type of cholesteatoma, surgical technique, follow-up rate, length of the postoperative observation period, and statistical method applied. The aim of this study was to illustrate the impact of applying different statistical methods to the same material. Thirty-three children underwent single-stage surgery for attic cholesteatoma during a 15-year period. Thirty patients (94%) attended a re-evaluation. During the observation period of 15 years, recurrence of cholesteatoma occurred in 10 ears. The cumulative total recurrence rate varied from 30% to 67%, depending on the statistical method applied. In conclusion, the choice of statistical method should depend on the number of patients, follow-up rates, length of the postoperative observation period and presence of censored data.
Near-Sun and 1 AU magnetic field of coronal mass ejections: a parametric study
NASA Astrophysics Data System (ADS)
Patsourakos, S.; Georgoulis, M. K.
2016-11-01
Aims: The magnetic field of coronal mass ejections (CMEs) determines their structure, evolution, and energetics, as well as their geoeffectiveness. However, we currently lack routine diagnostics of the near-Sun CME magnetic field, which is crucial for determining the subsequent evolution of CMEs. Methods: We recently presented a method to infer the near-Sun magnetic field magnitude of CMEs and then extrapolate it to 1 AU. This method uses relatively easy to deduce observational estimates of the magnetic helicity in CME-source regions along with geometrical CME fits enabled by coronagraph observations. We hereby perform a parametric study of this method aiming to assess its robustness. We use statistics of active region (AR) helicities and CME geometrical parameters to determine a matrix of plausible near-Sun CME magnetic field magnitudes. In addition, we extrapolate this matrix to 1 AU and determine the anticipated range of CME magnetic fields at 1 AU representing the radial falloff of the magnetic field in the CME out to interplanetary (IP) space by a power law with index αB. Results: The resulting distribution of the near-Sun (at 10 R⊙) CME magnetic fields varies in the range [0.004, 0.02] G, comparable to, or higher than, a few existing observational inferences of the magnetic field in the quiescent corona at the same distance. We also find that a theoretically and observationally motivated range exists around αB = -1.6 ± 0.2, thereby leading to a ballpark agreement between our estimates and observationally inferred field magnitudes of magnetic clouds (MCs) at L1. Conclusions: In a statistical sense, our method provides results that are consistent with observations.
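A minimal calculation under the stated assumptions: a power-law radial falloff B(r) = B0 (r/r0)^αB with αB = -1.6, starting from an assumed near-Sun field of 0.01 G at 10 solar radii (1 AU is roughly 215 solar radii). The resulting few-nT value at 1 AU is the kind of ballpark comparison with magnetic-cloud observations the abstract describes.

```python
# Power-law extrapolation of the CME magnetic field from 10 solar radii to 1 AU.
def extrapolate_b(b0_gauss, r0_rsun, r_rsun, alpha_b=-1.6):
    """B(r) = B0 * (r / r0) ** alpha_b, with radii in solar radii and B in gauss."""
    return b0_gauss * (r_rsun / r0_rsun) ** alpha_b

b_1au_gauss = extrapolate_b(0.01, 10.0, 215.0)   # assumed B0 = 0.01 G at 10 R_sun
print(f"B at 1 AU ~ {b_1au_gauss * 1e5:.1f} nT")  # 1 G = 1e5 nT
```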
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest or HOI. For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI were defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics were used: (i) area under the receiver operating characteristic (ROC) curve (AUC), as a measure of a method's ability to distinguish between positive and negative test cases; (ii) bias, measured from the distribution of observed effect estimates for the negative test pairs, for which the true relative risk can be assumed to be one (no effect); and (iii) Minimal Detectable Relative Risk (MDRR), as a measure of whether there is sufficient power to generate effect estimates. In the three outcomes studied, different definitions of outcomes showed comparable ability to differentiate true from false control cases (AUC) and similar estimated bias. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are preferred since they allow studying drugs with lower prevalence than do the more precise or narrow definitions, while showing comparable performance characteristics in differentiating signal vs. no signal as well as in effect size estimation.
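A small sketch of two of the performance metrics named above, computed from hypothetical effect estimates for positive and negative control pairs: AUC via the rank (Mann-Whitney) formulation, and bias as the mean log relative risk among negative controls. All numbers are invented.

```python
import numpy as np

# Hypothetical relative-risk estimates produced by one analytical method.
positives = np.array([1.8, 2.4, 1.5, 3.1, 1.9])        # drug-HOI pairs with expected effects
negatives = np.array([0.9, 1.1, 1.3, 0.8, 1.0, 1.2])   # pairs with no expected effect

def auc(pos, neg):
    """Probability that a random positive control receives a larger estimate
    than a random negative control (ties count one half)."""
    diffs = pos[:, None] - neg[None, :]
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / (pos.size * neg.size)

# Bias: how far the negative-control estimates sit from the null (true RR = 1) on the log scale.
bias = np.mean(np.log(negatives))

print(f"AUC = {auc(positives, negatives):.2f}, mean log-RR bias on negatives = {bias:.3f}")
```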
A Case Study on Teaching of Energy as a Subject for 9th Graders
ERIC Educational Resources Information Center
Bezen, Sevim; Bayrak, Celal; Aykutlu, Isil
2017-01-01
This study aims to describe how energy subject is taught in 9th grades. The study is designed as a descriptive case study with the participation of 3 physics teachers and 85 students. Data were obtained through observation, interviews, and documents, and they were analyzed through descriptive analysis method. In the observations made at the…
MacKenzie, Todd A.; Tosteson, Tor D.; Morden, Nancy E.; Stukel, Therese A.; O'Malley, A. James
2014-01-01
The estimation of treatment effects is one of the primary goals of statistics in medicine. Estimation based on observational studies is subject to confounding. Statistical methods for controlling bias due to confounding include regression adjustment, propensity scores and inverse probability weighted estimators. These methods require that all confounders are recorded in the data. The method of instrumental variables (IVs) can eliminate bias in observational studies even in the absence of information on confounders. We propose a method for integrating IVs within the framework of Cox's proportional hazards model and demonstrate the conditions under which it recovers the causal effect of treatment. The methodology is based on the approximate orthogonality of an instrument with unobserved confounders among those at risk. We derive an estimator as the solution to an estimating equation that resembles the score equation of the partial likelihood in much the same way as the traditional IV estimator resembles the normal equations. To justify this IV estimator for a Cox model we perform simulations to evaluate its operating characteristics. Finally, we apply the estimator to an observational study of the effect of coronary catheterization on survival. PMID:25506259
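A heavily hedged toy sketch of an estimating equation of the general form the abstract describes (at each event, the instrument minus its risk-set-weighted average under the working Cox model), solved by a simple grid search on simulated, uncensored data. It is not the authors' estimator; the data-generating values, sample size, and grid are assumptions.

```python
import numpy as np

# Simulated data: X is a confounded treatment, Z an instrument, U an unobserved
# confounder, T uncensored event times (true log hazard ratio for X is 0.5).
rng = np.random.default_rng(8)
n = 300
Z = rng.binomial(1, 0.5, n)
U = rng.normal(0, 1, n)
X = (0.8 * Z + U + rng.normal(0, 1, n) > 0).astype(float)
T = rng.exponential(1 / np.exp(0.5 * X + 0.5 * U))
order = np.argsort(T)
X, Z, T = X[order], Z[order], T[order]      # after sorting, the risk set at event i is i..n-1

def estimating_function(beta):
    """Sum over events of the instrument value minus its exp(beta * X)-weighted
    average over the subjects still at risk."""
    total = 0.0
    for i in range(n):
        w = np.exp(beta * X[i:])
        total += Z[i] - np.sum(w * Z[i:]) / np.sum(w)
    return total

# Simple grid search for the root of the estimating function.
betas = np.linspace(-2.0, 2.0, 401)
beta_hat = betas[int(np.argmin([abs(estimating_function(b)) for b in betas]))]
print(f"IV-type estimate of the log hazard ratio: {beta_hat:.2f}")
```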
Lund, Travis J.; Pilarz, Matthew; Velasco, Jonathan B.; Chakraverty, Devasmita; Rosploch, Kaitlyn; Undersander, Molly; Stains, Marilyne
2015-01-01
Researchers, university administrators, and faculty members are increasingly interested in measuring and describing instructional practices provided in science, technology, engineering, and mathematics (STEM) courses at the college level. Specifically, there is keen interest in comparing instructional practices between courses, monitoring changes over time, and mapping observed practices to research-based teaching. While increasingly common observation protocols (Reformed Teaching Observation Protocol [RTOP] and Classroom Observation Protocol in Undergraduate STEM [COPUS]) at the postsecondary level help achieve some of these goals, they also suffer from weaknesses that limit their applicability. In this study, we leverage the strengths of these protocols to provide an easy method that enables the reliable and valid characterization of instructional practices. This method was developed empirically via a cluster analysis using observations of 269 individual class periods, corresponding to 73 different faculty members, 28 different research-intensive institutions, and various STEM disciplines. Ten clusters, called COPUS profiles, emerged from this analysis; they represent the most common types of instructional practices enacted in the classrooms observed for this study. RTOP scores were used to validate the alignment of the 10 COPUS profiles with reformed teaching. Herein, we present a detailed description of the cluster analysis method, the COPUS profiles, and the distribution of the COPUS profiles across various STEM courses at research-intensive universities. PMID:25976654
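An illustrative sketch of the clustering step using k-means on made-up fractions of class time per collapsed COPUS-style code; the number of clusters, the three codes, and the data are assumptions (the study reports ten profiles derived from 269 observed class periods).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-class-period fractions of 2-minute intervals in which a few
# collapsed instructional codes were observed (codes can co-occur, so rows need not sum to 1).
rng = np.random.default_rng(9)
lecturing = rng.beta(5, 2, 269)
group_work = rng.beta(2, 5, 269)
clicker_questions = rng.beta(1, 6, 269)
profiles = np.column_stack([lecturing, group_work, clicker_questions])

# Cluster class periods into instructional profiles (3 clusters here purely for illustration).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
print("class periods per profile:", np.bincount(km.labels_))
print("profile centroids (lecture, group work, clicker):", km.cluster_centers_.round(2))
```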
Using microwave observations to estimate land surface temperature during cloudy conditions
USDA-ARS?s Scientific Manuscript database
Land surface temperature (LST), a key ingredient for physically-based retrieval algorithms of hydrological states and fluxes, remains a poorly constrained parameter for global scale studies. The main two observational methods to remotely measure T are based on thermal infrared (TIR) observations and...
ERIC Educational Resources Information Center
Curby, Timothy W.; Johnson, Price; Mashburn, Andrew J.; Carlis, Lydia
2016-01-01
When conducting classroom observations, researchers are often confronted with the decision of whether to conduct observations live or by using pre-recorded video. The present study focuses on comparing and contrasting observations of live and video administrations of the Classroom Assessment Scoring System-PreK (CLASS-PreK). Associations between…
Mutaf Yildiz, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert
2018-06-01
Home numeracy has been defined as the parent-child interactions that include experiences with numerical content in daily-life settings. Previous studies have commonly operationalized home numeracy either via questionnaires or via observational methods. These studies have shown that both types of measures are positively related to variability in children's mathematical skills. This study investigated whether these distinctive data collection methods index the same aspect of home numeracy. The frequencies of home numeracy activities and parents' opinions about their children's mathematics education were assessed via a questionnaire. The amount of home numeracy talk was observed via two semi-structured videotaped parent-child activity sessions (Lego building and book reading). Children's mathematical skills were examined with two calculation subtests. We observed that parents' reports and number of observed numeracy interactions were not related to each other. Interestingly, parents' reports of numeracy activities were positively related to children's calculation abilities, whereas the observed home numeracy talk was negatively related to children's calculation abilities. These results indicate that these two methods tap on different aspects of home numeracy. Statement of contribution What is already known on this subject? Home numeracy, that is, parent-child interactions that include experiences with numerical content, is supposed to have a positive impact on calculation or mathematical ability in general. Despite many positive results, some studies have failed to find such an association. Home numeracy has been assessed with questionnaires on the frequency of numerical experiences and observations of parent-child interactions; however, those two measures of home numeracy have never been compared directly. What does this study add? This study assessed home numeracy through questionnaires and observations in the 44 parent-child dyads and showed that home numeracy measures derived from questionnaires and observations are not related. Moreover, the relation between the reported frequency of home numeracy activities and calculation on the one hand, and parent-child number talk (derived from observations) and calculation on the other hand is in opposite directions; the frequency of activities is positively related to calculation performance; and the amount of number talk is negatively related to calculation. This study shows that both measures tap into different aspects of home numeracy and can be an important factor explaining inconsistencies in literature. © 2018 The British Psychological Society.
Pourmokhtarian, Afshin; Driscoll, Charles T; Campbell, John L; Hayhoe, Katharine; Stoner, Anne M K
2016-07-01
Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training observations used at the montane landscape of the Hubbard Brook Experimental Forest, New Hampshire, USA. We evaluated three downscaling methods: the delta method (or the change factor method), monthly quantile mapping (Bias Correction-Spatial Disaggregation, or BCSD), and daily quantile regression (Asynchronous Regional Regression Model, or ARRM). Additionally, we trained outputs from four atmosphere-ocean general circulation models (AOGCMs) (CCSM3, HadCM3, PCM, and GFDL-CM2.1) driven by higher (A1fi) and lower (B1) future emissions scenarios on two sets of observations (1/8° resolution grid vs. individual weather station) to generate the high-resolution climate input for the forest biogeochemical model PnET-BGC (eight ensembles of six runs). The choice of downscaling approach and spatial resolution of the observations used to train the downscaling model impacted modeled soil moisture and streamflow, which in turn affected forest growth, net N mineralization, net soil nitrification, and stream chemistry. All three downscaling methods were highly sensitive to the observations used, resulting in projections that were significantly different between station-based and grid-based observations. The choice of downscaling method also slightly affected the results, however not as much as the choice of observations. Using spatially smoothed gridded observations and/or methods that do not resolve sub-monthly shifts in the distribution of temperature and/or precipitation can produce biased results in model applications run at greater temporal and/or spatial resolutions. These results underscore the importance of carefully considering field observations used for training, as well as the downscaling method used to generate climate change projections, for smaller-scale modeling studies. Different sources of variability, including selection of AOGCM, emissions scenario, downscaling technique, and data used for training downscaling models, result in a wide range of projected forest ecosystem responses to future climate change. © 2016 by the Ecological Society of America.
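A simplified sketch of empirical quantile mapping, the family of bias-correction techniques that BCSD and ARRM build on: future model values are passed through the model-period ECDF and then through the inverse observed ECDF. The gamma-distributed training data and clipping bounds are assumptions, and neither spatial disaggregation nor the regression step of ARRM is shown.

```python
import numpy as np

rng = np.random.default_rng(10)
obs = rng.gamma(2.0, 2.0, 3000)          # training observations (e.g. station precipitation)
model_hist = rng.gamma(2.0, 2.5, 3000)   # model output over the same training period
model_fut = rng.gamma(2.2, 2.5, 1000)    # future model output to be bias-corrected

def quantile_map(values, model_ref, obs_ref):
    # ECDF of the model reference period evaluated at each value ...
    probs = np.searchsorted(np.sort(model_ref), values) / model_ref.size
    probs = np.clip(probs, 0.001, 0.999)
    # ... then inverted through the observed quantiles.
    return np.quantile(obs_ref, probs)

corrected = quantile_map(model_fut, model_hist, obs)
print("raw future mean:", round(model_fut.mean(), 2),
      "| bias-corrected mean:", round(corrected.mean(), 2))
```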
STRengthening Analytical Thinking for Observational Studies: the STRATOS initiative
Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James
2014-01-01
The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even ‘standard’ analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. PMID:25074480
ERIC Educational Resources Information Center
Simon, Brian
1980-01-01
Presents some of the findings of the ORACLE research program (Observational Research and Classroom Learning Evaluation), a detailed observational study of teacher-student interaction, teaching styles, and management methods within a sample of primary classrooms. (Editor/SJL)
Ostrov, Jamie M; Godleski, Stephanie A
2009-08-01
This short-term longitudinal study (N = 112) was conducted to explore the concurrent and prospective associations between teacher-reported impulsive-hyperactive behavior and observed relational and physical aggression during early childhood (M = 45.54 months old, SD = 9.07). Multiple informants and methods including observational methods (i.e., 160 min per child) were used to assess aggression and impulsivity-hyperactivity. All measures were found to be valid and reliable. Prospective hierarchical regression analyses revealed that impulsivity-hyperactivity was associated with increases in observed physical aggression across time, controlling for initial relational aggression and gender. These findings add to the growing developmental psychopathology literature that suggests that distinguishing between subtypes of aggression during early childhood may be important for understanding the course of impulsivity-hyperactivity in young children. Implications for practice are discussed.
Assessing digital literacy in web-based physical activity surveillance: the WIN study.
Mathew, Merly; Morrow, James R; Frierson, Georita M; Bain, Tyson M
2011-01-01
PURPOSE. Investigate relations between demographic characteristics and submission method, Internet or paper, when physical activity behaviors are reported. DESIGN. Observational. SETTING. Metropolitan. SUBJECTS. Adult women (N = 918) observed weekly for 2 years (total number of weekly reports, 44,963). MEASURES. Independent variables included age, race, education, income, employment status, and Internet skills. Dependent variables were method of submission (Internet or paper) and adherence. ANALYSIS. Logistic regression to analyze weekly odds of submitting data online and meeting study adherence criteria. Model 1 investigated method of submission, model 2 analyzed meeting the study's Internet adherence criterion, and model 3 analyzed meeting total adherence regardless of submission method. RESULTS. Whites, those with good Internet skills, and those reporting higher incomes were more likely to log online. Those who were white, older, and reported good Internet skills were more likely to be at least 75% adherent online. Older women were more likely to be adherent regardless of method. Employed women were less likely to log online or be adherent. CONCLUSION. Providing participants with multiple submission methods may reduce potential bias and provide more generalizable results relevant for future Internet-based research.
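A hedged sketch of the kind of logistic-regression model described (odds of online submission as a function of demographics and Internet skills), fitted to simulated data. The covariates, coefficients, and sample are invented, and the sketch ignores the repeated weekly measurements per participant that the actual analysis would need to account for.

```python
import numpy as np
import statsmodels.api as sm

# Simulated week-level records: did the participant submit online (1) or on paper (0)?
rng = np.random.default_rng(11)
n = 2000
age = rng.normal(45, 10, n)
good_internet = rng.binomial(1, 0.6, n)
income_high = rng.binomial(1, 0.4, n)
logit = -0.5 + 0.01 * (age - 45) + 0.9 * good_internet + 0.6 * income_high
online = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, good_internet, income_high]))
fit = sm.Logit(online, X).fit(disp=0)
print("odds ratios (age, Internet skills, higher income):", np.exp(fit.params[1:]).round(2))
```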
Previous studies have made the following observations: newly emerging global patterns of disease have been observed, and environmental exposures have been implicated. Ecologic studies are fundamental for the identification of public health problems. Some level of exposure in a...
Transformative Learning through Education Abroad: A Case Study of a Community College Program
ERIC Educational Resources Information Center
Brenner, Ashley A.
2014-01-01
This case study examined how participating in a short-term education abroad program fostered transformative learning for a small group of community college students. As a participant-observer, I utilized ethnographic methods, including interviews, observations, and document analysis, to understand students' perceptions of their experiences…
VISUALLY OBSERVED MOLD AND MOLDY ODOR VERSUS QUANTITATIVELY MEASURED MICROBIAL EXPOSURE IN HOMES
The main study objective was to compare different methods for assessing mold exposure in conjunction with an epidemiologic study on the development of children's asthma. Homes of 184 children were assessed for mold by visual observations and dust sampling at child's age 1 (Year ...
Female Leadership at High-Poverty, High-Performing Schools: Four Case Studies
ERIC Educational Resources Information Center
Reynolds, Shirley Ann
2009-01-01
This mixed methods study examined the leadership abilities of four African American female principals in an urban setting. The purpose of the mixed methods study was to observe, describe and analyze how the principals have been effective leaders in their respective high-poverty, high-performing elementary schools (K-5). The qualitative methodology…
Blencowe, Natalie S; Blazeby, Jane M; Donovan, Jenny L; Mills, Nicola
2015-12-28
Multi-centre randomised controlled trials (RCTs) in surgery are challenging. It is particularly difficult to establish standards of surgery and ensure that interventions are delivered as intended. This study developed and tested methods for identifying the key components of surgical interventions and standardising interventions within RCTs. Qualitative case studies of surgical interventions were undertaken within the internal pilot phase of a surgical RCT for obesity (the By-Band study). Each case study involved video data capture and non-participant observation of gastric bypass surgery in the operating theatre and interviews with surgeons. Methods were developed to transcribe and synchronise data from video recordings with observational data to identify key intervention components, which were then explored in the interviews with surgeons. Eight qualitative case studies were undertaken. A novel combination of video data capture, observation and interview data identified variations in intervention delivery between surgeons and centres. Although surgeons agreed that the most critical intervention component was the size and shape of the gastric pouch, there was no consensus regarding other aspects of the procedure. They conceded that evidence about the 'best way' to perform bypass was lacking and, combined with the pragmatic nature of the By-Band study, agreed that strict standardisation of bypass might not be required. This study has developed and tested methods for understanding how surgical interventions are designed and delivered in RCTs. Applying these methods more widely may help identify key components of interventions to be delivered by surgeons in trials, enabling monitoring of key components and adherence to the protocol. These methods are now being tested in the context of other surgical RCTs. Current Controlled Trials ISRCTN00786323, 05/09/2011.
Observing Children's Stress Behaviors in a Kindergarten Classroom
ERIC Educational Resources Information Center
Jackson, Lori A.
2009-01-01
This study used qualitative methods to determine whether kindergarten children exhibited stress behaviors during the academic work period of the day. Sixteen children (8 male, 8 female) ages 5-6 years were observed. The data consisted of classroom observations by the researcher, open-ended interviews with teachers, artifacts collected from the…
Weight-Based Victimization toward Overweight Adolescents: Observations and Reactions of Peers
ERIC Educational Resources Information Center
Puhl, Rebecca M.; Luedicke, Joerg; Heuer, Cheslea
2011-01-01
Background: Weight-based victimization has become increasingly reported among overweight youth, but little is known about adolescents' perceptions and observations of weight-based teasing and bullying. This study examined adolescents' observations of and reactions to weight-based victimization toward overweight students at school. Methods:…
All Things Being Equal: Observing Australian Individual Academic Workloads
ERIC Educational Resources Information Center
Dobele, Angela; Rundle-Thiele, Sharyn; Kopanidis, Foula; Steel, Marion
2010-01-01
The achievement of greater gender equity within Australian universities is a significant issue for both the quality and the strength of Australian higher education. This paper contributes to our knowledge of academic workloads, observing individual workloads in business faculties. A multiple case study method was employed to observe individual…
Pre-Service Teachers Observations of Experienced Teachers
ERIC Educational Resources Information Center
Jenkins, Jayne M.
2014-01-01
Assigning pre-service teachers to observe experienced teachers is a common practice in teacher preparation programs. The purpose of this study was to identify what physical education pre-service teachers observe when watching an experienced teacher. While enrolled in a methods of teaching physical education course and engaged in their second…
Learning about Teachers' Literacy Instruction from Classroom Observations
ERIC Educational Resources Information Center
Kelcey, Ben; Carlisle, Joanne F.
2013-01-01
The purpose of this study is to contribute to efforts to improve methods for gathering and analyzing data from classroom observations in early literacy. The methodological approach addresses current problems of reliability and validity of classroom observations by taking into account differences in teachers' uses of instructional actions (e.g.,…
Core-shifts and proper-motion constraints in the S5 polar cap sample at the 15 and 43 GHz bands
NASA Astrophysics Data System (ADS)
Abellán, F. J.; Martí-Vidal, I.; Marcaide, J. M.; Guirado, J. C.
2018-06-01
We have studied a complete radio sample of active galactic nuclei with the very-long-baseline-interferometry (VLBI) technique and for the first time successfully obtained high-precision phase-delay astrometry at Q band (43 GHz) from observations acquired in 2010. We have compared our astrometric results with those obtained with the same technique at U band (15 GHz) from data collected in 2000. The differences in source separations among all the source pairs observed in common at the two epochs are compatible at the 1σ level between U and Q bands. With the benefit of quasi-simultaneous U and Q band observations in 2010, we have studied chromatic effects (core-shift) at the radio source cores with three different methods. The magnitudes of the core-shifts are of the same order (about 0.1 mas) for all methods. However, some discrepancies arise in the orientation of the core-shifts determined through the different methods. In some cases these discrepancies are due to insufficient signal for the method used. In others, the discrepancies reflect assumptions of the methods and could be explained by curvatures in the jets and departures from conical jets.
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies about fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published articles. Other sources were checked to identify non-published literature. STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND DIAGNOSTIC METHODS: The eligibility criteria were studies that: (1) have assessed the accuracy of fluorescence-based methods of detecting caries lesions on occlusal, approximal or smooth surfaces, in both primary or permanent human teeth, in the laboratory or clinical setting; (2) have used a reference standard; and (3) have reported sufficient data relating to the sample size and the accuracy of methods. A diagnostic 2×2 table was extracted from included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (diagnostic odds ratio and summary receiver operating characteristic curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five studies met the inclusion criteria from the 434 articles initially identified. The search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces or settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias. Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
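For illustration, a minimal sketch of the per-study accuracy measures that feed such a meta-analysis, computed from one hypothetical 2×2 table (counts assumed); the pooling and summary ROC modelling steps are not shown.

```python
import numpy as np

# Hypothetical 2x2 table for one study: fluorescence method vs. the reference standard.
tp, fp, fn, tn = 45, 10, 15, 80

sens = tp / (tp + fn)
spec = tn / (tn + fp)
dor = (tp * tn) / (fp * fn)              # diagnostic odds ratio

# Approximate 95% CI for the DOR on the log scale.
se_log_dor = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
ci = np.exp(np.log(dor) + np.array([-1.96, 1.96]) * se_log_dor)

print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, "
      f"DOR = {dor:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```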
A review of the use of a systematic observation method in coaching research between 1997 and 2016.
Cope, Ed; Partington, Mark; Harvey, Stephen
2017-10-01
A systematic observation method has been one of the most popularly employed methods in coaching research. Kahan's review of this method conducted between 1975 and 1997 highlighted the key trends in this research, and offered methodological guidance for researchers wishing to use this method in their research. The purpose of this review was to provide an update of the use of a systematic observation method in coaching research and assess the extent to which the calls made by Kahan have been addressed. While in some respect this field of study has progressed (i.e., the introduction of qualitative methods), researchers adopting this method have failed to attend to many of the issues Kahan raised. For this method to continue to make a positive contribution towards the coaching research literature, researchers need to more critically reflect on how and why they are employing this method. At present, some of the decisions made by researchers who have conducted work in this area are not justified with a rationale. It is our intention that this review will serve as guidance for researchers and practitioners, and editors and reviewers of journals when attempting to assess the quality of this type of work.
2014-01-01
Background In order to characterize the intracranial pressure-volume reserve capacity, the correlation coefficient (R) between the ICP wave amplitude (A) and the mean ICP level (P), the RAP index, has been used to improve the diagnostic value of ICP monitoring. Baseline pressure errors (BPEs), caused by spontaneous shifts or drifts in baseline pressure, cause erroneous readings of mean ICP. Consequently, BPEs could also affect ICP indices such as the RAP, wherein the mean ICP is incorporated. Methods A prospective, observational study was carried out on patients with aneurysmal subarachnoid hemorrhage (aSAH) undergoing ICP monitoring as part of their surveillance. Via the same burr hole in the skull, two separate ICP sensors were placed close to each other. For each consecutive 6-sec time window, the dynamic mean ICP wave amplitude (MWA; a measure of the amplitude of the single pressure waves) and the static mean ICP were computed. The RAP index was computed as the Pearson correlation coefficient between the MWA and the mean ICP for 40 6-sec time windows, i.e. every subsequent 4-min period (method 1). We compared this approach with a method of calculating RAP using a 4-min moving window updated every 6 seconds (method 2). Results The study included 16 aSAH patients. We compared 43,653 4-min RAP observations of signals 1 and 2 (method 1), and 1,727,000 6-sec RAP observations (method 2). The two methods of calculating RAP produced similar results. Differences in RAP ≥0.4 in at least 7% of observations were seen in 5/16 (31%) patients. Moreover, the combination of a RAP of ≥0.6 in one signal and <0.6 in the other was seen in ≥13% of RAP observations in 4/16 (25%) patients, and in ≥8% in another 4/16 (25%) patients. The frequency of differences in RAP >0.2 was significantly associated with the frequency of BPEs (5 mmHg ≤ BPE <10 mmHg). Conclusions Simultaneous monitoring from two separate, close-by ICP sensors reveals significant differences in RAP that correspond to the occurrence of BPEs. As differences in RAP are of magnitudes that may alter patient management, we do not advocate the use of RAP in the management of neurosurgical patients. PMID:25052470
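A small sketch of the RAP computation as described (Pearson correlation of MWA and mean ICP over a 4-minute window of 40 six-second segments, updated every segment, i.e. method 2); the simulated ICP and MWA series are invented.

```python
import numpy as np

# Hypothetical per-6-second summaries from one ICP sensor: mean ICP (mmHg) and
# mean wave amplitude (MWA, mmHg).
rng = np.random.default_rng(12)
n_seg = 600                                          # one hour of 6-second segments
mean_icp = 12 + np.cumsum(rng.normal(0, 0.1, n_seg))
mwa = 3 + 0.2 * (mean_icp - 12) + rng.normal(0, 0.3, n_seg)

def rap(mean_icp, mwa, window=40):
    """RAP = Pearson correlation of MWA and mean ICP over a moving window of
    40 six-second segments (4 minutes), updated every segment."""
    out = np.full(mean_icp.size, np.nan)
    for i in range(window - 1, mean_icp.size):
        p = mean_icp[i - window + 1:i + 1]
        a = mwa[i - window + 1:i + 1]
        out[i] = np.corrcoef(p, a)[0, 1]
    return out

rap_series = rap(mean_icp, mwa)
print("median RAP over the hour:", round(float(np.nanmedian(rap_series)), 2))
```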
NASA Astrophysics Data System (ADS)
Hamid, Nor Zila Abd; Adenan, Nur Hamiza; Noorani, Mohd Salmi Md
2017-08-01
Forecasting and analyzing the ozone (O3) concentration time series is important because the pollutant is harmful to health. This study is a pilot study for forecasting and analyzing the O3 time series in a Malaysian educational area, namely Shah Alam, using a chaotic approach. Through this approach, the observed hourly scalar time series is reconstructed into a multi-dimensional phase space, which is then used to forecast the future time series through the local linear approximation method. The main purpose is to forecast high O3 concentrations. The original method performed poorly, but the improved method addressed this weakness, thereby enabling the high concentrations to be successfully forecast. The correlation coefficient between the observed and forecasted time series through the improved method is 0.9159, and both the mean absolute error and root mean squared error are low. Thus, the improved method is advantageous. The time series analysis by means of the phase space plot and the Cao method identified the presence of low-dimensional chaotic dynamics in the observed O3 time series. Results showed that at least seven factors affect the studied O3 time series, which is consistent with the factors identified from the diurnal variation investigation and the sensitivity analyses of past studies. In conclusion, the chaotic approach successfully forecasts and analyzes the O3 time series in the educational area of Shah Alam. These findings are expected to help stakeholders such as the Ministry of Education and the Department of Environment to achieve better air pollution management.
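A hedged sketch of the general technique: time-delay embedding of a scalar series followed by a one-step local linear forecast from nearest neighbours in the reconstructed phase space. The synthetic series, embedding dimension, delay, and neighbour count are assumptions, not the paper's settings.

```python
import numpy as np

# Phase-space reconstruction via time-delay embedding, then a one-step forecast from a
# local linear map fitted to the nearest neighbours of the current state.
rng = np.random.default_rng(13)
t = np.arange(2000)
x = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t) + rng.normal(0, 0.05, t.size)  # stand-in for hourly O3

m, tau, k = 7, 1, 20          # embedding dimension, delay, neighbour count (all assumed)
N = x.size - (m - 1) * tau - 1
states = np.column_stack([x[i:i + N] for i in range(0, m * tau, tau)])
targets = x[(m - 1) * tau + 1:(m - 1) * tau + 1 + N]   # value one step ahead of each state

query = states[-1]                                     # most recent reconstructed state
dists = np.linalg.norm(states[:-1] - query, axis=1)    # exclude the query itself
nbrs = np.argsort(dists)[:k]

# Fit x_{t+1} ~ c0 + c . state on the neighbours, then apply the map to the query state.
A = np.column_stack([np.ones(k), states[nbrs]])
coef, *_ = np.linalg.lstsq(A, targets[nbrs], rcond=None)
forecast = coef[0] + query @ coef[1:]
print(f"one-step forecast: {forecast:.3f}  (observed: {targets[-1]:.3f})")
```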
Generalizing observational study results: applying propensity score methods to complex surveys.
Dugoff, Eva H; Schuler, Megan; Stuart, Elizabeth A
2014-02-01
To provide a tutorial for using propensity score methods with complex survey data. Simulated data and the 2008 Medical Expenditure Panel Survey. Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. © Health Research and Educational Trust.
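A minimal sketch of one of the combined approaches described, propensity score weighting multiplied by the survey design weight, on simulated data; the covariates, outcome model, and weight ranges are invented, and variance estimation for the complex design is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical complex-survey sample: each respondent carries a survey design weight,
# and we compare health care spending for a generalist vs. specialist usual source of care.
rng = np.random.default_rng(14)
n = 1500
age = rng.normal(50, 12, n)
poor_health = rng.binomial(1, 0.3, n)
survey_wt = rng.uniform(0.5, 3.0, n)
p_gen = 1 / (1 + np.exp(0.02 * (age - 50) + 0.5 * poor_health))
generalist = rng.binomial(1, p_gen)
spending = 3000 + 40 * (age - 50) + 2500 * poor_health - 300 * generalist + rng.normal(0, 500, n)

# Propensity score estimated from the observed covariates.
X = np.column_stack([age, poor_health])
ps = LogisticRegression(max_iter=1000).fit(X, generalist).predict_proba(X)[:, 1]

# IPTW (ATE) weights combined multiplicatively with the survey weights.
w = survey_wt * np.where(generalist == 1, 1 / ps, 1 / (1 - ps))

def wmean(y, weights):
    return np.sum(weights * y) / np.sum(weights)

effect = (wmean(spending[generalist == 1], w[generalist == 1])
          - wmean(spending[generalist == 0], w[generalist == 0]))
print(f"survey-weighted IPTW estimate of the spending difference: {effect:.0f}")
```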
Depth-Resolved Cathodoluminescence Study of Annealed Silicon Implanted Gallium Arsenide.
1982-12-01
Samples were Cr-doped semi-insulating GaAs crystals grown using the horizontal Bridgman method. Nine samples were prepared for this study, four were... function of depth. Cathodoluminescence was the excitation method... achieved by taking spectral data and successively chemically etching the surface of the crystal in 250 R steps. No new peaks were observed in the...
Doctor performance assessment in daily practise: does it help doctors or not? A systematic review.
Overeem, Karlijn; Faber, Marjan J; Arah, Onyebuchi A; Elwyn, Glyn; Lombarts, Kiki M J M H; Wollersheim, Hub C; Grol, Richard P T M
2007-11-01
Continuous assessment of individual performance of doctors is crucial for life-long learning and quality of care. Policy-makers and health educators should have good insights into the strengths and weaknesses of the methods available. The aim of this study was to systematically evaluate the feasibility of methods, the psychometric properties of instruments that are especially important for summative assessments, and the effectiveness of methods serving formative assessments used in routine practice to assess the performance of individual doctors. We searched the MEDLINE (1966-January 2006), PsychINFO (1972-January 2006), CINAHL (1982-January 2006), EMBASE (1980-January 2006) and Cochrane (1966-2006) databases for English language articles, and supplemented this with a hand-search of reference lists of relevant studies and bibliographies of review articles. Studies that aimed to assess the performance of individual doctors in routine practice were included. Two reviewers independently abstracted data regarding study design, setting and findings related to reliability, validity, feasibility and effectiveness using a standard data abstraction form. A total of 64 articles met our inclusion criteria. We observed 6 different methods of evaluating performance: simulated patients; video observation; direct observation; peer assessment; audit of medical records; and portfolio or appraisal. Peer assessment is the most feasible method in terms of costs and time. Little psychometric assessment of the instruments has been undertaken so far. Effectiveness of formative assessments is poorly studied. All systems but 2 rely on a single method to assess performance. There is substantial potential to assess performance of doctors in routine practice. The long-term impact and effectiveness of formative performance assessments on education and quality of care remain largely unknown. Future research designs need to pay special attention to unmasking effectiveness in terms of performance improvement.
Arabski, Michał; Wasik, Sławomir; Piskulak, Patrycja; Góźdź, Natalia; Slezak, Andrzej; Kaca, Wiesław
2011-01-01
The aim of this study was to analyse the release of antibiotics (ampicillin, streptomycin, ciprofloxacin or colistin) from agarose gel by spectrophotometric and laser interferometry methods. The interferometric system consisted of a Mach-Zehnder interferometer with a He-Ne laser, TV-CCD camera, computerised data acquisition system and a gel system. The gel system under study consists of two cuvettes. We filled the lower cuvette with an aqueous 1% agarose solution containing the antibiotics at initial concentrations in the range of 0.12-2 mg/ml for spectrophotometric analysis or 0.05-0.5 mg/ml for laser interferometry, while the upper cuvette contained pure water. The diffusion was analysed from 120 to 2400 s with a time interval of Δt = 120 s by both methods. We observed that 0.25-1 mg/ml and 0.05 mg/ml are the minimal initial concentrations detected by the spectrophotometric and laser interferometry methods, respectively. Additionally, we observed differences in the kinetics of antibiotic diffusion from the gel as measured by the two methods. In conclusion, the laser interferometric method is a useful tool for studies of antibiotic release from agarose gel, especially for substances that are not fully soluble in water, for example colistin.
Voskuijl, Wieger; Potani, Isabel; Bandsma, Robert; Baan, Anne; White, Sarah; Bourdon, Celine; Kerac, Marko
2017-06-07
Approximately 50% of the deaths of children under the age of 5 can be attributed to undernutrition, which also encompasses severe acute malnutrition (SAM). Diarrhoea is strongly associated with these deaths and is commonly diagnosed solely based on stool frequency and consistency obtained through maternal recall. This trial aims to determine whether this approach is equivalent to a 'directly observed method' in which a health care worker directly observed stool frequency using diapers in hospitalised children with complicated SAM. This study was conducted at 'Moyo' Nutritional Rehabilitation Unit, Queen Elizabeth Central Hospital, Malawi. Participants were children aged 5-59 months admitted with SAM. We compared 2 days of stool frequency data obtained with next-day maternal recall versus a 'gold standard' in which a health care worker observed stool frequency every 2 h using diapers. After study completion, guardians were asked their preferred method and their level of education. We found poor agreement between maternal recall and the 'gold standard' of directly observed diapers. The sensitivity to detect diarrhoea based on maternal recall was poor, with only 75 and 56% of diarrhoea cases identified on days 1 and 2, respectively. However, the specificity was higher, with more than 80% of children correctly classified as not having diarrhoea. On day 1, the mean stool frequency difference between the two methods was -0.17 (SD 1.68) with limits of agreement (of stool frequency) of -3.55 and 3.20 and, similarly, on day 2, the mean difference was -0.2 (SD 1.59) with limits of agreement of -3.38 and 2.98. These limits extend beyond the pre-specified 'acceptable' limits of agreement (±1.5 stools per day) and indicate that the 2 methods are non-equivalent. The higher the stool frequency, the more discrepant the two methods were. Most primary care givers strongly preferred using diapers. This study shows a lack of agreement between the assessment of stool frequency in SAM patients using maternal recall and direct observation of diapers. When designing studies, one should consider using diapers to determine diarrhoea incidence/prevalence in SAM patients, especially when accuracy is essential. ISRCTN11571116 (registered 29/11/2013).
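The limits of agreement quoted above follow the standard Bland-Altman calculation (mean difference ± 1.96 SD of the paired differences); a minimal sketch is given below with made-up stool counts, so the numbers will not reproduce the study's results.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman mean difference and 95% limits of agreement (mean ± 1.96 SD)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

# Illustrative stool counts per child: maternal recall vs. directly observed diapers.
recall   = np.array([3, 1, 0, 4, 2, 5, 1, 0, 2, 3])
observed = np.array([4, 1, 1, 6, 2, 7, 0, 1, 3, 3])
print(limits_of_agreement(recall, observed))
```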
Ecoinformatics (Big Data) for Agricultural Entomology: Pitfalls, Progress, and Promise.
Rosenheim, Jay A; Gratton, Claudio
2017-01-31
Ecoinformatics, as defined in this review, is the use of preexisting data sets to address questions in ecology. We provide the first review of ecoinformatics methods in agricultural entomology. Ecoinformatics methods have been used to address the full range of questions studied by agricultural entomologists, enabled by the special opportunities associated with data sets, nearly all of which have been observational, that are larger and more diverse and that embrace larger spatial and temporal scales than most experimental studies do. We argue that ecoinformatics research methods and traditional, experimental research methods have strengths and weaknesses that are largely complementary. We address the important interpretational challenges associated with observational data sets, highlight common pitfalls, and propose some best practices for researchers using these methods. Ecoinformatics methods hold great promise as a vehicle for capitalizing on the explosion of data emanating from farmers, researchers, and the public, as novel sampling and sensing techniques are developed and digital data sharing becomes more widespread.
NASA Astrophysics Data System (ADS)
Shefer, V. A.
2010-12-01
A new method is suggested for computing the initial orbit of a small celestial body from its three or more pairs of angular measurements at three times. The method is based on using the approach that we previously developed for constructing the intermediate orbit from a minimal number of observations. This intermediate orbit allows for most of the perturbations in the motion of the body under study. The method proposed uses Herget's algorithmic scheme, which makes it possible to involve additional observations as well. The methodical error of orbit computation by the proposed method is two orders of magnitude smaller than the corresponding error of Herget's approach based on the construction of the unperturbed Keplerian orbit. The new method is especially efficient if applied to high-accuracy observational data covering short orbital arcs.
NASA Astrophysics Data System (ADS)
Shefer, V. A.
2011-07-01
A new method is suggested for finding the preliminary orbit of a small celestial body from its three or more pairs of angular measurements at three times. The method is based on using the approach that we previously developed for constructing the intermediate orbit from a minimal number of observations. This intermediate orbit allows for most of the perturbations in the motion of the body under study. The method proposed uses Herget's algorithmic scheme, which makes it possible to involve additional observations as well. The methodical error of orbit computation by the proposed method is two orders of magnitude smaller than the corresponding error of the commonly used approach based on the construction of the unperturbed Keplerian orbit. The new method is especially efficient if applied to high-accuracy observational data covering short orbital arcs.
A novel method for correcting scanline-observational bias of discontinuity orientation
Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong
2016-01-01
Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
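The paper derives its own numerical solutions for the correction, which are not reproduced here. As a loose illustration of the general idea only (re-weighting scanline-sampled orientations and then testing a candidate distribution for the corrected sample), the sketch below uses the classical Terzaghi weighting, a different and simpler correction, together with a one-sample Kolmogorov-Smirnov check; the orientations, scanline trend and weight cap are all invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative scanline data: dip direction / dip angle [deg] of observed discontinuities.
dip_dir = rng.normal(120, 15, size=400) % 360
dip = np.clip(rng.normal(55, 10, size=400), 1, 89)
scanline_trend = 30.0                       # horizontal scanline, trend in degrees

# Pole of each plane (trend = dip_dir + 180, plunge = 90 - dip), the acute angle
# between pole and scanline, and the Terzaghi weight = 1 / cos(angle).
pole_trend, pole_plunge = np.radians((dip_dir + 180) % 360), np.radians(90 - dip)
cos_delta = np.abs(np.cos(pole_plunge) * np.cos(pole_trend - np.radians(scanline_trend)))
w = 1.0 / np.clip(cos_delta, 0.05, None)    # cap extreme weights near 90 degrees

# Approximate the corrected (inherent) sample by weighted resampling, then check a
# candidate distribution for the corrected dip angles with a one-sample K-S test.
corrected_dip = rng.choice(dip, size=2000, p=w / w.sum())
print(stats.kstest(corrected_dip, "norm",
                   args=(corrected_dip.mean(), corrected_dip.std(ddof=1))))
```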
Bae, Jong-Myon
2016-01-01
A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the information concerning the effect size (ES) of the highest intake category of a food item is collected, relative to its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. A QSR evaluating citrus fruit intake and the risk of pancreatic cancer, with the SES calculated using the HLM, was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. No significant differences were observed in the directionality of the SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
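Both approaches ultimately pool study-level effect sizes with a fixed-effect (inverse-variance) model; a minimal sketch of that pooling step is shown below with invented log relative risks and standard errors, not data from the review.

```python
import numpy as np

def fixed_effect_summary(log_rr, se):
    """Inverse-variance fixed-effect summary of log relative risks with a 95% CI."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
    return np.exp(pooled), tuple(ci)                 # summary RR and its CI

# Illustrative study-level effect sizes (log RR) and standard errors.
print(fixed_effect_summary([-0.22, -0.10, -0.35, 0.05], [0.12, 0.20, 0.15, 0.25]))
```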
Sustaining inquiry-based teaching methods in the middle school science classroom
NASA Astrophysics Data System (ADS)
Murphy, Amy Fowler
This dissertation used a combination of case study and phenomenological research methods to investigate how individual teachers of middle school science in the Alabama Math, Science, and Technology Initiative (AMSTI) program sustain their use of inquiry-based methods of teaching and learning. While the overall context for the cases was the AMSTI program, each of the four teacher participants in this study had a unique, individual context as well. The researcher collected data through a series of interviews, multiple-day observations, and curricular materials. The interview data were analyzed to develop a textural, structural, and composite description of the phenomenon. The Reformed Teaching Observation Protocol (RTOP) was used along with the Assessing Inquiry Potential (AIP) questionnaire to determine the level of inquiry-based instruction occurring in the participants' classrooms. Analysis of the RTOP data and AIP data indicated all of the participants utilized inquiry-based methods in their classrooms during their observed lessons. The AIP data also indicated the level of inquiry in the AMSTI curricular materials utilized by the participants during the observations was structured inquiry. The findings from the interview data suggested the ability of the participants to sustain their use of structured inquiry was influenced by their experiences with, beliefs about, and understandings of inquiry. This study contributed to the literature by supporting existing studies regarding the influence of teachers' experiences, beliefs, and understandings of inquiry on their classroom practices. The inquiry approach stressed in current reforms in science education targets content knowledge, skills, and processes needed in a future scientifically literate citizenry.
Knowles, Charles H; Whyte, Greg P
2007-01-01
Objective To evaluate the risk of chronic traumatic brain injury from amateur boxing. Setting Secondary research performed by a combination of sport physicians and clinical academics. Design, data sources, and methods Systematic review of observational studies in which chronic traumatic brain injury was defined as any abnormality on clinical neurological examination, psychometric testing, neuroimaging studies, and electroencephalography. Studies were identified through database (1950 to date) and bibliographic searches without language restrictions. Two reviewers extracted study characteristics, quality, and data, with adherence to a protocol developed from a widely recommended method for systematic review of observational studies (MOOSE). Results 36 papers had relevant extractable data (from a detailed evaluation of 93 studies of 943 identified from the initial search). Quality of evidence was generally poor. The best quality studies were those with a cohort design and those that used psychometric tests. These yielded the most negative results: only four of 17 (24%) better quality studies found any indication of chronic traumatic brain injury in a minority of boxers studied. Conclusion There is no strong evidence to associate chronic traumatic brain injury with amateur boxing. PMID:17916811
Loosemore, Mike; Knowles, Charles H; Whyte, Greg P
2007-10-20
To evaluate the risk of chronic traumatic brain injury from amateur boxing. Secondary research performed by a combination of sport physicians and clinical academics. DESIGN, DATA SOURCES, AND METHODS: Systematic review of observational studies in which chronic traumatic brain injury was defined as any abnormality on clinical neurological examination, psychometric testing, neuroimaging studies, and electroencephalography. Studies were identified through database (1950 to date) and bibliographic searches without language restrictions. Two reviewers extracted study characteristics, quality, and data, with adherence to a protocol developed from a widely recommended method for systematic review of observational studies (MOOSE). 36 papers had relevant extractable data (from a detailed evaluation of 93 studies of 943 identified from the initial search). Quality of evidence was generally poor. The best quality studies were those with a cohort design and those that used psychometric tests. These yielded the most negative results: only four of 17 (24%) better quality studies found any indication of chronic traumatic brain injury in a minority of boxers studied. There is no strong evidence to associate chronic traumatic brain injury with amateur boxing.
Bird Radar Validation in the Field by Time-Referencing Line-Transect Surveys
Dokter, Adriaan M.; Baptist, Martin J.; Ens, Bruno J.; Krijgsveld, Karen L.; van Loon, E. Emiel
2013-01-01
Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar’s detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer’s accuracy in determining a bird’s transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ∼1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50±0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms. PMID:24066103
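A minimal sketch of the time-referencing idea is given below: each visually recorded transect crossing is linked to the nearest radar track crossing time within a tolerance, and the detection probability is the matched fraction. The tolerance, the toy crossing times and the many-to-one matching are simplifying assumptions; the study additionally examined observer timing accuracy via time lags.

```python
import numpy as np

def detection_probability(visual_times, radar_times, tol_s=10.0):
    """Fraction of visually observed transect crossings that can be linked to a
    radar track whose crossing time lies within +/- tol_s seconds."""
    visual_times = np.asarray(visual_times, float)
    radar_times = np.sort(np.asarray(radar_times, float))
    idx = np.clip(np.searchsorted(radar_times, visual_times), 1, len(radar_times) - 1)
    nearest = np.minimum(np.abs(radar_times[idx] - visual_times),
                         np.abs(radar_times[idx - 1] - visual_times))
    return np.mean(nearest <= tol_s)

# Illustrative crossing times in seconds since the start of a survey.
visual = [12, 95, 180, 240, 305, 410]
radar  = [10, 97, 250, 300, 412, 520]
print(detection_probability(visual, radar, tol_s=10))
```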
Uncertain Classification of Variable Stars: Handling Observational GAPS and Noise
NASA Astrophysics Data System (ADS)
Castro, Nicolás; Protopapas, Pavlos; Pichara, Karim
2018-01-01
Automatic classification methods applied to sky surveys have revolutionized the astronomical target selection process. Most surveys generate a vast amount of time series, or “lightcurves,” that represent the brightness variability of stellar objects in time. Unfortunately, lightcurves’ observations take several years to be completed, producing truncated time series that generally remain without the application of automatic classifiers until they are finished. This happens because state-of-the-art methods rely on a variety of statistical descriptors or features that present an increasing degree of dispersion when the number of observations decreases, which reduces their precision. In this paper, we propose a novel method that increases the performance of automatic classifiers of variable stars by incorporating the deviations that scarcity of observations produces. Our method uses Gaussian process regression to form a probabilistic model of each lightcurve’s observations. Then, based on this model, bootstrapped samples of the time series features are generated. Finally, a bagging approach is used to improve the overall performance of the classification. We perform tests on the MAssive Compact Halo Object (MACHO) and Optical Gravitational Lensing Experiment (OGLE) catalogs; the results show that our method effectively classifies some variability classes using a small fraction of the original observations. For example, we found that RR Lyrae stars can be classified with ~80% accuracy just by observing the first 5% of the whole lightcurves’ observations in the MACHO and OGLE catalogs. We believe these results prove that, when studying lightcurves, it is important to consider the features’ error and how the measurement process impacts it.
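The core of the approach described above, a Gaussian process model of an incomplete lightcurve from which bootstrapped feature samples are drawn, can be sketched as follows. The kernel, the synthetic lightcurve and the choice of a single feature are assumptions, and the final bagging classifier is only indicated in a comment because it requires a labelled catalogue.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(4)

# A truncated, irregularly sampled synthetic lightcurve (a stand-in for the early
# portion of a MACHO/OGLE time series).
t = np.sort(rng.uniform(0, 30, size=60))
mag = 15.0 + 0.4 * np.sin(2 * np.pi * t / 12.3) + rng.normal(0, 0.05, t.size)

# 1. Probabilistic model of the observations via Gaussian process regression.
kernel = ConstantKernel(0.2) * RBF(length_scale=3.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[:, None], mag)

# 2. Bootstrapped realisations drawn from the GP posterior, giving a distribution
#    of a simple variability feature (here the standard deviation).
samples = gp.sample_y(t[:, None], n_samples=200, random_state=0)   # shape (60, 200)
feature_samples = samples.std(axis=0)
print(feature_samples.mean(), feature_samples.std())

# 3. In the full pipeline these per-lightcurve feature samples would feed a bagging
#    ensemble (e.g. sklearn.ensemble.BaggingClassifier) trained on labelled stars;
#    that step needs a whole catalogue and is omitted here.
```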
Observation of autoionization in O 2 by an electron-electron coincidence method
NASA Astrophysics Data System (ADS)
Doering, J. P.; Yang, J.; Cooper, J. W.
1995-01-01
A strong transition to an autoionizing state has been observed in O2 at 16.83 ± 0.11 eV by means of a new electron-electron coincidence method. The method uses the fact that electrons arising from autoionizing states appear at a constant energy loss corresponding to the excitation energy of the autoionizing state rather than at a constant ionization potential as do electrons produced by direct ionization. Comparison of the present data with previous photoionization studies suggests that the autoionizing O2 state is the same state deduced to be responsible for abnormal vibrational intensities in the X 2Πg ground state of O2+ when 16.85 eV Ne(I) photons are used. These electron-electron coincidence experiments provide a direct new method for the study of autoionization produced by electron impact.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. 12 simulation experiments are performed, each using 2,000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
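The experimental setup lends itself to a compact illustration: the sketch below simulates one ARMA-type series, fits a stochastic model (ARMA(1,1) via statsmodels) and an ML model (a random forest on lagged values, applied recursively), and compares their 10-step forecasts by RMSE. The model orders, lag depth and forest size are illustrative assumptions rather than the study's configurations, and only one of its many methods and metrics is shown.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# One simulated series: 300 observations for fitting, 10 for testing, mirroring
# the split described in the abstract.
y = ArmaProcess(ar=[1, -0.7], ma=[1, 0.3]).generate_sample(nsample=310,
                                                           distrvs=rng.standard_normal)
fit, test = y[:300], y[300:]

# Stochastic benchmark: ARMA(1,1) fitted by maximum likelihood, 10-step forecast.
arma_fc = ARIMA(fit, order=(1, 0, 1)).fit().forecast(steps=10)

# ML benchmark: random forest on p lagged values, applied recursively for 10 steps.
p = 5
X = np.column_stack([fit[i : len(fit) - p + i] for i in range(p)])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, fit[p:])
hist, rf_fc = list(fit[-p:]), []
for _ in range(10):
    nxt = rf.predict(np.array(hist[-p:])[None, :])[0]
    rf_fc.append(nxt)
    hist.append(nxt)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print("ARMA RMSE:", rmse(arma_fc), "RF RMSE:", rmse(rf_fc))
```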
Errors Using Observational Methods for Ergonomics Assessment in Real Practice.
Diego-Mas, Jose-Antonio; Alcaide-Marzal, Jorge; Poveda-Bautista, Rocio
2017-12-01
The degree to which practitioners correctly use observational methods for musculoskeletal disorder risk assessment was evaluated. Ergonomics assessment is a key issue for the prevention and reduction of work-related musculoskeletal disorders in workplaces. Observational assessment methods appear to be better matched to the needs of practitioners than direct measurement methods, and for this reason, they are the most widely used techniques in real work situations. Despite the simplicity of observational methods, those responsible for assessing risks using these techniques should have some experience and know-how in order to be able to use them correctly. We analyzed 442 risk assessments of actual jobs carried out by 290 professionals from 20 countries to determine their reliability. The results show that approximately 30% of the assessments performed by practitioners had errors. In 13% of the assessments, the errors were severe and completely invalidated the results of the evaluation. Despite the simplicity of observational methods, approximately 1 out of 3 assessments conducted by practitioners in actual work situations do not adequately evaluate the level of potential musculoskeletal disorder risk. This study reveals a problem that suggests greater effort is needed to ensure that practitioners possess better knowledge of the techniques used to assess work-related musculoskeletal disorder risks and that laws and regulations should be stricter as regards the qualifications and skills required of professionals.
Changes in Patterns of Teacher Interaction in Primary Classrooms: 1976-96.
ERIC Educational Resources Information Center
Galton, Maurice; Hargreaves, Linda; Comber, Chris; Wall, Debbie; Pell, Tony
1999-01-01
Addresses the effectiveness of teaching methods in English primary school classrooms. Evaluates the interventions designed to change primary educators' teaching methods by replicating the Observational Research and Classroom Learning Evaluation (ORACLE) study that was originally conducted in 1976. Compares the results from the original study to…
Territorial Behavior in Public Settings
ERIC Educational Resources Information Center
Costa, Marco
2012-01-01
This study provides a novel observational method to observe repetitive seating patterns chosen by students in a classroom. Although prior work that relied on self-reports suggests that students claim the same seats repeatedly, the main hypothesis of the study was that in a repeated use of a public space, people tend to occupy the same position,…
ERIC Educational Resources Information Center
Tillman, Beverly A.; Richards, Stephen B.; Frank, Catherine Lawless
2011-01-01
This study employed a Likert-type survey, "Praxis/Pathwise" written observations, as well as guided and open-ended reflections to assess the perceptions of preparedness for the first year of teaching for special education student teaching candidates. Cooperating teachers completed the survey and "Praxis/Pathwise" observations.…
ERIC Educational Resources Information Center
Rigby, Jessica G.; Larbi-Cherif, Adrian; Rosenquist, Brooks A.; Sharpe, Charlotte J.; Cobb, Paul; Smith, Thomas
2017-01-01
Purpose: This study examines the content and efficacy of instructional leaders' expectations and feedback (press) in relation to the improvement of middle school mathematics teachers' instruction in the context of coherent systems of supports. Research Method/Approach: This mixed methods study is a part of a larger, 8-year longitudinal study in…
Using Mobile Technology to Observe Student Study Behaviors and Track Library Space Usage
ERIC Educational Resources Information Center
Thompson, Susan
2015-01-01
Libraries have become increasingly interested in studying the use of spaces within their buildings. Traditional methods for tracking library building use, such as gate counts, provide little information on what patrons do once they are in the library; therefore, new methods for studying space usage are being developed. Particularly promising are…
ERIC Educational Resources Information Center
Mahmoudabadi, Zahra
2017-01-01
This study has two main objectives: first, to find traces of teaching methods in a language class and second, to study the relationship between intended learning outcomes and uptake, which is defined as what students claim to have learned. In order to identify the teaching method, after five sessions of observation, class activities and procedures…
ERIC Educational Resources Information Center
Güngör, Sema Nur; Özkan, Muhlis
2016-01-01
The aim of this study is to teach enzymes, one of the biology subjects that students have great difficulty understanding, to pre-service teachers through the POE method, using the case of catalase, which is an oxidoreductase. A descriptive analysis method was employed in this study, in which 38 second-grade pre-service teachers attending Uludag…
Geophysical investigation using gravity data in Kinigi geothermal field, northwest Rwanda
NASA Astrophysics Data System (ADS)
Uwiduhaye, Jean d.'Amour; Mizunaga, Hideki; Saibi, Hakim
2018-03-01
A land gravity survey was carried out in the Kinigi geothermal field, Northwest Rwanda, using 184 gravity stations during August and September 2015. The aim of the gravity survey was to understand the subsurface structure and its relation to the observed surface manifestations in the study area. The complete Bouguer gravity anomaly was produced with a reduction density of 2.4 g/cm3. Bouguer anomalies ranging from -52 to -35 mGal were observed in the study area, with relatively high anomalies in the east and northwest zones, while low anomalies are observed in the southwest side of the studied area. A decrease of 17 mGal is observed in the southwestern part of the study area and is caused by the low density of the Tertiary rocks. Horizontal gradient, tilt angle and analytical signal methods were applied to the observed gravity data and showed that the Mubona, Mpenge and Cyabararika surface springs are structurally controlled while the Rubindi spring is not. The integrated results of the gravity gradient interpretation methods delineated a dominant geological structure trending NW-SE, which is in agreement with the regional geological trend. The results of this gravity study will help guide future geothermal exploration and development in the Kinigi geothermal field.
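One of the interpretation methods mentioned above, the tilt angle, can be sketched as the arctangent of the vertical derivative over the total horizontal derivative of the gridded anomaly. The toy grid, station spacing and the FFT-based vertical derivative below (which assumes periodic edges) are illustrative assumptions, not the survey's data or processing.

```python
import numpy as np

def tilt_angle(g, dx, dy):
    """Tilt derivative of a gridded Bouguer anomaly: arctan(VDR / THDR), with the
    vertical derivative obtained in the wavenumber domain and the total horizontal
    derivative from finite differences."""
    gy, gx = np.gradient(g, dy, dx)                   # horizontal derivatives
    thdr = np.hypot(gx, gy)
    ny, nx = g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.hypot(*np.meshgrid(kx, ky))
    vdr = np.real(np.fft.ifft2(np.fft.fft2(g) * k))   # first vertical derivative
    return np.degrees(np.arctan2(vdr, thdr))

# Toy anomaly grid (mGal) on a 100 m spaced survey, a stand-in for real data.
x, y = np.meshgrid(np.arange(0, 5000, 100), np.arange(0, 5000, 100))
g = -45 + 6 * np.exp(-((x - 2500) ** 2 + (y - 2000) ** 2) / 8e5)
print(tilt_angle(g, 100.0, 100.0).round(1)[25, 20:30])
```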
NASA Astrophysics Data System (ADS)
Teodorani, M.; Strand, E.
Unexplained plasma-like atmospheric 'light balls' are observed at very low altitudes during alternate phases of maximum and minimum in the Hessdalen area, located in central Norway. Several theories are presented in order to explain the observed phenomenon, among them piezo-electricity from rocks and atmospheric ionization triggered by solar activity and cosmic rays. The presented study is aimed at proposing the use of a dedicated instrumental set-up, research experimental procedures and methods in order to prove or disprove every single theory; in this context several kinds of observational techniques, measurement strategies and physical tests of tactical relevance are discussed in detail. An introduction to each considered theory is presented together with a detailed discussion of the subsequent experimental phase. For each specific theory, brief descriptions of the observable parameters and of the essential instrumental choices, and a detailed discussion of measurement procedures coupled with suitable flow charts, are presented.
Sediment and nutrients transport in watershed and their impact on coastal environment
Ikeda, Syunsuke; Osawa, Kazutoshi; Akamatsu, Yoshihisa
2009-01-01
Sediment and nutrient yields, especially from farmlands, were studied in a watershed on Ishigaki Island, Okinawa, Japan. The transport processes of these materials in rivers, mangrove, lagoon and coastal zones were studied by using various observation methods, including stable isotope analysis. They were simulated by using a WEPP model which was modified to be applicable to such small islands by identifying several factors from the observations. The model predicts that a proper combination of civil engineering countermeasures and changes in farming methods can reduce the sediment yield from the watershed by 74%. Observations of water quality and a coral recruitment test in Nagura Bay indicate that the water is eutrophicated and that corals cannot grow there for a long time. Based on these observations, a quantitative target for the reduction of sediment and nutrient yields in the watershed can be decided rationally. PMID:19907124
Sargeant, J M; O'Connor, A M; Dohoo, I R; Erb, H N; Cevallos, M; Egger, M; Ersbøll, A K; Martin, S W; Nielsen, L R; Pearl, D L; Pfeiffer, D U; Sanchez, J; Torrence, M E; Vigre, H; Waldner, C; Ward, M P
2016-11-01
The reporting of observational studies in veterinary research presents many challenges that often are not adequately addressed in published reporting guidelines. To develop an extension of the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement that addresses unique reporting requirements for observational studies in veterinary medicine related to health, production, welfare, and food safety. A consensus meeting of experts was organized to develop an extension of the STROBE statement to address observational studies in veterinary medicine with respect to animal health, animal production, animal welfare, and food safety outcomes. Consensus meeting May 11-13, 2014 in Mississauga, Ontario, Canada. Seventeen experts from North America, Europe, and Australia attended the meeting. The experts were epidemiologists and biostatisticians, many of whom hold or have held editorial positions with relevant journals. Prior to the meeting, 19 experts completed a survey about whether they felt any of the 22 items of the STROBE statement should be modified and if items should be added to address unique issues related to observational studies in animal species with health, production, welfare, or food safety outcomes. At the meeting, the participants were provided with the survey responses and relevant literature concerning the reporting of veterinary observational studies. During the meeting, each STROBE item was discussed to determine whether or not re-wording was recommended, and whether additions were warranted. Anonymous voting was used to determine whether there was consensus for each item change or addition. The consensus was that six items needed no modifications or additions. Modifications or additions were made to the STROBE items numbered: 1 (title and abstract), 3 (objectives), 5 (setting), 6 (participants), 7 (variables), 8 (data sources/measurement), 9 (bias), 10 (study size), 12 (statistical methods), 13 (participants), 14 (descriptive data), 15 (outcome data), 16 (main results), 17 (other analyses), 19 (limitations), and 22 (funding). Published literature was not always available to support modification to, or inclusion of, an item. The methods and processes used in the development of this statement were similar to those used for other extensions of the STROBE statement. The use of this extension to the STROBE statement should improve the reporting of observational studies in veterinary research related to animal health, production, welfare, or food safety outcomes by recognizing the unique features of observational studies involving food-producing and companion animals, products of animal origin, aquaculture, and wildlife. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Souri, E.; Aghdami, A. Negahban; Adib, N.
2014-01-01
An HPLC method for determination of mebeverine hydrochloride (MH) in the presence of its degradation products was developed. The degradation of MH was studied under hydrolysis, oxidative and photolysis stress conditions. Under alkaline, acidic and oxidative conditions, degradation of MH was observed. The separation was performed using a Symmetry C18 column and a mixture of 50 mM KH2PO4, acetonitrile and tetrahydrofuran (THF) (63:35:2; v/v/v) as the mobile phase. No interference peaks from degradation products in acidic, alkaline and oxidative conditions were observed. The linearity, accuracy and precision of the method were studied. The method was linear over the range of 1-100 μg/ml MH (r2>0.999) and the CV values for intra-day and inter-day variations were in the range of 1.0-1.8%. The limit of quantification (LOQ) and the limit of detection (LOD) of the method were 1.0 and 0.2 μg/ml, respectively. Determination of MH in pharmaceutical dosage forms was performed using the developed method. Furthermore, the kinetics of the degradation of MH in the presence of hydrogen peroxide was investigated. The proposed method could be a suitable method for routine quality control studies of mebeverine dosage forms. PMID:25657790
Souri, E; Aghdami, A Negahban; Adib, N
2014-01-01
An HPLC method for determination of mebeverine hydrochloride (MH) in the presence of its degradation products was developed. The degradation of MH was studied under hydrolysis, oxidative and photolysis stress conditions. Under alkaline, acidic and oxidative conditions, degradation of MH was observed. The separation was performed using a Symmetry C18 column and a mixture of 50 mM KH2PO4, acetonitrile and tetrahydrofuran (THF) (63:35:2; v/v/v) as the mobile phase. No interference peaks from degradation products in acidic, alkaline and oxidative conditions were observed. The linearity, accuracy and precision of the method were studied. The method was linear over the range of 1-100 μg/ml MH (r(2)>0.999) and the CV values for intra-day and inter-day variations were in the range of 1.0-1.8%. The limit of quantification (LOQ) and the limit of detection (LOD) of the method were 1.0 and 0.2 μg/ml, respectively. Determination of MH in pharmaceutical dosage forms was performed using the developed method. Furthermore, the kinetics of the degradation of MH in the presence of hydrogen peroxide was investigated. The proposed method could be a suitable method for routine quality control studies of mebeverine dosage forms.
NASA Astrophysics Data System (ADS)
Lea, J.
2017-12-01
The quantification of glacier change is a key variable within glacier monitoring, with the method used potentially being crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can potentially be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this will have implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method: the variable box method - designed for tidewater margins where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and has the option to be applied as either Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. The toolbox has been evaluated against idealised scenarios demonstrating its accuracy, while feedback from undergraduate students who have trialled the toolbox is that it is intuitive and simple to use. When released, the toolbox will be free and open source allowing users to potentially modify, improve and expand upon the current version.
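The toolbox itself is written in Matlab and implements the centreline, curvilinear box and variable box methods; none of that code is reproduced here. As a rough illustration of the simplest (centreline) idea only, the Python sketch below measures the along-centreline distance to each digitised margin and differences two epochs; the geometries and the use of shapely are assumptions.

```python
from shapely.geometry import LineString

def centreline_length(centreline_xy, margin_xy):
    """Centreline method: distance along the centreline (from its upstream end)
    to its intersection with the digitised terminus/margin."""
    centreline, margin = LineString(centreline_xy), LineString(margin_xy)
    cut = centreline.intersection(margin)
    if cut.is_empty:
        raise ValueError("centreline does not cross the margin")
    point = cut if cut.geom_type == "Point" else list(cut.geoms)[0]
    return centreline.project(point)   # arc length to the intersection

# Two illustrative terminus positions (margins) crossing the same centreline.
centreline = [(0, 0), (1000, 0), (2000, 0), (3000, 0)]
margin_t1 = [(2400, -500), (2400, 500)]
margin_t2 = [(2150, -500), (2150, 500)]
retreat = centreline_length(centreline, margin_t1) - centreline_length(centreline, margin_t2)
print(f"terminus change: {retreat:.0f} m")
```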
ERIC Educational Resources Information Center
Schneider, Barry H.
2009-01-01
Background: The friendships of socially withdrawn/anxious children and early adolescents have been found to lack critical rewarding qualities. Observational research may help elucidate the obstacles they face in forming and maintaining high-quality friendships with sociable peers. Method: We observed the interactions of 38 socially withdrawn early…
The Affordances of Mobile-App Supported Teacher Observations for Peer Feedback
ERIC Educational Resources Information Center
Çelik, Sercan; Baran, Evrim; Sert, Olcay
2018-01-01
Mobile technologies offer new affordances for teacher observation in teacher education programs, albeit under-examined in contrast to video technologies. The purpose of this article is to investigate the integration of mobile technologies into teacher observation. Using a case study method, the authors compare the traditional narrative paper-pen,…
Support of surgical process modeling by using adaptable software user interfaces
NASA Astrophysics Data System (ADS)
Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.
2010-03-01
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces for observation support software for SPM. We developed a method to support the observer's modeling task by using an ontological knowledge base. This is used to drive the graphical user interface for the observer, restricting the search space of terminology depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, and it was shown that the adaptive user interface significantly reduces the observer's workload in the criteria effort, mental demand and temporal demand, helping the observer to concentrate on the essential task of modeling the surgical process.
NASA Astrophysics Data System (ADS)
Belikov, Dmitry A.; Maksyutov, Shamil; Ganshin, Alexander; Zhuravlev, Ruslan; Deutscher, Nicholas M.; Wunch, Debra; Feist, Dietrich G.; Morino, Isamu; Parker, Robert J.; Strong, Kimberly; Yoshida, Yukio; Bril, Andrey; Oshchepkov, Sergey; Boesch, Hartmut; Dubey, Manvendra K.; Griffith, David; Hewson, Will; Kivi, Rigel; Mendonca, Joseph; Notholt, Justus; Schneider, Matthias; Sussmann, Ralf; Velazco, Voltaire A.; Aoki, Shuji
2017-01-01
The Total Carbon Column Observing Network (TCCON) is a network of ground-based Fourier transform spectrometers (FTSs) that record near-infrared (NIR) spectra of the sun. From these spectra, accurate and precise observations of CO2 column-averaged dry-air mole fractions (denoted XCO2) are retrieved. TCCON FTS observations have previously been used to validate satellite estimations of XCO2; however, our knowledge of the short-term spatial and temporal variations in XCO2 surrounding the TCCON sites is limited. In this work, we use the National Institute for Environmental Studies (NIES) Eulerian three-dimensional transport model and the FLEXPART (FLEXible PARTicle dispersion model) Lagrangian particle dispersion model (LPDM) to determine the footprints of short-term variations in XCO2 observed by operational, past, future and possible TCCON sites. We propose a footprint-based method for the collocation of satellite and TCCON XCO2 observations and estimate the performance of the method using the NIES model and five GOSAT (Greenhouse Gases Observing Satellite) XCO2 product data sets. Comparison of the proposed approach with a standard geographic method shows a higher number of collocation points and an average bias reduction up to 0.15 ppm for a subset of 16 stations for the period from January 2010 to January 2014. Case studies of the Darwin and Reunion Island sites reveal that when the footprint area is rather curved, non-uniform and significantly different from a geographical rectangular area, the differences between these approaches are more noticeable. This emphasises that the collocation is sensitive to local meteorological conditions and flux distributions.
STRengthening analytical thinking for observational studies: the STRATOS initiative.
Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James
2014-12-30
The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even 'standard' analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
Acter, Thamina; Kim, Donghwi; Ahmed, Arif; Ha, Ji-Hyoung; Kim, Sunghwan
2017-08-01
Herein we report the observation of atmospheric pressure in-source hydrogen-deuterium exchange (HDX) of the thiol group for the first time. The HDX for the thiol group was optimized for positive atmospheric pressure photoionization (APPI) mass spectrometry (MS). The optimized HDX-MS was applied to 31 model compounds (thiols, thiophenes, and sulfides) to demonstrate that exchanged peaks were observed only for thiols. The optimized method has been successfully applied to the isolated fractions of sulfur-rich oil samples. The exchange of one and two thiol hydrogens with deuterium was observed in the thiol fraction; no HDX was observed in the other fractions. Thus, the results presented in this study demonstrate that the HDX-MS method using an APPI ionization source can be effective for speciation of sulfur compounds. This method has the potential to be used to assess corrosion problems caused by thiol-containing compounds.
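Exchanged peaks of this kind are recognized by their mass shift: each labile hydrogen replaced by deuterium moves a singly charged ion up by roughly 1.00628 u. The small sketch below computes the expected m/z after 0, 1 and 2 exchanges; the example m/z value and charge state are arbitrary assumptions, not compounds from the study.

```python
# Monoisotopic masses (u) of the hydrogen isotopes.
M_H, M_D = 1.00782503, 2.01410178
SHIFT = M_D - M_H          # ~1.00628 u per exchanged hydrogen

def expected_mz(mz_observed, n_exchanged, charge=1):
    """m/z expected after n_exchanged labile (e.g. thiol) hydrogens are replaced
    by deuterium, for an ion of the given charge."""
    return mz_observed + n_exchanged * SHIFT / charge

# Illustrative singly charged thiol ion at m/z 250.10 with 0, 1 and 2 exchanges.
print([round(expected_mz(250.10, n), 4) for n in (0, 1, 2)])
```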
NASA Astrophysics Data System (ADS)
Sakata, T.; Suzuki, M.; Yamamoto, T.; Nakanishi, S.; Funahashi, M.; Tsurumachi, N.
2017-10-01
We investigated the optical transmission properties of one-dimensional photonic crystal (1D-PC) microcavity structures containing the liquid-crystalline (LC) perylene tetracarboxylic bisimide (PTCBI) derivative. We fabricated the microcavity structures for this study by two different methods and observed the cavity polaritons successfully in both samples. For one sample, the PTCBI molecules were aligned in the cavity layer of the 1D-PC by means of a friction transfer method, so the vacuum Rabi splitting energy was strongly dependent on the polarization of the incident light, owing to the peculiar optical features of the LC organic semiconductor. For the other sample, we did not utilize the friction transfer method and did not observe such polarization dependence. However, we did observe a relatively large Rabi splitting energy of 187 meV, probably due to the improvement of the optical confinement effect.
Frequency Analysis Using Bootstrap Method and SIR Algorithm for Prevention of Natural Disasters
NASA Astrophysics Data System (ADS)
Kim, T.; Kim, Y. S.
2017-12-01
The frequency analysis of hydrometeorological data is one of the most important factors in responding to natural disaster damage and in setting design standards for disaster prevention facilities. Frequency analysis of hydrometeorological data usually assumes that the observations are statistically stationary, and a parametric method based on the parameters of a fitted probability distribution is applied. A parametric method requires a sufficient amount of reliable data; in Korea, however, snowfall records are insufficient because the number of days with snowfall observations and the mean maximum daily snowfall depth are decreasing due to climate change. In this study, we conducted a frequency analysis for snowfall using the bootstrap method and the SIR algorithm, resampling methods that can overcome the problems of insufficient data. For the 58 meteorological stations distributed evenly across Korea, probabilistic snowfall depths were estimated by non-parametric frequency analysis using the maximum daily snowfall depth data. The results show that the probabilistic daily snowfall depth obtained by frequency analysis decreases at most stations, and the rates of change at most stations were consistent between the parametric and non-parametric frequency analyses. This study shows that resampling methods allow frequency analysis of snowfall depth when the observed samples are insufficient, and the approach can be applied to the interpretation of other natural disasters with seasonal characteristics, such as summer typhoons. Acknowledgment: This research was supported by a grant (MPSS-NH-2015-79) from the Disaster Prediction and Mitigation Technology Development Program funded by the Korean Ministry of Public Safety and Security (MPSS).
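As a bare-bones illustration of the non-parametric idea (the SIR step is not shown), the sketch below bootstraps an empirical 50-year return level, i.e. the 98th percentile of annual maximum daily snowfall depth, from an invented station record; the data and return period are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative annual maximum daily snowfall depths (cm) at one station.
annual_max = np.array([18, 25, 31, 12, 40, 22, 27, 35, 15, 29, 48, 20, 26, 33, 19])

# Non-parametric bootstrap of the 50-year return level (98th percentile of the
# annual-maximum distribution), avoiding any parametric distribution assumption.
B = 5000
boot = np.array([np.percentile(rng.choice(annual_max, size=annual_max.size, replace=True), 98)
                 for _ in range(B)])
print("point estimate:", np.percentile(annual_max, 98))
print("90% bootstrap interval:", np.percentile(boot, [5, 95]))
```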
Orban, Kristina; Ekelin, Maria; Edgren, Gudrun; Sandgren, Olof; Hovbrandt, Pia; Persson, Eva K
2017-09-11
Outcome- or competency-based education is well established in medical and health sciences education. Curricula are based on courses where students develop their competences and assessment is also usually course-based. Clinical reasoning is an important competence, and the aim of this study was to monitor and describe students' progression in professional clinical reasoning skills during health sciences education using observations of group discussions following the case method. In this qualitative study students from three different health education programmes were observed while discussing clinical cases in a modified Harvard case method session. A rubric with four dimensions - problem-solving process, disciplinary knowledge, character of discussion and communication - was used as an observational tool to identify clinical reasoning. A deductive content analysis was performed. The results revealed the students' transition over time from reasoning based strictly on theoretical knowledge to reasoning ability characterized by clinical considerations and experiences. Students who were approaching the end of their education immediately identified the most important problem and then focused on this in their discussion. Practice knowledge increased over time, which was seen as progression in the use of professional language, concepts, terms and the use of prior clinical experience. The character of the discussion evolved from theoretical considerations early in the education to clinical reasoning in later years. Communication within the groups was supportive and conducted with a professional tone. Our observations revealed progression in several aspects of students' clinical reasoning skills on a group level in their discussions of clinical cases. We suggest that the case method can be a useful tool in assessing quality in health sciences education.
The integration of astro-geodetic data observed with ACSYS to the local geoid models Istanbul-Turkey
NASA Astrophysics Data System (ADS)
Halicioglu, Kerem; Ozludemir, M. Tevfik; Deniz, Rasim; Ozener, Haluk; Albayrak, Muge; Ulug, Rasit; Basoglu, Burak
2017-04-01
Astro-geodetic deflections of the vertical components provide accurate and valuable information about Earth's gravity field. Conventional methods require considerable effort and time, whereas new instruments, namely digital zenith camera systems (DZCS), have been designed to eliminate drawbacks of the conventional methods, such as observer-dependent errors and long observation times, and to improve observation accuracy. The observation principle is based on capturing star images near the zenithal direction to determine the astronomical coordinates of the station point with the integration of CCD, telescope, tiltmeters, and GNSS devices. In Turkey a new DZCS has been designed and tested on a control network located in Istanbul, for which the geoid height differences were known with an accuracy of ±3.5 cm. The Astro-geodetic Camera System (ACSYS) was used to determine the deflections of the vertical components with an accuracy of ±0.1-0.3 arcseconds, and the results were compared with geoid height differences using the astronomical levelling procedure. The results have also been compared with values calculated from global geopotential models. In this study the recent results of Turkey's first digital zenith camera system are presented, and future work is introduced, including current developments of the system such as hardware and software upgrades as well as a new observation strategy for the ACSYS. We also discuss the contribution and integration of the astro-geodetic deflections of the vertical components into geoid determination studies in light of current ongoing projects in Turkey.
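The astronomical levelling step mentioned above rests on the standard relation dN ≈ -ε̄·Δs between the geoid height difference of neighbouring stations and the mean along-path deflection component; a minimal sketch with invented deflection values and station spacing is given below (the authors' actual processing is not reproduced).

```python
import numpy as np

ARCSEC = np.pi / (180 * 3600)          # radians per arcsecond

def dN_astro_levelling(eps1_arcsec, eps2_arcsec, distance_m):
    """Geoid height difference between two stations from the astronomical-levelling
    relation dN = -mean(epsilon) * ds, where epsilon is the vertical-deflection
    component along the path (trapezoidal approximation)."""
    eps_mean = 0.5 * (eps1_arcsec + eps2_arcsec) * ARCSEC
    return -eps_mean * distance_m

# Two neighbouring stations 5 km apart with along-path deflections of 2.1" and 2.5".
print(f"dN = {dN_astro_levelling(2.1, 2.5, 5000):.3f} m")
```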
Shen, Jenny I; Lum, Erik L; Chang, Tara I
2016-09-01
Because large randomized clinical trials (RCTs) in dialysis have been relatively scarce, evidence-based dialysis care has depended heavily on the results of observational studies. However, when results from RCTs appear to contradict the findings of observational studies, nephrologists are left to wonder which type of study they should believe. In this editorial, we explore the key differences between observational studies and RCTs in the context of such seemingly conflicting studies in dialysis. Confounding is the major limitation of observational studies, whereas low statistical power and problems with external validity are more likely to limit the findings of RCTs. Differences in the specification of the population, exposure, and outcomes can also contribute to different results among RCTs and observational studies. Rigorous methods are required regardless of what type of study is conducted, and readers should not automatically assume that one type of study design is superior to the other. Ultimately, dialysis care requires both well-designed, well-conducted observational studies and RCTs to move the field forward. © 2016 Wiley Periodicals, Inc.
Observational methods for solar origin diagnostics of energetic protons
NASA Astrophysics Data System (ADS)
Miteva, Rositsa
2017-12-01
The aim of the present report is to outline the observational methods used to determine the solar origin - in terms of flares and coronal mass ejections (CMEs) - of the in situ observed solar energetic protons. Several widely used guidelines are given and different sources of uncertainties are summarized and discussed. In the present study, a new quality factor is proposed as a certainty check on the so-identified flare-CME pairs. In addition, the correlations between the proton peak intensity and the properties of their solar origin are evaluated as a function of the quality factor.
Chi, Michelene T H; Roy, Marguerite; Hausmann, Robert G M
2008-03-01
The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaborative observation environment with four other instructional methods (one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone), the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaboratively observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge with the tutor, but not when the tutor independently conveys knowledge. © 2008 Cognitive Science Society, Inc.
ERIC Educational Resources Information Center
Tackie-Ofosu, Vivian; Bentum, Kwesi
2013-01-01
In the current study, the authors explored how early childhood educators used observation to support children in the learning environment. The objectives set were to find out the observation methods teachers used, ascertain their understanding of child observation, find out activities children undertook, and how teachers documented what children…
Comparative evaluation of RetCam vs. gonioscopy images in congenital glaucoma
Azad, Raj V; Chandra, Parijat; Chandra, Anuradha; Gupta, Aparna; Gupta, Viney; Sihota, Ramanjit
2014-01-01
Purpose: To compare clarity, exposure and quality of anterior chamber angle visualization in congenital glaucoma patients, using RetCam and indirect gonioscopy images. Design: Cross-sectional study. Participants: Congenital glaucoma patients over the age of 5 years. Materials and Methods: A prospective consecutive pilot study was done in congenital glaucoma patients who were older than 5 years. Methods used were indirect gonioscopy and RetCam imaging. Clarity of the image, extent of angle visible and details of angle structures seen were graded for both methods, on digitally recorded images, in each eye, by two masked observers. Outcome Measures: Image clarity, interobserver agreement. Results: 40 eyes of 25 congenital glaucoma patients were studied. RetCam images had excellent clarity in 77.5% of patients versus 47.5% for gonioscopy. The extent of angle seen was similar by both methods. Agreement between RetCam and gonioscopy images regarding details of angle structures was 72.50% for observer 1 and 65.00% for observer 2. Conclusions: There was good agreement between RetCam and indirect gonioscopy images in detecting angle structures of congenital glaucoma patients. However, RetCam provided greater clarity, better quality, and higher magnification images. RetCam can be a useful alternative to gonioscopy in infants and small children without the need for general anesthesia. PMID:24008788
Automated analysis of brachial ultrasound time series
NASA Astrophysics Data System (ADS)
Liang, Weidong; Browning, Roger L.; Lauer, Ronald M.; Sonka, Milan
1998-07-01
Atherosclerosis begins in childhood with the accumulation of lipid in the intima of arteries to form fatty streaks and advances through adult life, when occlusive vascular disease may result in coronary heart disease, stroke and peripheral vascular disease. Non-invasive B-mode ultrasound has been found useful in studying risk factors in the symptom-free population. A large amount of data is acquired from continuous imaging of the vessels in a large study population. A high-quality brachial vessel diameter measurement method is necessary so that accurate diameters can be measured consistently in all frames of a sequence and across different observers. Though a human expert has the advantage over automated computer methods in recognizing noise during diameter measurement, manual measurement suffers from inter- and intra-observer variability. It is also time-consuming. An automated measurement method is presented in this paper that utilizes quality assurance approaches to adapt to specific image features and to recognize and minimize the effect of noise. Experimental results showed the method's potential for clinical use in epidemiological studies.
Leifman, Håkan; Rehnman, Charlotta; Sjöblom, Erika; Holgersson, Stefan
2011-01-01
The purpose of this study was to estimate the prevalence of anabolic androgenic steroid (AAS) use and offers to use among gym users in Stockholm County (Sweden), and to compare the concordance of estimates of AAS and supplement use at gyms between two data collection methods. A questionnaire was distributed to members at 36 training facilities and 1,752 gym users participated in the study. An observation study was conducted as covert participant observations at 64 gyms. According to the questionnaire, 3.9% of men reported lifetime use of AAS, 1.4% use during the past 12 months and 0.4% use during the past 30 days. Not only were similar patterns found with the two methods, i.e., similar age and gender distributions for AAS use, but analyses of concordance showed that gyms with a higher prevalence of self-reported AAS use and supplement use (questionnaire) showed a significantly higher proportion of observer-assessed AAS users. Analyses of individual predictors showed that AAS users were almost always young men, regular weight trainers and more often users of drugs and nutritional supplements. The higher prevalence of AAS use among gym users than in the general population makes the former an appropriate target group for AAS prevention. The connection between supplements, drugs and AAS use suggests that effective AAS prevention needs to focus on several risk factors for AAS use. The clear resemblance in estimates between the observation and questionnaire data strengthens the credibility of the two methods. PMID:21845151
Robbrecht, Cedric; Claes, Steven; Cromheecke, Michiel; Mahieu, Peter; Kakavelakis, Kyriakos; Victor, Jan; Bellemans, Johan; Verdonk, Peter
2014-10-01
Post-operative widening of tibial and/or femoral bone tunnels is a common observation after ACL reconstruction, especially with soft-tissue grafts. There are no studies comparing tunnel widening in hamstring autografts versus tibialis anterior allografts. The goal of this study was to assess the difference in tunnel widening after the use of allograft vs. autograft for ACL reconstruction, by measuring it with a novel 3-D computed tomography-based method. Thirty-five ACL-deficient subjects were included, underwent anatomic single-bundle ACL reconstruction and were evaluated one year after surgery with the use of 3-D CT imaging. Three independent observers semi-automatically delineated femoral and tibial tunnel outlines, after which a best-fit cylinder was derived and the tunnel diameter was determined. Finally, intra- and inter-observer reliability of this novel measurement protocol was defined. In femoral tunnels, the intra-observer ICC was 0.973 (95% CI: 0.922-0.991) and the inter-observer ICC was 0.992 (95% CI: 0.982-0.996). In tibial tunnels, the intra-observer ICC was 0.955 (95% CI: 0.875-0.985). The combined inter-observer ICC was 0.970 (95% CI: 0.917-0.987). Tunnel widening was significantly higher in allografts compared to autografts, in the tibial tunnels (p=0.013) as well as in the femoral tunnels (p=0.007). To our knowledge, this novel, semi-automated 3-D computed tomography image processing method has been shown to yield highly reproducible results for the measurement of bone tunnel diameter and area. This series showed a significantly greater amount of tunnel widening in the allograft group at one-year follow-up. Level II, prospective comparative study. Copyright © 2014 Elsevier B.V. All rights reserved.
Mian, Nicholas D.; Carter, Alice S.; Pine, Daniel S.; Wakschlag, Lauren S.; Briggs-Gowan, Margaret J.
2015-01-01
Background Identifying anxiety disorders in preschool-age children represents an important clinical challenge. Observation is essential to clinical assessment and can help differentiate normative variation from clinically significant anxiety. Yet, most anxiety assessment methods for young children rely on parent-reports. The goal of this article is to present and preliminarily test the reliability and validity of a novel observational paradigm for assessing a range of fearful and anxious behaviors in young children, the Anxiety Dimensional Observation Schedule (Anx-DOS). Methods A diverse sample of 403 children, aged 3 to 6 years, and their mothers was studied. Reliability and validity in relation to parent reports (Preschool Age Psychiatric Assessment) and known risk factors, including indicators of behavioral inhibition (latency to touch novel objects) and attention bias to threat (in the dot-probe task) were investigated. Results The Anx-DOS demonstrated good inter-rater reliability and internal consistency. Evidence for convergent validity was demonstrated relative to mother-reported separation anxiety, social anxiety, phobic avoidance, trauma symptoms, and past service use. Finally, fearfulness was associated with observed latency and attention bias toward threat. Conclusions Findings support the Anx-DOS as a method for capturing early manifestations of fearfulness and anxiety in young children. Multimethod assessments incorporating standardized methods for assessing discrete, observable manifestations of anxiety may be beneficial for early identification and clinical intervention efforts. PMID:25773515
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
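The abstract does not give the model equations, but a standard additive damped-trend (Holt) exponential smoothing recursion, together with the mean absolute percent error, is a reasonable minimal illustration of the approach described. The sketch below uses invented weekly glucose values and arbitrary smoothing parameters; it is not the authors' fitted model.

```python
import numpy as np

def damped_holt_forecast(y, alpha=0.4, beta=0.2, phi=0.9, horizon=24):
    """Additive damped-trend exponential smoothing (Holt) forecast.

    y       -- observed series (e.g., weekly mean glucose values)
    alpha   -- level smoothing parameter
    beta    -- trend smoothing parameter
    phi     -- damping parameter (0 < phi < 1)
    horizon -- number of future periods to project
    """
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    # The h-step-ahead forecast uses a geometric sum of the damped trend.
    steps = np.arange(1, horizon + 1)
    damping = np.array([np.sum(phi ** np.arange(1, h + 1)) for h in steps])
    return level + damping * trend

def mape(observed, forecast):
    """Mean absolute percent error between observed and forecasted values."""
    observed, forecast = np.asarray(observed, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((observed - forecast) / observed))

# Hypothetical weekly mean glucose (mg/dL): fit on the first 38 weeks,
# check the 24-week projection against the remaining observations.
rng = np.random.default_rng(0)
weekly_glucose = 180 - 0.3 * np.arange(62) + rng.normal(0, 4, 62)
fcst = damped_holt_forecast(weekly_glucose[:38], horizon=24)
print(f"MAPE over the hold-out weeks: {mape(weekly_glucose[38:], fcst):.1f}%")
```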
Model Uncertainty Quantification Methods In Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.
2017-12-01
Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
Baldwin, Alex S; Baker, Daniel H; Hess, Robert F
2016-01-01
The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system's input has on its output one can estimate the variance of this internal noise. By applying this simple "linear amplifier" model to the human visual system, one can entirely explain an observer's detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system's internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.
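Under the linear amplifier model described above, the squared contrast threshold grows linearly with the external noise variance, so fitting thresholds measured at several noise levels yields an internal noise estimate (the intercept) and an efficiency term (the slope). The sketch below shows one way such a fit could be done; the threshold values are invented and the efficiency is in arbitrary relative units because the ideal-observer scaling is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_threshold(ext_noise_var, int_noise_var, efficiency):
    """Linear amplifier model: the squared contrast threshold rises linearly
    with external noise variance; the intercept reflects internal noise and
    the slope the observer's calculation efficiency (relative units)."""
    return np.sqrt((int_noise_var + ext_noise_var) / efficiency)

# Hypothetical thresholds (contrast units) measured at several external
# noise variances for one spatial frequency.
ext_var = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3
thresholds = np.array([0.010, 0.013, 0.016, 0.021, 0.028, 0.038])

popt, pcov = curve_fit(lam_threshold, ext_var, thresholds,
                       p0=[1e-4, 1.0], bounds=(0, np.inf))
int_var, eff = popt
print(f"estimated internal noise variance: {int_var:.2e}")
print(f"estimated efficiency (relative units): {eff:.2f}")
```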
Varga, Zsuzsanna; Cassoly, Estelle; Li, Qiyu; Oehlschlegel, Christian; Tapia, Coya; Lehr, Hans Anton; Klingbiel, Dirk; Thürlimann, Beat; Ruhstaller, Thomas
2015-01-01
Background Proliferative activity (Ki-67 labelling index) in breast cancer increasingly serves as an additional tool in the decision for or against adjuvant chemotherapy in midrange hormone receptor-positive breast cancer. The Ki-67 index has previously been shown to suffer from high inter-observer variability, especially in midrange (G2) breast carcinomas. In this study we conducted a systematic approach using different Ki-67 assessments on large tissue sections in order to identify the method with the highest reliability and the lowest variability. Materials and Methods Five breast pathologists retrospectively analyzed the proliferative activity of 50 G2 invasive breast carcinomas on large tissue sections by assessing Ki-67 immunohistochemistry. Ki-67 assessments were done on light microscopy and on digital images following these methods: 1) assessing five regions, 2) assessing only darkly stained nuclei and 3) considering only condensed proliferative areas ('hotspots'). An individual review (the first described assessment from 2008) was also performed. The assessments on light microscopy were done by estimation. All measurements were performed three times. Inter-observer and intra-observer reliabilities were calculated using the approach proposed by Eliasziw et al. Clinical cutoffs (14% and 20%) were tested using Fleiss' kappa. Results There was good intra-observer reliability in 5 of 7 methods (ICC: 0.76–0.89). The two highest inter-observer reliabilities were fair to moderate (ICC: 0.71 and 0.74), obtained with two methods (region analysis and individual review) on light microscopy. Fleiss' kappa values at the 14% cut-off were highest (moderate) using the original recommendation on light microscopy (kappa 0.58). Fleiss' kappa values at the 20% cut-off were highest (kappa 0.48 each) when analyzing hotspots on light microscopy and in digital analysis. No digital-analysis methodology was superior to the methods on light microscopy. Conclusion Our results show that all light-microscopy methods for Ki-67 assessment on large tissue sections resulted in good intra-observer reliability. Region analysis and individual review (the original recommendation) on light microscopy yielded the highest inter-observer reliability. These results show a slight improvement over previously published data on poor reproducibility and thus might offer a practical, pragmatic way for the routine assessment of the Ki-67 index in G2 breast carcinomas. PMID:25885288
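For the cut-off analyses mentioned above, Fleiss' kappa for multiple raters can be computed directly from a subjects-by-categories count matrix. The sketch below uses invented dichotomized Ki-67 ratings for five raters, not the study data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for ratings summarized as an (n_subjects x n_categories)
    matrix of counts; each row must sum to the same number of raters."""
    counts = np.asarray(counts, float)
    n_subjects, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)      # category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 6 tumours, 5 pathologists, Ki-67 dichotomized at 14%
# (column 0 = "<14%", column 1 = ">=14%").
ratings = np.array([
    [5, 0],
    [4, 1],
    [1, 4],
    [0, 5],
    [3, 2],
    [5, 0],
])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.2f}")
```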
Designing a mixed methods study in primary care.
Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V
2004-01-01
Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.
Anguera, M Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.
Review of research designs and statistical methods employed in dental postgraduate dissertations.
Shirahatti, Ravi V; Hegde-Shetiya, Sahana
2015-01-01
There is a need to evaluate the quality of postgraduate dissertations in dentistry submitted to the university in light of international reporting standards. We conducted this review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate data presentation in postgraduate dental research, and of suggesting and recommending modifications. The public access database of dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of a randomly selected 10% of dissertations. The dissertations were assessed against international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used to describe the data. In vitro studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines of conservative dentistry (92%) and prosthodontics (75%) reported high proportions of in vitro research. The disciplines of oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), failure to mention blinding, confounding variables and calibration of measurements, misrepresentation of data through inappropriate presentation, errors in reporting probability values, and failure to report confidence intervals. A few studies showed grossly inappropriate choices of statistical tests, and many studies needed additional tests. Overall, the observations indicated the need to comply with standard guidelines for reporting research.
Chen, Xiang-Bai; Hien, Nguyen Thi Minh; Han, Kiok; Nam, Ji-Yeon; Huyen, Nguyen Thi; Shin, Seong-Il; Wang, Xueyun; Cheong, S. W.; Lee, D.; Noh, T. W.; Sung, N. H.; Cho, B. K.; Yang, In-Sang
2015-01-01
Spin-wave (magnon) scattering, when clearly observed by Raman spectroscopy, can be a simple and powerful probe of magnetic phase transitions. In this paper, we present how to observe magnon scattering clearly by Raman spectroscopy, then apply the Raman method to study spin-ordering and spin-reorientation transitions of a hexagonal manganite single crystal and thin films, and compare the results directly with magnetization measurements. Our results show that by choosing a strong resonance condition and an appropriate polarization configuration, magnon scattering can be clearly observed, and the temperature dependence of magnon scattering can be a simple and powerful quantity for investigating spin-ordering as well as spin-reorientation transitions. In particular, the Raman method would be very helpful for investigating weak spin-reorientation transitions by selectively probing the magnons in the Mn3+ sublattices, while leaving out the strong effects of the paramagnetic moments of the rare earth ions. PMID:26300075
Learning Bayesian Networks from Correlated Data
NASA Astrophysics Data System (ADS)
Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola
2016-05-01
Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.
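The paper's random-effects parameterization is not reproduced here, but the core idea of absorbing within-cluster correlation when assessing a single candidate dependency can be illustrated with a random-intercept model. The sketch below contrasts a naive independence fit with a mixed model on simulated family-clustered data; variable names, cluster sizes and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch (not the paper's exact parameterization): compare a naive independence
# model with a random-intercept model for one candidate edge X -> Y when the
# observations are clustered (e.g., members of the same family).
rng = np.random.default_rng(1)
n_clusters, per_cluster = 60, 5
cluster = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(0, 1.0, n_clusters)[cluster]            # shared cluster effect
x = 0.8 * rng.normal(0, 1.0, n_clusters)[cluster] + rng.normal(0, 0.6, cluster.size)
y = 0.3 * x + u + rng.normal(0, 1.0, cluster.size)
data = pd.DataFrame({"x": x, "y": y, "cluster": cluster})

ols = smf.ols("y ~ x", data).fit()                      # ignores clustering
mixed = smf.mixedlm("y ~ x", data, groups=data["cluster"]).fit()

# With cluster-correlated x and a shared cluster effect in y, the naive
# standard error is usually too small, which inflates false positives.
print(f"OLS   slope {ols.params['x']:.3f}  SE {ols.bse['x']:.3f}")
print(f"Mixed slope {mixed.params['x']:.3f}  SE {mixed.bse['x']:.3f}")
```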
From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy
NASA Astrophysics Data System (ADS)
Laycock, Silas G. T.
2017-07-01
In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biassed view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence, must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron star or black hole accreting from a stellar companion) under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods, Lincoln-Peterson, Schnabel estimators, related techniques, and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced which is demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle, in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within python.
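As a concrete illustration of the classical two-visit estimator mentioned above, the sketch below applies the Lincoln-Petersen estimator with the Chapman small-sample correction (and its approximate variance) to invented counts of transients detected in two monitoring visits; it is not the Blackbirds simulation itself.

```python
import numpy as np

def chapman_estimate(n1, n2, m2):
    """Two-visit capture-recapture (Lincoln-Petersen with Chapman correction).

    n1 -- sources detected in the first visit
    n2 -- sources detected in the second visit
    m2 -- sources detected in both visits ("recaptures")
    Returns the estimated total population and an approximate variance.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var

# Hypothetical monitoring survey: 40 transients seen in visit 1, 35 in visit 2,
# 14 seen in both.
n_hat, var = chapman_estimate(40, 35, 14)
print(f"estimated population: {n_hat:.0f} +/- {np.sqrt(var):.0f} (1 sigma)")
```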
ERIC Educational Resources Information Center
Jay, Tim
2012-01-01
Verbal reports are a common method of data collection in studies of mathematics learning, often in studies with a longitudinal component or those employing microgenetic methods where several observations of problem-solving are made over a short period of time. Whilst there is a fairly substantial literature on reactivity to verbal reports,…
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
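Method (7), which performed best, can be sketched as follows: for each candidate base temperature, accumulate GDD up to the observed phenological date in every year, take the mean accumulation across years as the requirement, predict each year's date as the first day that requirement is reached, and keep the base temperature with the smallest RMSE. The temperatures and bloom dates below are synthetic and purely illustrative, not the Belgrade data.

```python
import numpy as np

def gdd(daily_mean_temp, base):
    """Daily growing degree-days: max(Tmean - base, 0)."""
    return np.maximum(np.asarray(daily_mean_temp) - base, 0.0)

def rmse_for_base(temps_by_year, observed_doy, base):
    """Method (7) sketch: the GDD requirement is the mean accumulated GDD at the
    observed dates; each year's date is then predicted as the first day the
    cumulative GDD reaches that requirement."""
    cum = [np.cumsum(gdd(t, base)) for t in temps_by_year]
    requirement = np.mean([c[d - 1] for c, d in zip(cum, observed_doy)])
    predicted = [int(np.argmax(c >= requirement)) + 1 for c in cum]
    return np.sqrt(np.mean((np.array(predicted) - np.array(observed_doy)) ** 2))

# Synthetic 10-year series of daily mean temperatures and observed
# full-bloom dates (day of year); values are illustrative only.
rng = np.random.default_rng(2)
years = 10
temps_by_year = [10 + 12 * np.sin((np.arange(365) - 80) * 2 * np.pi / 365)
                 + rng.normal(0, 2, 365) for _ in range(years)]
observed_doy = rng.integers(95, 110, years)

candidates = np.arange(-6.0, 7.0, 0.5)
errors = [rmse_for_base(temps_by_year, observed_doy, b) for b in candidates]
best = candidates[int(np.argmin(errors))]
print(f"base temperature with smallest RMSE: {best:.1f} °C")
```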
Studies on Training Ground Observers to Estimate Range to Aerial Targets.
ERIC Educational Resources Information Center
McCluskey, Michael R.; And Others
Six pilot studies were conducted to determine the effects of training on range estimation performance for aerial targets, and to identify some of the relevant variables. Observers were trained to estimate ranges of 350, 400, 800, 1,500, or 2,500 meters. Several variations of range estimation training methods were used, including immediate…
Observing the Technological Pedagogical and Content Knowledge Levels of Science Teacher Candidates
ERIC Educational Resources Information Center
Keçeci, Gonca; Zengin, Fikriye Kirbag
2017-01-01
This study was planned to observe the technological pedagogical and content knowledge of teacher candidates. The study group consists of 4th grade students of Firat University Faculty of Education who were asked to describe any desired topic in the secondary school science curriculum, using the methods and techniques of their choosing. Teacher…
Design Fixation and Cooperative Learning in Elementary Engineering Design Project: A Case Study
ERIC Educational Resources Information Center
Luo, Yi
2015-01-01
This paper presents a case study examining 3rd, 4th and 5th graders' design fixation and cooperative learning in an engineering design project. A mixed methods instrument, the Cooperative Learning Observation Protocol (CLOP), was adapted to record frequency and class observation on cooperative learning engagement through detailed field notes.…
NASA Astrophysics Data System (ADS)
Lamouroux, Julien; Charria, Guillaume; De Mey, Pierre; Raynaud, Stéphane; Heyraud, Catherine; Craneguy, Philippe; Dumas, Franck; Le Hénaff, Matthieu
2016-04-01
In the Bay of Biscay and the English Channel, in situ observations represent a key element to monitor and to understand the wide range of processes in the coastal ocean and their direct impacts on human activities. An efficient way to measure the hydrological content of the water column over the main part of the continental shelf is to consider ships of opportunity as the surface to cover is wide and could be far from the coast. In the French observation strategy, the RECOPESCA programme, as a component of the High frequency Observation network for the environment in coastal SEAs (HOSEA), aims to collect environmental observations from sensors attached to fishing nets. In the present study, we assess that network using the Array Modes (ArM) method (a stochastic implementation of Le Hénaff et al. Ocean Dyn 59: 3-20. doi: 10.1007/s10236-008-0144-7, 2009). That model ensemble-based method is used here to compare model and observation errors and to quantitatively evaluate the performance of the observation network at detecting prior (model) uncertainties, based on hypotheses on error sources. A reference network, based on fishing vessel observations in 2008, is assessed using that method. Considering the various seasons, we show the efficiency of the network at detecting the main model uncertainties. Moreover, three scenarios, based on the reference network, a denser network in 2010 and a fictive network aggregated from a pluri-annual collection of profiles, are also analysed. Our sensitivity study shows the importance of the profile positions with respect to the sheer number of profiles for ensuring the ability of the network to describe the main error modes. More generally, we demonstrate the capacity of this method, with a low computational cost, to assess and to design new in situ observation networks.
ERIC Educational Resources Information Center
Ritz, Mariah; Noltemeyer, Amity; Davis, Darrel; Green, Jennifer
2014-01-01
This mixed methods study examined behavior management strategies used by preschool teachers to address student noncompliance in the classroom. Specifically, the study aimed to (1) examine the methods that preschool teachers are currently using to respond to noncompliant behavior in their classrooms, (2) measure the frequency with which each…
The Importance of Using Games in EFL Classrooms
ERIC Educational Resources Information Center
Gozcu, Emine; Caganaga, Cagda Kivanc
2016-01-01
This paper aims to find out how games are important and effective when used in EFL classrooms. Two different kinds of qualitative research methods: semi-structured interviews and observation were conducted in this study. Multi-method triangulation is used throughout the study. The data was carried out through audio-recorded interview and…
Technology Adoption in Secondary Mathematics Teaching in Kenya: An Explanatory Mixed Methods Study
ERIC Educational Resources Information Center
Kamau, Leonard Mwathi
2014-01-01
This study examined the factors related to technology adoption by secondary mathematics teachers in Nyandarua and Nairobi counties in the Republic of Kenya. Using a sequential explanatory mixed methods approach, I collected qualitative data from interviews and classroom observations of six teachers to better understand statistical results from the…
Perceptions of Prospective Teachers about School Principals: Prejudice or Real
ERIC Educational Resources Information Center
Tosun, Figen Çam
2018-01-01
The aim of this study is to reveal prospective teachers' thoughts and observations about school principals. In the study, the qualitative and quantitative research methods were used together. In quantitative research method, a questionnaire was developed and survey research was conducted with the help of this questionnaire. In the qualitative…
A study of pressure-based methodology for resonant flows in non-linear combustion instabilities
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.
1992-01-01
This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids using pressure-based methods for problems involving fast transient flows. The observation from the present study is that for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.
Identity method to study chemical fluctuations in relativistic heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazdzicki, Marek; Grebieszkow, Katarzyna; Mackowiak, Maja
Event-by-event fluctuations of the chemical composition of the hadronic final state of relativistic heavy-ion collisions carry valuable information on the properties of strongly interacting matter produced in the collisions. However, in experiments incomplete particle identification distorts the observed fluctuation signals. The effect is quantitatively studied and a new technique for measuring chemical fluctuations, the identity method, is proposed. The method fully eliminates the effect of incomplete particle identification. The application of the identity method to experimental data is explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Wu, C. F. Jeff
Many computer models contain unknown parameters that need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
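The paper's semiparametric machinery is not reproduced here, but the basic idea behind L2 calibration can be sketched: estimate the true process nonparametrically from the physical data and choose the calibration parameter that minimizes the L2 distance between the computer model and that estimate, rather than the distance to the noisy data themselves. All functions, parameter ranges and data below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def computer_model(x, theta):
    """Hypothetical imperfect simulator with one calibration parameter."""
    return np.sin(theta * x)

def true_process(x):
    """Hypothetical physical reality (unknown in practice)."""
    return np.sin(1.7 * x) + 0.15 * x

rng = np.random.default_rng(3)
x_obs = np.sort(rng.uniform(0, 3, 60))
y_obs = true_process(x_obs) + rng.normal(0, 0.1, x_obs.size)

def smooth(x_grid, x, y, bandwidth=0.25):
    """Nonparametric estimate of the true process (Gaussian kernel smoother)."""
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

x_grid = np.linspace(0, 3, 200)
y_hat = smooth(x_grid, x_obs, y_obs)

# L2 calibration: match the model to the smoothed estimate of reality.
l2 = minimize_scalar(lambda t: np.mean((computer_model(x_grid, t) - y_hat) ** 2),
                     bounds=(0.5, 3.0), method="bounded")
# Ordinary least squares: match the model to the noisy observations directly.
ols = minimize_scalar(lambda t: np.sum((computer_model(x_obs, t) - y_obs) ** 2),
                      bounds=(0.5, 3.0), method="bounded")
print(f"L2 calibration estimate: theta = {l2.x:.3f}")
print(f"Least-squares estimate:  theta = {ols.x:.3f}")
```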
NASA Astrophysics Data System (ADS)
Ulug, R.; Ozludemir, M. T.
2016-12-01
After 2011, through the modernization process of GLONASS, the number of satellites increased rapidly. This progress has made GLONASS the only fully operational alternative to GPS for point positioning. So far, many studies have been conducted to investigate the contribution of GLONASS to point positioning considering different methods such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP). The latter, PPP, is a method that performs precise position determination using a single GNSS receiver. The PPP method has become very attractive since the early 2000s and has provided great advantages for engineering and scientific applications. However, the PPP method needs at least 2 hours of observation time, and the required observation length may be longer depending on several factors, such as the number of satellites and the satellite configuration. The more satellites, the less observation time is needed. Nevertheless, the impact of the number of satellites included must be well understood. In this study, to determine the contribution of GLONASS to PPP, GLONASS satellite observations were added one by one (from 1 to 5 satellites) to the 2, 4 and 6 hour observation sessions. For this purpose, the data collected at the IGS site ISTA were used. Data processing was done for Day of Year (DOY) 197 in 2016. The 24-hour GPS observations were processed by the Bernese 5.2 PPP module and the output was selected as the reference, while the 2, 4 and 6 hour GPS and GPS/GLONASS observations were processed by the magicGNSS PPP module. The results clearly showed that GPS/GLONASS observations improved positional accuracy, precision, dilution of precision and convergence to the reference coordinates. In this context, coordinate differences between the 24-hour GPS observations and the 6-hour GPS/GLONASS observations were less than 2 cm.
Teleseismic Array Studies of Earth's Core-Mantle Boundary
NASA Astrophysics Data System (ADS)
Alexandrakis, Catherine
2011-12-01
The core-mantle boundary (CMB) is an inaccessible and complex region, knowledge of which is vital to our understanding of many Earth processes. Above it is the heterogeneous lower mantle. Below the boundary is the outer core, composed of liquid iron and/or nickel and some lighter elements. Elucidating how these two distinct layers interact may enable researchers to better understand the geodynamo, global tectonics, and overall Earth history. One parameter that can be used to study structure and limit potential chemical compositions is seismic-wave velocity. Current global velocity models have significant uncertainties in the 200 km above and below the CMB. In this thesis, these regions are studied using three methods. The upper outer core is studied using two seismic array methods. First, a modified vespa, or slant-stack, method is applied to seismic observations at broadband seismic arrays and at large, dense groups of broadband seismic stations dubbed 'virtual' arrays. Observations of core-refracted teleseismic waves, such as SmKS, are used to extract relative arrival times. As in previous studies, lower-mantle heterogeneities influence the extracted arrival times, giving significant scatter. To remove raypath effects, a new method was developed, called Empirical Transfer Functions (ETFs). When applied to SmKS waves, this method effectively isolates arrival-time perturbations caused by outer-core velocities. By removing raypath effects, the signals can be stacked, further reducing scatter. The results of this work were published as a new 1D outer-core model, called AE09. This model describes a well-mixed outer core. Two array methods are used to detect lower-mantle heterogeneities, in particular Ultra-Low Velocity Zones (ULVZs). The ETF method and beam forming are used to isolate a weak P-wave that diffracts along the CMB. While neither the ETF method nor beam forming could adequately image the low-amplitude phase, beam forms of two events indicate precursors to the SKS and SKKS phases, which may be ULVZ indicators. Finally, cross-correlated observed and modelled beams indicate a tendency towards a ULVZ-like lower mantle in the study region.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sund, Patrik, E-mail: patrik.sund@vgregion.se; Månsson, Lars Gunnar; Båth, Magnus
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical for a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically separated from the average value; three were lower and two were higher. For the compensated GSDF, three of the scores could not be separated from the average value. Conclusions: An observer study using clinically relevant displays and luminance settings has demonstrated that the calibration of displays according to the GSDF causes the perceived contrast to be unevenly distributed when using displays with a high luminance range. As the luminance range increases, the perceived contrast in the dark and bright regions will be significantly lower than the perceived contrast in the middle of the luminance range. A new calibration method that includes the effect of fixed adaptation was developed and evaluated in an observer study and was found to distribute the contrast of the display more evenly throughout the grayscale than the GSDF.
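The study links fractions correct in the two-alternative forced-choice task to a detectability index d′ and then to pattern contrast. One common conversion for 2AFC data, together with the linear fit against contrast, is sketched below; the contrasts and fractions correct are invented, not the study's measurements, and the paper may use a different conversion.

```python
import numpy as np
from scipy.stats import norm

def dprime_2afc(fraction_correct):
    """Detectability index for a two-alternative forced-choice task:
    d' = sqrt(2) * z(Pc), with Pc clipped away from 0.5 and 1.0."""
    pc = np.clip(np.asarray(fraction_correct, float), 0.51, 0.99)
    return np.sqrt(2.0) * norm.ppf(pc)

# Hypothetical responses at one luminance level: ten pattern contrasts.
contrast = np.linspace(0.002, 0.02, 10)
fraction_correct = np.array([0.52, 0.55, 0.60, 0.65, 0.70,
                             0.78, 0.83, 0.88, 0.92, 0.95])

d = dprime_2afc(fraction_correct)
slope, intercept = np.polyfit(contrast, d, 1)
# Contrast sensitivity can be summarized by the contrast giving d' = 1.
threshold = (1.0 - intercept) / slope
print(f"contrast at d' = 1: {threshold:.4f}")
```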
Diagnostic value of DIAGNOdent in detecting caries under composite restorations of primary molars
Sichani, Ava Vali; Javadinejad, Shahrzad; Ghafari, Roshanak
2016-01-01
Background: Direct observation cannot detect caries under restorations; therefore, the aim of this study was to compare the accuracy of radiographs and DIAGNOdent in detecting caries under restorations in primary teeth using histologic evaluation. Materials and Methods: A total of 74 previously extracted primary molars (37 with occlusal caries and 37 without caries) were used. Class 1 cavity preparations were made on each tooth by a single clinician and then the preparations were filled with composite resin. The accuracy of radiographs and DIAGNOdent in detecting caries was compared using histologic evaluation. The data were analyzed with SPSS version 21 using the Chi-square and McNemar tests and receiver operating characteristic curve analysis. Significance was set at 0.05. Results: The sensitivity and specificity for DIAGNOdent were 70.97 and 83.72, respectively. Few false negative results were observed, the positive predictive value was high (+PV = 75.9), and the area under the curve was more than 0.70, making DIAGNOdent a good method for detecting caries (P = 0.0001). Two observers evaluated the radiographs; both had low sensitivity (first observer: 48.39; second observer: 51.61) and high specificity (both observers: 79.07). The +PV was lower than that of DIAGNOdent, and the area under the curve for both observers was less than 0.70. However, the difference between the two methods was not significant. Conclusion: DIAGNOdent showed greater accuracy in detecting secondary caries under primary molar restorations compared to radiographs. Although DIAGNOdent is an effective method for detecting caries under composite restorations, it is best used as an adjunctive method alongside other detection procedures. PMID:27605990
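The reported accuracy figures follow from the standard 2x2 comparison against the histological reference. The sketch below computes sensitivity, specificity and positive predictive value from hypothetical counts chosen only to be roughly consistent with the percentages quoted above; they are not the study's raw data.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2
    table comparing a test against the histological reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts for a caries-detection device on 74 restored teeth:
# 22 true positives, 7 false positives, 9 false negatives, 36 true negatives.
sens, spec, ppv = diagnostic_accuracy(tp=22, fp=7, fn=9, tn=36)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, PPV {ppv:.1%}")
```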
Norris, Peter M; da Silva, Arlindo M
2016-07-01
A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
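The paper's gridcolumn model is not reproduced here, but a minimal random-walk Metropolis sampler illustrates the key point that MCMC needs no gradients and can therefore jump between a 'clear' and a 'cloudy' mode of a posterior. The toy bimodal target, variable names and step size below are invented for illustration only.

```python
import numpy as np

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D posterior.

    log_post -- function returning the log posterior density (up to a constant)
    x0       -- starting state (e.g., a subsaturated background moisture value)
    step     -- standard deviation of the Gaussian proposal
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio); no gradients needed,
        # so the chain can move into regions a linearized update cannot reach.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy bimodal "posterior" over a moisture-like variable: a clear mode and a
# cloudy mode (purely illustrative, not the paper's gridcolumn model).
def log_post(q):
    return np.logaddexp(-0.5 * ((q - 0.2) / 0.05) ** 2,
                        -0.5 * ((q - 0.9) / 0.05) ** 2)

samples = metropolis(log_post, x0=0.2, step=0.2)
print(f"fraction of samples in the 'cloudy' mode: {np.mean(samples > 0.55):.2f}")
```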
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-01-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007. PMID:21776223
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
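The BME framework in the preceding abstract combines a land-use regression trend with the spatial dependence of the residuals. A simplified two-stage sketch of that idea (regression trend plus ordinary kriging of the residuals, with no temporal component and no soft data) might look as follows; pykrige as a dependency and the covariate layout are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging   # assumed dependency for the residual kriging step

def lur_plus_kriging(X_lu, xy, pm25, X_lu_new, xy_new):
    """Two-stage estimate: land-use regression trend + kriged residuals.

    X_lu    : (n, p) land-use covariates at monitoring sites (e.g. road length, traffic)
    xy      : (n, 2) site coordinates
    pm25    : (n,)  observed PM2.5 at the sites
    *_new   : the same quantities at prediction locations
    """
    trend = LinearRegression().fit(X_lu, pm25)            # empirical spatial trend from covariates
    resid = pm25 - trend.predict(X_lu)                    # site-specific part the trend cannot explain
    ok = OrdinaryKriging(xy[:, 0], xy[:, 1], resid,
                         variogram_model="exponential")   # spatial dependence of the residuals
    resid_new, _ = ok.execute("points", xy_new[:, 0], xy_new[:, 1])
    return trend.predict(X_lu_new) + resid_new            # trend + interpolated residual
```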
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle/particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Ω = 1 CDM models with linear amplitude σ_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
Ways of learning: Observational studies versus experiments
Shaffer, T.L.; Johnson, D.H.
2008-01-01
Manipulative experimentation that features random assignment of treatments, replication, and controls is an effective way to determine causal relationships. Wildlife ecologists, however, often must take a more passive approach to investigating causality. Their observational studies lack one or more of the 3 cornerstones of experimentation: controls, randomization, and replication. Although an observational study can be analyzed similarly to an experiment, one is less certain that the presumed treatment actually caused the observed response. Because the investigator does not actively manipulate the system, the chance that something other than the treatment caused the observed results is increased. We reviewed observational studies and contrasted them with experiments and, to a lesser extent, sample surveys. We identified features that distinguish each method of learning and illustrate or discuss some complications that may arise when analyzing results of observational studies. Findings from observational studies are prone to bias. Investigators can reduce the chance of reaching erroneous conclusions by formulating a priori hypotheses that can be pursued multiple ways and by evaluating the sensitivity of study conclusions to biases of various magnitudes. In the end, however, professional judgment that considers all available evidence is necessary to render a decision regarding causality based on observational studies.
Controller design approach based on linear programming.
Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa
2013-11-01
This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.
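Only the state-feedback part of the design above is standard enough to sketch generically; the LP-based observer-gain selection is not reproduced here. A minimal LQR gain computation with SciPy, using an illustrative second-order plant (not the paper's DC motor model), could look like this.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR feedback gain K for u = -K x."""
    P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)        # K = R^{-1} B^T P

# Illustrative second-order plant.
A = np.array([[0.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
print(K)                                       # state-feedback gains
print(np.linalg.eigvals(A - B @ K))            # closed-loop poles should lie in the left half-plane
```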
Hong-Seng, Gan; Sayuti, Khairil Amir; Karim, Ahmad Helmy Abdul
2017-01-01
Existing knee cartilage segmentation methods have several reported technical drawbacks. In essence, graph cuts remains highly susceptible to image noise despite extended research interest; the active shape model is often constrained by the selection of training data; and shortest-path approaches demonstrate a shortcut problem in the presence of weak boundaries, which are a common problem in medical images. The aim of this study is to investigate the capability of random walks as a knee cartilage segmentation method. Experts scribble on the knee cartilage image to initialize random walks segmentation. Then, the reproducibility of the method is assessed against manual segmentation using the Dice Similarity Index. The evaluation covers normal cartilage and diseased cartilage sections, divided into whole and single cartilage categories. A total of 15 normal images and 10 osteoarthritic images were included. The results showed that the random walks method demonstrated high reproducibility in both normal cartilage (observer 1: 0.83±0.028 and observer 2: 0.82±0.026) and osteoarthritic cartilage (observer 1: 0.80±0.069 and observer 2: 0.83±0.029). In addition, results from both experts were consistent with each other, suggesting that the inter-observer variation is insignificant (normal: P=0.21; diseased: P=0.15). The proposed segmentation model overcomes technical problems reported for existing semi-automated techniques and demonstrates highly reproducible and consistent results against the manual segmentation method.
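A small sketch of the evaluation pipeline described above, using scikit-image's random-walker implementation together with a Dice Similarity Index helper; the image, the seed scribbles, and the "manual" mask are synthetic stand-ins, not the study's MR data.

```python
import numpy as np
from skimage.segmentation import random_walker   # scikit-image's random walks segmentation

def dice(a, b):
    """Dice Similarity Index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Expert "scribbles": 1 = cartilage seed, 2 = background seed, 0 = unlabeled.
image = np.random.rand(128, 128)                 # stand-in for a knee MR slice
seeds = np.zeros_like(image, dtype=np.uint8)
seeds[60:64, 60:64] = 1
seeds[:5, :] = 2

labels = random_walker(image, seeds, beta=130)   # probabilistic label propagation from the seeds
auto_mask = labels == 1

manual_mask = np.zeros_like(auto_mask)           # stand-in for the manual segmentation
manual_mask[55:70, 55:70] = True
print(f"Dice = {dice(auto_mask, manual_mask):.2f}")
```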
ERIC Educational Resources Information Center
Artzt, Alice F.; Armour-Thomas, Eleanor
This activity-oriented book for preservice mathematics teachers who are taking methods courses or who have been student teaching offers a framework for teacher reflection and self-assessment. It supplies detailed observation instruments for observing other teachers, reflective activities, and guidelines and instruments for supervisors. There are…
ERIC Educational Resources Information Center
Foster, Tanina S.
2014-01-01
Introduction: Observational research using the thin slice technique has been routinely incorporated in observational research methods; however, there is limited evidence supporting use of this technique compared to full interaction coding. The purpose of this study was to determine if this technique could be reliably coded, if ratings are…
ERIC Educational Resources Information Center
McIver, Kerry L.; Brown, William H.; Pfeiffer, Karin A.; Dowda, Marsha; Pate, Russell R.
2016-01-01
Purpose: This study describes the development and pilot testing of the Observational System for Recording Physical Activity-Elementary School (OSRAC-E) Version. Method: This system was developed to observe and document the levels and types of physical activity and physical and social contexts of physical activity in elementary school students…
Peer Observation: A Key Factor to Improve Iranian EFL Teachers' Professional Development
ERIC Educational Resources Information Center
Motallebzadeh, Khalil; Hosseinnia, Mansooreh; Domskey, Javad G. H.
2017-01-01
This study reports on the perspectives of a group of Iranian EFL teachers about peer observation effects. The aim was to investigate if peer observation as a reflective tool could significantly affect EFL teachers' professional development. It has been done based on a mixed method approach. The participants have stated their viewpoints on the…
Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas
2016-09-01
An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlation. This paper is devoted to the development of methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and estimate both error variances and the correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that the spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods, such as the global HL, ML, or [Formula: see text] methods, should be used. We found, in both the 1D simulations and with real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
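A schematic version of the local Hollingsworth-Lönnberg fit mentioned above, assuming an exponential (rather than compact-support) correlation model and time series of innovations at fixed stations; the binning choices, units, and initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def hl_fit(xy, innov, bin_width=50.0):
    """Hollingsworth-Lönnberg-style estimate from innovation statistics.

    xy    : (n_sites, 2) station coordinates (same units as bin_width, e.g. km)
    innov : (n_times, n_sites) observation-minus-background time series

    Fits c(d) = sigma_b^2 * exp(-d / L) to binned covariances at non-zero
    separation; the gap between the zero-separation innovation variance and
    the fitted intercept sigma_b^2 is attributed to observation error.
    """
    C = np.cov(innov, rowvar=False)                               # station-pair covariances
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
    iu = np.triu_indices_from(d, k=1)
    bins = np.arange(bin_width, d[iu].max() + bin_width, bin_width)
    which = np.digitize(d[iu], bins)
    d_bin = np.array([d[iu][which == i].mean() for i in np.unique(which)])
    c_bin = np.array([C[iu][which == i].mean() for i in np.unique(which)])
    (sigma_b2, L), _ = curve_fit(lambda x, a, l: a * np.exp(-x / l),
                                 d_bin, c_bin, p0=[max(c_bin[0], 1e-6), 100.0])
    sigma_o2 = np.mean(np.diag(C)) - sigma_b2                     # extrapolation gap at d = 0
    return sigma_o2, sigma_b2, L
```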
Reliability and Validity of Observational Risk Screening in Evaluating Dynamic Knee Valgus
Ekegren, Christina L.; Miller, William C.; Celebrini, Richard G.; Eng, Janice J.; MacIntyre, Donna L.
2012-01-01
Study Design Nonexperimental methodological study. Objectives To determine the interrater and intrarater reliability and validity of using observational risk screening guidelines to evaluate dynamic knee valgus. Background A deficiency in the neuromuscular control of the hip has been identified as a key risk factor for non-contact anterior cruciate ligament (ACL) injury in postpubescent females. This deficiency can manifest itself as a valgus knee alignment during tasks involving hip and knee flexion. There are currently no scientifically tested methods to screen for dynamic knee valgus in the clinic or on the field. Methods Three physiotherapists used observational risk screening guidelines to rate 40 adolescent female soccer players according to their risk of ACL injury. The rating was based on the amount of dynamic knee valgus observed on a drop jump landing. Ratings were evaluated for intrarater and interrater agreement using kappa coefficients. Sensitivity and specificity of ratings were evaluated by comparing observational ratings with measurements obtained using 3-dimensional (3D) motion analysis. Results Kappa coefficients for intrarater and interrater agreement ranged from 0.75 to 0.85, indicating that ratings were reasonably consistent over time and between physiotherapists. Sensitivity values were inadequate, ranging from 67% to 87%; raters therefore failed to detect up to a third of “truly high risk” individuals. Specificity values ranged from 60% to 72%, which was considered adequate for the purposes of the screen. Conclusion Observational risk screening is a practical and cost-effective method of screening for ACL injury risk. Rater agreement and specificity were acceptable for this method but sensitivity was not. To detect a greater proportion of individuals at risk of ACL injury, coaches and clinicians should ensure that they include additional tests for other high risk characteristics in their screening protocols. PMID:19721212
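The agreement and validity statistics reported above can be reproduced generically; a short sketch with scikit-learn follows, in which the ratings and the 3D "truth" labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Illustrative ratings: 1 = "high risk" (valgus observed), 0 = "low risk".
rater1_day1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
rater1_day2 = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
truth_3d    = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])   # stand-in 3D motion-analysis classification

print("intrarater kappa:", cohen_kappa_score(rater1_day1, rater1_day2))

tn, fp, fn, tp = confusion_matrix(truth_3d, rater1_day1).ravel()
print("sensitivity:", tp / (tp + fn))   # proportion of truly high-risk athletes flagged
print("specificity:", tn / (tn + fp))   # proportion of low-risk athletes correctly passed
```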
Kroeker, Kristine; Widdifield, Jessica; Muthukumarana, Saman; Jiang, Depeng; Lix, Lisa M
2017-01-01
Objective This research proposes a model-based method to facilitate the selection of disease case definitions from validation studies for administrative health data. The method is demonstrated for a rheumatoid arthritis (RA) validation study. Study design and setting Data were from 148 definitions to ascertain cases of RA in hospital, physician and prescription medication administrative data. We considered: (A) separate univariate models for sensitivity and specificity, (B) univariate model for Youden’s summary index and (C) bivariate (ie, joint) mixed-effects model for sensitivity and specificity. Model covariates included the number of diagnoses in physician, hospital and emergency department records, physician diagnosis observation time, duration of time between physician diagnoses and number of RA-related prescription medication records. Results The most common case definition attributes were: 1+ hospital diagnosis (65%), 2+ physician diagnoses (43%), 1+ specialist physician diagnosis (51%) and 2+ years of physician diagnosis observation time (27%). Statistically significant improvements in sensitivity and/or specificity for separate univariate models were associated with (all p values <0.01): 2+ and 3+ physician diagnoses, unlimited physician diagnosis observation time, 1+ specialist physician diagnosis and 1+ RA-related prescription medication records (65+ years only). The bivariate model produced similar results. Youden’s index was associated with these same case definition criteria, except for the length of the physician diagnosis observation time. Conclusion A model-based method provides valuable empirical evidence to aid in selecting a definition(s) for ascertaining diagnosed disease cases from administrative health data. The choice between univariate and bivariate models depends on the goals of the validation study and number of case definitions. PMID:28645978
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Cortes Arevalo, Juliette; Alfonso, Leonardo; Wehn, Uta; Norbiato, Daniele; Monego, Martina; Ferri, Michele; Solomatine, Dimitri
2017-04-01
In the past years, a number of methods have been proposed to reduce uncertainty in flood prediction by means of model updating techniques. Traditional physical observations are usually integrated into hydrological and hydraulic models to improve model performance and the consequent flood predictions. Nowadays, low-cost sensors can be used for crowdsourced observations. Different types of social sensors can measure, in a more distributed way, physical variables such as precipitation and water level. However, these crowdsourced observations are not integrated in real time into water-system models because of their varying accuracy and random spatial-temporal coverage. We assess the effect on model performance of assimilating crowdsourced observations of water level. Our method consists of (1) implementing a Kalman filter in a cascade of hydrological and hydraulic models; (2) defining observation errors depending on the type of sensor, either physical or social, where randomly distributed errors are based on accuracy ranges that slightly improve with the citizens' expertise level; and (3) using a simplified social model to realistically represent citizen engagement levels based on population density and citizens' motivation scenarios. To test our method, we synthetically derive crowdsourced observations for different citizen engagement levels from a distributed network of physical and social sensors. The observations are assimilated during a particular flood event that occurred in the Bacchiglione catchment, Italy. The results of this study demonstrate that sharing crowdsourced water level observations (often motivated by a feeling of belonging to a community of friends) can help to improve flood prediction. On the other hand, a growing participation of individual citizens or weather enthusiasts sharing hydrological observations in cities can help to improve model performance. This study is a first step in assessing the effects of crowdsourced observations on flood model predictions. Effective communication and feedback about the quality of observations from water authorities to engaged citizens are further required to minimize their intrinsically low and variable accuracy.
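A minimal sketch of steps (1) and (2) of the method described above: a scalar Kalman filter update in which the observation error variance depends on the sensor type. The error standard deviations per sensor class are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Observation error depends on who measured: static gauge vs. citizen staff-gauge reading.
SENSOR_SD = {"static": 0.02, "citizen_expert": 0.10, "citizen_novice": 0.30}  # metres, illustrative

x = np.array([1.5])              # background water level (m)
P = np.array([[0.25]])
for level, sensor in [(1.8, "static"), (2.1, "citizen_novice")]:
    R = np.array([[SENSOR_SD[sensor] ** 2]])   # less trusted sensors pull the state less
    x, P = kf_update(x, P, np.array([level]), H=np.eye(1), R=R)
    print(sensor, x, P)
```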
Applicability of optical scanner method for fine root dynamics
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Ohashi, Mizue; Makita, Naoki; Khoon Kho, Lip; Katayama, Ayumi; Matsumoto, Kazuho; Ikeno, Hidetoshi
2016-04-01
Fine root dynamics is one of the important components of forest carbon cycling, as ~60% of tree photosynthetic production can be allocated to root growth and metabolic activities. Various techniques have been developed for monitoring fine root biomass, production and mortality in order to understand the carbon pools and fluxes resulting from fine root dynamics. The minirhizotron method is now a widely used technique, in which a transparent tube is inserted into the soil and researchers count the increase and decrease of roots along the tube using images taken by a minirhizotron camera or minirhizotron video camera inside the tube. This method allows us to observe root behavior directly without destruction, but has several weaknesses, e.g., the difficulty of scaling up the results to stand level because of the small observation windows. Also, most of the image analysis is performed manually, which may yield insufficiently quantitative and objective data. Recently, a scanner method has been proposed, which can produce much larger images (A4 size) at lower cost than the minirhizotron methods. However, laborious and time-consuming image analysis still limits the applicability of this method. In this study, therefore, we aimed to develop a new protocol for scanner image analysis to extract root behavior in soil. We evaluated the applicability of this method in two ways: (1) the impact of different observers, including root-study professionals, semi-professionals and non-professionals, on the detected results of root dynamics such as abundance, growth, and decomposition; and (2) the impact of window size on the results using a random-sampling-based exercise. We applied our new protocol to analyze temporal changes of root behavior from sequential scanner images derived from a Bornean tropical forest. The results detected by the six observers showed considerable concordance in the temporal changes in the abundance and the growth of fine roots, but less in the decomposition. We also examined potential errors due to window size in the temporal changes in abundance and growth using the detected results, suggesting high applicability of the scanner method with wide observation windows.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age 23.7 years) and older (mean age 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where the knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
Causal inference with missing exposure information: Methods and applications to an obstetric study.
Zhang, Zhiwei; Liu, Wei; Zhang, Bo; Tang, Li; Zhang, Jun
2016-10-01
Causal inference in observational studies is frequently challenged by the occurrence of missing data, in addition to confounding. Motivated by the Consortium on Safe Labor, a large observational study of obstetric labor practice and birth outcomes, this article focuses on the problem of missing exposure information in a causal analysis of observational data. This problem can be approached from different angles (i.e. missing covariates and causal inference), and useful methods can be obtained by drawing upon the available techniques and insights in both areas. In this article, we describe and compare a collection of methods based on different modeling assumptions, under standard assumptions for missing data (i.e. missing-at-random and positivity) and for causal inference with complete data (i.e. no unmeasured confounding and another positivity assumption). These methods involve three models: one for treatment assignment, one for the dependence of outcome on treatment and covariates, and one for the missing data mechanism. In general, consistent estimation of causal quantities requires correct specification of at least two of the three models, although there may be some flexibility as to which two models need to be correct. Such flexibility is afforded by doubly robust estimators adapted from the missing covariates literature and the literature on causal inference with complete data, and by a newly developed triply robust estimator that is consistent if any two of the three models are correct. The methods are applied to the Consortium on Safe Labor data and compared in a simulation study mimicking the Consortium on Safe Labor. © The Author(s) 2013.
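The doubly robust building block referred to above (for the complete-exposure case) can be sketched as an augmented IPW estimator of the average treatment effect; the triply robust extension with the missing-data model is not reproduced here, and the model choices below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, treat, y):
    """Augmented IPW (doubly robust) estimate of the average treatment effect.

    Consistent if either the propensity model or the outcome models are
    correctly specified (standard no-unmeasured-confounding and positivity
    assumptions; complete exposure information assumed here).
    """
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]  # treatment model
    m1 = LinearRegression().fit(X[treat == 1], y[treat == 1]).predict(X)          # outcome model, treated
    m0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)          # outcome model, control
    mu1 = np.mean(treat * (y - m1) / ps + m1)
    mu0 = np.mean((1 - treat) * (y - m0) / (1 - ps) + m0)
    return mu1 - mu0
```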
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
In IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively that the fault is not capable by using the deterministic methodology. In addition, International Seismic Safety Centre compiles as ANNEX to realize seismic hazard for nuclear facilities described in SSG-9 and shows the utility of the deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities should be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these situations, based on requirements, we need develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records such as surface fault displacement and near-fault strong ground motions of inland crustal earthquake which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. After that, we show parts of tentative analysis results for deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain observed near-fault broad band strong ground motions. (2) Referring a characterized source model constructed in (1), we study an evaluation method for surface fault displacement using hybrid method, which combines particle method and distinct element method. At last, we suggest one of the deterministic method to evaluate fault displacement based on characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of Nuclear Regulation Authority (S/NRA), Japan.
Power-law behaviour evaluation from foreign exchange market data using a wavelet transform method
NASA Astrophysics Data System (ADS)
Wei, H. L.; Billings, S. A.
2009-09-01
Numerous studies in the literature have shown that the dynamics of many time series, including observations in foreign exchange markets, exhibit scaling behaviours. A simple new statistical approach, derived from the concept of the continuous wavelet transform correlation function (WTCF), is proposed for the evaluation of power-law properties from observed data. The new method reveals that foreign exchange rates obey power laws and thus belong to the class of self-similar processes.
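The WTCF statistic itself is not reproduced here, but a generic CWT-based scaling estimate conveys the idea: wavelet variance is computed across scales and a power-law exponent is read off a log-log slope. PyWavelets is an assumed dependency, the toy series is a random walk, and how the slope maps onto a specific scaling exponent depends on the process model.

```python
import numpy as np
import pywt   # assumed dependency: PyWavelets

def wavelet_scaling_exponent(x, scales=None, wavelet="morl"):
    """Slope of log wavelet variance vs. log scale, a simple power-law diagnostic."""
    scales = np.arange(2, 64) if scales is None else scales
    coef, _ = pywt.cwt(x, scales, wavelet)            # continuous wavelet transform
    var = np.mean(np.abs(coef) ** 2, axis=1)          # wavelet variance at each scale
    slope, _ = np.polyfit(np.log(scales), np.log(var), 1)
    return slope

# Toy check on a random walk (an approximately self-similar process).
increments = np.random.default_rng(0).standard_normal(4096)
print(wavelet_scaling_exponent(np.cumsum(increments)))
```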
Assimilating uncertain, dynamic and intermittent streamflow observations in hydrological models
NASA Astrophysics Data System (ADS)
Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon-Hurtado, Juan; Solomatine, Dimitri
2015-09-01
Catastrophic floods cause significant socio-economic losses. Non-structural measures, such as real-time flood forecasting, can potentially reduce flood risk. To this end, data assimilation methods have been used to improve flood forecasts by integrating static ground observations, and in some cases also remote sensing observations, within water models. Current hydrologic and hydraulic research works consider assimilation of observations coming from traditional, static sensors. At the same time, low-cost mobile sensors and mobile communication devices are becoming increasingly available. The main goal and innovation of this study is to demonstrate the usefulness of assimilating uncertain streamflow observations that are dynamic in space and intermittent in time in the context of two different semi-distributed hydrological model structures. The developed method is applied to the Brue basin, where the dynamic observations are imitated by synthetic observations of discharge. The results of this study show how model structures and sensor locations affect the assimilation of streamflow observations in different ways. In addition, it proves how assimilation of such uncertain observations from dynamic sensors can provide model improvements similar to those of streamflow observations coming from a non-optimal network of static physical sensors. This can be a potential application of recent efforts to build citizen observatories of water, which can make citizens an active part of information capturing, evaluation and communication, simultaneously helping to improve model-based flood forecasting.
Study on biofilm-forming properties of clinical isolates of Staphylococcus aureus.
Taj, Yasmeen; Essa, Farhan; Aziz, Faisal; Kazmi, Shahana Urooj
2012-05-14
The purpose of this study was to observe the formation of biofilm, an important virulence factor, by isolates of Staphylococcus aureus (S. aureus) in Pakistan by different conventional methods and through electron microscopy. We screened 115 strains of S. aureus isolated from different clinical specimens by tube method (TM), air-liquid interface coverslip assay method, Congo red agar (CRA) method, and scanning electron microscopy (SEM). Out of 115 S. aureus isolates, 63 (54.78%) showed biofilm formation by tube method. Biofilm forming bacteria were further categorized as high producers (n = 23, 20%) and moderate producers (n = 40, 34.78%). TM coordinated well with the coverslip assay for strong biofilm-producing strains in 19 (16.5%) isolates. By coverslip method, weak producers were difficult to differentiate from biofilm negative isolates. Screening on CRA showed biofilm formation only in four (3.47%) strains. Scanning electron micrographs showed the biofilm-forming strains of S. aureus arranged in a matrix on the propylene surface and correlated well with the TM. Biofilm production is a marker of virulence for clinically relevant staphylococcal infections. It can be studied by various methods but screening on CRA is not recommended for investigation of biofilm formation in Staphylococcus aureus. Electron micrograph images correlate well with the biofilm production as observed by TM.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers from serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
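Only the basis-representation step of the method above is sketched here, as least-squares B-spline smoothing of one noisy curve; the Gaussian-Wishart prior and the MCMC posterior inference are not reproduced. The grid, basis size and noise level are illustrative.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def smooth_functional_obs(grid, y_noisy, n_basis=15, k=3):
    """Least-squares B-spline smoothing of one noisy functional observation.

    Returns the smoothed curve on the grid and the low-dimensional vector of
    basis-function coefficients that the Bayesian model would work with.
    """
    interior = np.linspace(grid[0], grid[-1], n_basis - k + 1)[1:-1]
    knots = np.r_[[grid[0]] * (k + 1), interior, [grid[-1]] * (k + 1)]   # clamped knot vector
    spline = make_lsq_spline(grid, y_noisy, knots, k)
    return spline(grid), spline.c

grid = np.linspace(0, 1, 500)                   # high-dimensional observation grid
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * grid) + 0.2 * rng.standard_normal(grid.size)
y_hat, coefs = smooth_functional_obs(grid, y)
print(coefs.shape)                              # low-dimensional basis representation
```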
Accelerating assimilation development for new observing systems using EFSO
NASA Astrophysics Data System (ADS)
Lien, Guo-Yuan; Hotta, Daisuke; Kalnay, Eugenia; Miyoshi, Takemasa; Chen, Tse-Chun
2018-03-01
To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies, assimilating only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time and resource consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO conditionally sampled in terms of various factors is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics, and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
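The conditional-sampling step of the proposed methodology reduces, in the simplest case, to averaging per-observation EFSO impacts within candidate classes and keeping the beneficial ones for later verification in cycled OSEs. A toy pandas sketch follows; the factor names, column names and impact values are invented for illustration.

```python
import pandas as pd

# Illustrative per-observation EFSO table; negative impact = forecast error reduction.
efso = pd.DataFrame({
    "channel":  ["ch1", "ch1", "ch2", "ch2", "ch3", "ch3"],
    "surface":  ["land", "ocean", "land", "ocean", "land", "ocean"],
    "impact":   [-0.8, -0.3, 0.5, -0.1, 1.2, 0.9],
})

# Average EFSO conditionally sampled by factor (step 2 of the abstract).
summary = efso.groupby(["channel", "surface"])["impact"].mean()
print(summary)

# Candidate selection rule (step 3): keep only classes that are beneficial on average,
# to be confirmed afterwards in cycled OSEs.
keep = summary[summary < 0].index.tolist()
print("candidate data-selection classes:", keep)
```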
The Use of Eclectic Method in Teaching Turkish to Foreign Students
ERIC Educational Resources Information Center
Iscan, Adem
2017-01-01
Although there are debates in the post-method era, the traditional methods used in teaching foreign languages continue to be used persistently in Turkey. A review of the literature shows that there is a method problem in the teaching of Turkish as a foreign language, and only a limited number of studies have been conducted on the…
Antifungal susceptibility testing of Malassezia yeast: comparison of two different methodologies.
Rojas, Florencia D; Córdoba, Susana B; de Los Ángeles Sosa, María; Zalazar, Laura C; Fernández, Mariana S; Cattana, María E; Alegre, Liliana R; Carrillo-Muñoz, Alfonso J; Giusiano, Gustavo E
2017-02-01
All Malassezia species are lipophilic; thus, modifications are required in susceptibility testing methods to ensure their growth. Antifungal susceptibility of Malassezia species has been studied using agar and broth dilution methods. Currently, few tests using disc diffusion methods are being performed. The aim was to evaluate the in vitro susceptibility of Malassezia yeast against antifungal agents using broth microdilution and disc diffusion methods, and then to compare both methodologies. Fifty Malassezia isolates were studied. The microdilution method was performed as described in the reference document, and the agar diffusion test was performed using antifungal tablets and discs. To support growth, culture media were supplemented. To correlate the methods, linear regression analysis and categorical agreement were determined. The strongest linear association was observed for fluconazole and miconazole. The highest agreement between both methods was observed for itraconazole and voriconazole and the lowest for amphotericin B and fluconazole. Although the modifications made to the disc diffusion method allowed susceptibility data to be obtained for Malassezia yeast, the variables cannot be associated through a linear correlation model, indicating that inhibition zone values cannot predict the MIC value. According to the results, the disc diffusion assay may not represent an alternative for determining the antifungal susceptibility of Malassezia yeast. © 2016 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Picot, Joris; Glockner, Stéphane
2018-07-01
We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.
Measuring acetabular component position on lateral radiographs - ischio-lateral method.
Pulos, Nicholas; Tiberi Iii, John V; Schmalzried, Thomas P
2011-01-01
The standard method for the evaluation of arthritis and postoperative assessment of arthroplasty treatment is observation and measurement from plain films, using the film edge for orientation. A more recent use of an anatomical landmark, the ischial tuberosity, has come into practice as the orientation for evaluation and is called the ischio-lateral method. In this study, the use of this method was evaluated as a first report in the literature on acetabular component measurement using a skeletal reference with lateral radiographs. Postoperative radiographs of 52 hips, with at least three true lateral radiographs taken at different time periods, were analyzed. Component position was measured with the historical method (using the film edge for orientation) and with the new ischio-lateral method. The mean standard deviation (SD) for the historical approach was 3.7° and for the ischio-lateral method, 2.2° (p < 0.001). With the historical method, 19 (36.5%) hips had a SD greater than ± 4°, compared to six hips (11.5%) with the ischio-lateral method. By using a skeletal reference, the ischio-lateral method provides a more consistent measurement of acetabular component position. The high intra-class correlation coefficients for both intra- and inter-observer reliability indicate that the angle measured with this simple method, which requires no further technology, increased time, or cost, is consistent and reproducible for multiple observers.
A comprehensive method for GNSS data quality determination to improve ionospheric data analysis.
Kim, Minchan; Seo, Jiwon; Lee, Jiyun
2014-08-14
Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies by providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor quality GNSS observation data is also increasing and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed especially to compute key GNSS data quality parameters which affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely, depending on each station, and the data quality of individual stations persists for an extended time period. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis.
A Comprehensive Method for GNSS Data Quality Determination to Improve Ionospheric Data Analysis
Kim, Minchan; Seo, Jiwon; Lee, Jiyun
2014-01-01
Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies by providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor quality GNSS observation data is also increasing and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed especially to compute key GNSS data quality parameters which affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely, depending on each station, and the data quality of individual stations persists for an extended time period. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis. PMID:25196005
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Physics of the Solar Active Regions from Radio Observations
NASA Astrophysics Data System (ADS)
Gelfreikh, G. B.
1999-12-01
Localized increases of the magnetic field observed by routine methods on the photosphere result in the growth of a number of active processes in the solar atmosphere and the heliosphere. These localized regions of increased magnetic field are called active regions (AR). The main processes of transfer, accumulation and release of energy in an AR are, however, outside the scope of photospheric observations, being essentially 3D processes happening either under the photosphere or up in the corona. So, to investigate these plasma structures and processes we are bound to use either extrapolation of optical observational methods or observations in EUV, X-rays and radio. In this review, we stress and illustrate the input to the problem gained from radio astronomical methods and discuss possible future development of their applications. Historically speaking, each new step in developing radio observational techniques has resulted in detecting some new physics of ARs. The most significant progress in the last few years in radio diagnostics of the plasma structures of the magnetospheres of solar ARs is connected with the development of 2D full-disk analysis on a regular basis at Nobeyama and with the detailed multichannel spectral-polarization (but one-dimensional and once per day) solar observations at the RATAN-600. In this report, the bulk of the attention is paid to the new approach to the study of solar activity gained with the Nobeyama radioheliograph, and the ways for future progress are analyzed. The most important new features of the multicomponent radio sources of ARs studied using the Nobeyama radioheliograph are as follows: 1. The analysis of magnetic field structures in the solar corona above sunspots with fields of around 2000 G, and their temporal evolution and fluctuations with periods around 3 and 5 minutes due to MHD waves in sunspot magnetic tubes and the surrounding plasma. These investigations are based on an analysis of the thermal cyclotron emission of the lower corona and CCTR above the sunspot umbra. 2. Magnetography of the solar active regions presenting weak magnetic fields (with a sensitivity of several G), reflecting the longitudinal component of the magnetic field in the chromosphere and corona and the solar faculae structure. The method is based on an analysis of weak polarization (of the order of 1% or less). 3. An analysis of the structure, temperature, and density of arches seen above neutral magnetic field lines (seen in most ARs with spots and without them). 4. Study of the temporal and spatial behavior of the inversion of the sign of the circular polarization, with the result of magnetography of the solar corona. 5. An analysis of solar activity at high heliographic latitudes, observed mostly as polar faculae (increased-brightness structures having counterparts in optical white-light observations). In the modern study of solar activity, analysis of the activity of the polar zones is of principal importance, and Nobeyama probably presents the most reliable way to study this. The above points are not fully completed results but rather directions for future studies. These should use full time coverage of observations at different phases of solar activity and combination with other radio, optical, EUV and X-ray observations whenever possible.
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
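The closed-form theoretical CHO statistic derived in the paper is not reproduced here; the following sketch shows the standard sample-based (Monte Carlo) form of the channelized Hotelling observer SNR, with the channel matrix left as an input supplied by the user.

```python
import numpy as np

def cho_snr(imgs_signal, imgs_noise, U):
    """Channelized Hotelling observer detectability from sample images.

    imgs_* : (n_images, n_pixels) reconstructed images with/without the lesion
    U      : (n_pixels, n_channels) channel matrix (e.g. difference-of-Gaussian channels)
    """
    v1 = imgs_signal @ U                       # channel outputs, signal-present
    v0 = imgs_noise @ U                        # channel outputs, signal-absent
    dv = v1.mean(axis=0) - v0.mean(axis=0)     # mean channel-output difference
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))  # average intra-class covariance
    w = np.linalg.solve(S, dv)                 # Hotelling template in channel space
    return np.sqrt(dv @ w)                     # observer SNR

# The paper obtains the mean and covariance of MAP reconstructions from a theoretical
# approximation; here they would simply be estimated from Monte Carlo sample images.
```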
Vaidya, Jueeli D.; van den Bogert, Bartholomeus; Edwards, Joan E.; Boekhorst, Jos; van Gastelen, Sanne; Saccenti, Edoardo; Plugge, Caroline M.; Smidt, Hauke
2018-01-01
DNA based methods have been widely used to study the complexity of the rumen microbiota, and it is well known that the method of DNA extraction is a critical step in enabling accurate assessment of this complexity. Rumen fluid (RF) and fibrous content (FC) fractions differ substantially in terms of their physical nature and associated microorganisms. The aim of this study was therefore to assess the effect of four DNA extraction methods (RBB, PBB, FDSS, PQIAmini) differing in cell lysis and/or DNA recovery methods on the observed microbial diversity in RF and FC fractions using samples from four rumen cannulated dairy cows fed 100% grass silage (GS100), 67% GS and 33% maize silage (GS67MS33), 33% GS and 67% MS (GS33MS67), or 100% MS (MS100). An ANOVA statistical test was applied on DNA quality and yield measurements, and it was found that the DNA yield was significantly affected by extraction method (p < 0.001) and fraction (p < 0.001). The 260/280 ratio was not affected by extraction (p = 0.08) but was affected by fraction (p = 0.03). On the other hand, the 260/230 ratio was affected by extraction method (p < 0.001) but not affected by fraction (p = 0.8). However, all four extraction procedures yielded DNA suitable for further analysis of bacterial, archaeal and anaerobic fungal communities using quantitative PCR and pyrosequencing of relevant taxonomic markers. Redundancy analysis (RDA) of bacterial 16S rRNA gene sequence data at the family level showed that there was a significant effect of rumen fraction (p = 0.012), and that PBB (p = 0.012) and FDSS (p = 0.024) also significantly contributed to explaining the observed variation in bacterial community composition. Whilst the DNA extraction method affected the apparent bacterial community composition, no single extraction method could be concluded to be ineffective. No obvious effect of DNA extraction method on the anaerobic fungi or archaea was observed, although fraction effects were evident for both. In summary, the comprehensive assessment of observed communities of bacteria, archaea and anaerobic fungi described here provides insight into a rational basis for selecting an optimal methodology to obtain a representative picture of the rumen microbiota. PMID:29445366
Vaidya, Jueeli D; van den Bogert, Bartholomeus; Edwards, Joan E; Boekhorst, Jos; van Gastelen, Sanne; Saccenti, Edoardo; Plugge, Caroline M; Smidt, Hauke
2018-01-01
DNA based methods have been widely used to study the complexity of the rumen microbiota, and it is well known that the method of DNA extraction is a critical step in enabling accurate assessment of this complexity. Rumen fluid (RF) and fibrous content (FC) fractions differ substantially in terms of their physical nature and associated microorganisms. The aim of this study was therefore to assess the effect of four DNA extraction methods (RBB, PBB, FDSS, PQIAmini) differing in cell lysis and/or DNA recovery methods on the observed microbial diversity in RF and FC fractions using samples from four rumen cannulated dairy cows fed 100% grass silage (GS100), 67% GS and 33% maize silage (GS67MS33), 33% GS and 67% MS (GS33MS67), or 100% MS (MS100). An ANOVA statistical test was applied on DNA quality and yield measurements, and it was found that the DNA yield was significantly affected by extraction method (p < 0.001) and fraction (p < 0.001). The 260/280 ratio was not affected by extraction (p = 0.08) but was affected by fraction (p = 0.03). On the other hand, the 260/230 ratio was affected by extraction method (p < 0.001) but not affected by fraction (p = 0.8). However, all four extraction procedures yielded DNA suitable for further analysis of bacterial, archaeal and anaerobic fungal communities using quantitative PCR and pyrosequencing of relevant taxonomic markers. Redundancy analysis (RDA) of bacterial 16S rRNA gene sequence data at the family level showed that there was a significant effect of rumen fraction (p = 0.012), and that PBB (p = 0.012) and FDSS (p = 0.024) also significantly contributed to explaining the observed variation in bacterial community composition. Whilst the DNA extraction method affected the apparent bacterial community composition, no single extraction method could be concluded to be ineffective. No obvious effect of DNA extraction method on the anaerobic fungi or archaea was observed, although fraction effects were evident for both. In summary, the comprehensive assessment of observed communities of bacteria, archaea and anaerobic fungi described here provides insight into a rational basis for selecting an optimal methodology to obtain a representative picture of the rumen microbiota.
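A compact sketch of the two-factor ANOVA reported in the two preceding records, using statsmodels; the DNA-yield numbers and the balanced layout are invented for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative layout: one DNA-yield measurement per extraction method x rumen fraction replicate.
df = pd.DataFrame({
    "dna_yield":  [41, 35, 58, 62, 20, 18, 33, 30, 44, 39, 55, 60, 22, 19, 31, 28],
    "extraction": ["RBB", "PBB", "FDSS", "PQIAmini"] * 4,
    "fraction":   ["RF"] * 8 + ["FC"] * 8,
})

model = ols("dna_yield ~ C(extraction) + C(fraction)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects of extraction method and fraction
```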
[Quality analysis of observational studies on pelvic organ prolapse in China].
Wang, Y T; Tao, L Y; He, H J; Han, J S
2017-06-25
Objective: To evaluate the quality of observational studies on pelvic organ prolapse in China. Methods: The checklist of the strengthening the reporting of observational studies in epidemiology (STROBE) statement was applied to evaluate the observational studies. The articles were searched in the SinoMed database using the terms: prolapse, uterine prolapse, cystocele, rectal prolapse and pelvic floor; limited to Chinese core journals in obstetrics and gynecology from January 1996 to December 2015. With two 10-year groups (1996-2005 and 2006-2015), the χ(2) test was used to evaluate inter-group differences. Results: (1) A total of 386 observational studies were selected, including 15.5% (60/386) case-control studies, 80.6% (311/386) cohort studies and 3.9% (15/386) cross-sectional studies. (2) There were 22 items comprising 34 sub-items in the checklist. Seventeen sub-items (50.0%, 17/34) had a reporting ratio of less than 50% across all articles, including: 1a (study design) 3.9% (15/386), 6a (participants) 24.6% (95/386), 6b (matched studies) 0 (0/386), 9 (bias) 8.3% (32/386), 10 (study size) 3.9%, 11 (quantitative variables) 41.2% (159/386), 12b-12e (statistical methods in detail) 0-2.6% (10/386), 13a (numbers of individuals at each stage of the study) 18.9% (73/386), 13b (reasons for non-participation at each stage) 18.9%, 13c (flow diagram) 0, 16b and 16c (results for category boundaries and relative risk) 9.6% (37/386) and 0, 19 (limitations) 31.6% (122/386), 22 (funding) 20.5% (79/386). (3) The quality of articles published in the two decades (1996-2005 and 2006-2015) was compared, and 38.2% (13/34) of sub-items had significantly improved in the second decade (all P<0.05). The improved items were as follows: 1b (integrity of abstract), 2 (background/rationale), 6a (participants), 7 (variables), 8 (data sources/measurement), 9 (bias), 11 (quantitative variables), 12a (statistical methods), 17 (other analyses), 18 (key results), 19 (limitations), 21 (generalisability), 22 (funding). Conclusions: The quality of observational studies on POP in China is suboptimal in half of the evaluation items. However, the quality of articles published in the second decade has significantly improved.
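The inter-decade comparison above is a chi-square test on reporting counts; a minimal sketch with SciPy follows, using illustrative counts for a single STROBE sub-item rather than the study's actual data.

```python
from scipy.stats import chi2_contingency

# Illustrative counts for one sub-item: articles reporting vs. not reporting it in each decade.
reported_1996_2005, total_1996_2005 = 20, 120
reported_2006_2015, total_2006_2015 = 110, 266

table = [
    [reported_1996_2005, total_1996_2005 - reported_1996_2005],
    [reported_2006_2015, total_2006_2015 - reported_2006_2015],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # small p suggests the reporting ratio differs between decades
```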
Nie, Xiaolu; Zhang, Ying; Wu, Zehao; Jia, Lulu; Wang, Xiaoling; Langan, Sinéad M; Benchimol, Eric I; Peng, Xiaoxia
2018-06-01
To appraise the reporting quality of studies concerning linezolid-related thrombocytopenia with reference to the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement. Medline, Embase, the Cochrane Library and clinicaltrial.gov were searched for observational studies concerning linezolid-related thrombocytopenia using routinely collected health data from 2000 to 2017. Two reviewers screened potentially eligible articles and extracted data independently. Finally, reporting quality assessment was performed by two senior researchers using the RECORD statement. Of 25 included studies, 11 (44.0%) mentioned the type of data in the title and/or abstract. Of the 38 items derived from the RECORD statement, the median number of items reported in the included studies was 22 (interquartile range (IQR) 18 to 27). Inadequate reporting was discovered in the following aspects: validation studies of the codes or algorithms, study size estimation, quantitative variables, subgroup statistical methods, missing data, follow-up/matching or sampling strategy, sensitivity analysis and cleaning methods, funding and role of funders, and accessibility of the protocol and raw data. This study provides evidence that the reporting quality of post-marketing safety evaluation studies conducted using routinely collected health data is often insufficient. Future stakeholders are encouraged to endorse the RECORD guidelines in pharmacovigilance.
WRF nested large-eddy simulations of deep convection during SEAC4RS
NASA Astrophysics Data System (ADS)
Heath, Nicholas Kyle
Deep convection is an important component of atmospheric circulations that affects many aspects of weather and climate. Therefore, improved understanding and realistic simulations of deep convection are critical to both operational and climate forecasts. Large-eddy simulations (LESs) often are used with observations to enhance understanding of convective processes. This study develops and evaluates a nested-LES method using the Weather Research and Forecasting (WRF) model. Our goal is to evaluate the extent to which the WRF nested-LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection having a robust set of ground and airborne data available for evaluation. A three domain mesoscale WRF simulation is run first. Then, the finest mesoscale output (1.35 km grid length) is used to separately drive nested-LES domains with grid lengths of 450 and 150 m. Results reveal that the nested-LES approach reasonably simulates a broad spectrum of observations, from reflectivity distributions to vertical velocity profiles, during the study period. However, reducing the grid spacing does not necessarily improve results for our case, with the 450 m simulation outperforming the 150 m version. We find that simulated updrafts in the 150 m simulation are too narrow to overcome the negative effects of entrainment, thereby generating convection that is weaker than observed. Increasing the sub-grid mixing length in the 150 m simulation leads to deeper, more realistic convection, but comes at the expense of delaying the onset of the convection. Overall, results show that both the 450 m and 150 m simulations are influenced considerably by the choice of sub-grid mixing length used in the LES turbulence closure. Finally, the simulations and observations are used to study the processes forcing strong midlevel cloud-edge downdrafts that were observed on 2 September. Results suggest that these downdrafts are forced by evaporative cooling due to mixing near cloud edge and by vertical perturbation pressure gradient forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested-LES approach provides an effective method for studying deep convection for our real-world case. The method can be used to provide insight into physical processes that are important to understanding observations. The WRF nested-LES approach could be adapted for other case studies in which high-resolution observations are available for validation.
Assessment of forward head posture in females: observational and photogrammetry methods.
Salahzadeh, Zahra; Maroufi, Nader; Ahmadi, Amir; Behtash, Hamid; Razmjoo, Arash; Gohari, Mahmoud; Parnianpour, Mohamad
2014-01-01
There are different methods to assess forward head posture (FHP), but the accuracy and discriminative ability of these methods are not clear. Here, we compare three postural angles for FHP assessment and also study the accuracy with which three photogrammetric methods can discriminate groups categorized by an observational method. Seventy-eight healthy female participants (23 ± 2.63 years) were classified into three groups: moderate-severe FHP, slight FHP and no FHP, based on observational postural assessment rules. Three photogrammetric methods - the craniovertebral angle, head tilt angle and head position angle - were applied to measure FHP objectively. A one-way ANOVA showed a significant difference in the craniovertebral angle across the three groups (P < 0.05, F = 83.07). There was no significant difference in the head tilt angle or head position angle among the three groups. According to linear discriminant analysis (LDA), the canonical discriminant function (Wilks' Lambda) was 0.311 for the craniovertebral angle, with 79.5% of cross-validated grouped cases correctly classified. Our results show that the craniovertebral angle method may discriminate females with moderate-severe FHP from those with no FHP more accurately than the head position angle and head tilt angle. The photogrammetric method had excellent inter- and intra-rater reliability for assessing head and cervical posture.
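A minimal sketch of the discrimination analysis described above, assuming simulated craniovertebral angles and observer-assigned groups (values and group means are invented); it fits a linear discriminant analysis and reports cross-validated classification accuracy.

```python
# Minimal sketch (not the authors' code): linear discriminant analysis of a
# single postural angle against observer-assigned FHP groups, with
# cross-validated classification accuracy. Data are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# craniovertebral angle (degrees) for three hypothetical groups of 26 women each
angle = np.concatenate([rng.normal(40, 3, 26),   # moderate-severe FHP
                        rng.normal(48, 3, 26),   # slight FHP
                        rng.normal(55, 3, 26)])  # no FHP
group = np.repeat([0, 1, 2], 26)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, angle.reshape(-1, 1), group, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2f}")
```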
Garcia-Huidobro, Diego; Michael Oakes, J
2017-04-01
Randomised controlled trials (RCTs) are typically viewed as the gold standard for causal inference, because the effects of interest can be identified with the fewest assumptions, especially regarding imbalance in background characteristics. Yet because conducting RCTs is expensive, time-consuming and sometimes unethical, observational studies are frequently used to study causal associations. In these studies, imbalance, or confounding, is usually controlled with multiple regression, which entails strong assumptions. The purpose of this manuscript is to describe the strengths and weaknesses of several methods to control for confounding in observational studies, and to demonstrate their use in a cross-sectional dataset of patient registration data from the Juan Pablo II Primary Care Clinic in La Pintana, Chile. The dataset contains responses from 5855 families who provided complete information on family socio-demographics, family functioning and health problems among their family members. We employ regression adjustment, stratification, restriction, matching, propensity score matching, standardisation and inverse probability weighting to illustrate these approaches to better causal inference in non-experimental data and compare the results. By applying study design and data analysis techniques that control for confounding in ways other than regression adjustment, researchers may strengthen the scientific relevance of observational studies. © 2016 International Union of Psychological Science.
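As one concrete illustration of the listed techniques, the sketch below estimates a propensity score by logistic regression and applies inverse probability weighting; the data frame, column names and covariates are assumptions, not the clinic's actual dataset.

```python
# Illustrative sketch (assumed data and column names, not the study's dataset):
# estimate a propensity score by logistic regression and use inverse
# probability weighting (IPW) to estimate an average treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_effect(df, treatment, outcome, covariates):
    """Difference in IPW-weighted outcome means; df is a pandas DataFrame."""
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    ps = ps_model.predict_proba(df[covariates])[:, 1]
    t, y = df[treatment].to_numpy(), df[outcome].to_numpy()
    w = t / ps + (1 - t) / (1 - ps)           # inverse probability weights
    mu1 = np.average(y[t == 1], weights=w[t == 1])
    mu0 = np.average(y[t == 0], weights=w[t == 0])
    return mu1 - mu0

# usage with a hypothetical data frame of families:
# effect = ipw_effect(families, "exposed", "health_problem", ["income", "family_size"])
```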
Tang, Guoping; Shafer, Sarah L.; Barlein, Patrick J.; Holman, Justin O.
2009-01-01
Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produces the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
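A small sketch of a per-cell agreement measure of the kind used above: Cohen's Kappa between a simulated biome map and an observed vegetation map, with both maps flattened to label arrays (the labels here are invented).

```python
# Simple sketch of a per-cell map agreement measure: Cohen's kappa between a
# simulated biome map and an observed vegetation map, both flattened to 1-D
# arrays of class labels (values here are invented for illustration).
import numpy as np
from sklearn.metrics import cohen_kappa_score

simulated = np.array([1, 1, 2, 3, 3, 3, 2, 1, 2, 3])
observed  = np.array([1, 2, 2, 3, 3, 1, 2, 1, 2, 3])

kappa = cohen_kappa_score(simulated, observed)
print(f"Kappa = {kappa:.2f}")   # 1 = perfect agreement, 0 = chance level
```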
A Voice-Radio Method for Collecting Human Factors Data.
ERIC Educational Resources Information Center
Askren, William B.; And Others
Available methods for collecting human factors data rely heavily on observations, interviews, and questionnaires. A need exists for other methods. The feasibility of using two-way voice-radio for this purpose was studied. The data collection methodology consisted of a human factors analyst talking from a radio base station with technicians wearing…
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
A Review of Web Information Seeking Research: Considerations of Method and Foci of Interest
ERIC Educational Resources Information Center
Martzoukou, Konstantina
2005-01-01
Introduction: This review shows that Web information seeking research suffers from inconsistencies in method and a lack of homogeneity in research foci. Background: Qualitative and quantitative methods are needed to produce a comprehensive view of information seeking. Studies also recommend observation as one of the most fundamental ways of…
The Sine Method: An Alternative Height Measurement Technique
Don C. Bragg; Lee E. Frelich; Robert T. Leverett; Will Blozan; Dale J. Luthringer
2011-01-01
Height is one of the most important dimensions of trees, but few observers are fully aware of the consequences of the misapplication of conventional height measurement techniques. A new approach, the sine method, can improve height measurement by being less sensitive to the requirements of conventional techniques (similar triangles and the tangent method). We studied...
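A hedged sketch of the sine method as it is commonly described (direct laser distances and inclination angles to the tree top and base), shown alongside the tangent method for contrast; it is not necessarily the authors' exact formulation, and the measurements are illustrative.

```python
# Hedged sketch of the sine method as commonly described (laser rangefinder
# distances to the tree top and base plus inclination angles), compared with
# the tangent method, which requires the horizontal distance to the trunk.
# Values are illustrative only.
import math

def height_sine(dist_top, angle_top_deg, dist_base, angle_base_deg):
    # angles above the horizontal are positive, below are negative
    return (dist_top * math.sin(math.radians(angle_top_deg))
            - dist_base * math.sin(math.radians(angle_base_deg)))

def height_tangent(horiz_dist, angle_top_deg, angle_base_deg):
    return horiz_dist * (math.tan(math.radians(angle_top_deg))
                         - math.tan(math.radians(angle_base_deg)))

print(height_sine(42.0, 38.0, 30.5, -4.0))    # ~28.0 m
print(height_tangent(30.0, 38.0, -4.0))       # ~25.5 m
```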
Analyzing Empirical Evaluations of Non-Experimental Methods in Field Settings
ERIC Educational Resources Information Center
Steiner, Peter M.; Wong, Vivian
2016-01-01
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Factors related to nonuse of seat belts in Michigan.
DOT National Transportation Integrated Search
1987-09-01
This study combined direct observation of seat belt use with interview methods to : identify factors related to seat belt use in a state with a mandatory seat belt use law. Trained : observers recorded restraint use for a probability sample of motori...
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Tomei, Sandrine; Maxim, Voichita; Odet, Christophe
2007-03-01
This study evaluates new observer models for 3D whole-body positron emission tomography (PET) imaging based on a wavelet sub-band decomposition and compares them with the classical constant-Q CHO model. Our final goal is to develop an original method that performs guided detection of abnormal activity foci in PET oncology imaging based on these new observer models. Such a computer-aided diagnostic method would be of great benefit to clinicians for diagnosis and to biologists for large-scale screening of rodent populations in molecular imaging. Method: We have previously shown good correlation of the channelized Hotelling observer (CHO) using a constant-Q model with human observer performance for 3D PET oncology imaging. We propose an alternative method that combines a CHO observer with a wavelet sub-band decomposition of the image and compare it to the standard CHO implementation. This method performs an undecimated transform using a biorthogonal B-spline 4/4 wavelet basis to extract the feature set for input to the Hotelling observer. This work is based on simulated 3D PET images of an extended MCAT phantom with randomly located lesions. We compare three evaluation criteria: classification performance using the signal-to-noise ratio (SNR), computational efficiency, and the visual quality of the derived 3D maps of the decision variable λ. The SNR is estimated on a series of test images for a variable number of training images for both observers. Results: The maximum SNR is higher with the constant-Q CHO observer, especially for targets located in the liver, and it is reached with a smaller number of training images. However, preliminary analysis indicates that the visual quality of the 3D maps of the decision variable λ is higher with the wavelet-based CHO, and the computation time to derive a 3D λ-map is about 350 times shorter than for the standard CHO. This suggests that the wavelet-CHO observer is a good candidate for use in our guided detection method.
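A compact numpy sketch of the channelized Hotelling observer SNR referred to above, assuming channel feature vectors have already been extracted from signal-present and signal-absent training images; array sizes and channel counts are illustrative, not the paper's configuration.

```python
# Minimal numpy sketch of a channelized Hotelling observer (CHO) SNR, assuming
# channel feature vectors have already been extracted from training images.
# Array sizes and values are illustrative, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_channels = 200, 8
v_signal = rng.normal(0.5, 1.0, (n_train, n_channels))  # signal-present features
v_noise  = rng.normal(0.0, 1.0, (n_train, n_channels))  # signal-absent features

delta = v_signal.mean(axis=0) - v_noise.mean(axis=0)
K = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_noise, rowvar=False))
w = np.linalg.solve(K, delta)                 # Hotelling template (scores test images)
snr = np.sqrt(delta @ np.linalg.solve(K, delta))
print(f"CHO SNR = {snr:.2f}")
```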
van der Waal, Daniëlle; Broeders, Mireille J M; Verbeek, André L M; Duffy, Stephen W; Moss, Sue M
2015-07-01
Ongoing breast cancer screening programs can only be evaluated using observational study designs. Most studies have observed a reduction in breast cancer mortality, but design differences appear to have resulted in different estimates. Direct comparison of case-control and trial analyses gives more insight into this variation. Here, we performed case-control analyses within the randomized UK Age Trial. The Age Trial assessed the effect of screening on breast cancer mortality in women ages 40-49 years. In our approach, case subjects were defined as breast cancer deaths between trial entry (1991-1997) and 2004. Women were ages 39-41 years at entry. For every case subject, five control subjects were selected. All case subjects were included in analyses of screening invitation (356 case subjects, 1,780 controls), whereas analyses of attendance were restricted to women invited to screening (105 case subjects, 525 age-matched controls). Odds ratios (OR) were estimated with conditional logistic regression. We used and compared two methods to correct for self-selection bias. Screening invitation resulted in a breast cancer mortality reduction of 17% (95% confidence interval [CI]: -36%, +6%), similar to trial results. Different exposure definitions and self-selection adjustments influenced the observed breast cancer mortality reduction. Depending on the method, "ever screened" appeared to be associated with a small reduction (OR: 0.86, 95% CI: 0.40, 1.89) or no reduction (OR: 1.02, 95% CI: 0.48, 2.14) using the two methods of correction. Recent attendance resulted in an adjusted mortality reduction of 36% (95% CI: -69%, +31%) or 45% (95% CI: -71%, +5%). Observational studies, and particularly case-control studies, are an important monitoring tool for breast cancer screening programs. The focus should be on diminishing bias in observational studies and gaining a better understanding of the influence of study design on estimates of mortality reduction.
NASA Astrophysics Data System (ADS)
Kodama, Yasko; Rodrigues, Orlando, Jr.; Garcia, Rafael Henrique Lazzari; Santos, Paulo de Souza; Vasquez, Pablo A. S.
2016-07-01
The main subject of this article was the study of room-temperature-stable radicals in Co-60 gamma-irradiated contemporary paper using an electron paramagnetic resonance (EPR) spectrometer. XRD was used to study the effect of ionizing radiation on the morphology of book paper. SEM images showed regions with cellulose fibers and regions with particle agglomerations on the cellulose fibers; these agglomerations were rich in calcium, as observed by EDS. XRD analysis confirmed the presence of calcium carbonate diffraction peaks. The main objective of this study was to propose a method based on conventional chemical reaction kinetics for the radicals formed by ionizing radiation. Therefore, further analyses were made to study the half-life and the kinetics of the free radicals created. This method can be suitably applied to study radicals in cultural heritage objects.
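A sketch of a half-life estimate under an assumed first-order decay model (the paper's exact kinetic scheme is not reproduced here): fit A(t) = A0·exp(−kt) to the EPR amplitude and take the half-life as ln 2 / k. The data points are invented.

```python
# Sketch, assuming first-order decay of the EPR signal (an assumption, not
# necessarily the kinetic model used in the paper): fit A(t) = A0*exp(-k*t)
# and derive the radical half-life. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 24, 48, 96, 168, 336])                   # hours after irradiation
signal = np.array([1.00, 0.82, 0.68, 0.47, 0.28, 0.09])   # relative EPR amplitude

def first_order(t, a0, k):
    return a0 * np.exp(-k * t)

(a0, k), _ = curve_fit(first_order, t, signal, p0=(1.0, 0.01))
print(f"k = {k:.4f} 1/h, half-life = {np.log(2)/k:.1f} h")
```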
A spring system method for a mesh generation problem
NASA Astrophysics Data System (ADS)
Romanov, A.
2018-04-01
A new direct method for 2D mesh generation for a simply connected domain using a spring system is presented. The method can be combined with other methods to modify a mesh for growing-solid problems. Advantages and disadvantages of the method are discussed. Different types of boundary conditions are explored. Modelling results for different target domains are given. Some applications to composite materials are studied.
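An illustrative spring-analogy relaxation, not necessarily the author's scheme: each interior node is connected by unit-stiffness springs to its mesh neighbours and iteratively moved toward their centroid, with boundary nodes held fixed.

```python
# Illustrative spring-analogy relaxation (not necessarily the author's scheme):
# each interior node is connected by unit-stiffness springs to its mesh
# neighbours and iteratively moved toward their centroid.
import numpy as np

def spring_relax(nodes, edges, fixed, n_iter=200, step=0.5):
    """nodes: (N,2) coordinates; edges: list of (i,j); fixed: set of boundary ids."""
    nodes = nodes.copy()
    neighbours = {i: [] for i in range(len(nodes))}
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(n_iter):
        for i, nbrs in neighbours.items():
            if i in fixed or not nbrs:
                continue
            centroid = nodes[nbrs].mean(axis=0)
            nodes[i] += step * (centroid - nodes[i])   # move along the net spring force
    return nodes

# usage sketch: relaxed = spring_relax(initial_nodes, edge_list, fixed_boundary_ids)
```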
ERIC Educational Resources Information Center
Davis, Melinda M.; Spurlock, Margaret; Ramsey, Katrina; Smith, Jamie; Beamer, Beth Ann; Aromaa, Susan; McGinnis, Paul B.
2017-01-01
Providing flavored milk in school lunches is controversial, with conflicting evidence on its impact on nutritional intake versus added sugar consumption and excess weight gain. Nonindustry-sponsored studies using individual-level analyses are needed. Therefore, we conducted this mixed-methods study of flavored milk removal at a rural primary…
Stab, Nicole; Hacker, Winfried; Weigl, Matthias
2016-09-01
Ward organization is a major determinant of nurses' well-being on the job. The majority of previous research on this relationship is based on single-source methods, which have been criticized for yielding skewed estimates, mainly due to the subjectivity of the ratings and to common-source bias. To investigate the association of ward organization characteristics and nurses' exhaustion by combining observation-based assessments with nurses' self-reports. Cross-sectional study on 25 wards of four hospitals, with 245 nurses. Our multi-method approach to evaluating hospital ward organization consisted of on-site observations with a standardized assessment tool and of questionnaires to evaluate nurses' self-reports and exhaustion. After establishing the reliability of our measures, we applied multi-level regression analyses to determine associations between determinant and outcome variables. We found substantial convergence in ward organization between the observation-based assessments and nurses' self-reports, which supports the validity of our external assessments. Furthermore, two observation-based characteristics, namely participation and patient-focused care, were significantly associated with lower emotional exhaustion among the nurses. Our results suggest that observation-based assessments are a valid and feasible way to assess ward organization in hospitals. Nurses' self-reported as well as observation-based ratings of ward organization were associated with nurses' emotional exhaustion. This is of interest mainly for identifying alternative measures for evaluating nurses' work environments, informing health promotion activities and evaluating job redesign interventions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Identification and analysis of recent temporal temperature trends for Dehradun, Uttarakhand, India
NASA Astrophysics Data System (ADS)
Piyoosh, Atul Kant; Ghosh, Sanjay Kumar
2018-05-01
Maximum and minimum temperatures (T max and T min) are indicators of changes in climate. In this study, observed and gridded T max and T min data for Dehradun are analyzed for the period 1901-2014. Observed data were obtained from the India Meteorological Department and the National Institute of Hydrology, whereas gridded data were obtained from the Climatic Research Unit (CRU). The efficacy of elevation-corrected CRU data was checked by cross-validation using data from various stations at different elevations. In both the observed and gridded data, major change points were detected using the cumulative sum (CUSUM) chart. For T max, change points occur in the years 1974 and 1997; for T min, in 1959 and 1986. The statistical significance of trends was tested in three sub-periods based on the change points using the Mann-Kendall (MK) test, Sen's slope (SS) estimator, and the linear regression (LR) method. Both T max and T min show a sequence of rising, falling, and rising trends across the sub-periods. Of the three methods used for the trend tests, MK and SS indicated similar results, and the LR method also showed similar results for most cases. The root-mean-square error for the actual and anomaly time series of the CRU data was within one standard deviation of the observed data, which indicates that the CRU data are very close to the observed data. The trends exhibited by the CRU data were also found to be similar to those of the observed data. Thus, CRU temperature data may be quite useful for studies in regions where observational data are scarce.
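A compact sketch of the Mann-Kendall test (without a tie correction) and Sen's slope estimator applied to a synthetic annual temperature series; it illustrates the trend statistics named above rather than reproducing the study's processing chain.

```python
# Compact sketch of the Mann-Kendall trend test (no tie correction) and Sen's
# slope estimator for an annual temperature series; the data are synthetic.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))            # two-sided p-value
    return z, p

def sens_slope(x):
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

years = np.arange(1975, 1998)
tmax = 30.0 + 0.03 * (years - years[0]) + np.random.default_rng(2).normal(0, 0.3, len(years))
print(mann_kendall(tmax), sens_slope(tmax))
```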
Patorno, Elisabetta; Patrick, Amanda R; Garry, Elizabeth M; Schneeweiss, Sebastian; Gillet, Victoria G; Bartels, Dorothee B; Masso-Gonzalez, Elvira; Seeger, John D
2014-11-01
Recent years have witnessed a growing body of observational literature on the association between glucose-lowering treatments and cardiovascular disease. However, many of the studies are based on designs or analyses that inadequately address the methodological challenges involved. We reviewed recent observational literature on the association between glucose-lowering medications and cardiovascular outcomes and assessed the design and analysis methods used, with a focus on their ability to address specific methodological challenges. We describe and illustrate these methodological issues and their impact on observed associations, providing examples from the reviewed literature. We suggest approaches that may be employed to manage these methodological challenges. From the evaluation of 81 publications of observational investigations assessing the association between glucose-lowering treatments and cardiovascular outcomes, we identified the following methodological challenges: 1) handling of temporality in administrative databases; 2) handling of risks that vary with time and treatment duration; 3) definitions of the exposure risk window; 4) handling of exposures that change over time; and 5) handling of confounding by indication. Most of these methodological challenges may be suitably addressed through application of appropriate methods. Observational research plays an increasingly important role in the evaluation of the clinical effects of diabetes treatment. Implementation of appropriate research methods holds the promise of reducing the potential for spurious findings and the risk that the spurious findings will mislead the medical community about risks and benefits of diabetes medications.
Stang, Paul E; Ryan, Patrick B; Overhage, J Marc; Schuemie, Martijn J; Hartzema, Abraham G; Welebob, Emily
2013-10-01
Researchers using observational data to understand drug effects must make a number of analytic design choices that suit the characteristics of the data and the subject of the study. Review of the published literature suggests that there is a lack of consistency even when addressing the same research question in the same database. To characterize the degree of similarity or difference in the method and analysis choices made by observational database research experts when presented with research study scenarios. On-line survey using research scenarios on drug-effect studies to capture method selection and analysis choices that follow a dependency branching based on response to key questions. Voluntary participants experienced in epidemiological study design solicited for participation through registration on the Observational Medical Outcomes Partnership website, membership in particular professional organizations, or links in relevant newsletters. Description (proportion) of respondents selecting particular methods and making specific analysis choices based on individual drug-outcome scenario pairs. The number of questions/decisions differed based on stem questions of study design, time-at-risk, outcome definition, and comparator. There is little consistency across scenarios, by drug or by outcome of interest, in the decisions made for design and analyses in scenarios using large healthcare databases. The most consistent choice was the cohort study design but variability in the other critical decisions was common. There is great variation among epidemiologists in the design and analytical choices that they make when implementing analyses in observational healthcare databases. These findings confirm that it will be important to generate empiric evidence to inform these decisions and to promote a better understanding of the impact of standardization on research implementation.
Sargeant, J M; O'Connor, A M; Dohoo, I R; Erb, H N; Cevallos, M; Egger, M; Ersbøll, A K; Martin, S W; Nielsen, L R; Pearl, D L; Pfeiffer, D U; Sanchez, J; Torrence, M E; Vigre, H; Waldner, C; Ward, M P
2016-12-01
The reporting of observational studies in veterinary research presents many challenges that often are not adequately addressed in published reporting guidelines. A consensus meeting of experts was organized to develop an extension of the STROBE statement to address observational studies in veterinary medicine with respect to animal health, animal production, animal welfare and food safety outcomes. The consensus meeting was held 11-13 May 2014 in Mississauga, Ontario, Canada. Seventeen experts from North America, Europe and Australia attended the meeting. The experts were epidemiologists and biostatisticians, many of whom hold or have held editorial positions with relevant journals. Prior to the meeting, 19 experts completed a survey about whether they felt any of the 22 items of the STROBE statement should be modified and whether items should be added to address unique issues related to observational studies in animal species with health, production, welfare or food safety outcomes. At the meeting, the participants were provided with the survey responses and relevant literature concerning the reporting of veterinary observational studies. During the meeting, each STROBE item was discussed to determine whether or not re-wording was recommended, and whether additions were warranted. Anonymous voting was used to determine whether there was consensus for each item change or addition. The consensus was that six items needed no modifications or additions. Modifications or additions were made to the STROBE items numbered as follows: 1 (title and abstract), 3 (objectives), 5 (setting), 6 (participants), 7 (variables), 8 (data sources/measurement), 9 (bias), 10 (study size), 12 (statistical methods), 13 (participants), 14 (descriptive data), 15 (outcome data), 16 (main results), 17 (other analyses), 19 (limitations) and 22 (funding). Published literature was not always available to support modification to, or inclusion of, an item. The methods and processes used in the development of this statement were similar to those used for other extensions of the STROBE statement. The use of this extension to the STROBE statement should improve the reporting of observational studies in veterinary research related to animal health, production, welfare or food safety outcomes by recognizing the unique features of observational studies involving food-producing and companion animals, products of animal origin, aquaculture and wildlife. © 2016 The Authors. Zoonoses and Public Health published by Blackwell Verlag GmbH.
Applying Propensity Score Methods in Medical Research: Pitfalls and Prospects
Luo, Zhehui; Gardiner, Joseph C.; Bradley, Cathy J.
2012-01-01
The authors review experimental and nonexperimental causal inference methods, focusing on assumptions for the validity of instrumental variables and propensity score (PS) methods. They provide guidance in four areas for the analysis and reporting of PS methods in medical research and selectively evaluate mainstream medical journal articles from 2000 to 2005 in the four areas, namely, examination of balance, overlapping support description, use of estimated PS for evaluation of treatment effect, and sensitivity analyses. In spite of the many pitfalls, when appropriately evaluated and applied, PS methods can be powerful tools in assessing average treatment effects in observational studies. Appropriate PS applications can create experimental conditions using observational data when randomized controlled trials are not feasible and, thus, lead researchers to an efficient estimator of the average treatment effect. PMID:20442340
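As a hedged example of one of the four areas named above (examination of balance), the sketch below computes the standardized mean difference of a covariate between treated and control groups, a diagnostic typically reported before and after matching or weighting; the data are simulated.

```python
# Hedged example of one PS diagnostic (covariate balance): the standardized
# mean difference (SMD) of a covariate between treated and control groups,
# typically computed before and after matching/weighting. Data are simulated.
import numpy as np

def standardized_mean_diff(x_treated, x_control):
    pooled_sd = np.sqrt((np.var(x_treated, ddof=1) + np.var(x_control, ddof=1)) / 2)
    return (np.mean(x_treated) - np.mean(x_control)) / pooled_sd

rng = np.random.default_rng(3)
age_treated, age_control = rng.normal(64, 9, 300), rng.normal(60, 10, 500)
smd = standardized_mean_diff(age_treated, age_control)
print(f"SMD = {smd:.2f}")   # |SMD| < 0.1 is often taken as acceptable balance
```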
Lonjon, Guillaume; Porcher, Raphael; Ergina, Patrick; Fouet, Mathilde; Boutron, Isabelle
2017-05-01
To describe the evolution of the use and reporting of propensity score (PS) analysis in observational studies assessing a surgical procedure. Assessing surgery in randomized controlled trials raises several challenges. Observational studies with PS analysis are a robust alternative for comparative effectiveness research. In this methodological systematic review, we identified all PubMed reports of observational studies with PS analysis that evaluated a surgical procedure and described the evolution of their use over time. Then, we selected a sample of articles published from August 2013 to July 2014 and systematically appraised the quality of reporting and potential bias of the PS analysis used. We selected 652 reports of observational studies with PS analysis. The publications increased over time, from 1 report in 1987 to 198 in 2013. Among the 129 reports assessed, 20% (n = 24) did not detail the covariates included in the PS and 77% (n = 100) did not report a justification for including these covariates in the PS. The rate of missing data for potential covariates was reported in 9% of articles. When a crossover by conversion was possible, only 14% of reports (n = 12) mentioned this issue. For matched analysis, 10% of articles reported all 4 key elements that allow for reproducibility of a PS-matched analysis (matching ratio, method to choose the nearest neighbors, replacement and method for statistical analysis). Observational studies with PS analysis in surgery are increasing in frequency, but specific methodological issues and weaknesses in reporting exist.
ERIC Educational Resources Information Center
Guimond, Fanny-Alexandra; Brendgen, Mara; Forget-Dubois, Nadine; Dionne, Ginette; Vitaro, Frank; Tremblay, Richard E.; Boivin, Michel
2012-01-01
This study used the monozygotic (MZ) twin difference method to examine whether the unique environmental effects of maternal and paternal overprotection and hostility at the age of 30 months predict twins' observed social reticence in a competitive situation in kindergarten, while controlling for the effect of family-wide influences, including…
The Random Forests Statistical Technique: An Examination of Its Value for the Study of Reading
ERIC Educational Resources Information Center
Matsuki, Kazunaga; Kuperman, Victor; Van Dyke, Julie A.
2016-01-01
Studies investigating individual differences in reading ability often involve data sets containing a large number of collinear predictors and a small number of observations. In this article, we discuss the method of Random Forests and demonstrate its suitability for addressing the statistical concerns raised by such data sets. The method is…
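A sketch of the general idea rather than the article's analysis: a random forest fitted to many deliberately collinear predictors with few observations, with permutation importance used to rank predictors. All data are simulated.

```python
# Sketch of the general idea (not the article's analysis): a random forest fit
# to many collinear predictors of a reading score, with permutation importance
# used to rank predictors. All data here are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n, p = 80, 20                                    # few observations, many predictors
base = rng.normal(size=(n, 1))
X = base + 0.5 * rng.normal(size=(n, p))         # deliberately collinear predictors
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
print(np.argsort(imp.importances_mean)[::-1][:5])  # indices of top-ranked predictors
```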
Charting the Learning Journey of a Group of Adults Returning to Education
ERIC Educational Resources Information Center
Mooney, Des
2011-01-01
Using a qualitative case study method the researcher studied a group of adult returning students completing a childcare course. Methods used included focus groups, a questionnaire and observations. Using a holistic analysis approach (Yin 2003) of the case the researcher then focused on a number of key issues. From this analysis the themes of…
Examining Current Beliefs, Practices and Barriers about Technology Integration: A Case Study
ERIC Educational Resources Information Center
Hsu, Pi-Sui
2016-01-01
The purpose of this mixed-methods study was to examine the current beliefs, practices and barriers concerning technology integration of Kindergarten through Grade Six teachers in the midwestern United States. The three data collection methods were online surveys with 152 teachers as well as interviews and observations with 8 teachers. The findings…
Coherent Waves in Seismic Researches
NASA Astrophysics Data System (ADS)
Emanov, A.; Seleznev, V. S.
2013-05-01
The development of digital processing algorithms for seismic wave fields, aimed at picking useful events to study the environment and other objects, is the basis for establishing new seismic techniques. In this paper a fundamental property of seismic wave fields, coherence, is used. The authors extend the conception of coherence types for observed wave fields and devise a technique for selecting coherent components from an observed wave field. Time coherence and space coherence are widely known; here the conception of "parameter coherence" is added. The parameter with respect to which a wave field is coherent can vary widely, because the wave field is a multivariate process described by a set of parameters. Coherence here means, first of all, the independence of a linear connection in the wave field from the parameter. In seismic wave fields recorded in confined spaces, in building blocks and in stratified media, time-coherent standing waves are formed. In prospecting seismology, with observation systems with multiple overlapping, head waves are coherent along the parallel correlation course or, in other words, over one measurement on the generalized plane of the observation system. For detailed prospecting seismology with observation systems with multiple overlapping, algorithms based on this coherence property over one measurement of the area have been developed that convert seismic records into head-wave time sections containing neither reflected waves nor other wave types. Conversion into a time section is executed on any specified observation base. Energy stacking of head waves relative to noise, based on the multiplicity of the observation system, is realized within the area of head-wave recording. Conversion on a base smaller than the wave-tracking area incurs a loss in signal-to-noise ratio relative to the maximum ratio attainable for the observation system. The construction of head-wave time sections and dynamic plots forms the basis of the automatic processing that has been developed, analogous to the CDP procedure in the reflected-wave method. Using the developed algorithms for converting head waves into time sections, studies of refracting boundaries in Siberia have been carried out. Besides refraction surveys, applying the head-wave conversion into time sections to seismograms from the reflected-wave method provides information about refracting horizons in the upper part of the section in addition to the reflecting-horizon data. The recovery of coherent components of the wave field is the basis of engineering seismology at the required level of accuracy and detail. In seismic microzoning, the resonance frequencies of the upper part of the section are determined with this method, and maps of oscillation amplification and of result accuracy are constructed for each frequency. The same method makes it possible to study standing-wave fields in buildings and constructions with high accuracy and detail, providing diagnostics of their physical state from the set of natural frequencies and mode shapes, examined in great detail. The standing-wave method permits the seismic stability of a structure to be estimated at a new level of accuracy.
Bahmani, Pegah; Maleki, Afshin; Sadeghi, Shahram; Shahmoradi, Behzad; Ghahremani, Esmaeil
2017-01-01
Intestinal parasites remain a serious public health problem worldwide, especially in developing countries. This study aimed to assess the prevalence of intestinal protozoa infections and associated risk factors among schoolchildren in Sanandaj City, Iran. This cross-sectional study involving 400 schoolchildren was carried out in 2015. Each student was selected using a systematic random sampling method. A questionnaire and observation were used to identify possible risk factors. Fresh stool samples were examined using the formal-ether concentration method. Five species of intestinal protozoa were identified, with an overall prevalence of 42.3%. No cases of helminth infection were detected. The predominant protozoa were Blastocystis hominis (21.3%) and Entamoeba coli (4.5%). Overall, 143 children (35.9%) had single infections and 26 (6.4%) were infected with more than one intestinal protozoan, of whom 23 (5.9%) had double infections and 3 (0.5%) had triple infections. A significant relationship was observed between intestinal protozoa infection and economic status, drinking water source, and the method of washing vegetables (P < 0.05). Education programs for students and their families should be implemented for the prevention and control of protozoa infections in the study area.
Małuszyńska, Hanna; Czarnecki, Piotr; Czarnecka, Anna; Pająk, Zdzisław
2012-04-01
Pyridinium chlorochromate, [C(5)H(5)NH](+)[ClCrO(3)](-) (hereafter referred to as PyClCrO(3)), was studied by X-ray diffraction, differential scanning calorimetry (DSC) and dielectric methods. Studies reveal three reversible phase transitions at 346, 316 and 170 K with the following phase sequence: R-3m (I) → R3m (II) → Cm (III) → Cc (IV), c' = 2c. PyClCrO(3) is the first pyridinium salt in which all four phases have been successfully characterized by a single-crystal X-ray diffraction method. Structural results together with dielectric and calorimetric studies allow the classification of the two intermediate phases (II) and (III) as ferroelectric with the Curie point at 346 K, and the lowest phase (IV) as most probably ferroelectric. The ferroelectric hysteresis loop was observed only in phase (III). The high ionic conductivity hindered its observation in phase (II).
Varying coefficient subdistribution regression for left-truncated semi-competing risks data.
Li, Ruosha; Peng, Limin
2014-10-01
Semi-competing risks data frequently arise in biomedical studies when the time to a disease landmark event is subject to dependent censoring by death, the observation of which, however, is not precluded by the occurrence of the landmark event. In observational studies, the analysis of such data can be further complicated by left truncation. In this work, we study a varying coefficient subdistribution regression model for left-truncated semi-competing risks data. Our method appropriately accounts for the specific truncation and censoring features of the data and, moreover, has the flexibility to accommodate potentially varying covariate effects. The proposed method can be easily implemented, and the resulting estimators are shown to have nice asymptotic properties. We also present inference procedures, such as Kolmogorov-Smirnov-type and Cramér-von Mises-type hypothesis tests, for the covariate effects. Simulation studies and an application to the Denmark diabetes registry demonstrate good finite-sample performance and the practical utility of the proposed method.
Clinical Knowledge from Observational Studies: Everything You Wanted to Know but Were Afraid to Ask.
Gershon, Andrea S; Jafarzadeh, S Reza; Wilson, Kevin C; Walkey, Allan J
2018-05-07
Well-done randomized trials provide accurate estimates of treatment effect by producing groups that are similar on all measures except for the intervention of interest. However, inferences of efficacy in tightly controlled experimental settings may not translate into similar effectiveness in real-world settings. Observational studies generally enable inferences over a wider range of patient characteristics and evaluation of a broader range of outcomes over a longer period than randomized trials. However, clinicians are often reluctant to incorporate the findings of observational studies into clinical practice. Reasons for uncertainty regarding observational studies include a lack of familiarity with observational research methods, occasional disagreements between the results of observational studies and randomized trials, the perceived risk of spurious results from systematic bias, and prior teaching that randomized trials are the most reliable source of medical evidence. We propose that a better understanding of observational research will enhance clinicians' ability to distinguish reliable observational studies from those subject to bias and, therefore, provide more confidence to apply observational research results in clinical practice when appropriate. Herein, we explain why observational studies may be perceived as less conclusive than randomized trials, address situations in which observational research and randomized trials produced different findings, and provide information on observational study design so that quality can be evaluated. We conclude that observational research is a valuable source of medical evidence and that clinical action is strongest when supported by both high-quality observational studies and randomized trials.
Studies of learned helplessness in honey bees (Apis mellifera ligustica).
Dinges, Christopher W; Varnon, Christopher A; Cota, Lisa D; Slykerman, Stephen; Abramson, Charles I
2017-04-01
The current study reports 2 experiments investigating learned helplessness in the honey bee (Apis mellifera ligustica). In Experiment 1, we used a traditional escape method but found the bees' activity levels too high to observe changes due to treatment conditions. The bees were not able to learn in this traditional escape procedure; thus, such procedures may be inappropriate to study learned helplessness in honey bees. In Experiment 2, we used an alternative punishment, or passive avoidance, method to investigate learned helplessness. Using a master and yoked design where bees were trained as either master or yoked and tested as either master or yoked, we found that prior training with unavoidable and inescapable shock in the yoked condition interfered with avoidance and escape behavior in the later master condition. Unlike control bees, learned helplessness bees failed to restrict their movement to the safe compartment following inescapable shock. Unlike learned helplessness studies in other animals, no decrease in general activity was observed. Furthermore, we did not observe a "freezing" response to inescapable aversive stimuli-a phenomenon, thus far, consistently observed in learned helplessness tests with other species. The bees, instead, continued to move back and forth between compartments despite punishment in the incorrect compartment. These findings suggest that, although traditional escape methods may not be suitable, honey bees display learned helplessness in passive avoidance procedures. Thus, regardless of behavioral differences from other species, honey bees can be a unique invertebrate model organism for the study of learned helplessness. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hegde, Sapna; Patodia, Akash; Dixit, Uma
2017-08-01
Demirjian's method has been the most popular and extensively tested radiographic method of age estimation. More recently, Willems' method has been reported to be a better predictor of age. Nolla's and Häävikko's methods have been used to a lesser extent. Very few studies have compared all four methods in non-Indian and Indian populations. Most Indian research is limited by inadequate sample sizes, age structures and groupings, and differing approaches to statistical analysis. The present study aimed to evaluate and compare the validity of the Demirjian, Willems, Nolla and Häävikko methods in the determination of the chronological age of 5- to 15-year-old Indian children. In this cross-sectional observational study, the four methods were compared for validity in estimating the age of 1200 Indian children aged 5-15 years. Demirjian's method overestimated age by +0.24 ± 0.80, +0.11 ± 0.81 and +0.19 ± 0.80 years in boys, girls and the total sample, respectively. With Willems' method, overestimations of +0.09 ± 0.80, +0.08 ± 0.80 and +0.09 ± 0.80 years were obtained in boys, girls and the total sample, respectively. Nolla's method underestimated age by -0.13 ± 0.80, -0.30 ± 0.82 and -0.20 ± 0.81 years in boys, girls and the total sample, respectively. Häävikko's method underestimated age by -0.17 ± 0.80, -0.29 ± 0.83 and -0.22 ± 0.82 years in boys, girls and the total sample, respectively. Statistically significant differences were observed between dental and chronological ages with all methods (p < 0.001). Significant gender-based differences were observed with all methods except Willems' (p < 0.05). Gender-specific regression formulae were derived for all methods. Willems' method most accurately estimated age, followed by Demirjian's, Nolla's and Häävikko's methods. All four methods could be applicable for estimating age in the present population, mean prediction errors being lower than 0.30 years (3.6 months). Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Clarke-OʼNeill, Sinead; Farbrot, Anne; Lagerstedt, Marie-Louise; Cottenden, Alan; Fader, Mandy
2015-01-01
The primary aim of this study was to determine whether the severity of incontinence-associated dermatitis (IAD) among nursing home-based incontinence pad users varies between pad designs. A second aim was to examine the utility of a simple method for reporting skin health problems in which healthcare assistants were asked to record basic observational data at each pad change. Randomized, multiple crossover, observational, exploratory. Twenty-one men and 57 women using absorbent continence products to contain urinary and/or fecal incontinence were recruited from 10 nursing homes in London and the south of England. A day-time variant and a night-time variant of each of the 4 main disposable pad designs on the market for moderate/heavy incontinence were tested: (1) insert pads with stretch pants; (2) 1-piece all-in-one diapers; (3) pull-up pants; and (4) belted/T-shape diapers. All pad variants for day-time use had an absorption capacity of 1900 mL ± 20% (measured using ISO 11948-1 International Standards Organization) while the capacity of night-time variants was 2400 mL ± 20%. Each resident used each of the 4 pad designs (day-time and night-time variants) for 2 weeks and the order of testing was randomized by nursing home. Skin health data were collected using 2 methods in parallel. Method 1 comprised visual observation by researchers (1 observation per pad design; 4 observations in total over 8 weeks). In method 2, healthcare assistants logged observational data on skin health at every pad change for the 8 weeks. The primary outcome variable was severity of the most severe skin problem noted by the researcher for each resident, and for each pad design (method 1). Descriptive data on skin care methods used in the nursing homes were also collected using short questionnaires and researcher observation. No significant differences in the severity or incidence of skin problems were found between observations using the 4 pad designs. However, a wide range of skin conditions was recorded that made classification difficult; the skin was often marked with creases from absorbent products, temporary marks, and pink/purple discoloration. We observed few cases of the severe erythema, rashes, and vesicles that are commonly used descriptors in previous skin tools. Nevertheless, the collected data reflect an abundance of skin problems that were difficult to categorize neatly. Researcher observations (method 1) showed that nearly all the residents (96%) had at least 1 IAD skin problem recorded over the 8-week period and 64% of residents had at least 1 problem that was rated as maximum severity. Healthcare assistants logged skin problems on 6.1% of pad changes. The discrepancy between researcher and healthcare assistant data appears to be largely due to healthcare assistants sometimes discounting low-grade IAD as normal for that population. Incontinence-associated dermatitis is common among nursing home residents who use incontinence pads, and it is often severe. No evidence was found that any design of pad was more likely than any others to be associated with skin problems. The method devised to enable healthcare assistants to record basic observational data on skin health in the diaper area at each pad change (Method 2) proved simple to use but still resulted in substantial underreporting of IAD, suggesting that further work is needed to develop a tool that more successfully encourages them to log and treat IAD problems.
Observation method to predict meander migration and vertical degradation of rivers.
DOT National Transportation Integrated Search
2014-05-01
Meander migration and vertical degradation of river bed are processes that have been studied for years. : Different methods have been proposed to make predictions of the behavior of rivers with respect to these : processes. These two erosion controll...
NASA Astrophysics Data System (ADS)
Hummels, Cameron
Computational hydrodynamical simulations are a very useful tool for understanding how galaxies form and evolve over cosmological timescales not easily revealed through observations. However, they are only useful if they reproduce the sorts of galaxies that we see in the real universe. One of the ways in which simulations of this sort tend to fail is in the prescription of stellar feedback, the process by which nascent stars return material and energy to their immediate environments. Careful treatment of this interaction in subgrid models, so-called because they operate on scales below the resolution of the simulation, is crucial for the development of realistic galaxy models. Equally important is developing effective methods for comparing simulation data against observations to ensure galaxy models which mimic reality and inform us about natural phenomena. This thesis examines the formation and evolution of galaxies and the observable characteristics of the resulting systems. We employ extensive use of cosmological hydrodynamical simulations in order to simulate and interpret the evolution of massive spiral galaxies like our own Milky Way. First, we create a method for producing synthetic photometric images of grid-based hydrodynamical models for use in a direct comparison against observations in a variety of filter bands. We apply this method to a simulation of a cluster of galaxies to investigate the nature of the red-sequence/blue-cloud dichotomy in the galaxy color-magnitude diagram. Second, we implement several subgrid models governing the complex behavior of gas and stars on small scales in our galaxy models. Several numerical simulations are conducted with similar initial conditions, where we systematically vary the subgrid models, afterward assessing their efficacy through comparisons of their internal kinematics with observed systems. Third, we generate an additional method to compare observations with simulations, focusing on the tenuous circumgalactic medium. Informed by our previous studies, we investigate the sensitivity of this new mode of comparison to hydrodynamical subgrid prescription. Finally, we synthesize the results of these studies and identify future avenues of research.
ERIC Educational Resources Information Center
Ackerman, Matthew; Egalite, Anna J.
2015-01-01
When lotteries are infeasible, researchers must rely on observational methods to estimate charter effectiveness at raising student test scores. Considerable attention has been paid to observational studies by the Stanford Center for Research on Education Outcomes (CREDO), which have analyzed charter performance in 27 states. However, the…
NASA Astrophysics Data System (ADS)
Sugiura, M.; Seika, M.
1994-02-01
In this study, a new technique to measure the density of slip-bands automatically is developed: a TV image of the slip-bands observed through a microscope is processed directly by an image-processing system based on a personal computer, and an accurate value of the density of slip-bands is measured quickly. When measuring local stresses in large machine parts with copper-plating foil, direct observation of the slip-bands through an optical microscope is difficult. In this study, to provide a technique close to direct microscopic observation of the slip-bands in the foil attached to a large-sized specimen, the replica method using a plastic film of acetyl cellulose is applied to replicate the slip-bands in the attached foil.
Portell, Mariona; Anguera, M Teresa; Hernández-Mendo, Antonio; Jonsson, Gudberg K
2015-01-01
Contextual factors are crucial for evaluative research in psychology, as they provide insights into what works, for whom, in what circumstances, in what respects, and why. Studying behavior in context, however, poses numerous methodological challenges. Although a comprehensive framework for classifying methods seeking to quantify biopsychosocial aspects in everyday contexts was recently proposed, this framework does not contemplate contributions from observational methodology. The aim of this paper is to justify and propose a more general framework that includes observational methodology approaches. Our analysis is rooted in two general concepts: ecological validity and methodological complementarity. We performed a narrative review of the literature on research methods and techniques for studying daily life and describe their shared properties and requirements (collection of data in real time, on repeated occasions, and in natural settings) and classification criteria (eg, variables of interest and level of participant involvement in the data collection process). We provide several examples that illustrate why, despite their higher costs, studies of behavior and experience in everyday contexts offer insights that complement findings provided by other methodological approaches. We urge that observational methodology be included in classifications of research methods and techniques for studying everyday behavior and advocate a renewed commitment to prioritizing ecological validity in behavioral research seeking to quantify biopsychosocial aspects. PMID:26089708
Zygmunt, Arkadiusz; Adamczewski, Zbigniew; Zygmunt, Agnieszka; Karbownik-Lewinska, Malgorzata; Lewinski, Andrzej
2017-01-01
Goitre incidence in school-aged children evaluated using ultrasonography is one of the essential indicators of iodine intake in a given area. The aim of the study was to examine the difference between the volume of the thyroid gland measured in the supine and in the sitting position, and to determine the intra-observer, inter-observer, and inter-position variations. The survey was conducted among 87 children (56 girls and 31 boys aged 7-13 years, mean age 10.44 ± 1.72 years). The thyroid volume measured in the sitting position was significantly lower than that measured in the supine position. The intra-observer variation for the total thyroid volume was 9.56-9.65%. The inter-observer variations were significantly higher, at 34.5-35.7%. The way in which the ultrasound evaluation is performed is important for the analysis of the results. It is crucial to aim for the smallest possible inter-observer variation, which can be achieved by strictly defining the method of thyroid measurement and comparing one's measuring technique with the reference method. The use of standards in ultrasound evaluation performed in the supine position, as well as the use of standards without a strict determination of the study method, can lead to erroneous conclusions. © 2017 S. Karger AG, Basel.
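A hedged sketch with synthetic numbers: intra- and inter-observer variation expressed as coefficients of variation for repeated thyroid-volume measurements. The study's exact variation formula is not reproduced; the helper below is illustrative.

```python
# Hedged sketch (synthetic numbers): intra- and inter-observer variation
# expressed as coefficients of variation (CV, %) for repeated thyroid-volume
# measurements; not the study's exact formula.
import numpy as np

# two observers, each measuring the same child's thyroid volume (mL) twice
obs1 = np.array([4.1, 4.3])
obs2 = np.array([5.6, 5.9])

def cv(values):
    return 100 * np.std(values, ddof=1) / np.mean(values)

intra_cv = [cv(obs1), cv(obs2)]                       # within each observer
inter_cv = cv(np.array([obs1.mean(), obs2.mean()]))   # between observer means
print(intra_cv, inter_cv)
```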
Sharma, Reena; Kashyap, Nilotpol; Prajapati, Deepesh; Kappadi, Damodar; Wadhwa, Saakshe; Gandotra, Shina; Yadav, Poonam
2016-01-01
Introduction Chewing side preference (CSP) is said to occur when mastication occurs exclusively, consistently or predominantly on the same side of the jaw. It can be assessed by the direct method (visual observation) or by indirect, instrumental methods such as cinematography, kinetography and computerized electromyography. Aim The present study aimed to evaluate the prevalence of CSP in the deciduous, mixed and permanent dentitions and to relate it to dental caries. Materials and Methods In a cross-sectional observational study, 240 schoolchildren aged 3 to 18 years were randomly allocated to three experimental groups according to dentition period: deciduous, mixed and permanent. The existence of a CSP was determined by a direct method, asking the children to chew a piece of gum (Trident sugarless). The Mann-Whitney U-test was used to compare CSP between groups and between boys and girls. Spearman's correlation coefficient was used to correlate CSP and dental caries within and among the three study groups. Results CSP was observed in 69%, 83% and 76% of children with primary, mixed and permanent dentition, respectively (p>0.05). There was no statistically significant association between the presence of CSP and dental caries among the three study groups. Conclusion There was a weak or no correlation between gender and the distribution of CSP, and between the presence of CSP and dental caries. PMID:27790569
A method for recording verbal behavior in free-play settings.
Nordquist, V M
1971-01-01
The present study attempted to test the reliability of a new method of recording verbal behavior in a free-play preschool setting. Six children, three normal and three speech impaired, served as subjects. Videotaped records of verbal behavior were scored by two experimentally naive observers. The results suggest that the system provides a means of obtaining reliable records of both normal and impaired speech, even when the subjects exhibit nonverbal behaviors (such as hyperactivity) that interfere with direct observation techniques.
Investigation of Post-mortem Tissue Effects Using Long-time Decorrelation Ultrasound
NASA Astrophysics Data System (ADS)
Csány, Gergely; Balogh, Lajos; Gyöngy, Miklós
Decorrelation ultrasound is increasingly being used to investigate long-term biological phenomena. In the current work, ultrasound image sequences of mice that did not survive anesthesia (in a separate investigation) were analyzed and post-mortem tissue effects were observed via decorrelation calculation. A method was developed to obtain a quantitative parameter characterizing the rate of decorrelation. The results show that ultrasound decorrelation imaging is an effective method of observing post-mortem tissue effects and point to further studies elucidating the mechanism behind these effects.
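The abstract does not specify the quantitative parameter used to characterize the decorrelation rate, so the sketch below is only one plausible construction, assuming the image sequence is available as a NumPy array: compute the mean inter-frame correlation as a function of lag and fit an exponential decay, reporting the decay rate. The frame generation, window sizes, and exponential model are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def correlation_vs_lag(frames, max_lag=20):
    """Mean normalized correlation between frames separated by each lag."""
    n = frames.shape[0]
    lags = np.arange(1, max_lag + 1)
    corr = [np.mean([np.corrcoef(frames[i].ravel(), frames[i + lag].ravel())[0, 1]
                     for i in range(n - lag)]) for lag in lags]
    return lags, np.array(corr)

def decorrelation_rate(frames, max_lag=20):
    """Fit C(lag) = exp(-lag / tau) and return the rate 1/tau (per frame)."""
    lags, corr = correlation_vs_lag(frames, max_lag)
    (tau,), _ = curve_fit(lambda k, tau: np.exp(-k / tau), lags, corr, p0=[5.0])
    return 1.0 / tau

# synthetic AR(1) speckle sequence whose true correlation decays as 0.9**lag
rng = np.random.default_rng(0)
rho, frames = 0.9, np.empty((100, 64, 64))
frames[0] = rng.normal(size=(64, 64))
for k in range(1, 100):
    frames[k] = rho * frames[k - 1] + np.sqrt(1 - rho ** 2) * rng.normal(size=(64, 64))
print(f"estimated decorrelation rate: {decorrelation_rate(frames):.3f} per frame")
```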
Meiszberg, A M; Johnson, A K; Sadler, L J; Carroll, J A; Dailey, J W; Krebs, N
2009-12-01
Recording accurate behavioral events over a long period can be labor-intensive and relatively expensive. If an automatic device could accurately record the duration and frequency for a given behavioral event, it would be a valuable alternative to the traditional use of human observers for behavioral studies. Therefore, the objective of this study was to determine the accuracy of the time spent at the waterer and the number of visits to the waterer by individually housed nursery pigs, comparing human observers scoring video files using Observer software (OBS) with an automatic Hobo water meter (WM, control) affixed to the waterline. Eleven PIC USA genotype gilts (22 +/- 2 d of age; 6.5 +/- 1.4 kg of BW) were housed individually in pens with ad libitum access to a corn-based starter ration and one nipple waterer. Behavior was collected on d 0 (day of weaning), 7, and 14 of the trial using 1 color camera positioned over 4 attached pens and a RECO-204 DVR at 1 frame per second. For the OBS method, 2 experienced observers recorded drinking behavior from the video files, which was defined as when the gilt placed her mouth over the nipple waterer. Data were analyzed using nonparametric methods and the general linear model and regression procedures in SAS. The experimental unit was the individual pen housing 1 gilt. The GLM model included the method of observation (WM vs. OBS) and time (24 h) as variables, and the gilt nested within method was used as the error term. Gilts consumed more water (P = 0.04) on d 14 than on d 0. The time of day affected (P < 0.001) the number of visits and the time spent at the waterer regardless of the method. However, the OBS method underestimated (P < 0.001) the number of visits to the waterer (3.48 +/- 0.33 visits/h for OBS vs. 4.94 +/- 0.33 for WM) and overestimated (P < 0.001) the time spent at the waterer (22.6 +/- 1.46 s/h for OBS vs. 13.9 +/- 1.43 for WM) compared with WM. The relationship between the 2 methods for prediction of time spent at the waterer and number of visits made by the gilts was weak (R2 = 0.56 and 0.69, respectively). Collectively, these data indicate that the use of the traditional OBS method for quantifying drinking behavior in pigs can be misleading. Quantifying drinking behavior, and perhaps other behavioral events, via the OBS method requires more rigorous validation.
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (SEM) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
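A minimal sketch of the psychometric-curve-fitting idea used in methods (B) and (D): detection fractions at each contrast are fitted with a logistic function of log contrast, and the threshold is read off at a chosen detection probability. The 25% guessing rate, the 62.5% threshold criterion, and the data points are illustrative assumptions, not the CDCOM defaults.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(contrast, c50, slope, guess=0.25):
    """Detection probability rising from a guessing rate toward 1 with log-contrast."""
    return guess + (1 - guess) / (1 + np.exp(-slope * (np.log(contrast) - np.log(c50))))

def threshold_contrast(contrasts, detect_frac, target=0.625, guess=0.25):
    popt, _ = curve_fit(lambda c, c50, s: psychometric(c, c50, s, guess),
                        contrasts, detect_frac, p0=[np.median(contrasts), 2.0])
    c50, slope = popt
    # invert the fitted curve at the target detection probability
    p = (target - guess) / (1 - guess)
    return c50 * np.exp(np.log(p / (1 - p)) / slope)

contrasts = np.array([0.03, 0.05, 0.08, 0.13, 0.20, 0.32])    # hypothetical readings
detect_frac = np.array([0.28, 0.40, 0.55, 0.75, 0.92, 0.99])  # hypothetical readings
print(f"threshold contrast ~ {threshold_contrast(contrasts, detect_frac):.3f}")
```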
Veta, Mitko; van Diest, Paul J.; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P. W.
2016-01-01
Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for smaller objects, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts. PMID:27529701
Data assimilation to extract soil moisture information from SMAP observations
USDA-ARS?s Scientific Manuscript database
This study compares different methods to extract soil moisture information through the assimilation of Soil Moisture Active Passive (SMAP) observations. Neural Network(NN) and physically-based SMAP soil moisture retrievals were assimilated into the NASA Catchment model over the contiguous United Sta...
NASA Astrophysics Data System (ADS)
Pan, M.; Wood, E. F.
2004-05-01
This study explores a method to estimate various components of the water cycle (ET, runoff, land storage, etc.) based on a number of different information sources, including both observations and observation-enhanced model simulations. Unlike existing data assimilation schemes, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, which reduces model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. This method may also be implemented in other applications with physical constraints (e.g. energy cycles) and at different scales.
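A minimal sketch of the budget-closure idea, assuming a hypothetical four-component state: after a standard Kalman measurement update, the state is projected onto the hyperplane where the water-balance constraint holds exactly. The state layout, observation operator, and numbers are placeholders, not the VIC configuration used in the study.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (y - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def constrain(x, P, D, d):
    """Project the updated state onto the constraint D x = d (here P - ET - R - dS = 0),
    using a covariance-weighted projection so the budget is exactly closed."""
    W = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    return x - W @ (D @ x - d)

# state: [precipitation, evapotranspiration, runoff, storage change] (hypothetical layout, mm/day)
x = np.array([3.0, 1.2, 0.9, 0.8])            # prior estimate from the model
P = np.diag([0.3, 0.2, 0.1, 0.2]) ** 2        # prior uncertainty
H = np.array([[0.0, 1.0, 0.0, 0.0]])          # we observe ET only (e.g. a remote sensing product)
y = np.array([1.5]); R = np.array([[0.1 ** 2]])

x_u, P_u = kalman_update(x, P, y, H, R)
D = np.array([[1.0, -1.0, -1.0, -1.0]])       # water balance: P - ET - R - dS = 0
x_c = constrain(x_u, P_u, D, np.array([0.0]))
print("constrained state:", x_c, " budget residual:", (D @ x_c)[0])
```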
Attenuation and velocity dispersion in the exploration seismic frequency band
NASA Astrophysics Data System (ADS)
Sun, Langqiu
In an anelastic medium, seismic waves are distorted by attenuation and velocity dispersion, which depend on petrophysical properties of reservoir rocks. The effective attenuation and velocity dispersion are a combination of intrinsic attenuation and apparent attenuation due to scattering, transmission response, and the data acquisition system. Velocity dispersion is usually neglected in seismic data processing partly because of insufficient observations in the exploration seismic frequency band. This thesis investigates the methods of measuring velocity dispersion in the exploration seismic frequency band and interprets the velocity dispersion data in terms of petrophysical properties. Broadband, uncorrelated vibrator data are suitable for measuring velocity dispersion in the exploration seismic frequency band, and a broad bandwidth optimizes the observability of velocity dispersion. Four methods of measuring velocity dispersion in uncorrelated vibrator VSP data are investigated: the sliding window crosscorrelation (SWCC) method, the instantaneous phase method, the spectral decomposition method, and the cross spectrum method. Among them, the SWCC method is a new method and has satisfactory robustness, accuracy, and efficiency. Using the SWCC method, velocity dispersion is measured in the uncorrelated vibrator VSP data from three areas with different geological settings, i.e., the Mallik gas hydrate zone, the McArthur River uranium mines, and the Outokumpu crystalline rocks. The observed velocity dispersion is fitted to a straight line with respect to log frequency for a constant (frequency-independent) Q value. This provides an alternative method for calculating Q. A constant Q value does not directly link to petrophysical properties. A modeling study is implemented for the Mallik and McArthur River data to interpret the velocity dispersion observations in terms of petrophysical properties. The detailed multi-parameter petrophysical reservoir models are built according to the well logs; the models' parameters are adjusted by fitting the synthetic data to the observed data. In this way, seismic attenuation and velocity dispersion provide new insight into petrophysical properties at the Mallik and McArthur River sites. Potentially, observations of attenuation and velocity dispersion in the exploration seismic frequency band can improve the deconvolution process for vibrator data, Q-compensation, near-surface analysis, and first break picking for seismic data.
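The thesis notes that for a constant Q the observed velocity varies linearly with the logarithm of frequency. Under the commonly used Kolsky-type dispersion relation v(f) = v_ref [1 + ln(f/f_ref)/(pi Q)], Q can be recovered from the slope of a straight-line fit of velocity against ln(f), as in the sketch below; the specific dispersion relation and the numbers are assumptions for illustration, not the thesis' measurements.

```python
import numpy as np

def q_from_dispersion(freqs, velocities, f_ref=None):
    """Estimate a constant Q from the slope of phase velocity vs. ln(frequency),
    assuming v(f) = v_ref * (1 + ln(f / f_ref) / (pi * Q))."""
    if f_ref is None:
        f_ref = np.median(freqs)
    slope, v_ref = np.polyfit(np.log(freqs / f_ref), velocities, 1)
    return v_ref / (np.pi * slope)

# hypothetical dispersion measurements over the exploration band (10-100 Hz)
freqs = np.linspace(10.0, 100.0, 10)
q_true, v_ref = 40.0, 3000.0
velocities = v_ref * (1 + np.log(freqs / 30.0) / (np.pi * q_true))
print(f"recovered Q ~ {q_from_dispersion(freqs, velocities, f_ref=30.0):.1f}")
```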
Diagnostic value of DIAGNOdent in detecting caries under composite restorations of primary molars.
Sichani, Ava Vali; Javadinejad, Shahrzad; Ghafari, Roshanak
2016-01-01
Direct observation cannot detect caries under restorations; therefore, the aim of this study was to compare the accuracy of radiographs and DIAGNOdent in detecting caries under restorations in primary teeth using histologic evaluation. A total of 74 previously extracted primary molars (37 with occlusal caries and 37 without caries) were used. Class 1 cavity preparations were made on each tooth by a single clinician and then the preparations were filled with composite resin. The accuracy of radiographs and DIAGNOdent in detecting caries was compared using histologic evaluation. The data were analyzed with SPSS version 21 using Chi-square and McNemar tests and receiver operating characteristic curve analysis. Significance was set at 0.05. The sensitivity and specificity for DIAGNOdent were 70.97 and 83.72, respectively. Few false negative results were observed, the positive predictive value was high (+PV = 75.9), and the area under the curve was more than 0.70, making DIAGNOdent a good method for detecting caries (P = 0.0001). Two observers evaluated the radiographs and both observers had low sensitivity (first observer: 48.39; second observer: 51.61) and high specificity (both observers: 79.07). The +PV was lower than for DIAGNOdent and the area under the curve for both observers was less than 0.70. However, the difference between the two methods was not significant. DIAGNOdent showed a greater accuracy in detecting secondary caries under primary molar restorations, compared to radiographs. Although DIAGNOdent is an effective method for detecting caries under composite restorations, it is best used as an adjunctive method alongside other detection procedures.
Mehl, Matthias R; Robbins, Megan L; Deters, Fenne Große
2012-05-01
This article introduces a novel observational ambulatory monitoring method called the electronically activated recorder (EAR). The EAR is a digital audio recorder that runs on a handheld computer and periodically and unobtrusively records snippets of ambient sounds from participants' momentary environments. In tracking moment-to-moment ambient sounds, it yields acoustic logs of people's days as they naturally unfold. In sampling only a fraction of the time, it protects participants' privacy and makes large observational studies feasible. As a naturalistic observation method, it provides an observer's account of daily life and is optimized for the objective assessment of audible aspects of social environments, behaviors, and interactions (e.g., habitual preferences for social settings, idiosyncratic interaction styles, subtle emotional expressions). This article discusses the EAR method conceptually and methodologically, reviews prior research with it, and identifies three concrete ways in which it can enrich psychosomatic research. Specifically, it can (a) calibrate psychosocial effects on health against frequencies of real-world behavior; (b) provide ecological observational measures of health-related social processes that are independent of self-report; and (c) help with the assessment of subtle and habitual social behaviors that evade self-report but have important health implications. An important avenue for future research lies in merging traditional self-report-based ambulatory monitoring methods with observational approaches such as the EAR to allow for the simultaneous yet methodologically independent assessment of inner, experiential aspects (e.g., loneliness) and outer, observable aspects (e.g., social isolation) of real-world social processes to reveal their unique effects on health.
Sheila Conant; Mark S. Collins; C. John Ralph
1981-01-01
During a 5-week study of the Nihoa Millerbird and Nihoa Finch, we censused birds using these techniques: two line transect methods, a variable-distance circular plot method, and spot-mapping of territories (millerbirds only). Densities derived from these methods varied greatly. Due to differences in behavior, it appeared that the two species reacted differently to the...
ERIC Educational Resources Information Center
Grant, Mary C.; Zhang, Lilly; Damiano, Michele
2009-01-01
This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, S. Asad, E-mail: asadsyyed@gmail.com; Naseem, Swaleha; Khan, Wasi
2015-06-24
Barium doped lanthanum ferrite (LaFeO3) nanoparticles (NPs) were prepared by the gel combustion method and calcined at 700°C. Microstructural studies were carried out by XRD and SEM techniques. The results of structural characterization show the formation of all samples in a single phase without any impurity. Optical properties were studied by the UV-visible technique. The energy band gap was calculated to be 3.01 eV. Dielectric properties were characterized with an LCR meter, and appreciable changes were observed. The observed dielectric behavior can be explained on the basis of Koop's theory, based on the Maxwell-Wagner two-layer model, for the studied nanoparticles.
Development of artificial meteor for observation of upper atmosphere
NASA Astrophysics Data System (ADS)
Watanabe, Masaki; Sahara, Hironori; Abe, Shinsuke; Watanabe, Takeo; Nojiri, Yuta; Okajima, Lena
2016-04-01
This study proposes a method for the observation of the upper atmosphere using an artificial meteor injected by a mass driver installed on a microsatellite. The mass driver injects a pill at a velocity of 200 m/s and deorbits it into the atmosphere. The emission of the pill can then be observed from the ground at the necessary time and location. This approach could contribute to a better understanding of the global environment as well as different aspects of astronomy and planetary science. To realize the proposed method, the required size and emission of the pill have to be determined. Therefore, we conducted flow-field simulations, spectroscopic estimations, and an experiment on an artificial meteor in the arc heater wind tunnel at the Institute of Space and Astronautical Science in the Japan Aerospace Exploration Agency (ISAS/JAXA). From the results, we confirmed that the light emission could be observed as a shooting star by the naked eye and thus verified the feasibility of the method.
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1983-01-01
Laser systems deployed in satellite tracking were upgraded to accuracy levels where biases from systematic unmodelled effects constitute the basic factor that prohibits extraction of the full amount of information contained in the observations. Taking into consideration that the quality of the instrument advances at a faster pace compared to the understanding and modeling of the physical processes involved, one can foresee that in the near future when all lasers are replaced with third generation ones the limiting factor for the estimated accuracies will be the aforementioned biases. Therefore, for the reduction of the observations, methods should be deployed in such a way that the effect of the biases will be kept well below the noise level. Such a method was proposed and studied. This method consists of using the observed part of the satellite pass and converting the laser ranges into range differences in hopes that they will be less affected by biases in the orbital models, the reference system, and the observations themselves.
Comparison of Peak-Flow Estimation Methods for Small Drainage Basins in Maine
Hodgkins, Glenn A.; Hebson, Charles; Lombard, Pamela J.; Mann, Alexander
2007-01-01
Understanding the accuracy of commonly used methods for estimating peak streamflows is important because the designs of bridges, culverts, and other river structures are based on these flows. Different methods for estimating peak streamflows were analyzed for small drainage basins in Maine. For the smallest basins, with drainage areas of 0.2 to 1.0 square mile, nine peak streamflows from actual rainfall events at four crest-stage gaging stations were modeled by the Rational Method and the Natural Resource Conservation Service TR-20 method and compared to observed peak flows. The Rational Method had a root mean square error (RMSE) of -69.7 to 230 percent (which means that approximately two thirds of the modeled flows were within -69.7 to 230 percent of the observed flows). The TR-20 method had an RMSE of -98.0 to 5,010 percent. Both the Rational Method and TR-20 underestimated the observed flows in most cases. For small basins, with drainage areas of 1.0 to 10 square miles, modeled peak flows were compared to observed statistical peak flows with return periods of 2, 50, and 100 years for 17 streams in Maine and adjoining parts of New Hampshire. Peak flows were modeled by the Rational Method, the Natural Resources Conservation Service TR-20 method, U.S. Geological Survey regression equations, and the Probabilistic Rational Method. The regression equations were the most accurate method of computing peak flows in Maine for streams with drainage areas of 1.0 to 10 square miles with an RMSE of -34.3 to 52.2 percent for 50-year peak flows. The Probabilistic Rational Method was the next most accurate method (-38.5 to 62.6 percent). The Rational Method (-56.1 to 128 percent) and particularly the TR-20 method (-76.4 to 323 percent) had much larger errors. Both the TR-20 and regression methods had similar numbers of underpredictions and overpredictions. The Rational Method overpredicted most peak flows and the Probabilistic Rational Method tended to overpredict peak flows from the smaller (less than 5 square miles) drainage basins and underpredict peak flows from larger drainage basins. The results of this study are consistent with the most comprehensive analysis of observed and modeled peak streamflows in the United States, which analyzed statistical peak flows from 70 drainage basins in the Midwest and the Northwest.
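The asymmetric error ranges quoted above (for example, -69.7 to 230 percent) are consistent with an RMSE computed on log-transformed flows and then converted back to percent bounds; the sketch below shows that conversion under this assumption.

```python
import numpy as np

def percent_error_bounds(observed, modeled):
    """RMSE of log10(modeled/observed), expressed as an asymmetric percent range.
    Roughly two-thirds of modeled flows fall within these bounds of the observed flows."""
    rmse_log = np.sqrt(np.mean(np.log10(modeled / observed) ** 2))
    lower = (10.0 ** (-rmse_log) - 1.0) * 100.0
    upper = (10.0 ** rmse_log - 1.0) * 100.0
    return lower, upper

# check of the conversion: a log-space RMSE of ~0.519 reproduces the -69.7 / +230 percent range
print(percent_error_bounds(np.array([1.0]), np.array([10 ** 0.519])))
```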
Matching CCD images to a stellar catalog using locality-sensitive hashing
NASA Astrophysics Data System (ADS)
Liu, Bo; Yu, Jia-Zong; Peng, Qing-Yu
2018-02-01
Using a subset of observed stars in a CCD image to find their corresponding stars in a stellar catalog is an important problem in astronomical research. Subgraph isomorphism-based algorithms are the most widely used methods for star catalog matching. When more subgraph features are provided, the CCD images are recognized more reliably. However, when the navigation feature database is large, the method requires more time to match the observed model. To solve this problem, this study improves on subgraph isomorphism matching algorithms. We present an algorithm based on a locality-sensitive hashing technique, which allocates the quadrilateral models in the navigation feature database to different hash buckets and reduces the search range to the bucket in which the observed quadrilateral model is located. Experimental results indicate the effectiveness of our method.
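A minimal sketch of the bucketing idea: each quadrilateral of catalog stars is reduced to a geometric feature vector (here a simple scale- and rotation-invariant code), hashed into a bucket by coarse quantization, and an observed quadrilateral is compared only against catalog quadrilaterals in its own bucket. The feature, the grid-quantization hash, and the toy catalog are illustrative simplifications, not the paper's exact models.

```python
import numpy as np
from collections import defaultdict

def quad_feature(points):
    """Scale/rotation-invariant feature for 4 stars: sorted pairwise distances
    normalized by the largest one (a simplified stand-in for the paper's quad model)."""
    pts = np.asarray(points, dtype=float)
    d = np.sort([np.linalg.norm(pts[i] - pts[j]) for i in range(4) for j in range(i + 1, 4)])
    return d[:-1] / d[-1]   # 5 ratios in (0, 1]

def hash_bucket(feature, cell=0.05):
    """Locality-sensitive key: quantize the feature onto a coarse grid so that
    nearby features land in the same bucket."""
    return tuple(np.floor(np.asarray(feature) / cell).astype(int))

# index catalog quads into buckets
catalog_quads = {"quad_A": [(0, 0), (1, 0), (0.2, 1.1), (0.9, 0.8)],
                 "quad_B": [(0, 0), (2, 0.1), (1.0, 1.5), (0.3, 0.7)]}
index = defaultdict(list)
for name, pts in catalog_quads.items():
    index[hash_bucket(quad_feature(pts))].append(name)

# an observed quad (rotated and scaled copy of quad_A) searches only its own bucket
theta = 0.4
Rmat = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
observed = (np.array(catalog_quads["quad_A"]) @ Rmat.T) * 3.0
print("candidates:", index[hash_bucket(quad_feature(observed))])
```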
NASA Astrophysics Data System (ADS)
Sakamoto, Takashi
2015-01-01
This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. Complicated computation and image processing are not required by the proposed method, and the method can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of few p/d-safe colors. Thus, the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette could be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study describes the results of the proposed color reduction in photographs that include typical p/d-confusion colors, which can be replaced. After the reduction process is completed, color-defective observers can distinguish these confusion colors.
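A minimal sketch of palette-based color reduction: every pixel is replaced by the nearest color from a p/d-safe palette. The three palette entries are placeholders, not Ito et al.'s 20 colors, and nearest-neighbor matching in RGB is an illustrative simplification (a perceptual space such as CIELAB would be a natural refinement).

```python
import numpy as np

# placeholder p/d-safe palette (RGB, 0-255); the cited palette has 20 colors
SAFE_PALETTE = np.array([[0, 114, 178],     # blue
                         [230, 159, 0],     # orange
                         [240, 240, 240]])  # near-white

def reduce_to_palette(image, palette=SAFE_PALETTE):
    """Map every pixel of an HxWx3 uint8 image to its nearest palette color."""
    pixels = image.reshape(-1, 3).astype(float)
    # squared distance from each pixel to each palette entry
    d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return palette[nearest].reshape(image.shape).astype(np.uint8)

# toy 2x2 image containing a red/green pair that protan/deutan observers may confuse
img = np.array([[[200, 30, 30], [30, 160, 30]],
                [[250, 250, 250], [10, 90, 200]]], dtype=np.uint8)
print(reduce_to_palette(img))
```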
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
Focal mechanism determination for induced seismicity using the neighbourhood algorithm
NASA Astrophysics Data System (ADS)
Tan, Yuyang; Zhang, Haijiang; Li, Junlun; Yin, Chen; Wu, Furong
2018-06-01
Induced seismicity is widely detected during hydraulic fracture stimulation. To better understand the fracturing process, a thorough knowledge of the source mechanism is required. In this study, we develop a new method to determine the focal mechanism for induced seismicity. Three misfit functions are used in our method to measure the differences between observed and modeled data from different aspects, including the waveform, P wave polarity and S/P amplitude ratio. We minimize these misfit functions simultaneously using the neighbourhood algorithm. Through synthetic data tests, we show the ability of our method to yield reliable focal mechanism solutions and study the effect of velocity inaccuracy and location error on the solutions. To mitigate the impact of the uncertainties, we develop a joint inversion method to find the optimal source depth and focal mechanism simultaneously. Using the proposed method, we determine the focal mechanisms of 40 stimulation induced seismic events in an oil/gas field in Oman. By investigating the results, we find that the reactivation of pre-existing faults is the main cause of the induced seismicity in the monitored area. Other observations obtained from the focal mechanism solutions are also consistent with earlier studies in the same area.
Suicide Risk Assessment and Prevention: A Systematic Review Focusing on Veterans.
Nelson, Heidi D; Denneson, Lauren M; Low, Allison R; Bauer, Brian W; O'Neil, Maya; Kansagara, Devan; Teo, Alan R
2017-10-01
Suicide rates in veteran and military populations in the United States are high. This article reviews studies of the accuracy of methods to identify individuals at increased risk of suicide and the effectiveness and adverse effects of health care interventions relevant to U.S. veteran and military populations in reducing suicide and suicide attempts. Trials, observational studies, and systematic reviews relevant to U.S. veterans and military personnel were identified in searches of MEDLINE, PsycINFO, SocINDEX, and Cochrane databases (January 1, 2008, to September 11, 2015), on Web sites, and in reference lists. Investigators extracted and confirmed data and dual-rated risk of bias for included studies. Nineteen studies evaluated accuracy of risk assessment methods, including models using retrospective electronic records data and clinician- or patient-rated instruments. Most methods demonstrated sensitivity ≥80% or area-under-the-curve values ≥.70 in single studies, including two studies based on electronic records of veterans and military personnel, but specificity varied. Suicide rates were reduced in six of eight observational studies of population-level interventions. Only two of ten trials of individual-level psychotherapy reported statistically significant differences between treatment and usual care. Risk assessment methods have been shown to be sensitive predictors of suicide and suicide attempts, but the frequency of false positives limits their clinical utility. Research to refine these methods and examine clinical applications is needed. Studies of suicide prevention interventions are inconclusive; trials of population-level interventions and promising therapies are required to support their clinical use.
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the widely used calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional ordinary least squares method is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
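A minimal sketch of the L2-calibration idea as described: estimate the physical response nonparametrically from the observations, then choose the calibration parameter that minimizes the L2 distance between the computer model output and that estimate. The simulator, the cubic smoother, and the data below are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def computer_model(x, theta):
    """Imperfect simulator: it misses the quadratic term of the true process."""
    return theta * x

def l2_calibrate(x_obs, y_obs):
    # Step 1: nonparametric estimate of the physical process (a cubic smoother here)
    yhat = np.poly1d(np.polyfit(x_obs, y_obs, deg=3))
    # Step 2: minimize the (discretized) L2 distance between simulator and smoothed response
    grid = np.linspace(x_obs.min(), x_obs.max(), 200)
    loss = lambda theta: np.mean((computer_model(grid, theta) - yhat(grid)) ** 2)
    return minimize_scalar(loss, bounds=(0.0, 10.0), method="bounded").x

rng = np.random.default_rng(1)
x_obs = np.linspace(0, 1, 50)
y_obs = 2.0 * x_obs + 0.5 * x_obs ** 2 + rng.normal(scale=0.05, size=50)  # true physical process
print(f"L2-calibrated theta ~ {l2_calibrate(x_obs, y_obs):.3f}")
```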
NASA Astrophysics Data System (ADS)
Conboy, Kieran; Lang, Michael
This chapter outlines the alternative perspectives of "rationalism" and "improvisation" within information systems development and describes the major shortcomings of each. It then discusses how these shortcomings manifested themselves within an e-government case study where a "structured" requirements management method was employed. Although this method was very prescriptive and firmly rooted in the "rational" paradigm, it was observed that users often resorted to improvised behaviour, such as privately making decisions on how certain aspects of the method should or should not be implemented.
Observation of asphalt binder microstructure with ESEM.
Mikhailenko, P; Kadhim, H; Baaj, H; Tighe, S
2017-09-01
The observation of asphalt binder with the environmental scanning electron microscope (ESEM) has shown the potential to reveal asphalt binder microstructure and its evolution with binder aging. A procedure for the induction and identification of the microstructure in asphalt binder was established in this study and included sample preparation and observation parameters. A suitable heat-sampling preparation method for the asphalt binder was determined for the test, and several stainless steel and Teflon sample moulds were developed; stainless steel was found to be the preferable material. The magnification and ESEM settings conducive to observing the 3D microstructure were determined through a number of observations to be 1000×, although other magnifications could be considered. Both a straight-run binder (PG 58-28) and an air-blown oxidised binder were analysed, and their structures were compared in terms of relative size, abundance and other characteristics, showing a clear evolution in the fibril microstructure. The microstructure took longer to appear for the oxidised binder. It was confirmed that the fibril microstructure corresponded to actual characteristics in the asphalt binder. Additionally, a 'bee' micelle structure was found as a transitional structure in ESEM observation. The test methods in this study will be used for more comprehensive analysis of asphalt binder microstructure. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Wang, Ke; Shi, Min; Cheng, Hong
2017-01-01
The combination of microinjection and fluorescence in situ hybridization (FISH) is a useful method for mRNA export studies that can overcome the problems of traditional transfection in cells. Here, we describe the microinjection and FISH assay as applied to the investigation of mRNA export. With this method we can estimate mRNA export kinetics, examine mRNA export in cells with low transfection efficiencies, and observe the nuclear export of aberrant RNAs.
Briggs, Marc A.; Rumbold, Penny L. S.; Cockburn, Emma; Russell, Mark; Stevenson, Emma J.
2015-01-01
Collecting accurate and reliable nutritional data from adolescent populations is challenging, with current methods providing significant under-reporting. Therefore, the aim of the study was to determine the accuracy of a combined dietary data collection method (self-reported weighed food diary, supplemented with a 24-h recall) when compared to researcher observed energy intake in male adolescent soccer players. Twelve Academy players from an English Football League club participated in the study. Players attended a 12 h period in the laboratory (08:00 h–20:00 h), during which food and drink items were available and were consumed ad libitum. Food was also provided to consume at home between 20:00 h and 08:00 h the following morning under free-living conditions. To calculate the participant reported energy intake, food and drink items were weighed and recorded in a food diary by each participant, which was supplemented with information provided through a 24-h recall interview the following morning. Linear regression, limits of agreement (LOA) and typical error (coefficient of variation; CV) were used to quantify agreement between observer and participant reported 24-h energy intake. Difference between methods was assessed using a paired samples t-test. Participants systematically under-reported energy intake in comparison to that observed (p < 0.01) but the magnitude of this bias was small and consistent (mean bias = −88 kcal·day−1, 95% CI for bias = −146 to −29 kcal·day−1). For random error, the 95% LOA between methods ranged between −1.11 to 0.37 MJ·day−1 (−256 to 88 kcal·day−1). The standard error of the estimate was low, with a typical error between measurements of 3.1%. These data suggest that the combined dietary data collection method could be used interchangeably with the gold standard observed food intake technique in the population studied providing that appropriate adjustment is made for the systematic under-reporting common to such methods. PMID:26193315
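A minimal sketch of the agreement statistics named in the abstract (mean bias with a paired t-test, 95% limits of agreement, and typical error expressed as a CV), applied to hypothetical paired intakes; the division by sqrt(2) for typical error is one common convention, and conventions vary.

```python
import numpy as np
from scipy import stats

def agreement_stats(observed, reported):
    """Mean bias, 95% limits of agreement, and typical error (CV%) for paired measures."""
    diff = reported - observed
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    # typical error = SD of paired differences / sqrt(2), expressed relative to the grand mean
    cv = 100 * (sd / np.sqrt(2)) / np.concatenate([observed, reported]).mean()
    t, p = stats.ttest_rel(reported, observed)
    return bias, loa, cv, p

rng = np.random.default_rng(2)
observed = rng.normal(2800, 300, size=12)              # kcal/day, hypothetical observed intake
reported = observed - 88 + rng.normal(0, 80, size=12)  # slight systematic under-reporting
bias, loa, cv, p = agreement_stats(observed, reported)
print(f"bias={bias:.0f} kcal, LOA=({loa[0]:.0f}, {loa[1]:.0f}), CV={cv:.1f}%, p={p:.3f}")
```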
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
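A minimal sketch of full-frame DCT compression by coefficient truncation, to show how a target compression ratio maps to the fraction of retained coefficients. Real transform coders quantize and entropy-code coefficients rather than simply zeroing them, and the toy image below stands in for the 2K x 2K chest regions used in the study.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_fullframe_dct(image, ratio):
    """Keep only the largest 1/ratio fraction of DCT coefficients (by magnitude)
    and reconstruct; a crude stand-in for transform coding at a given ratio."""
    coeffs = dctn(image, norm="ortho")
    k = max(1, coeffs.size // ratio)
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(3)
image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish toy image
for ratio in (6, 10, 60):
    err = image - compress_fullframe_dct(image, ratio)
    rms = np.sqrt(np.mean(err ** 2)) / image.std()
    print(f"{ratio}:1 -> relative RMS error {rms:.3%}")
```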
NASA Technical Reports Server (NTRS)
Smith, R. L.; Huang, C.
1986-01-01
A recent mathematical technique for solving systems of equations is applied in a very general way to the orbit determination problem. The study of this technique, the homotopy continuation method, was motivated by the possible need to perform early orbit determination with the Tracking and Data Relay Satellite System (TDRSS), using range and Doppler tracking alone. Basically, a set of six tracking observations is continuously transformed from a set with known solution to the given set of observations with unknown solutions, and the corresponding orbit state vector is followed from the a priori estimate to the solutions. A numerical algorithm for following the state vector is developed and described in detail. Numerical examples using both real and simulated TDRSS tracking are given. A prototype early orbit determination algorithm for possible use in TDRSS orbit operations was extensively tested, and the results are described. Preliminary studies of two extensions of the method are discussed: generalization to a least-squares formulation and generalization to an exhaustive global method.
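A minimal sketch of the continuation idea: define H(x, t) = (1 - t) F0(x) + t F(x), where F0 has a known root at the a priori estimate, and track the root as t moves from 0 to 1 by solving a sequence of nearby problems. A small algebraic system stands in for the range/Doppler orbit-determination equations, and the linear start system is an illustrative choice.

```python
import numpy as np
from scipy.optimize import fsolve

def follow_homotopy(F, F0, x0, steps=20):
    """Track the solution of H(x, t) = (1 - t)*F0(x) + t*F(x) = 0 from t=0 to t=1,
    using the previous solution as the starting guess at each step."""
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        H = lambda z, t=t: (1 - t) * np.asarray(F0(z)) + t * np.asarray(F(z))
        x = fsolve(H, x)
    return x

# target system with an unknown root; start system with a trivial root at the a priori guess
F = lambda z: [z[0] ** 2 + z[1] ** 2 - 4.0, z[0] * z[1] - 1.0]
x0 = np.array([1.0, 1.0])
F0 = lambda z: [z[0] - x0[0], z[1] - x0[1]]
print("tracked solution:", follow_homotopy(F, F0, x0))
```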
ERIC Educational Resources Information Center
Putnam, Susan K.; Lopata, Christopher; Fox, Jeffery D.; Thomeer, Marcus L.; Rodgers, Jonathan D.; Volker, Martin A.; Lee, Gloria K.; Neilans, Erik G.; Werth, Jilynn
2012-01-01
This study compared cortisol concentrations yielded using three saliva collection methods (passive drool, salivette, and sorbette) in both in vitro and in vivo conditions, as well as method acceptability for a sample of children (n = 39) with High Functioning Autism Spectrum Disorders. No cortisol concentration differences were observed between…
ERIC Educational Resources Information Center
Stuebing, Karla K.; Fletcher, Jack M.; Branum-Martin, Lee; Francis, David J.
2012-01-01
This study used simulation techniques to evaluate the technical adequacy of three methods for the identification of specific learning disabilities via patterns of strengths and weaknesses in cognitive processing. Latent and observed data were generated and the decision-making process of each method was applied to assess concordance in…
NASA Astrophysics Data System (ADS)
Barker, J. R.; Pasternack, G. B.; Bratovich, P.; Massa, D.; Reedy, G.; Johnson, T.
2010-12-01
Two-dimensional (depth-averaged) hydrodynamic models have existed for decades and are used to study a variety of hydrogeomorphic processes as well as to design river rehabilitation projects. Rapid computer and coding advances are revolutionizing the size and detail of 2D models. Meanwhile, advances in topo mapping and environmental informatics are providing the data inputs to drive large, detailed simulations. Million-element computational meshes are in hand. With simulations of this size and detail, the primary challenge has shifted to finding rapid and inexpensive means for testing model predictions against observations. Standard methods for collecting velocity data include boat-mounted ADCP and point-based sensors on boats or wading rods. These methods are labor-intensive and often limited to a narrow flow range. Also, they generate small datasets at a few cross-sections, which is inadequate to characterize the statistical structure of the relation between predictions and observations. Drawing on the long-standing oceanographic method of using drogues to track water currents, previous studies have demonstrated the potential of small dGPS units to obtain surface velocity in rivers. However, dGPS is too inaccurate to test 2D models. Also, there is financial risk in losing drogues in rough currents. In this study, an RTK GPS unit was mounted onto a manned whitewater kayak. The boater positioned himself into the current and used floating debris to maintain a speed and heading consistent with the ambient surface flow field. RTK GPS measurements were taken every 5 sec. From these positions, a 2D velocity vector was obtained. The method was tested over ~20 km of the lower Yuba River in California in flows ranging from 500-5000 cfs, yielding 5816 observations. To compare velocity magnitude against the 2D model-predicted depth-averaged value, kayak-based surface values were scaled down by an optimized constant (0.72), which had no negative effect on regression analysis. The r2 value for speed was 0.78 by this method, compared with 0.57 based on 199 points from traditional measurements. The r2 value for velocity direction was 0.77. Although it is not ideal to rely on observed surface velocity to evaluate depth-averaged velocity predictions, all available velocity-measurement methods have a suite of assumptions and complications. Using this method, the availability of 10-100x more data was so beneficial that the outcome was among the highest model performance outcomes reported in the literature.
Spectral feature measurements and analyses of the East Lake
NASA Astrophysics Data System (ADS)
Fang, Shenghui; Zhou, Yuan; Zhu, Wu
2005-10-01
Investigating methods to obtain and analyze the spectral features of water bodies is one of the foundations of water color remote sensing. This paper concerns the above-water method for the spectral measurements of inland water. A series of experiments was conducted on the East Lake with the EPP2000CCD radiometer, and the observation geometry and the method for eliminating noise from the water signals are discussed. The above-water spectral measurement method was studied from the point of view of error sources. Based on experiments on water depth and on the observing direction relative to the sun and the surface, it is suggested that the radiances of whitecaps, surface-reflected sun glint, and skylight, which do not carry the spectral features of the water, be removed from the lake-surface signal through a specialized observation geometry and data processing. Finally, a suite of methods for measuring and analyzing the above-water spectral features of the East Lake is presented.
Tian, Haomei; Shen, Jing; Shi, Jia; Liu, Mi; Wang, Chao; Liu, Jinzhi; Chen, Chutao
2016-11-12
To explore the impact of the collaborative teaching method on teaching achievement in Acupuncture and Moxibustion. Six classes of 2012-grade Chinese medicine students at Hunan University of CM were randomized into an observation group and a control group, with 3 classes in each. In the observation group, the collaborative teaching method was adopted, in which different teaching modes were used according to the characteristics of each chapter and students' initiative in studying was emphasized. In the control group, the traditional teaching method was used, in which classroom teaching was primary and practice secondary in the section on techniques of acupuncture and moxibustion. The results of each part of the curriculum and the total results were compared between the two groups over the whole semester. Compared with the control group, the observation group showed clear improvements in total curriculum achievement and in case analysis combined with the total result of the theory examination (both P < 0.01). The collaborative teaching method improves the comprehensive ability of students and provides a new approach to the teaching of Acupuncture and Moxibustion.
NASA Astrophysics Data System (ADS)
Chang, Q.; Jiao, W.
2017-12-01
Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. So far, a variety of remote sensing data sources and phenology extraction methods have been developed to study the spatial-temporal dynamics of vegetation phenology. However, the differences between phenology results caused by the various satellite datasets and extraction methods are not clear, and the reliability of the different remote sensing phenology results has not been verified against ground observation data. Using three of the most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere from two long-time-series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS calculated from NEE data (SOS_NEE) monitored at 48 eddy flux tower sites in the global flux database to validate the reliability of the six phenology results calculated from the remote sensing datasets. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method was correlated with the ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method were both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method had a stronger correlation with SOS_NEE; the same held for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset seemed to be the most reliable result when validated against SOS_NEE. These results can serve as a reference for data and method selection in future phenology studies.
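A minimal sketch of the midpoint idea (and, by changing the fraction, a dynamic-threshold variant): smooth the annual NDVI curve, compute the seasonal amplitude, and take SOS as the first day on the rising limb where NDVI exceeds a given fraction of that amplitude. The smoothing, the 50% fraction, and the composite series are illustrative choices, not the exact implementations compared in the study.

```python
import numpy as np

def sos_threshold(doy, ndvi, fraction=0.5):
    """Start of season: first day NDVI crosses min + fraction*(max - min) on the rising limb.
    fraction=0.5 corresponds to the midpoint method; other fractions give threshold variants."""
    ndvi_s = np.copy(np.asarray(ndvi, dtype=float))
    ndvi_s[1:-1] = (ndvi[:-2] + ndvi[1:-1] + ndvi[2:]) / 3.0      # light 3-point smoothing
    level = ndvi_s.min() + fraction * (ndvi_s.max() - ndvi_s.min())
    peak = ndvi_s.argmax()
    rising = np.where(ndvi_s[:peak + 1] >= level)[0]
    if rising.size == 0:
        return None
    i = rising[0]
    if i == 0:
        return float(doy[0])
    # linear interpolation between the bracketing composites
    f = (level - ndvi_s[i - 1]) / (ndvi_s[i] - ndvi_s[i - 1])
    return float(doy[i - 1] + f * (doy[i] - doy[i - 1]))

# hypothetical 15-day composites for one pixel-year
doy = np.arange(1, 366, 15)
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)
print(f"SOS (midpoint method) ~ day {sos_threshold(doy, ndvi):.0f}")
```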
Overview of clinical research design.
Hartung, Daniel M; Touchette, Daniel
2009-02-15
Basic concepts and terminology of clinical research design are presented for new clinical investigators. Clinical research, research involving human subjects, can be described as either observational or experimental. The findings of all clinical research can be threatened by issues of bias and confounding. Biases are systematic errors in how study subjects are selected or measured, which result in false inferences. Confounding is a distortion in findings that is attributable to mixing variable effects. Uncontrolled observation research is generally more prone to bias and confounding than experimental research. Observational research includes designs such as the cohort study, case-control study, and cross-sectional study, while experimental research typically involves a randomized controlled trial (RCT). The cohort study, which includes the RCT, defines subject allocation on the basis of exposure interest (e.g., drug, disease-management program) and follows the patients to assess the outcomes. The case-control study uses the primary outcome of interest (e.g., adverse event) to define subject allocation, and different exposures are assessed in a retrospective manner. Cross-sectional research evaluates both exposure and outcome concurrently. Each of these design methods possesses different strengths and weaknesses in answering research questions, as well as underlying many study subtypes. While experimental research is the strongest method for establishing causality, it can be difficult to accomplish under many scenarios. Observational clinical research offers many design alternatives that may be appropriate if planned and executed carefully.
Live CLEM imaging to analyze nuclear structures at high resolution.
Haraguchi, Tokuko; Osakada, Hiroko; Koujin, Takako
2015-01-01
Fluorescence microscopy (FM) and electron microscopy (EM) are powerful tools for observing molecular components in cells. FM can provide temporal information about cellular proteins and structures in living cells. EM provides nanometer resolution images of cellular structures in fixed cells. We have combined FM and EM to develop a new method of correlative light and electron microscopy (CLEM), called "Live CLEM." In this method, the dynamic behavior of specific molecules of interest is first observed in living cells using fluorescence microscopy (FM) and then cellular structures in the same cell are observed using electron microscopy (EM). Following image acquisition, FM and EM images are compared to enable the fluorescent images to be correlated with the high-resolution images of cellular structures obtained using EM. As this method enables analysis of dynamic events involving specific molecules of interest in the context of specific cellular structures at high resolution, it is useful for the study of nuclear structures including nuclear bodies. Here we describe Live CLEM that can be applied to the study of nuclear structures in mammalian cells.
The reliability of clinical decisions based on the cervical vertebrae maturation staging method.
Sohrabi, Aydin; Babay Ahari, Sahar; Moslemzadeh, Hossein; Rafighi, Ali; Aghazadeh, Zahra
2016-02-01
Of the various techniques used to determine the optimum timing for growth modification treatments, the cervical vertebrae maturation method has great advantages, including validity and no need for extra X-ray exposure. Recently, the reproducibility of this method has been questioned. The aim of this study was to investigate the cause of poor reproducibility of this method and to assess the reproducibility of the clinical decisions made based on it. Seventy lateral cephalograms of Iranian patients aged 9‒15 years were observed twice by five experienced orthodontists. In addition to determining the developmental stage, each single parameter involved in this method was assessed in terms of inter- and intra-observer reproducibility. In order to evaluate the reproducibility of clinical decisions based on this method, cervical vertebrae maturation staging (CVMS) I and II were considered as phase 1 and CVMS IV and V were considered as phase 3. By considering the clinical approach of the CVMS method, inter-observer reproducibility of this method increased from 0.48 to 0.61 (moderate to substantial) and intra-observer reproducibility enhanced from 0.72 to 0.74. 1. Complete visualization of the first four cervical vertebrae was an inclusion criterion, which also limits the clinical application of CVMS method. 2. These results can be generalized when determining growth modification treatments solely for Class II patients. Difficulty in determining the morphology of C3 and C4 leads to poor reproducibility of the CVMS method. Despite this, it has acceptable reproducibility in determining the timing of functional treatment for Class II patients. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
2012-01-01
Background Documentation of posture measurement costs is rare and cost models that do exist are generally naïve. This paper provides a comprehensive cost model for biomechanical exposure assessment in occupational studies, documents the monetary costs of three exposure assessment methods for different stakeholders in data collection, and uses simulations to evaluate the relative importance of cost components. Methods Trunk and shoulder posture variables were assessed for 27 aircraft baggage handlers for 3 full shifts each using three methods typical to ergonomic studies: self-report via questionnaire, observation via video film, and full-shift inclinometer registration. The cost model accounted for expenses related to meetings to plan the study, administration, recruitment, equipment, training of data collectors, travel, and onsite data collection. Sensitivity analyses were conducted using simulated study parameters and cost components to investigate the impact on total study cost. Results Inclinometry was the most expensive method (with a total study cost of € 66,657), followed by observation (€ 55,369) and then self report (€ 36,865). The majority of costs (90%) were borne by researchers. Study design parameters such as sample size, measurement scheduling and spacing, concurrent measurements, location and travel, and equipment acquisition were shown to have wide-ranging impacts on costs. Conclusions This study provided a general cost modeling approach that can facilitate decision making and planning of data collection in future studies, as well as investigation into cost efficiency and cost efficient study design. Empirical cost data from a large field study demonstrated the usefulness of the proposed models. PMID:22738341
Causal inference from observational data.
Listl, Stefan; Jürges, Hendrik; Watt, Richard G
2016-10-01
Randomized controlled trials have long been considered the 'gold standard' for causal inference in clinical research. In the absence of randomized experiments, identification of reliable intervention points to improve oral health is often perceived as a challenge. But other fields of science, such as social science, have always been challenged by ethical constraints to conducting randomized controlled trials. Methods have been established to make causal inference using observational data, and these methods are becoming increasingly relevant in clinical medicine, health policy and public health research. This study provides an overview of state-of-the-art methods specifically designed for causal inference in observational data, including difference-in-differences (DiD) analyses, instrumental variables (IV), regression discontinuity designs (RDD) and fixed-effects panel data analysis. The described methods may be particularly useful in dental research, not least because of the increasing availability of routinely collected administrative data and electronic health records ('big data'). © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
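A minimal sketch of one of the designs named above, difference-in-differences, estimated as the group-by-period interaction in a linear model on simulated data; the variable names and the -2.0 "policy effect" are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({"treated": rng.integers(0, 2, n), "post": rng.integers(0, 2, n)})
# simulated outcome: a group difference, a common time trend, and a true effect of -2.0
df["y"] = (5.0 + 1.5 * df["treated"] + 0.8 * df["post"]
           - 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

# DiD estimate = coefficient on the treated x post interaction
model = smf.ols("y ~ treated * post", data=df).fit()
print("DiD estimate:", model.params["treated:post"], "+/-", model.bse["treated:post"])
```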
Measuring the effectiveness of patient-chosen reminder methods in a private orthodontic practice.
Wegrzyniak, Lauren M; Hedderly, Deborah; Chaudry, Kishore; Bollu, Prashanti
2018-05-01
To evaluate the effectiveness of patient-chosen appointment reminder methods (phone call, e-mail, or SMS text) in reducing no-show rates. This was a retrospective case study that determined the correlation between patient-chosen appointment reminder methods and no-show rates in a private orthodontic practice. This study was conducted in a single office location of a multioffice private orthodontic practice using data gathered in 2015. The subjects were patients who self-selected the appointment reminder method (phone call, e-mail, or SMS text). Patient appointment data were collected over a 6-month period. Patient attendance was analyzed with descriptive statistics to determine any significant differences among patient-chosen reminder methods. There was a total of 1193 appointments with an average no-show rate of 2.43% across the three reminder methods. No statistically significant differences ( P = .569) were observed in the no-show rates between the three methods: phone call (3.49%), e-mail (2.68%), and SMS text (1.90%). The electronic appointment reminder methods (SMS text and e-mail) had lower no-show rates compared with the phone call method, with SMS text having the lowest no-show rate of 1.90%. However, since no significant differences were observed between the three patient-chosen reminder methods, providers may want to allow patients to choose their reminder method to decrease no-shows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajian, Amir; Alvarez, Marcelo A.; Bond, J. Richard, E-mail: ahajian@cita.utoronto.ca, E-mail: malvarez@cita.utoronto.ca, E-mail: bond@cita.utoronto.ca
Making mock simulated catalogs is an important component of astrophysical data analysis. Selection criteria for observed astronomical objects are often too complicated to be derived from first principles. However, the existence of an observed group of objects is a well-suited problem for machine learning classification. In this paper we use one-class classifiers to learn the properties of an observed catalog of clusters of galaxies from ROSAT and to pick clusters from mock simulations that resemble the observed ROSAT catalog. We show how this method can be used to study the cross-correlations of thermal Sunyaev-Zel'dovich signals with number density maps of X-ray selected cluster catalogs. The method reduces the bias due to hand-tuning the selection function and is readily scalable to large catalogs with a high-dimensional space of astrophysical features.
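A minimal sketch of the selection-function idea using a one-class SVM: train on feature vectors of the observed clusters, then keep only the simulated clusters the classifier labels as inliers. The two features, the nu parameter, and the toy distributions are placeholders, not the ROSAT catalog properties used in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# observed catalog: flux-limited, so it preferentially contains bright/massive, nearby clusters
observed = np.column_stack([rng.lognormal(1.0, 0.3, 300),   # mass proxy (placeholder)
                            rng.uniform(0.05, 0.3, 300)])   # redshift (placeholder)

# mock simulation: the full halo population, with no selection applied
mock = np.column_stack([rng.lognormal(0.5, 0.6, 5000),
                        rng.uniform(0.05, 0.6, 5000)])

scaler = StandardScaler().fit(observed)
clf = OneClassSVM(nu=0.1, gamma="scale").fit(scaler.transform(observed))
selected = mock[clf.predict(scaler.transform(mock)) == 1]   # keep the inliers only
print(f"{len(selected)} of {len(mock)} mock clusters resemble the observed catalog")
```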
Error in geometric morphometric data collection: Combining data from multiple sources.
Robinson, Chris; Terhune, Claire E
2017-09-01
This study compares two- and three-dimensional morphometric data to determine the extent to which intra- and interobserver and intermethod error influence the outcomes of statistical analyses. Data were collected five times for each method and observer on 14 anthropoid crania using calipers, a MicroScribe, and 3D models created from NextEngine and microCT scans. ANOVA models were used to examine variance in the linear data at the level of genus, species, specimen, observer, method, and trial. Three-dimensional data were analyzed using geometric morphometric methods; principal components analysis was employed to examine how trials of all specimens were distributed in morphospace and Procrustes distances among trials were calculated and used to generate UPGMA trees to explore whether all trials of the same individual grouped together regardless of observer or method. Most variance in the linear data was at the genus level, with greater variance at the observer than method levels. In the 3D data, interobserver and intermethod error were similar to intraspecific distances among Callicebus cupreus individuals, with interobserver error being higher than intermethod error. Generally, taxa separate well in morphospace, with different trials of the same specimen typically grouping together. However, trials of individuals in the same species overlapped substantially with one another. Researchers should be cautious when compiling data from multiple methods and/or observers, especially if analyses are focused on intraspecific variation or closely related species, as in these cases, patterns among individuals may be obscured by interobserver and intermethod error. Conducting interobserver and intermethod reliability assessments prior to the collection of data is recommended. © 2017 Wiley Periodicals, Inc.
Higher-Order Adaptive Finite-Element Methods for Kohn-Sham Density Functional Theory
2012-07-03
systems studied, we observe diminishing returns in computational savings beyond the sixth-order for accuracies commensurate with chemical accuracy... calculations. Further, we demonstrate the capability of the proposed approach to compute the electronic structure of materials systems containing a...
Puura, Kaija; Mäntymaa, Mirjami; Luoma, Ilona; Kaukonen, Pälvi; Guedeney, Antoine; Salmelin, Raili; Tamminen, Tuula
2010-12-01
Distressed infants may withdraw from social interaction, but recognising infants' social withdrawal is difficult. The aims of the study were to see whether an infant observation method can be reliably used by front-line workers, and to examine the prevalence of infants' social withdrawal symptoms. A random sample of 363 families with four, eight or 18-month-old infants participated in the study. The infants were examined by general practitioners (GPs) in well-baby clinics with the Alarm Distress BaBy Scale (ADBB), an observation method developed for clinical settings. A score of five or more on the ADBB Scale in two subsequent assessments at a two-week interval was regarded as a sign of clinically significant infant social withdrawal. Kappas were calculated for the GPs' correct rating of withdrawn/not withdrawn against a set of videotapes rated by the developer of the method, Professor Guedeney, and his research group. The kappas for their ratings ranged from 0.5 to 1. The frequency of infants scoring above the cut-off in two subsequent assessments was 3%. The ADBB Scale is a promising method for detecting infant social withdrawal in front-line services. Three percent of infants showed sustained social withdrawal as a sign of distress in this normal population sample. Copyright © 2010 Elsevier Inc. All rights reserved.
Veisani, Yousef; Delpisheh, Ali; Sayehmiri, Kourosh; Moradi, Ghobad; Hassanzadeh, Jafar
2017-08-01
Little attention has been paid to seasonality of suicide in Iran. Temporal patterns in suicide deaths and suicide attempts have been reported for related factors such as gender and mental disorders. In the present study, we focus on suicide methods and their association with seasonality and other putative covariates such as gender. Through a cross-sectional study, all identified suicide attempts and suicide deaths in the province of Ilam between 1 January 2010 and 31 December 2014 were enrolled. We used Edwards' test to assess seasonality in suicide methods. The seasonal effect (peak/trough seasons) for deaths and attempts was explored with ratio statistics, the null hypothesis being that attempted suicides in each method group are evenly distributed over the year. More suicide attempts by hanging (29.4%) and self-immolation (41.4%) were observed in spring, and seasonal patterns differed in both genders. The overall distribution of suicides by violent and non-violent methods was (males χ2=6.3, P=0.041; females χ2=7.7, P=0.021) and (males χ2=44.5, P=0.001; females χ2=104.7, P=0.001), respectively. Peak and trough seasons for taking medications and self-poisoning were observed in spring and winter, respectively. Suicide involving alcohol did not differ by season (χ2=1.0, P=0.460). Suicide in Ilam shows significant seasonality for both violent and non-violent methods in both genders; peaks were observed in spring and autumn for violent suicides, and in spring and summer for non-violent suicides.
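For readers unfamiliar with seasonality testing, the sketch below illustrates the simplest version of the idea with SciPy: a chi-square goodness-of-fit test of whether counts of attempts by one method are uniform across the four seasons. Edwards' test, used in the study above, instead fits a harmonic (sinusoidal) seasonal model, so this is only an approximate stand-in, and the counts are invented for illustration.

```python
from scipy.stats import chisquare

# Hypothetical counts of suicide attempts by a single method across the four
# seasons (spring, summer, autumn, winter). Expected frequencies default to a
# uniform distribution, i.e. no seasonality.
counts = [120, 95, 88, 70]
stat, p = chisquare(counts)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")
```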
ERIC Educational Resources Information Center
Ozdemir, Burhanettin
2017-01-01
The purpose of this study is to equate Trends in International Mathematics and Science Study (TIMSS) mathematics subtest scores obtained from TIMSS 2011 to scores obtained from TIMSS 2007 form with different nonlinear observed score equating methods under Non-Equivalent Anchor Test (NEAT) design where common items are used to link two or more test…
Evaluation of camouflage pattern performance of textiles by human observers and CAMAELEON
NASA Astrophysics Data System (ADS)
Heinrich, Daniela H.; Selj, Gorm K.
2017-10-01
Military textiles with camouflage patterns are an important part of the protection measures for soldiers. Military operational environments differ considerably depending on climate and vegetation. This requires very different camouflage patterns to achieve good protection. To find the best performing pattern for given environments, we have in earlier evaluations mainly applied observer trials as the evaluation method. In these camouflage evaluation tests, human observers were asked to search for targets (in natural settings) presented on a high-resolution PC screen, and the corresponding detection times were recorded. Another possibility is to base the evaluation on simulations. CAMAELEON is a licensed tool that ranks camouflaged targets by their similarity with local backgrounds. The similarity is estimated through the parameters local contrast, orientation of structures in the pattern and spatial frequency, by mimicking the response and signal processing in the visual cortex of the human eye. Simulations have a number of advantages over observer trials, for example, that they are more flexible, cheaper, and faster. Applying these two methods to the same images of camouflaged targets, we found that CAMAELEON simulation results did not match observer trial results for targets with disruptive patterns. This finding now calls for follow-up studies in order to learn more about the advantages and pitfalls of CAMAELEON. During recent observer trials we studied new camouflage patterns and the effect of additional equipment, such as combat vests. In this paper we will present the results from a study comparing evaluation results of human-based observer trials and CAMAELEON.
Adaptive linearization of phase space. A hydrological case study
NASA Astrophysics Data System (ADS)
Angarita, Hector; Domínguez, Efraín
2013-04-01
A method and its implementation are presented to extract transition operators from hydrological signals with significant algorithmic complexity, i.e. signals with an identifiable deterministic component and a non-periodic and irregular part, the latter being a source of uncertainty for the observer. The method assumes that in a system such as a hydrological system, from the perspective of information theory, signals cannot be known to an arbitrary level of precision due to limited observation or coding capabilities. According to the Shannon-Hartley theorem, at a given sampling frequency f_s there is a theoretical peak capacity C for observing data from a random signal (i.e. the discharge) transmitted through a noisy channel with a signal-to-noise ratio SNR. This imposes a limit on the observer's capability to completely reconstruct an observed signal if the sampling frequency f_s is lower than the threshold above which the signal can be completely recovered for a given SNR. Since most hydrological monitoring systems have low monitoring frequency, the observations may contain less information than required to describe the process dynamics, and as a result observed signals exhibit some level of uncertainty when compared with the "true" signal. In the proposed approach, a simple local phase-space model, with locally linearized deterministic and stochastic differential equations, is applied to extract the system's state transition operators and to probabilistically characterize the signal uncertainty. In order to determine optimality of the local operators, three main elements are considered: (i) system state dimensionality, (ii) sampling frequency and (iii) parameterization window length. Two examples are shown and discussed to illustrate the method. The first is the evaluation of the feasibility of real-time forecasting models for levels and flow rates, from hourly to 14-day lead times. The results of this application demonstrate the operational feasibility of simple predictive models for most of the evaluated cases. The second application is the definition of a stage-discharge decoding method based on the dynamics of the observed water level signal. The results indicate that the method leads to a reduction of hysteresis in the decoded flow, which however is not satisfactory, as a quadratic bias emerged in the decoded values and needs explanation. Both examples allow conclusions to be drawn about the optimal sampling frequency of the studied variables.
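The Shannon-Hartley relation invoked in the abstract above is C = B log2(1 + SNR). The snippet below evaluates it for a hypothetical gauging signal; the bandwidth and SNR values are illustrative assumptions, not figures from the study.

```python
import numpy as np

def channel_capacity(bandwidth, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + SNR), in bits per unit of
    the bandwidth's time base."""
    return bandwidth * np.log2(1.0 + snr_linear)

# Hypothetical example: an effective bandwidth of 0.5 cycles/hour and an SNR
# of 20 dB for the observed discharge signal.
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10.0)
print(channel_capacity(0.5, snr_linear))  # ~3.33 bits per hour
```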
Norris, Susan L; Atkins, David; Bruening, Wendy; Fox, Steven; Johnson, Eric; Kane, Robert; Morton, Sally C; Oremus, Mark; Ospina, Maria; Randhawa, Gurvaneet; Schoelles, Karen; Shekelle, Paul; Viswanathan, Meera
2011-11-01
Systematic reviewers disagree about the ability of observational studies to answer questions about the benefits or intended effects of pharmacotherapeutic, device, or procedural interventions. This study provides a framework for decision making on the inclusion of observational studies to assess benefits and intended effects in comparative effectiveness reviews (CERs). The conceptual model and recommendations were developed using a consensus process by members of the methods workgroup of the Effective Health Care Program of the Agency for Healthcare Research and Quality. In considering whether to use observational studies in CERs for addressing beneficial effects, reviewers should answer two questions: (1) Are there gaps in the evidence from randomized controlled trials (RCTs)? (2) Will observational studies provide valid and useful information? The latter question involves the following: (a) refocusing the study questions on gaps in the evidence from RCTs, (b) assessing the risk of bias of the body of evidence of observational studies, and (c) assessing whether available observational studies address the gap review questions. Because it is unusual to find sufficient evidence from RCTs to answer all key questions concerning benefit or the balance of benefits and harms, comparative effectiveness reviewers should routinely assess the appropriateness of inclusion of observational studies for questions of benefit. Furthermore, reviewers should explicitly state the rationale for inclusion or exclusion of observational studies when conducting CERs. Copyright © 2011 Elsevier Inc. All rights reserved.
Seismic, satellite, and site observations of internal solitary waves in the NE South China Sea.
Tang, Qunshu; Wang, Caixia; Wang, Dongxiao; Pawlowicz, Rich
2014-06-20
Internal solitary waves (ISWs) in the NE South China Sea (SCS) are tidally generated at the Luzon Strait. Their propagation, evolution, and dissipation processes involve numerous issues still poorly understood. Here, a novel method of seismic oceanography capable of capturing oceanic finescale structures is used to study ISWs in the slope region of the NE SCS. Near-simultaneous observations of two ISWs were acquired using seismic and satellite imaging, and water column measurements. The vertical and horizontal length scales of the seismic observed ISWs are around 50 m and 1-2 km, respectively. Wave phase speeds calculated from seismic observations, satellite images, and water column data are consistent with each other. Observed waveforms and vertical velocities also correspond well with those estimated using KdV theory. These results suggest that the seismic method, a new option to oceanographers, can be further applied to resolve other important issues related to ISWs.
NASA Astrophysics Data System (ADS)
Kim, S.
2016-12-01
This study aims to improve the accuracy of discharge simulation at the headwaters of the Tone River Basin (Yagisawa Dam Basin, 167 km2, and Naramata Dam Basin, 67 km2), Japan, where river discharge is governed by snowmelt and thus much uncertainty arose in our previous study (Kim et al., 2011). To decrease the uncertainty in our hydrological modeling and simulation, snowmelt amounts are estimated rigorously using an improved degree-day method. The degree-day method, the simplest method for estimating snowmelt, is adopted with an improved degree-day factor estimation method. The degree-day factor for the target area is estimated using the observed temperature and the observed river discharge of the snowmelt season. Using long-term observed data, the unique relationship between the degree-day factor and temperature is extracted, and the estimated degree-day factor as a function of temperature is applied to the winter season discharge simulation. Rainfall-runoff simulation for the rest of the season is done by the kinematic wave model based on the stage-discharge relationship, considering surface-subsurface flow generation. Finally, long-term (1979-2008) simulation output for the dam inflow is reconstructed and compared with the observed one. (Kim, S., Tachikawa, Y., Nakakita, E., Yorozu, K. and Shiiba, M. 2011. Climate change impact on river flow of the Tone river basin, Japan, Annual Journal of Hydraulic Engineering, JSCE, 55:S_85-S_90.)
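The classic degree-day relation referenced above computes daily melt as the product of a degree-day factor and the positive difference between air temperature and a base temperature. A minimal sketch follows; the constant factor in the example is hypothetical, whereas the study estimates the factor as a function of temperature from observed discharge.

```python
import numpy as np

def degree_day_melt(temps_c, ddf_mm_per_degday, t_base_c=0.0):
    """Daily snowmelt (mm) from the classic degree-day relation:
    melt = DDF * max(T - T_base, 0)."""
    temps_c = np.asarray(temps_c, dtype=float)
    return ddf_mm_per_degday * np.clip(temps_c - t_base_c, 0.0, None)

# Hypothetical example with a constant factor of 4 mm / (degC day).
daily_temps = [-2.0, 1.5, 3.0, 6.2]
print(degree_day_melt(daily_temps, ddf_mm_per_degday=4.0))  # [0., 6., 12., 24.8]
```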
Recharge signal identification based on groundwater level observations.
Yu, Hwa-Lung; Chu, Hone-Jay
2012-10-01
This study applied a method of the rotated empirical orthogonal functions to directly decompose the space-time groundwater level variations and determine the potential recharge zones by investigating the correlation between the identified groundwater signals and the observed local rainfall records. The approach is used to analyze the spatiotemporal process of piezometric heads estimated by Bayesian maximum entropy method from monthly observations of 45 wells in 1999-2007 located in the Pingtung Plain of Taiwan. From the results, the primary potential recharge area is located at the proximal fan areas where the recharge process accounts for 88% of the spatiotemporal variations of piezometric heads in the study area. The decomposition of groundwater levels associated with rainfall can provide information on the recharge process since rainfall is an important contributor to groundwater recharge in semi-arid regions. Correlation analysis shows that the identified recharge closely associates with the temporal variation of the local precipitation with a delay of 1-2 months in the study area.
Analysis of French Jesuit observations of Io made in China in AD 1689‒1690
NASA Astrophysics Data System (ADS)
Gislén, Lars
2017-12-01
The methods and quality of seventeenth century timings of immersions and emersions of the Galilean satellite Io were studied. It was found that the quality of the observations was very good but that in the cases where these observations were used for longitude determinations, the results were impaired by the inaccuracy of Cassini's ephemerides that were used.
Davison, Kirsten K.; Austin, S. Bryn; Giles, Catherine; Cradock, Angie L.; Lee, Rebekka M.; Gortmaker, Steven L.
2017-01-01
Interest in evaluating and improving children’s diets in afterschool settings has grown, necessitating the development of feasible yet valid measures for capturing children’s intake in such settings. This study’s purpose was to test the criterion validity and cost of three unobtrusive visual estimation methods compared to a plate-weighing method: direct on-site observation using a 4-category rating scale and off-site rating of digital photographs taken on-site using 4- and 10-category scales. Participants were 111 children in grades 1–6 attending four afterschool programs in Boston, MA in December 2011. Researchers observed and photographed 174 total snack meals consumed across two days at each program. Visual estimates of consumption were compared to weighed estimates (the criterion measure) using intra-class correlations. All three methods were highly correlated with the criterion measure, ranging from 0.92–0.94 for total calories consumed, 0.86–0.94 for consumption of pre-packaged beverages, 0.90–0.93 for consumption of fruits/vegetables, and 0.92–0.96 for consumption of grains. For water, which was not pre-portioned, coefficients ranged from 0.47–0.52. The photographic methods also demonstrated excellent inter-rater reliability: 0.84–0.92 for the 4-point and 0.92–0.95 for the 10-point scale. The costs of the methods for estimating intake ranged from $0.62 per observation for the on-site direct visual method to $0.95 per observation for the criterion measure. This study demonstrates that feasible, inexpensive methods can validly and reliably measure children’s dietary intake in afterschool settings. Improving precision in measures of children’s dietary intake can reduce the likelihood of spurious or null findings in future studies. PMID:25596895
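The criterion-validity comparison above rests on intra-class correlations between visual and weighed estimates. The sketch below implements the standard ICC(2,1) formula (two-way random effects, absolute agreement, single measurement) from scratch; the meal data are invented, and the abstract does not state which ICC form was used, so the choice here is an illustrative assumption.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `ratings` is an (n_subjects x n_raters) array, e.g. one column per
    estimation method applied to the same meals."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between-rater mean square
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                             # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: weighed calories (criterion) vs. a visual estimate for 5 meals.
weighed = [250, 310, 180, 400, 275]
visual = [240, 320, 175, 390, 280]
print(icc_2_1(np.column_stack([weighed, visual])))
```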
Multiple scattering in particulate planetary surfaces
NASA Astrophysics Data System (ADS)
Muinonen, Karri; Peltoniemi, Jouni; Markkanen, Johannes; Penttilä, Antti; Videen, Gorden
2015-08-01
There are two ubiquitous phenomena observed at small solar phase angles (the Sun-Object-Observer angle) from, for example, asteroids and transneptunian objects. First, a nonlinear increase of brightness is observed toward the zero phase angle in the magnitude scale that is commonly called the opposition effect. Second, the scattered light is observed to be partially linearly polarized parallel to the Sun-Object-Observer plane that is commonly called the negative polarization surge. The observations can be interpreted using a radiative-transfer coherent-backscattering Monte Carlo method (RT-CB, Muinonen 2004) that makes use of a so-called phenomenological fundamental single scatterer (Muinonen and Videen 2012). For the validity of RT-CB, see Muinonen et al. (2012). The method can allow us to put constraints on the size, shape, and refractive index of the fundamental scatterers. In the present work, we extend the RT-CB method for the specific case of a macroscopic medium of electric dipole scatterers. For the computation of the interactions, the far-field approximation inherent in the RT-CB method is replaced by an exact treatment, allowing us to account for, e.g., the so-called near-field effects. The present method constitutes the first milestone in the development of a multiple-scattering method, where the so-called ladder and maximally crossed cyclical diagrams of the multiple electromagnetic interactions are rigorously computed. We expect to utilize the new methods in the spectroscopic, photometric, and polarimetric studies of asteroids, as well as in the interpretation of radar echoes from small Solar System bodies. Acknowledgments: The research is funded by the ERC Advanced Grant No 320773 entitled Scattering and Absorption of Electromagnetic Waves in Particulate Media (SAEMPL). References: K. Muinonen, Waves in Random Media 14, 365 (2004); K. Muinonen and G. Videen, JQSRT 113, 2385 (2012); K. Muinonen, M. I. Mishchenko, J. M. Dlugach, E. Zubko, A. Penttilä, and G. Videen, ApJ 760, 118 (2012).
Sparse kernel methods for high-dimensional survival data.
Evers, Ludger; Messow, Claudia-Martina
2008-07-15
Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques however are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model only depends on the covariates through inner products, it can be 'kernelized'. The kernelized proportional hazards model however yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another akin to the Import Vector Machine (IVM). Data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.
Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling
1992-06-01
Rizzoli motivated me to study the driv:,: mechanism of stratified tidal rectification using diagnostic analysis methods . Conversations with Glen...drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow...harmonic truncation method . Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the
Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David
2006-01-01
The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their "visual margin spiculation". When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited only to regions rated as having similar levels of spiculation (within +/-1 scale difference). In an observer preference study including 85 test regions, two sets of the six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. All four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method with an average frequency of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes subjectively perceived ratings of one feature alone, namely a rating of margin "spiculation", could substantially improve the selection of "visually similar" reference images.
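A minimal sketch of the retrieval step described above: restrict the reference library to regions whose subjective spiculation rating lies within +/-1 of the observer's rating, then return the k nearest neighbours by plain Euclidean distance in feature space. The feature set, distance measure and library contents here are hypothetical; the study's KNN uses its own multifeature formulation.

```python
import numpy as np

def select_similar_regions(query_features, query_rating,
                           library_features, library_ratings, k=6):
    """Return indices of the k reference regions closest to the query in
    feature space, restricted to regions whose spiculation rating is within
    +/-1 of the observer's rating (sketch of the interactive step)."""
    library_features = np.asarray(library_features, dtype=float)
    library_ratings = np.asarray(library_ratings)
    eligible = np.where(np.abs(library_ratings - query_rating) <= 1)[0]
    dists = np.linalg.norm(library_features[eligible] - np.asarray(query_features), axis=1)
    return eligible[np.argsort(dists)[:k]]

# Hypothetical 3-feature library (e.g. size, contrast, margin sharpness).
rng = np.random.default_rng(0)
lib = rng.random((100, 3))
ratings = rng.integers(1, 10, size=100)
print(select_similar_regions([0.4, 0.6, 0.2], query_rating=5,
                             library_features=lib, library_ratings=ratings))
```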
Basic Methods for the Study of Reproductive Ecology of Fish in Aquaria.
Fukuda, Kazuya; Sunobe, Tomoki
2017-07-20
Captive-rearing observations are valuable for revealing aspects of fish behavior and ecology when continuous field investigations are impossible. Here, a series of basic techniques are described to enable observations of the reproductive behavior of a wild-caught gobiid fish, as a model, kept in an aquarium. The method focuses on three steps: collection, transport, and observations of reproductive ecology of a substrate spawner. Essential aspects of live fish collection and transport are (1) preventing injury to the fish, and (2) careful acclimation to the aquarium. Preventing harm through injuries such as scratches or a sudden change of water pressure is imperative when collecting live fish, as any physical damage is likely to negatively affect the survival and later behavior of the fish. Careful acclimation to aquaria decreases the incidence of death and mitigates the shock of transport. Observations during captive rearing include (1) the identification of individual fish and (2) monitoring spawned eggs without negative effects to the fish or eggs, thereby enabling detailed investigation of the study species' reproductive ecology. The subcutaneous injection of a visible implant elastomer (VIE) tag is a precise method for the subsequent identification of individual fish, and it can be used with a wide size range of fish, with minimal influence on their survival and behavior. If the study species is a substrate spawner that deposits adhesive eggs, an artificial nest site constructed from polyvinyl chloride (PVC) pipe with the addition of a removable waterproof sheet will facilitate counting and monitoring the eggs, lessening the investigator's influence on the nest-holding and egg-guarding behavior of the fish. Although this basic method entails techniques that are seldom mentioned in detail in research articles, they are fundamental for undertaking experiments that require the captive rearing of a wild fish.
78 FR 35038 - Proposed Information Collection Activity; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
..., reliable, and transparent method for identifying high-quality programs that can receive continuing five... the system is working. The study will employ a mixed-methods design that integrates and layers administrative and secondary data sources, observational measures, and interviews to develop a rich knowledge...
Surface and allied studies in silicon solar cells
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1983-01-01
Two main results are presented. The first deals with a simple method that determines the minority-carrier lifetime and the effective surface recombination velocity of the quasi-neutral base of silicon solar cells. The method requires the observation of only a single transient, and is amenable to automation for in-process monitoring in manufacturing. This method, which is called short-circuit current decay, avoids distortion in the observed transient and consequent inaccuracies that arise from the presence of mobile holes and electrons stored in the p/n junction space-charge region at the initial instant of the transient. The second main result consists of a formulation of the relevant boundary-value problems that resembles that used in linear two-port network theory. This formulation enables comparisons to be made among various contending methods for measuring material parameters of p/n junction devices, and offers the option of expressing the time-domain description of the transient studies as an infinite series, although closed-form solutions are also possible.
Structured Matrix Completion with Applications to Genomic Data Integration.
Cai, Tianxi; Cai, T Tony; Zhang, Anru
2016-01-01
Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
Qiu, Fen; Tian, Hui; Zhang, Zhi; Yuan, Xian-Ling; Tan, Yuan-Feng; Ning, Xiao-Qing
2013-10-01
To study the hemostatic, analgesic and anti-inflammatory effects of the alcohol extract of Hibiscus tiliaceus and to provide a pharmacological and experimental basis for its safe and effective clinical use. Hemostatic effects were observed with the tail-breaking, capillary tube and slide methods; the hot-plate test and acetic acid-induced writhing were used in the mouse analgesia experiments; and mouse models of acute auricle swelling induced by dimethylbenzene and of capillary permeability induced by acetic acid were used to observe anti-inflammatory effects. The alcohol extract of Hibiscus tiliaceus significantly reduced the bleeding time and the clotting time, delayed the hot-plate reaction time and reduced the number of writhes in mice, and also inhibited ear swelling and capillary permeability. These results suggest that the alcohol extract of Hibiscus tiliaceus has hemostatic, analgesic and anti-inflammatory effects.
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete-case analysis may result in biases. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
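For context, the augmented inverse probability weighting (AIPW) estimator mentioned above can be written in a few lines. The sketch below is the standard AIPW estimate of the average treatment effect using scikit-learn working models; it is not the empirical likelihood estimator proposed in the paper, and the simulated data are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, treat, y):
    """Augmented inverse probability weighting estimate of the average
    treatment effect (doubly robust); a minimal sketch, not the empirical
    likelihood estimator proposed in the paper."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[treat == 1], y[treat == 1]).predict(X)
    m0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)
    mu1 = np.mean(treat * y / ps - (treat - ps) / ps * m1)
    mu0 = np.mean((1 - treat) * y / (1 - ps) + (treat - ps) / (1 - ps) * m0)
    return mu1 - mu0

# Hypothetical simulated data with a true treatment effect of 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2 * treat + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=500)
print(aipw_ate(X, treat, y))
```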
ZARE, Mohsen; MALINGE-OUDENOT, Agnes; HÖGLUND, Robert; BIAU, Sophie; ROQUELAURE, Yves
2015-01-01
The aims of this study were 1) to assess the ergonomic physical risk factors from practitioner’s viewpoint in a truck assembly plant with an in-house observational method and the NIOSH lifting equation, and 2) to compare the results of both methods and their differences. The in-house ergonomic observational method for truck assembly i.e. the SCANIA Ergonomics Standard (SES) and the NIOSH lifting equation were applied to evaluate physical risk factors and lifting of loads by operators. Both risk assessment approaches revealed various levels of risk, ranging from low to high. Two workstations were identified by the SES method as high risk. The NIOSH lifting index (LI) was greater than two for four lifting tasks. The results of the SES method disagreed with the NIOSH lifting equation for lifting tasks. Moreover, meaningful variations in ergonomic risk patterns were found for various truck models at each workstation. These results provide a better understanding of the physical ergonomic exposure from practitioner’s point of view in the automotive assembly plant. PMID:26423331
Zare, Mohsen; Malinge-Oudenot, Agnes; Höglund, Robert; Biau, Sophie; Roquelaure, Yves
2016-01-01
The aims of this study were 1) to assess the ergonomic physical risk factors from practitioner's viewpoint in a truck assembly plant with an in-house observational method and the NIOSH lifting equation, and 2) to compare the results of both methods and their differences. The in-house ergonomic observational method for truck assembly i.e. the SCANIA Ergonomics Standard (SES) and the NIOSH lifting equation were applied to evaluate physical risk factors and lifting of loads by operators. Both risk assessment approaches revealed various levels of risk, ranging from low to high. Two workstations were identified by the SES method as high risk. The NIOSH lifting index (LI) was greater than two for four lifting tasks. The results of the SES method disagreed with the NIOSH lifting equation for lifting tasks. Moreover, meaningful variations in ergonomic risk patterns were found for various truck models at each workstation. These results provide a better understanding of the physical ergonomic exposure from practitioner's point of view in the automotive assembly plant.
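The NIOSH lifting index used in the two assessments above is the ratio of the load to the recommended weight limit, which is the product of a load constant and six multipliers. A sketch of the metric form follows; the frequency and coupling multipliers come from published look-up tables and are therefore passed in directly, and the example task parameters are hypothetical.

```python
def niosh_lifting_index(load_kg, H, V, D, A, FM=1.0, CM=1.0):
    """Revised NIOSH lifting equation (metric form):
    RWL = LC * HM * VM * DM * AM * FM * CM, LI = load / RWL.
    H: horizontal distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees).
    FM (frequency) and CM (coupling) are table look-ups, supplied directly."""
    LC = 23.0                          # load constant, kg
    HM = min(25.0 / H, 1.0)            # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)   # vertical multiplier
    DM = min(0.82 + 4.5 / D, 1.0)      # distance multiplier
    AM = 1.0 - 0.0032 * A              # asymmetry multiplier
    rwl = LC * HM * VM * DM * AM * FM * CM
    return load_kg / rwl

# Hypothetical task: 15 kg load, H=40 cm, V=70 cm, D=50 cm, A=30 degrees.
print(niosh_lifting_index(15, H=40, V=70, D=50, A=30, FM=0.85, CM=0.95))  # LI ~ 1.6
```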
Seo, Jeong-Ho; Boedijono, Dimas
2016-01-01
Purpose The aim of this study was to investigate new point-connecting measurements for the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA), which can reflect the degree of subluxation of the first metatarsophalangeal joint (MTPJ). Also, this study attempted to compare the validity of midline measurements and the new point-connecting measurements for the determination of HVA and IMA values. Materials and Methods Sixty feet of hallux valgus patients who underwent surgery between 2007 and 2011 were classified in terms of the severity of HVA, congruency of the first MTPJ, and type of chevron metatarsal osteotomy. On weight-bearing dorsal-plantar radiographs, HVA and IMA values were measured and compared preoperatively and postoperatively using both the conventional and new methods. Results Compared with midline measurements, point-connecting measurements showed higher inter- and intra-observer reliability for preoperative HVA/IMA and similar or higher inter- and intra-observer reliability for postoperative HVA/IMA. Patients who underwent distal chevron metatarsal osteotomy (DCMO) had higher intraclass correlation coefficient for inter- and intra-observer reliability for pre- and post-operative HVA and IMA measured by the point-connecting method compared with the midline method. All differences in the preoperative HVAs and IMAs determined by both the midline method and point-connecting methods were significant between the deviated group and subluxated groups (p=0.001). Conclusion The point-connecting method for measuring HVA and IMA in the subluxated first MTPJ may better reflect the severity of a HV deformity with higher reliability than the midline method, and is more useful in patients with DCMO than in patients with proximal chevron metatarsal osteotomy. PMID:26996576
Lund, Travis J; Pilarz, Matthew; Velasco, Jonathan B; Chakraverty, Devasmita; Rosploch, Kaitlyn; Undersander, Molly; Stains, Marilyne
2015-01-01
Researchers, university administrators, and faculty members are increasingly interested in measuring and describing instructional practices provided in science, technology, engineering, and mathematics (STEM) courses at the college level. Specifically, there is keen interest in comparing instructional practices between courses, monitoring changes over time, and mapping observed practices to research-based teaching. While increasingly common observation protocols (Reformed Teaching Observation Protocol [RTOP] and Classroom Observation Protocol in Undergraduate STEM [COPUS]) at the postsecondary level help achieve some of these goals, they also suffer from weaknesses that limit their applicability. In this study, we leverage the strengths of these protocols to provide an easy method that enables the reliable and valid characterization of instructional practices. This method was developed empirically via a cluster analysis using observations of 269 individual class periods, corresponding to 73 different faculty members, 28 different research-intensive institutions, and various STEM disciplines. Ten clusters, called COPUS profiles, emerged from this analysis; they represent the most common types of instructional practices enacted in the classrooms observed for this study. RTOP scores were used to validate the alignment of the 10 COPUS profiles with reformed teaching. Herein, we present a detailed description of the cluster analysis method, the COPUS profiles, and the distribution of the COPUS profiles across various STEM courses at research-intensive universities. © 2015 T. J. Lund et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
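The profiles above come from a cluster analysis of observation-code frequencies. As a rough illustration of that kind of grouping (the specific clustering algorithm and code set used in the study are not reproduced here), the sketch below clusters hypothetical per-class COPUS code fractions with k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical matrix: class periods x fraction of 2-minute intervals coded as,
# e.g., lecturing, clicker questions, and group work. The study derives ten
# profiles from 269 class periods; k=3 keeps this toy example readable.
rng = np.random.default_rng(3)
profiles = rng.random((30, 3))
profiles /= profiles.sum(axis=1, keepdims=True)   # each row sums to 1

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
print(labels)   # cluster assignment (profile) for each class period
```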
Methods of Analysis and Overall Mathematics Teaching Quality in At-Risk Prekindergarten Classrooms
ERIC Educational Resources Information Center
McGuire, Patrick R.; Kinzie, Mable; Thunder, Kateri; Berry, Robert
2016-01-01
Research Findings: This study analyzed the quality of teacher-child interactions across 10 videotaped observations drawn from 5 different prekindergarten classrooms delivering the same mathematics curriculum: "MyTeachingPartner-Math." Interactions were coded using 2 observational measures: (a) a general measure, the Classroom Assessment…
A Constructivist Study of Middle School Students' Narratives and Ecological Illustrations
ERIC Educational Resources Information Center
Stokrocki, Mary L.; Flatt, Barbara; York, Emily
2010-01-01
Using participant observation, we describe/interpret the results of teaching a constructivist unit that empowered students in narrative writing and illustration. Participant observation methods included daily note taking, pre-post questioning, and photographing artworks. We analyzed students' stories and illustrations with borrowed and emerging…
Administrators' Perceptions Regarding the Effectiveness of the Teacher Observation Evaluation System
ERIC Educational Resources Information Center
Williams, Kathleen Riley
2015-01-01
This phenomenological narrative study was designed to explore public school administrators' perceptions regarding Louisiana's Compass teacher observation evaluation system as a method for assessing teacher performance. Participants were administrators with at least two years of experience as a public school administrator at the secondary level,…
Classroom Management: Students' Perspectives, Goals and Strategies.
ERIC Educational Resources Information Center
Allen, James D.
A study investigated classroom management from the students' perspective. Ninety-seven high school students (primarily ninth graders) were observed in one school for 15 weeks in five different classes. Data were collected from this observation, as well as from student and teacher interviews. The guidelines of the Constant-Comparative Method of…
Integrating Quantitative and Ethnographic Methods to Describe the Classroom. Report No. 5083.
ERIC Educational Resources Information Center
Malitz, David; And Others
The debate between proponents of ethnographic and quantitative methodology in classroom observation is reviewed, and the respective strengths and weaknesses of the two approaches are discussed. These methodologies are directly compared in a study that conducted simultaneous ethnographic and quantitative observations on nine classrooms. It is…
NASA Astrophysics Data System (ADS)
Liu, Yun; Song, Shuqun; Chen, Tiantian; Li, Caiwen
2017-04-01
Pyrosequencing of the 18S rRNA gene has been widely adopted to study the eukaryotic diversity in various types of environments, and has an advantage over traditional morphology methods in exploring unknown microbial communities. To comprehensively assess the diversity and community composition of marine protists in the coastal waters of China, we applied both morphological observations and high-throughput sequencing of the V2 and V3 regions of 18S rDNA simultaneously to analyze samples collected from the surface layer of the Yellow and East China Seas. Dinoflagellates, diatoms and ciliates were the three dominant protistan groups as revealed by the two methods. Diatoms were the first dominant protistan group in the microscopic observations, with Skeletonema mainly distributed in the nearshore eutrophic waters and Chaetoceros in higher temperature and higher pH waters. The mixotrophic dinoflagellates, Gymnodinium and Gyrodinium, were more competitive in the oligotrophic waters. The pyrosequencing method revealed an extensive diversity of dinoflagellates. Chaetoceros was the only dominant diatom group in the pyrosequencing dataset. Gyrodinium represented the most abundant reads and dominated the offshore oligotrophic protistan community as they were in the microscopic observations. The dominance of parasitic dinoflagellates in the pyrosequencing dataset, which were overlooked in the morphological observations, indicates more attention should be paid to explore the potential role of this group. Both methods provide coherent clustering of samples. Nutrient levels, salinity and pH were the main factors influencing the distribution of protists. This study demonstrates that different primer pairs used in the pyrosequencing will indicate different protistan community structures. A suitable marker may reveal more comprehensive composition of protists and provide valuable information on environmental drivers.
International perception of lung sounds: a comparison of classification across some European borders
Aviles-Solis, Juan Carlos; Vanbelle, Sophie; Halvorsen, Peder A; Francis, Nick; Cals, Jochen W L; Andreeva, Elena A; Marques, Alda; Piirilä, Päivi; Pasterkamp, Hans; Melbye, Hasse
2017-01-01
Introduction Lung auscultation is helpful in the diagnosis of lung and heart diseases; however, the diagnostic value of lung sounds may be questioned due to interobserver variation. This situation may also impair clinical research in this area to generate evidence-based knowledge about the role that chest auscultation has in a modern clinical setting. The recording and visual display of lung sounds is a method that is both repeatable and feasible to use in large samples, and the aim of this study was to evaluate interobserver agreement using this method. Methods With a microphone in a stethoscope tube, we collected digital recordings of lung sounds from six sites on the chest surface in 20 subjects aged 40 years or older with and without lung and heart diseases. A total of 120 recordings and their spectrograms were independently classified by 28 observers from seven different countries. We employed absolute agreement and kappa coefficients to explore interobserver agreement in classifying crackles and wheezes within and between subgroups of four observers. Results When evaluating agreement on crackles (inspiratory or expiratory) in each subgroup, observers agreed on between 65% and 87% of the cases. Conger’s kappa ranged from 0.20 to 0.58 and four out of seven groups reached a kappa of ≥0.49. In the classification of wheezes, we observed a probability of agreement between 69% and 99.6% and kappa values from 0.09 to 0.97. Four out of seven groups reached a kappa ≥0.62. Conclusions The kappa values we observed in our study ranged widely but, when addressing its limitations, we find the method of recording and presenting lung sounds with spectrograms sufficient for both clinic and research. Standardisation of terminology across countries would improve international communication on lung auscultation findings. PMID:29435344
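The study above reports Conger's kappa for groups of four observers. For a pair of observers the analogous chance-corrected statistic is Cohen's kappa, which scikit-learn computes directly; the classifications below are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of 10 recordings by two observers
# (1 = crackles present, 0 = absent). The study itself uses Conger's kappa,
# a multi-rater generalisation; this pairwise version is the simpler analogue.
observer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

print(cohen_kappa_score(observer_a, observer_b))  # ~0.62 for these toy data
```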
Satellite and Model Analysis of the Atmospheric Moisture Budget in High Latitudes
NASA Technical Reports Server (NTRS)
Bromwich, David H.; Chen, Qui-Shi
2001-01-01
In order to understand variations of accumulation over Greenland, it is necessary to investigate precipitation and its variations. Observations of precipitation over Greenland are limited and generally inaccurate, but the analyzed wind, geopotential height, and moisture fields are available for recent years. The objective of this study is to enhance the dynamic method for retrieving high resolution precipitation over Greenland from the analyzed fields. The dynamic method enhanced in this study is referred to as the improved dynamic method.
Yoon, Frank B; Huskamp, Haiden A; Busch, Alisa B; Normand, Sharon-Lise T
2011-06-21
Studies of large policy interventions typically do not involve randomization. Adjustments, such as matching, can remove the bias due to observed covariates, but residual confounding remains a concern. In this paper we introduce two analytical strategies to bolster inferences of the effectiveness of policy interventions based on observational data. First, we identify how study groups may differ and then select a second comparison group on this source of difference. Second, we match subjects using a strategy that finely balances the distributions of key categorical covariates and stochastically balances on other covariates. An observational study of the effect of parity on the severely ill subjects enrolled in the Federal Employees Health Benefits (FEHB) Program illustrates our methods.
School Site Visits for Community-Based Participatory Research on Healthy Eating
Patel, Anisha I.; Bogart, Laura M.; Uyeda, Kimberly E.; Martinez, Homero; Knizewski, Ritamarie; Ryan, Gery W.; Schuster, Mark A.
2010-01-01
Background School nutrition policies are gaining support as a means of addressing childhood obesity. Community-based participatory research (CBPR) offers an approach for academic and community partners to collaborate to translate obesity-related school policies into practice. Site visits, in which trained observers visit settings to collect multilevel data (e.g., observation, qualitative interviews), may complement other methods that inform health promotion efforts. This paper demonstrates the utility of site visits in the development of an intervention to implement obesity-related policies in Los Angeles Unified School District (LAUSD) middle schools. Methods In 2006, trained observers visited four LAUSD middle schools. Observers mapped cafeteria layout; observed food/beverage offerings, student consumption, waste patterns, and duration of cafeteria lines; spoke with school staff and students; and collected relevant documents. Data were examined for common themes and patterns. Results Food and beverages sold in study schools met LAUSD nutritional guidelines, and nearly all observed students had time to eat most or all of their meal. Some LAUSD policies were not implemented, including posting nutritional information for cafeteria food, marketing school meals to improve student participation in the National School Lunch Program, and serving a variety of fruits and vegetables. Cafeteria understaffing and cost were obstacles to policy implementation. Conclusions Site visits were a valuable methodology for evaluating the implementation of school district obesity-related policies and contributed to the development of a CBPR intervention to translate school food policies into practice. Future CBPR studies may consider site visits in their toolbox of formative research methods. PMID:19896033
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2013-04-01
The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and those for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are realized in a realistic framework, i.e. through the use of actual available observations. The same data assimilation methods are tested in order to check whether more sophisticated methods can improve the reliability and the accuracy of decadal prediction simulations, even if they are performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.
NASA Astrophysics Data System (ADS)
Saehana, Sahrul; Darsikin, Muslimin
2016-04-01
This study reports a preliminary investigation of the application of Moringa oleifera resin as a polymer electrolyte in dye-sensitized solar cells (DSSCs). A polymer electrolyte membrane was formed using the solution casting method. The polymer electrolyte was observed to be elastic and shows strong potential for application as a DSSC component. The performance of a DSSC employing Moringa oleifera resin was also observed, and a photovoltaic effect was found.
Kim, Soo-Byeong; Lee, Yong-Heum
2014-12-01
Cupping is one of the various treatment methods used in traditional oriental medicine. Cupping is also used as a diagnostic method, and it may cause skin hyperpigmentation. Quantitative measurement and analysis of changes in skin color due to cupping are therefore critical. The purpose of this study is to suggest an optical technique to visualize and identify changes in skin color due to cupping. We suggest the following analysis methods: digital color spaces [red, green, and blue (RGB) and L∗a∗b], the Erythema Index (E.I.), and the Melanin Index (M.I.). For the experiments, we selected and stimulated 10 acupoints at 80 kilopascals (kPa) per minute. Values in the RGB and L∗a∗b color spaces decreased significantly (p < 0.05) after cupping, while the E.I. and M.I. increased significantly (p < 0.05). To assess the changes in skin color over time, we observed them for 72 hours and characterized the color changes using the recovery pattern during the recovery period (p < 0.01). We propose that this method can be useful for visual identification and as a way to improve the identification of skin color changes. Copyright © 2014. Published by Elsevier B.V.
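The color-space part of the analysis above can be illustrated with a standard RGB to CIE L∗a∗b∗ conversion, available in scikit-image. The before/after skin colors below are hypothetical, and the erythema and melanin indices follow published formulas that are not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab

# Hypothetical 1x1 "patches" of mean skin colour before and after cupping,
# given as RGB values in [0, 1]. rgb2lab returns CIE L*a*b* values.
before = np.array([[[0.80, 0.62, 0.55]]])
after = np.array([[[0.70, 0.45, 0.42]]])

lab_before = rgb2lab(before)[0, 0]
lab_after = rgb2lab(after)[0, 0]
print("delta L*, a*, b*:", lab_after - lab_before)  # darker, redder skin after cupping
```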
NASA Astrophysics Data System (ADS)
Kimura, H.; Ito, T.; Tadokoro, K.
2017-12-01
Introduction In southwest Japan, the Philippine Sea plate is subducting beneath the overriding plate, such as the Amurian plate, and mega interplate earthquakes have occurred at intervals of about 100 years. About 70 years have passed since the last mega interplate earthquakes along the Nankai trough, in 1944 and 1946, with no such event occurring since, meaning that strain has been accumulating at the plate interface. Therefore, it is essential to reveal the interplate coupling more precisely for predicting or understanding the mechanism of the next mega interplate earthquake. Recently, seafloor geodetic observation revealed the detailed interplate coupling distribution in the expected source region of a Nankai trough earthquake (e.g., Yokota et al. [2016]). In this study, we estimated interplate coupling in southwest Japan based on a Markov Chain Monte Carlo (MCMC) method, considering a block motion model and using seafloor geodetic observation data as well as onland GNSS observation data. Method Observed crustal deformation is assumed to be the sum of rigid block motion and elastic deformation due to coupling at block boundaries. We modeled this relationship as a non-linear inverse problem in which the unknown parameters are the Euler pole of each block and the coupling at each subfault, and solved for them simultaneously based on the MCMC method. The input data used in this study are 863 onland GNSS observation data and 24 seafloor GPS/A observation data. We constructed several block division models based on the map of active fault traces and selected the best model based on Akaike's Information Criterion (AIC); it consists of 12 blocks. Result We find that the interplate coupling along the Nankai trough has a heterogeneous spatial distribution, strong at depths of 0 to 20 km off the Tokai region and 0 to 30 km off the Shikoku region. Moreover, we find that the observed crustal deformation off the Tokai region is well explained by elastic deformation due to the subducting Izu Micro Plate. We will present more details of our results and discuss not only interplate coupling but also rigid block motion, elastic deformation due to inland fault coupling, and the resolution of the estimated parameters.
Histochemical studies on protease formation in the cotyledons of germinating bean seeds.
Yomo, H; Taylor, M P
1973-03-01
Protease formation in Phaseolus vulgaris L. cotyledons during seed germination was studied histochemically using a gelatin-film-substrate method. Protease activity can be detected by this method on the 5th day of germination, at approximately the same time that a rapid increase of activity was observed by a test-tube assay with casein as a substrate. At the early stage of germination, protease activity was observed throughout the cotyledon except in two or three cell layers below the cotyledon surface and in several cell layers around the vascular bundles. A highly active cell layer surrounding the protease-inactive cells near the vascular bundles is suggested to be a source of the protease.
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, J.; Madsen, H.; Jensen, K. H.; Refsgaard, J. C.
2015-08-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both stream flow and groundwater modeling. The Colored Noise Kalman filter (ColKF) and the Separate bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman Filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved stream flow modeling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modeling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behavior and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
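Both filters discussed above treat the observation bias as an extra quantity to be estimated alongside the model states. A minimal sketch of the general idea is given below, assuming a plain state-augmentation formulation in which a constant bias is appended to the state vector; the ColKF and SepKF used in the study are more elaborate (ensemble-based, with separate bias propagation), so this only conveys the mechanics.

```python
import numpy as np

def biased_obs_kf(y, F, H, Q, R, x0, P0, q_bias=1e-6):
    """Linear Kalman filter with a constant observation bias appended to the state.

    y: sequence of observation vectors (length m each); F, H, Q, R: usual KF matrices.
    Returns the filtered augmented states [x; bias] at each step.
    """
    n, m = x0.size, H.shape[0]
    Fa = np.block([[F, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])   # bias is static
    Ha = np.hstack([H, np.eye(m)])                                          # y = Hx + b + noise
    Qa = np.block([[Q, np.zeros((n, m))], [np.zeros((m, n)), q_bias * np.eye(m)]])
    xa = np.concatenate([x0, np.zeros(m)])
    Pa = np.block([[P0, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
    out = []
    for yk in y:
        # Predict.
        xa = Fa @ xa
        Pa = Fa @ Pa @ Fa.T + Qa
        # Update with the biased observation.
        S = Ha @ Pa @ Ha.T + R
        K = Pa @ Ha.T @ np.linalg.inv(S)
        xa = xa + K @ (yk - Ha @ xa)
        Pa = (np.eye(n + m) - K @ Ha) @ Pa
        out.append(xa.copy())
    return np.array(out)
```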
Data assimilation in integrated hydrological modelling in the presence of observation bias
NASA Astrophysics Data System (ADS)
Rasmussen, Jørn; Madsen, Henrik; Høgh Jensen, Karsten; Refsgaard, Jens Christian
2016-05-01
The use of bias-aware Kalman filters for estimating and correcting observation bias in groundwater head observations is evaluated using both synthetic and real observations. In the synthetic test, groundwater head observations with a constant bias and unbiased stream discharge observations are assimilated in a catchment-scale integrated hydrological model with the aim of updating stream discharge and groundwater head, as well as several model parameters relating to both streamflow and groundwater modelling. The coloured noise Kalman filter (ColKF) and the separate-bias Kalman filter (SepKF) are tested and evaluated for correcting the observation biases. The study found that both methods were able to estimate most of the biases and that using any of the two bias estimation methods resulted in significant improvements over using a bias-unaware Kalman filter. While the convergence of the ColKF was significantly faster than the convergence of the SepKF, a much larger ensemble size was required as the estimation of biases would otherwise fail. Real observations of groundwater head and stream discharge were also assimilated, resulting in improved streamflow modelling in terms of an increased Nash-Sutcliffe coefficient while no clear improvement in groundwater head modelling was observed. Both the ColKF and the SepKF tended to underestimate the biases, which resulted in drifting model behaviour and sub-optimal parameter estimation, but both methods provided better state updating and parameter estimation than using a bias-unaware filter.
... patients who were waiting to receive treatment later. Observational studies have also reported both real acupuncture and ... NCCIH) are sponsoring a number of clinical trials (research studies) at ... and alternative methods. Few CAM therapies have been tested using demanding ...
2010-01-01
Background The development of new wireless communication technologies that emit radio frequency electromagnetic fields (RF-EMF) is ongoing, but little is known about the RF-EMF exposure distribution in the general population. Previous attempts to measure personal exposure to RF-EMF have used different measurement protocols and analysis methods making comparisons between exposure situations across different study populations very difficult. As a result, observed differences in exposure levels between study populations may not reflect real exposure differences but may be in part, or wholly due to methodological differences. Methods The aim of this paper is to develop a study protocol for future personal RF-EMF exposure studies based on experience drawn from previous research. Using the current knowledge base, we propose procedures for the measurement of personal exposure to RF-EMF, data collection, data management and analysis, and methods for the selection and instruction of study participants. Results We have identified two basic types of personal RF-EMF measurement studies: population surveys and microenvironmental measurements. In the case of a population survey, the unit of observation is the individual and a randomly selected representative sample of the population is needed to obtain reliable results. For microenvironmental measurements, study participants are selected in order to represent typical behaviours in different microenvironments. These two study types require different methods and procedures. Conclusion Applying our proposed common core procedures in future personal measurement studies will allow direct comparisons of personal RF-EMF exposures in different populations and study areas. PMID:20487532
NASA Technical Reports Server (NTRS)
1987-01-01
The Earth Observing System (EOS) represents a new approach to the study of the Earth. It consists of remotely sensed and correlative in situ observations designed to address important, interrelated global-scale processes. There is an urgent need to study the Earth as a complete, integrated system in order to understand and predict changes caused by human activities and natural processes. The EOS approach is based on an information system concept and designed to provide a long-term study of the Earth using a variety of measurement methods from both operational and research satellite payloads and continuing ground-based Earth science studies. The EOS concept builds on the foundation of the earlier, single-discipline space missions designed for relatively short observation periods. Continued progress in our understanding of the Earth as a system will come from EOS observations spanning several decades using a variety of contemporaneous measurements.
CANCER INCIDENCE IN THE AGRICULTURAL HEALTH STUDY
The Agricultural Health Study (AHS) was undertaken to ascertain the etiology of cancers observed to be elevated in agricultural populations. Methods: The AHS is a large prospective, cohort study of private applicators and commercial applicators licensed to apply restricted use ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beardsley, A. P.; Morales, M. F.; Lidz, A.
Infrared and radio observations of the Epoch of Reionization promise to revolutionize our understanding of the cosmic dawn, and major efforts with the JWST, MWA, and HERA are underway. While measurements of the ionizing sources with infrared telescopes and the effect of these sources on the intergalactic medium with radio telescopes should be complementary, to date the wildly disparate angular resolutions and survey speeds have made connecting proposed observations difficult. In this paper we develop a method to bridge the gap between radio and infrared studies. While the radio images may not have the sensitivity and resolution to identify individual bubbles with high fidelity, by leveraging knowledge of the measured power spectrum we are able to separate regions that are likely ionized from largely neutral, providing context for the JWST observations of galaxy counts and properties in each. By providing the ionization context for infrared galaxy observations, this method can significantly enhance the science returns of JWST and other infrared observations.
Mehl, Matthias R.; Robbins, Megan L.; Deters, Fenne große
2012-01-01
This article introduces a novel, observational ambulatory monitoring method called the Electronically Activated Recorder or EAR. The EAR is a digital audio recorder that runs on a handheld computer and periodically and unobtrusively records snippets of ambient sounds from participants’ momentary environments. In tracking moment-to-moment ambient sounds, it yields acoustic logs of people’s days as they naturally unfold. In sampling only a fraction of the time, it protects participants’ privacy and makes large observational studies feasible. As a naturalistic observation method, it provides an observer’s account of daily life and is optimized for the objective assessment of audible aspects of social environments, behaviors, and interactions (e.g., habitual preferences for social settings, idiosyncratic interaction styles, and subtle emotional expressions). The article discusses the EAR method conceptually and methodologically, reviews prior research with it, and identifies three concrete ways in which it can enrich psychosomatic research. Specifically, it can (a) calibrate psychosocial effects on health against frequencies of real-world behavior, (b) provide ecological, observational measures of health-related social processes that are independent of self-report, and (c) help with the assessment of subtle and habitual social behaviors that evade self-report but have important health implications. An important avenue for future research lies in merging traditional, self-report based ambulatory monitoring methods with observational approaches such as the EAR to allow for the simultaneous yet methodologically independent assessment of inner, experiential (e.g., loneliness) and outer, observable aspects (e.g., social isolation) of real-world social processes to reveal their unique effects on health. PMID:22582338
A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm
Zhu, Yongyun
2018-01-01
In this paper, we propose a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which are obtained from inertial sensors, usually contain various types of noise that affect the convergence rate and the accuracy of the coarse alignment. To address this drawback, we studied optimal-REQUEST, an optimal attitude-determination method based on observation vectors. Compared with the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously, so the attitude determination converges faster. Within the proposed method, we developed an iterative procedure for determining the attitude quaternion. We carried out simulation and turntable tests to validate the proposed method's performance. The experimental results showed that the proposed optimal-REQUEST algorithm converges faster and that the coarse alignment is more stable than with the traditional method. In summary, the proposed method is highly applicable to practical systems. PMID:29337895
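The abstract centers on determining an attitude quaternion from weighted vector observations. The sketch below implements Davenport's q-method, a closed-form relative of (optimal-)REQUEST, to show how a quaternion is extracted from noisy observation vectors; it is not the authors' recursive optimal-REQUEST filter, and the example vectors and weights are invented for illustration.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport q-method: attitude quaternion from weighted vector observation pairs."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    # The optimal quaternion is the eigenvector of K with the largest eigenvalue.
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]          # convention: [qx, qy, qz, qw]
    return q / np.linalg.norm(q)

# Example: two noisy unit-vector observations (e.g., gravity and a reference direction).
r1, r2 = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
b1, b2 = np.array([0.0, 0.02, 0.999]), np.array([0.999, 0.01, 0.0])
b1, b2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)
print(q_method([b1, b2], [r1, r2], [0.5, 0.5]))
```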
Anguera, M. Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts. PMID:29441028
NASA Astrophysics Data System (ADS)
Katpatal, Y. B.; Paranjpe, S. V.; Kadu, M. S.
2017-12-01
Geological formations act as aquifer systems, and variability in the hydrological properties of aquifers controls groundwater occurrence and dynamics. To understand groundwater availability in any terrain, spatial interpolation techniques are widely used. It has been observed that, with varying hydrogeological conditions, large variations in observed groundwater levels occur even in a geologically homogeneous setting. Hence, the accuracy of groundwater estimation depends on the use of appropriate interpolation techniques. The study area of the present study is the Venna Basin of Maharashtra State, India, a basaltic terrain with four different types of basaltic layers laid down horizontally: weathered vesicular basalt, weathered and fractured basalt, highly weathered unclassified basalt, and hard massive basalt. The groundwater levels vary with topography, as the different types of basalt are present at varying depths. Local stratigraphic profiles were generated for the different basaltic terrains. The present study aims to interpolate the groundwater levels within the basin and to check the correlation between the estimated and the observed values. Groundwater levels for 125 observation wells situated in these different basaltic terrains over 20 years (1995-2015) were used in the study. The interpolation was carried out in a Geographical Information System (GIS) using ordinary kriging and the Inverse Distance Weighting (IDW) method. A comparative analysis of the interpolated groundwater levels was carried out to validate them against the recorded groundwater level dataset. The results were related to the various types of basaltic terrain present in the basin that form the aquifer systems. Mean Error (ME) and Mean Square Error (MSE) were computed and compared. It was observed that the interpolated values from the two interpolation methods do not correlate well with each other. The study concludes that in crystalline basaltic terrain, interpolation methods must be verified against changes in the geological profiles.
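For reference, the sketch below shows the Inverse Distance Weighting interpolation mentioned above, together with a leave-one-out routine that yields the Mean Error and Mean Square Error used for comparison; the coordinates, power parameter, and error criteria are generic assumptions rather than the study's GIS configuration.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_tgt, power=2.0, eps=1e-12):
    """Inverse Distance Weighting interpolation of well levels onto target points."""
    d = np.linalg.norm(xy_tgt[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)               # closer wells get larger weights
    return (w @ z_obs) / w.sum(axis=1)

def loo_errors(xy_obs, z_obs, **kw):
    """Leave-one-out Mean Error and Mean Square Error for the interpolator."""
    preds = np.array([
        idw(np.delete(xy_obs, i, axis=0), np.delete(z_obs, i), xy_obs[i:i + 1], **kw)[0]
        for i in range(len(z_obs))
    ])
    resid = preds - z_obs
    return resid.mean(), (resid ** 2).mean()

# Example with invented well coordinates (km) and groundwater levels (m below ground).
xy = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [0.5, 2.0], [1.8, 0.2]])
z = np.array([6.2, 5.8, 7.4, 6.9, 5.5])
print(loo_errors(xy, z))
```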
A new retrieval method for the ice water content of cirrus using data from the CloudSat and CALIPSO
NASA Astrophysics Data System (ADS)
Pan, Honglin; Bu, Lingbing; Kumar, K. Raghavendra; Gao, Haiyang; Huang, Xingyou; Zhang, Wentao
2017-08-01
The CloudSat and CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) satellites are members of the A-Train satellite observation system and achieve quasi-synchronized observations on the same orbit. With the help of the active (CALIOP and CPR) and passive payloads on these two satellites, unprecedentedly detailed information on the microphysical properties of ice clouds can be retrieved. The ice water content (IWC) is regarded as one of the most important microphysical characteristics of cirrus because of its prominent role in cloud radiative forcing. In this paper, we propose a new joint (combination) retrieval method that takes full advantage of different well-established retrieval methods, namely the lidar method (for the lidar-only region), the MWCR method (for the radar-only region), and the Wang method (for the lidar-radar region) proposed by Wang et al. (2002). In the retrieval of cirrus IWC, empirical formulas of the exponential type were used for thinner cirrus (detected by lidar only), thicker cirrus (detected by radar only), and the part of the cirrus detected by both, respectively. In the present study, the comparison of the various methods verified that the proposed joint method is more comprehensive, rational, and reliable. Furthermore, the retrieved information on cirrus is complete and accurate for the region that the lidar cannot penetrate and where the radar is insensitive. On the whole, the IWC retrievals from the joint method, Ca&Cl, and ICARE show certain differences, which can be interpreted in terms of the different hypotheses on microphysical characteristics and parameters used in the retrieval methods. In addition, our joint method uses only the extinction coefficient and the radar reflectivity factor to calculate the IWC, which is simpler and reduces the accumulative error to some extent. In future studies, we will compare not only the IWC values but also the detailed macrophysical and microphysical characteristics of cirrus.
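To illustrate the joint-retrieval logic (different empirical relations applied to the lidar-only, radar-only, and overlap regions), the sketch below combines extinction- and reflectivity-based estimates on a common range grid. The power-law forms and all coefficients are placeholders standing in for the empirical relations, and a simple average stands in for the Wang et al. (2002) lidar-radar retrieval in the overlap region.

```python
import numpy as np

A_LID, B_LID = 1.0, 1.0   # placeholder coefficients for IWC = A_LID * alpha**B_LID (lidar-only)
A_RAD, B_RAD = 1.0, 1.0   # placeholder coefficients for IWC = A_RAD * ze**B_RAD (radar-only)

def joint_iwc(alpha, ze):
    """Combine lidar extinction (alpha) and radar reflectivity (ze) profiles into IWC.

    NaN marks range gates where an instrument has no valid detection.
    """
    iwc = np.full(alpha.shape, np.nan)
    lid_only = ~np.isnan(alpha) & np.isnan(ze)
    rad_only = np.isnan(alpha) & ~np.isnan(ze)
    both = ~np.isnan(alpha) & ~np.isnan(ze)
    iwc[lid_only] = A_LID * alpha[lid_only] ** B_LID
    iwc[rad_only] = A_RAD * ze[rad_only] ** B_RAD
    # Overlap region: simple average of the two estimates as a stand-in for a
    # dedicated lidar-radar relation.
    iwc[both] = 0.5 * (A_LID * alpha[both] ** B_LID + A_RAD * ze[both] ** B_RAD)
    return iwc

# Example profile: thin cloud top seen by lidar only, thick base seen by radar only.
alpha = np.array([0.1, 0.3, 0.5, np.nan, np.nan])
ze = np.array([np.nan, np.nan, 0.4, 0.8, 1.2])
print(joint_iwc(alpha, ze))
```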
Mosmuller, David G M; Maal, Thomas J; Prahl, Charlotte; Tan, Robin A; Mulder, Frans J; Schwirtz, Roderic M F; de Vet, Henrica C W; Bergé, Stefaan J; Don Griot, J P W
2017-08-01
For the assessment of the nasolabial appearance in cleft patients, a widely accepted, reliable scoring system is not available. In this study four different methods of assessment are compared, including 2D and 3D asymmetry and aesthetic assessments. The data and ratings from an earlier study using the Asher-McDade aesthetic index on 3D photographs and the outcomes of 3D facial distance mapping were compared to a 2D aesthetic assessment, the Cleft Aesthetic Rating Scale, and to SymNose, a computerized 2D asymmetry assessment technique. The reliability and correlation between the four assessment techniques were tested using a sample of 79 patients. The 3D asymmetry assessment had the highest reliability and could be performed by just one observer (Intraclass correlation coefficient (ICC): 0.99). The 2D asymmetry assessment of the nose was highly reliable when performed by just one observer (ICC: 0.89). However, for the 2D asymmetry assessment of the lip more observers were needed. For the 2D aesthetic assessments 3 observers were needed. The 3D aesthetic assessment had the lowest single-observer reliability (ICC: 0.38-0.56) of all four techniques. The agreement between the different assessment methods is poor to very poor. The highest correlation (R: 0.48) was found between 2D and 3D aesthetic assessments. Remarkably, the lowest correlations were found between 2D and 3D asymmetry assessments (0.08-0.17). Different assessment methods are not in agreement and seem to measure different nasolabial aspects. More research is needed to establish exactly what each assessment technique measures and which measurements or outcomes are relevant for the patients. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.
2015-06-01
Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
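Two of the precipitation correction methods compared above can be written very compactly. The sketch below shows multiplicative linear scaling (LS) and empirical quantile mapping (QM) applied to a raw RCM precipitation series, assuming paired RCM and observed series for a calibration period; the study's implementations (e.g., wet-day handling, monthly factors) are more involved.

```python
import numpy as np

def linear_scaling_precip(rcm_hist, obs_hist, rcm_raw):
    """Multiplicative linear scaling: match the long-term mean of the control run."""
    factor = obs_hist.mean() / rcm_hist.mean()
    return rcm_raw * factor

def quantile_mapping(rcm_hist, obs_hist, rcm_raw):
    """Empirical quantile mapping: map each raw value through the observed CDF."""
    quantiles = np.linspace(0.0, 1.0, 101)
    rcm_q = np.quantile(rcm_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    return np.interp(rcm_raw, rcm_q, obs_q)

# Example with synthetic daily precipitation (mm): RCM is too wet on average.
rng = np.random.default_rng(0)
obs_hist = rng.gamma(0.8, 4.0, 3650)
rcm_hist = rng.gamma(0.8, 6.0, 3650)
rcm_raw = rng.gamma(0.8, 6.0, 365)
print(linear_scaling_precip(rcm_hist, obs_hist, rcm_raw).mean(),
      quantile_mapping(rcm_hist, obs_hist, rcm_raw).mean(),
      obs_hist.mean())
```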
NASA Technical Reports Server (NTRS)
Pina, J. F.; House, F. B.
1976-01-01
A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.
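The two techniques contrast a square, exactly determined system (instantaneous technique) with an overdetermined one solved by least squares (best-fit technique). Below is a toy numerical sketch, with a random sensitivity matrix and invented emittance values standing in for the actual elemental-area geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions = 5

# "Instantaneous" technique: one observation per region -> square system, direct inversion.
A_inst = rng.random((n_regions, n_regions)) + np.eye(n_regions)
w_true = rng.uniform(100.0, 300.0, n_regions)        # toy radiant emittance per region
y_inst = A_inst @ w_true
w_inst = np.linalg.solve(A_inst, y_inst)             # unique solution via matrix inversion

# "Best fit" technique: many more observations than regions -> least squares.
A_fit = rng.random((200, n_regions))
y_fit = A_fit @ w_true + rng.normal(0.0, 1.0, 200)   # noisy monthly/seasonal observations
w_fit, *_ = np.linalg.lstsq(A_fit, y_fit, rcond=None)

print(w_true, w_inst, w_fit)
```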
NASA Astrophysics Data System (ADS)
Megan Gillies, D.; Knudsen, D.; Donovan, E.; Jackel, B.; Gillies, R.; Spanswick, E.
2017-08-01
We present a comprehensive survey of 630 nm (red-line) emission discrete auroral arcs using the newly deployed Redline Emission Geospace Observatory. In this study we discuss the need for observations of 630 nm aurora and issues with the large-altitude range of the red-line aurora. We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of 10 red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) and find that a characteristic emission height of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar, both of which are consistent with a characteristic emission height of 200 km.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
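For readers unfamiliar with the model observer used above, the sketch below builds rotationally symmetric Laguerre-Gauss channels and computes a CHO detectability index from signal-present and signal-absent image ensembles. The channel width, channel count, and image handling are illustrative assumptions, not the validated implementation used in the study, and the ensembles must contain well more images than channels for the covariance estimate to be usable.

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size, n_channels=10, a=15.0):
    """Rotationally symmetric Laguerre-Gauss channels on a size x size pixel grid."""
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    g = 2.0 * np.pi * (x**2 + y**2) / a**2
    U = np.stack([np.sqrt(2.0) / a * np.exp(-g / 2.0) * eval_laguerre(j, g)
                  for j in range(n_channels)])
    return U.reshape(n_channels, -1)                      # channels x pixels

def cho_detectability(signal_imgs, noise_imgs, U):
    """Channelized Hotelling observer SNR from two image ensembles (N x size x size)."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T  # channel outputs, signal present
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T    # channel outputs, signal absent
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))       # Hotelling template in channel space
    ts, tn = vs @ w, vn @ w                               # test statistics per image
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
```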
Inventory versus Checklist Approach to Assess Middle School a la Carte Food Availability
ERIC Educational Resources Information Center
Hearst, Mary O.; Lytle, Leslie A.; Pasch, Keryn E.; Heitzler, Carrie D.
2009-01-01
Background: The purpose of this research is to evaluate 2 methods of assessing foods available on school a la carte lines for schools' ability to assess the proportion of foods that are healthful options. Methods: This observational study used data collected at 38 middle schools, October 2006-May 2007. An inventory method was used to collect…
NASA Astrophysics Data System (ADS)
Wang, Shuguo
2013-01-01
The so-called change detection method is a promising way to retrieve soil moisture (SM) dynamics from time series of radar backscatter (σ0) observations. The current study is a preparatory step toward using this method for SM inversion at the basin scale, in order to investigate the applicability of the change detection method in the Heihe River Basin and to inspect the sensitivity of SAR signals to soil moisture variations. At the same time, prior knowledge of SM dynamics and of the land heterogeneities that may contribute to the backscatter observations can be obtained. The impact of land surface states on the spatial and temporal σ0 variability measured by ASAR has been evaluated in the upstream part of the Heihe River Basin, which was one of the foci experimental areas (FEAs) in the Watershed Allied Telemetry Experimental Research (WATER) project. Based on the in situ measurements provided by an automatic meteorological station (AMS) established at the A’rou site and on time series of ASAR observations focused on a 1 km2 area, we identified the relationship between the temporal dynamics of σ0 and in situ SM variations, as well as the land heterogeneities of the study area according to the characteristics of the spatial variability of σ0. The in situ measurements of soil moisture and temperature show a very clear seasonal freeze/thaw cycle at the study site. The temporal evolution of σ0 is largely consistent with the ground measurements.
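The change detection idea referred to above is often reduced to scaling each backscatter observation between site-specific dry and wet reference levels. A minimal sketch, assuming dry and wet references in dB are already known for the pixel or footprint:

```python
import numpy as np

def relative_soil_moisture(sigma0_db, sigma0_dry_db, sigma0_wet_db):
    """Change-detection scaling of backscatter between dry and wet reference levels.

    Returns a relative soil moisture index in [0, 1] (0 = driest observed state,
    1 = wettest observed state).
    """
    sm = (sigma0_db - sigma0_dry_db) / (sigma0_wet_db - sigma0_dry_db)
    return np.clip(sm, 0.0, 1.0)

# Example with invented backscatter values (dB) for three acquisition dates.
print(relative_soil_moisture(np.array([-11.5, -9.0, -7.2]), -13.0, -6.0))
```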
NASA Astrophysics Data System (ADS)
Khatri, Pradeep; Hayasaka, Tadahiro; Iwabuchi, Hironobu; Takamura, Tamio; Irie, Hitoshi; Nakajima, Takashi Y.; Letu, Husi; Kai, Qin
2017-04-01
Clouds are known to have profound impacts on atmospheric radiation and the water budget, climate change, atmosphere-surface interaction, and so on. Cloud optical thickness (COT) and effective radius (Re) are two fundamental cloud parameters required to study clouds from climatological and hydrological points of view. The large spatial-temporal coverage of these cloud parameters from space observation has proved very useful for cloud research; however, validation of space-based products is still a challenging task due to the lack of reliable data. Ground-based remote sensing instruments, such as the sky radiometers distributed around the world through the international observation networks SKYNET (http://atmos2.cr.chiba-u.jp/skynet/) and AERONET (https://aeronet.gsfc.nasa.gov/), have great potential to produce ground-truth cloud parameters in different parts of the globe to validate satellite products. For the sky radiometers of SKYNET and AERONET, a few cloud retrieval methods exist, but those methods have difficulties when the cloud is optically thin. This is because the observed transmittances at two wavelengths can originate from more than one set of COT and Re, and the choice of the most plausible set is difficult. At the same time, calibration, especially for wavelengths in the near-infrared (NIR) region, which is important for retrieving Re, is also a difficult task at present. As a result, instruments need to be calibrated at a high mountain site, or calibration constants need to be transferred from a standard instrument. Taking those points into account, we developed a new retrieval method aimed at overcoming the above-mentioned difficulties. We used observed transmittances at multiple wavelengths to overcome the first problem, and we further proposed a method to obtain the calibration constant of the NIR wavelength channel using observation data. Our cloud retrieval method was found to produce relatively accurate COT and Re when validated against data from a collocated narrow field-of-view radiometer at one SKYNET site. Though the method was developed for the sky radiometer of SKYNET, it can also be used for the sky radiometer of AERONET and other instruments observing spectral zenith transmittances. The proposed retrieval method is then applied to retrieve cloud parameters at key SKYNET sites within Japan, which are used to validate cloud products obtained from space observations by the MODIS sensors onboard the TERRA/AQUA satellites and Himawari-8, a Japanese geostationary satellite. Our analyses suggest an underestimation (overestimation) of COT (Re) from space observations.
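The retrieval can be viewed as fitting (COT, Re) to observed zenith transmittances at several wavelengths. The sketch below does this with a least-squares fit against a toy forward model; the model, wavelengths, and bounds are invented placeholders for the radiative-transfer calculations actually required, and with noiseless synthetic input the fit mainly illustrates the optimization step, not retrieval accuracy.

```python
import numpy as np
from scipy.optimize import minimize

WAVELENGTHS_UM = np.array([0.87, 1.02, 1.6, 2.2])   # illustrative channels

def toy_transmittance(cot, re_um, wl_um):
    # Placeholder for a radiative-transfer lookup table: bulk attenuation from COT
    # plus a small Re-dependent spectral shape.
    return np.exp(-0.15 * cot) * (0.9 + 0.1 * np.exp(-wl_um / re_um))

def retrieve(t_obs):
    """Least-squares fit of (COT, Re) to multi-wavelength zenith transmittances."""
    cost = lambda p: np.sum((toy_transmittance(p[0], p[1], WAVELENGTHS_UM) - t_obs) ** 2)
    return minimize(cost, x0=[5.0, 10.0], bounds=[(0.1, 64.0), (2.0, 60.0)],
                    method="L-BFGS-B").x

t_synth = toy_transmittance(3.0, 12.0, WAVELENGTHS_UM)
print(retrieve(t_synth))   # COT is well constrained; Re only weakly, echoing the thin-cloud ambiguity
```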
Ultimate pier and contraction scour prediction in cohesive soils at selected bridges in Illinois
Straub, Timothy D.; Over, Thomas M.; Domanski, Marian M.
2013-01-01
The Scour Rate In COhesive Soils-Erosion Function Apparatus (SRICOS-EFA) method includes an ultimate scour prediction that is the equilibrium maximum pier and contraction scour of cohesive soils over time. The purpose of this report is to present the results of testing the ultimate pier and contraction scour methods for cohesive soils on 30 bridge sites in Illinois. A comparison of the ultimate cohesive and non-cohesive methods, along with the Illinois Department of Transportation (IDOT) cohesive soil reduction-factor method and measured scour, is presented. Also, results of a comparison of historic IDOT laboratory and field values of unconfined compressive strength of soils (Qu) are presented. The unconfined compressive strength is used in both the ultimate cohesive and reduction-factor methods, and knowing how the values from field methods compare to the laboratory methods is critical to the informed application of the methods. On average, the non-cohesive method predicts the highest amount of scour, followed by the reduction-factor method, and the ultimate cohesive method predicts the lowest amount of scour. The 100-year scour predicted by the ultimate cohesive, non-cohesive, and reduction-factor methods for each bridge site and soil is always larger than the observed scour in this study, except for 12% of predicted values, all of which are within 0.4 ft of the observed scour. The ultimate cohesive scour prediction is smaller than the non-cohesive scour prediction for 78% of bridge sites and soils. Seventy-six percent of the ultimate cohesive predictions show a 45% or greater reduction from the non-cohesive predictions that are over 10 ft. Comparing the ultimate cohesive and reduction-factor 100-year scour prediction methods for each bridge site and soil, the scour predicted by the ultimate cohesive method is less than that predicted by the reduction-factor method for 51% of bridge sites and soils. Critical shear stress remains a needed parameter in the ultimate scour prediction for cohesive soils. The unconfined soil compressive strength measured by IDOT in the laboratory was found to provide a good prediction of critical shear stress, as measured by using the erosion function apparatus in a previous study. Because laboratory Qu analyses are time-consuming and expensive, the ability of field-measured Rimac data to estimate unconfined soil strength in the critical shear–soil strength relation was tested. A regression analysis was completed using a historic IDOT dataset containing 366 data pairs of laboratory Qu and field Rimac measurements from common sites with cohesive soils. The resulting equations provide a point prediction of Qu, given any Rimac value, with a 90% confidence interval. The prediction equations are not significantly different from the identity Qu = Rimac. The alternative predictions of ultimate cohesive scour presented in this study assume Qu will be estimated using Rimac measurements that include computed uncertainty. In particular, the ultimate cohesive predicted scour is greater than observed scour for the entire 90% confidence interval range for predicting Qu at the bridges and soils used in this study, with the exception of the six predicted values that are all within 0.6 ft of the observed scour.
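The Qu-Rimac regression with a 90% interval for a point prediction can be reproduced with standard ordinary-least-squares formulas. Below is a sketch under the usual OLS assumptions, with invented example data rather than the 366 IDOT pairs.

```python
import numpy as np
from scipy import stats

def fit_with_prediction_interval(rimac, qu, x_new, level=0.90):
    """OLS of Qu on Rimac plus a point prediction interval at x_new."""
    n = len(rimac)
    X = np.column_stack([np.ones(n), rimac])
    beta, *_ = np.linalg.lstsq(X, qu, rcond=None)
    resid = qu - X @ beta
    s2 = resid @ resid / (n - 2)                      # residual variance
    xbar, sxx = rimac.mean(), ((rimac - rimac.mean()) ** 2).sum()
    y_hat = beta[0] + beta[1] * x_new
    se_pred = np.sqrt(s2 * (1.0 + 1.0 / n + (x_new - xbar) ** 2 / sxx))
    t_crit = stats.t.ppf(0.5 + level / 2.0, n - 2)
    return y_hat, y_hat - t_crit * se_pred, y_hat + t_crit * se_pred

# Example with a handful of invented Qu/Rimac pairs (same units for both).
rimac = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
qu = np.array([1.1, 1.4, 2.1, 2.4, 3.2, 3.4, 4.1])
print(fit_with_prediction_interval(rimac, qu, 2.8))
```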
Motevallizadeh, Saeed; Malek Afzali, Hossein; Larijani, Bagher
2011-01-01
Family planning has been defined, within the framework of the mothers and children plan, as one component of Primary Healthcare (PHC). Besides quantity, the quality of services, particularly in terms of ethics such as respecting individuals' privacy, is of great importance in offering family planning services. This was a preliminary study to gather information about the degree to which medical ethics were observed during family planning services at Tehran urban healthcare centers. A questionnaire was designed for the study. For the first question, regarding informed consent, 47 clients who were advised about various contraception methods in 2007 were asked whether the advantages and disadvantages of the contraceptive methods had been discussed by the service provider, and a rank was then assigned to each client and method. Finally, the average value of advantages and disadvantages for each method was calculated. For questions about autonomy, justice, and beneficence, yes/no answers were recorded and measured accordingly. For first-time clients, healthcare providers stressed the advantages of pills and the disadvantages of tubectomy, and paid less attention to the advantages of injectable ampoules and the disadvantages of pills; for second-time clients, they stressed the advantages and disadvantages of tubectomy and paid less attention to the advantages of condoms and the disadvantages of vasectomy. Clients reported 100% satisfaction with the observance of turns and the free-of-charge services. The degree of observance of autonomy was 64.7% and 77.3% for first-time and second-time clients, respectively. Applying the consultant's personal viewpoint when selecting a method breaches informed consent for first- and second-time clients. Justice and non-maleficence were well observed by the system.
Singh, Samiksha; Upadhyaya, Sanjeev; Deshmukh, Pradeep; Dongre, Amol; Dwivedi, Neha; Dey, Deepak; Kumar, Vijay
2018-04-02
In India, amidst the increasing number of health programmes, there are concerns about the performance of frontline health workers (FLHW). We assessed the time utilisation and factors affecting the work of frontline health workers from South India. This is a mixed methods study using time and motion (TAM) direct observations and qualitative enquiry among frontline/community health workers. These included 43 female and 6 male multipurpose health workers (namely, auxiliary nurse midwives (ANMs) and male-MPHWs), 12 nutrition and health workers (Anganwadi workers, AWWs) and 53 incentive-based community health workers (accredited social health activists, ASHAs). We conducted the study in two phases. In the formative phase, we conducted an in-depth inductive investigation to develop observation checklists and qualitative tools. The main study involved deductive approach for TAM observations. This enabled us to observe a larger sample to capture variations across non-tribal and tribal regions and different health cadres. For the main study, we developed GPRS-enabled android-based application to precisely record time, multi-tasking and field movement. We conducted non-participatory direct observations (home to home) for consecutively 6 days for each participant. We conducted in-depth interviews with all the participants and 33 of their supervisors and relevant officials. We conducted six focus group discussions (FGDs) with ASHAs and one FGD with ANMs to validate preliminary findings. We established a mechanism for quality assurance of data collection and analysis. We analysed the data separately for each cadre and stratified for non-tribal and tribal regions. On any working day, the ANMs spent median 7:04 h, male-MPHWs spent median 5:44 h and AWWs spent median 6:50 h on the job. The time spent on the job was less among the FLHWs from tribal areas as compared to those from non-tribal areas. ANMs and AWWs prioritised maternal and child health, while male-MPHWs were involved in seasonal diseases and school health. ASHAs visited homes to provide maternal health, basic curative care, and follow-up of tuberculosis patients. The results describe issues related with work planning, time management and several systemic, community-based and personnel factors affecting work of FLHWs. TAM study with mixed methods can help researchers as well as managers to periodically review work patterns, devise appropriate job responsibilities and improve the efficiency of health workers.
Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W
2016-01-01
Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts.
Reporting of methodological features in observational studies of pre-harvest food safety.
Sargeant, Jan M; O'Connor, Annette M; Renter, David G; Kelton, David F; Snedeker, Kate; Wisener, Lee V; Leonard, Erin K; Guthrie, Alessia D; Faires, Meredith
2011-02-01
Observational studies in pre-harvest food safety may be useful for identifying risk factors and for evaluating potential mitigation strategies to reduce foodborne pathogens. However, there are no structured reporting guidelines for these types of study designs in livestock species. Our objective was to evaluate the reporting of observational studies in the pre-harvest food safety literature using guidelines modified from the human healthcare literature. We identified 100 pre-harvest food safety studies published between 1999 and 2009. Each study was evaluated independently by two reviewers using a structured checklist. Of the 38 studies that explicitly stated the observational study design, 27 were described as cross-sectional studies, eight as case-control studies, and three as cohort studies. Study features reported in over 75% of the selected studies included: description of the geographic location of the studies, definitions and sources of data for outcomes, organizational level and source of data for independent variables, description of statistical methods and results, number of herds enrolled in the study and included in the analysis, and sources of study funding. However, other features were not consistently reported, including details related to eligibility criteria for groups (such as barn, room, or pen) and individuals, numbers of groups and individuals included in various stages of the study, identification of primary outcomes, the distinction between putative risk factors and confounding variables, the identification of a primary exposure variable, the referent level for evaluation of categorical variable associations, methods of controlling confounding variables and missing variables, model fit, details of subset analysis, demographic information at the sampling unit level, and generalizability of the study results. Improvement in reporting of observational studies of pre-harvest food safety will aid research readers and reviewers in interpreting and evaluating the results of such studies. Copyright © 2010 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Goetz, Michal; Yeh, Chin-Bin; Ondrejka, Igor; Akay, Aynur; Herczeg, Ilona; Dobrescu, Iuliana; Kim, Boong Nyun; Jin, Xingming; Riley, Anne W.; Martenyi, Ferenc; Harrison, Gavan; Treuer, Tamas
2012-01-01
Objectives: This prospective, observational, non-randomized study aimed to describe the relationship between treatment regimen prescribed and the quality of life (QoL) of ADHD patients in countries of Central and Eastern Europe (CEE) and Eastern Asia over 12 months. Methods: 977 Male and female patients aged 6-17 years seeking treatment for…
1987-09-01
...a useful average for population studies, does not delay data processing, and is relatively inexpensive. Using HVEM and observing recipe preparation procedures improves the ... extensive review of the procedures and problems in design, collection, analysis, processing, and interpretation of dietary survey data for individuals
Inverse Abbe-method for observing small refractive index changes in liquids.
Räty, Jukka; Peiponen, Kai-Erik
2015-05-01
This study concerns an optical method for the detection of minuscule refractive index changes in the liquid phase. The proposed method reverses the operation of the traditional Abbe refractometer and thus utilizes the light dispersion properties of materials, i.e. it involves the dependence of the refractive index on light wavelength. In practice, the method includes the detection of light reflection spectra in the visible spectral range. This inverse Abbe method is suitable for liquid quality studies e.g. for monitoring water purity. Tests have shown that the method reveals less than per mil NaCl or ethanol concentrations in water. Copyright © 2015 Elsevier B.V. All rights reserved.
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
NASA Astrophysics Data System (ADS)
Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.
1996-12-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods correcting the vascular artifact, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
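The core of both correction methods is assigning a single common value to every voxel of the vascular region of interest (VROI) in the baseline and activation scans before computing t-maps. Below is a minimal sketch of that step, assuming the VROI mask has already been defined (from early dynamic frames or a coregistered MRI); the choice of fill value here is an assumption.

```python
import numpy as np

def equalize_vroi(baseline, activation, vroi_mask, fill="mean"):
    """Assign one common value to every voxel in the vascular ROI of both scans.

    baseline, activation: 3-D activity volumes; vroi_mask: boolean volume of the VROI.
    """
    base, act = baseline.copy(), activation.copy()
    if fill == "mean":
        # Common value taken as the pooled mean inside the VROI (one possible choice).
        common = np.mean(np.concatenate([base[vroi_mask], act[vroi_mask]]))
    else:
        common = 0.0
    base[vroi_mask] = common
    act[vroi_mask] = common
    return base, act
```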
A method for recording verbal behavior in free-play settings1
Nordquist, Vey M.
1971-01-01
The present study attempted to test the reliability of a new method of recording verbal behavior in a free-play preschool setting. Six children, three normal and three speech impaired, served as subjects. Videotaped records of verbal behavior were scored by two experimentally naive observers. The results suggest that the system provides a means of obtaining reliable records of both normal and impaired speech, even when the subjects exhibit nonverbal behaviors (such as hyperactivity) that interfere with direct observation techniques. PMID:16795310
A new modelling approach for zooplankton behaviour
NASA Astrophysics Data System (ADS)
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models, namely random walk and correlated walk models, as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
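For comparison with the non-"intelligent" reference models mentioned above, the sketch below simulates a simple random walk (heading redrawn each step) and a correlated random walk (persistent heading with von Mises turning angles); the step length, turning concentration, and 2-D setting are illustrative choices, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_walk(n, step=1.0):
    """Uncorrelated walk: a fresh heading is drawn at every step."""
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.cumsum(step * np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)

def correlated_walk(n, step=1.0, kappa=4.0):
    """Correlated walk: small von Mises turning angles give directional persistence."""
    turns = rng.vonmises(0.0, kappa, n)
    angles = np.cumsum(turns)
    return np.cumsum(step * np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)

rw, crw = random_walk(500), correlated_walk(500)
# The persistent walk usually (not always, per realization) ends farther from the origin.
print(np.linalg.norm(rw[-1]), np.linalg.norm(crw[-1]))
```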
Thermal Diffusivity and Conductivity in Ceramic Matrix Fiber Composite Materials - Literature Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.G. Quinn
A technical literature review was conducted to gain an understanding of the state-of-the-art methods, problems, results, and future directions for thermal diffusivity/conductivity of matrix-fiber composites for high-temperature applications. This paper summarizes the results of test method development and theory. Results from testing on various sample types are discussed, with a focus on the anisotropic characteristics of matrix-fiber composites, barriers to heat flow, and notable microstructure observations. The conclusion presents some observations from the technical literature, notes drawbacks of the current information, and discusses potential needs for future testing.
ERIC Educational Resources Information Center
Carter, Angela
This study involved observing a second-grade classroom to investigate how the teacher called on students, noting whether the teacher gave enough attention to students who raised their hands frequently by calling on them and examining students' responses when called on. Researchers implemented a new method of calling on students using name cards,…
Glycerol dehydration of native and diabetic animal tissues studied by THz-TDS and NMR methods
Smolyanskaya, O. A.; Schelkanova, I. J.; Kulya, M. S.; Odlyanitskiy, E. L.; Goryachev, I. S.; Tcypkin, A. N.; Grachev, Ya. V.; Toropova, Ya. G.; Tuchin, V. V.
2018-01-01
The optical clearing method has been widely used for different spectral ranges where it provides tissue transparency. In this work, we observed the enhanced penetration of the terahertz waves inside biological samples (skin, kidney, and cornea) treated with glycerol solutions inducing changes of optical and dielectric properties. It was supported by the observed trend of free-to-bound water ratio measured by the nuclear magnetic resonance (NMR) method. The terahertz clearing efficiency was found to be less for diabetic samples than for normal ones. Results of the numerical simulation proved that pulse deformation is due to bigger penetration depth caused by the reduction of absorption and refraction at optical clearing. PMID:29541513
Impact of enumeration method on diversity of Escherichia coli genotypes isolated from surface water.
Martin, E C; Gentry, T J
2016-11-01
There are numerous regulatory-approved Escherichia coli enumeration methods, but it is not known whether differences in media composition and incubation conditions impact the diversity of E. coli populations detected by these methods. A study was conducted to determine if three standard water quality assessments, Colilert®, USEPA Method 1603 (modified mTEC), and USEPA Method 1604 (MI), detect different populations of E. coli. Samples were collected from six watersheds and analysed using the three enumeration approaches followed by E. coli isolation and genotyping. Results indicated that the three methods generally produced similar enumeration data across the sites, although there were some differences on a site-by-site basis. The Colilert® method consistently generated the least diverse collection of E. coli genotypes as compared to modified mTEC and MI, with those two methods being roughly equal to each other. Although the three media assessed in this study were designed to enumerate E. coli, the differences in the media composition, incubation temperature, and growth platform appear to have a strong selective influence on the populations of E. coli isolated. This study suggests that standardized methods of enumeration and isolation may be warranted if researchers intend to obtain individual E. coli isolates for further characterization. This study characterized the impact of three USEPA-approved Escherichia coli enumeration methods on observed E. coli population diversity in surface water samples. Results indicated that these methods produced similar E. coli enumeration data but were more variable in the diversity of E. coli genotypes observed. Although the three methods enumerate the same species, differences in media composition, growth platform, and incubation temperature likely contribute to the selection of different cultivable populations of E. coli, and thus caution should be used when implementing these methods interchangeably for downstream applications which require cultivated isolates. © 2016 The Society for Applied Microbiology.
Comparative evaluation of RetCam vs. gonioscopy images in congenital glaucoma.
Azad, Raj V; Chandra, Parijat; Chandra, Anuradha; Gupta, Aparna; Gupta, Viney; Sihota, Ramanjit
2014-02-01
Purpose: To compare clarity, exposure, and quality of anterior chamber angle visualization in congenital glaucoma patients using RetCam and indirect gonioscopy images. Design: Cross-sectional study. Participants: Congenital glaucoma patients over the age of 5 years. Methods: A prospective consecutive pilot study was done in congenital glaucoma patients who were older than 5 years. The methods used were indirect gonioscopy and RetCam imaging. Clarity of the image, extent of angle visible, and details of angle structures seen were graded for both methods, on digitally recorded images, in each eye, by two masked observers. Main outcome measures: Image clarity, interobserver agreement. Results: 40 eyes of 25 congenital glaucoma patients were studied. RetCam images had excellent clarity in 77.5% of patients versus 47.5% for gonioscopy. The extent of angle seen was similar by both methods. Agreement between RetCam and gonioscopy images regarding details of angle structures was 72.50% for observer 1 and 65.00% for observer 2. Conclusions: There was good agreement between RetCam and indirect gonioscopy images in detecting angle structures of congenital glaucoma patients. However, RetCam provided greater clarity, with better quality and higher magnification images. RetCam can be a useful alternative to gonioscopy in infants and small children without the need for general anesthesia.
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Morton, A.
2017-10-01
Identifying erratic or unstable time series is an area of interest to many fields, and recently there have been successful developments towards this goal. These newly developed methodologies, however, come from domains where it is typical to have several thousand or more temporal observations. This creates a challenge when attempting to apply them to time series with far fewer temporal observations, such as in socio-cultural understanding, a domain where a typical time series of interest might consist of only 20-30 annual observations. Most existing methodologies simply cannot say anything interesting with so few data points, yet researchers are still tasked to work within the confines of the data. Recently, a method for characterizing instability in a time series with limited temporal observations was published. This method, the Attribute Stability Index (ASI), uses an approximate-entropy-based approach to characterize a time series' instability. In this paper we propose an explicitly spatially weighted extension of the Attribute Stability Index. By including a mechanism to account for spatial autocorrelation, this work represents a novel approach for the characterization of space-time instability. As a case study we explore national youth male unemployment across the world from 1991-2014.
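Since the ASI builds on approximate entropy, a compact reference implementation of ApEn(m, r) for a short series is sketched below; the embedding dimension m and tolerance factor r are conventional defaults, and the spatial weighting proposed in the paper is not included.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a short 1-D series (self-matches included)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Example: a regular series vs. an irregular one of the same (short) length.
rng = np.random.default_rng(0)
t = np.arange(25)
# The irregular series typically yields the larger ApEn.
print(approximate_entropy(np.sin(t / 3.0)), approximate_entropy(rng.normal(size=25)))
```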