Multivariate modelling of endophenotypes associated with the metabolic syndrome in Chinese twins.
Pang, Z; Zhang, D; Li, S; Duan, H; Hjelmborg, J; Kruse, T A; Kyvik, K O; Christensen, K; Tan, Q
2010-12-01
The common genetic and environmental effects on endophenotypes related to the metabolic syndrome have been investigated using bivariate and multivariate twin models. This paper extends the pairwise analysis approach by introducing independent and common pathway models to Chinese twin data. The aim was to explore the common genetic architecture underlying the development of these phenotypes in the Chinese population. Three multivariate models, the full saturated Cholesky decomposition model, the common factor independent pathway model and the common factor common pathway model, were fitted to 695 pairs of Chinese twins for six phenotypes: BMI, total cholesterol, total triacylglycerol, fasting glucose, HDL and LDL. The performance of each nested model was compared with that of the full Cholesky model. Cross-phenotype correlation coefficients gave a clear indication of common genetic or environmental backgrounds for the phenotypes. Decomposition of the phenotypic correlation by the Cholesky model revealed that the observed phenotypic correlation among lipid phenotypes had both genetic and unique environmental backgrounds. Both pathway models suggested a common genetic architecture for the lipid phenotypes, distinct from that of the non-lipid phenotypes. The declining performance with increasing model restriction indicates biological heterogeneity in the development of some of these phenotypes. Our multivariate analyses revealed common genetic and environmental backgrounds for the studied lipid phenotypes in Chinese twins. Model performance showed that physiologically distinct endophenotypes may follow different genetic regulation.
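The Cholesky step at the heart of the full model can be illustrated numerically. The following is a minimal sketch assuming NumPy, with a hypothetical 3×3 phenotypic covariance matrix (illustrative values only, not the study's estimates):

```python
import numpy as np

# Hypothetical phenotypic covariance matrix for three lipid traits
# (illustrative values, not the study's estimates).
cov = np.array([
    [1.00, 0.45, 0.30],
    [0.45, 1.00, 0.55],
    [0.30, 0.55, 1.00],
])

# Cholesky factorisation: cov = L @ L.T, with L lower-triangular.
# In a multivariate twin model, an analogous triangular decomposition is
# applied separately to the additive genetic (A), common environmental (C)
# and unique environmental (E) covariance components.
L = np.linalg.cholesky(cov)

# The decomposition reproduces the covariance matrix exactly.
print(np.allclose(L @ L.T, cov))  # True
```

The triangular structure is what makes the Cholesky model "full": the first factor loads on all traits, the second on all but the first, and so on, so any covariance pattern can be reproduced.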
NASA Technical Reports Server (NTRS)
1977-01-01
The development of a framework and structure for shuttle era unmanned spacecraft projects and the development of a commonality evaluation model is documented. The methodology developed for model utilization in performing cost trades and comparative evaluations for commonality studies is discussed. The model framework consists of categories of activities associated with the spacecraft system's development process. The model structure describes the physical elements to be treated as separate identifiable entities. Cost estimating relationships for subsystem and program-level components were calculated.
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
Conceptual model of iCAL4LA: Proposing the components using comparative analysis
NASA Astrophysics Data System (ADS)
Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul
2016-08-01
This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can be used as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a specific indication classification to prioritize applicability. The extraction process revealed 17 common components for consideration. Based on scientific justifications, 16 of them were then selected as the proposed components for the model.
Model-Based Reasoning: Using Visual Tools to Reveal Student Learning
ERIC Educational Resources Information Center
Luckie, Douglas; Harrison, Scott H.; Ebert-May, Diane
2011-01-01
Using visual models is common in science and should become more common in classrooms. Our research group has developed and completed studies on the use of a visual modeling tool, the Concept Connector. This modeling tool consists of an online concept mapping Java applet that has automatic scoring functions we refer to as Robograder. The Concept…
The Importance of Statistical Modeling in Data Analysis and Inference
ERIC Educational Resources Information Center
Rollins, Derrick, Sr.
2017-01-01
Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…
Information Interaction Study for DER and DMS Interoperability
NASA Astrophysics Data System (ADS)
Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui
The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for advanced DMS applications. DER modeling was therefore studied from a system point of view, and the article first proposes a CIM-extended information model. By analyzing the basic structure of message interaction between DMS and DER, a bidirectional message-mapping method based on data exchange is then proposed.
ERIC Educational Resources Information Center
Kajonius, Petri J.
2017-01-01
Research is currently testing how the new maladaptive personality inventory for DSM (PID-5) and the well-established common Five-Factor Model (FFM) together can serve as an empirical and theoretical foundation for clinical psychology. The present study investigated the official short version of the PID-5 together with a common short version of…
ERIC Educational Resources Information Center
Goodrich, J. Marc; Lonigan, Christopher J.
2017-01-01
According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…
ERIC Educational Resources Information Center
Lee, Eunju J.; Bukowski, William M.
2012-01-01
Latent growth curve modeling was used to study the co-development of internalizing and externalizing problems in a sample of 2844 Korean fourth graders followed over four years. The project integrated two major theoretical viewpoints positing developmental mechanisms: the directional model and the common vulnerability model. Findings suggest that (a) boys…
Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston
2016-10-28
Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
Dougherty, Lea R.; Bufferd, Sara J.; Carlson, Gabrielle A.; Klein, Daniel N.
2014-01-01
A number of studies have found that broadband internalizing and externalizing factors provide a parsimonious framework for understanding the structure of psychopathology across childhood, adolescence, and adulthood. However, few of these studies have examined psychopathology in young children, and several recent studies have found support for alternative models, including a bi-factor model with common and specific factors. The present study used parents’ (typically mothers’) reports on a diagnostic interview in a community sample of 3-year-old children (n=541; 53.9% male) to compare the internalizing-externalizing latent factor model with a bi-factor model. The bi-factor model provided a better fit to the data. To test the concurrent validity of this solution, we examined associations between this model and paternal reports and laboratory observations of child temperament. The internalizing factor was associated with low levels of surgency and high levels of fear; the externalizing factor was associated with high levels of surgency and disinhibition and low levels of effortful control; and the common factor was associated with high levels of surgency and negative affect and low levels of effortful control. These results suggest that psychopathology in preschool-aged children may be explained by a single, common factor influencing nearly all disorders and unique internalizing and externalizing factors. These findings indicate that shared variance across internalizing and externalizing domains is substantial, and they are consistent with recent suggestions that emotion regulation difficulties may be a common vulnerability for a wide array of psychopathology. PMID:24652485
Johnson, Matthew D.; Anderson, Jared R.; Walker, Ann; Wilcox, Allison; Lewis, Virginia L.; Robbins, David C.
2014-01-01
Using cross-sectional data from 117 married couples in which one member is diagnosed with type 2 diabetes, the current study sought to explore a possible indirect association between common dyadic coping and dietary and exercise adherence via the mechanism of patient and spouse reports of diabetes efficacy. Results from the structural equation model analysis indicated common dyadic coping was associated with higher levels of diabetes efficacy for both patients and spouses which, in turn, was then associated with better dietary and exercise adherence for the patient. This model proved a better fit to the data than three plausible alternative models. The bootstrap test of mediation revealed common dyadic coping was indirectly associated with dietary adherence via both patient and spouse diabetes efficacy, but spouse diabetes efficacy was the only mechanism linking common dyadic coping and exercise adherence. This study highlights the importance of exploring the indirect pathways through which general intimate relationship functioning might be associated with type 2 diabetes outcomes. PMID:24015707
AQUATIC TOXICITY MODE OF ACTION STUDIES APPLIED TO QSAR DEVELOPMENT
A series of QSAR models for predicting fish acute lethality were developed using systematically collected data on more than 600 chemicals. These models were developed based on the assumption that chemicals producing toxicity through a common mechanism will have commonality in the...
Rosenström, Tom; Ystrom, Eivind; Torvik, Fartein Ask; Czajkowski, Nikolai Olavi; Gillespie, Nathan A.; Aggen, Steven H.; Krueger, Robert F.; Kendler, Kenneth S; Reichborn-Kjennerud, Ted
2017-01-01
Results from previous studies on DSM-IV and DSM-5 Antisocial Personality Disorder (ASPD) have suggested that the construct is etiologically multidimensional. To our knowledge, however, the structure of genetic and environmental influences in ASPD has not been examined using an appropriate range of biometric models and diagnostic interviews. The 7 ASPD criteria (section A) were assessed in a population-based sample of 2794 Norwegian twins by a structured interview for DSM-IV personality disorders. Exploratory analyses were conducted at the phenotypic level. Multivariate biometric models, including both independent and common pathways, were compared. A single phenotypic factor was found, and the best-fitting biometric model was a single-factor common pathway model, with common-factor heritability of 51% (95% CI = 40–67%). In other words, both genetic and environmental correlations between the ASPD criteria could be accounted for by a single common latent variable. The findings support the validity of ASPD as a unidimensional diagnostic construct. PMID:28108863
NASA Astrophysics Data System (ADS)
Krivtsov, S. N.; Yakimov, I. V.; Ozornin, S. P.
2018-03-01
A mathematical model of a solenoid common rail fuel injector was developed. It differs from existing models in that it simulates control valve wear. A Bosch common rail injector of the 0445110376 series (Cummins ISf 2.8 diesel engine) was used as the research object. Injector parameters (fuel delivery and back leakage) were determined by calculation and by experiment. The average R² of the GT-Suite model is 0.93, meaning that it predicts the injection rate shape accurately for both nominal and marginal technical conditions of an injector. Numerical analysis and experimental studies showed that control valve wear increases back leakage and fuel delivery (especially at 160 MPa). Regression models relating fuel delivery and back leakage to fuel pressure and energizing time were developed (for nominal and marginal technical conditions).
Interpretation of commonly used statistical regression models.
Kasza, Jessica; Wolfe, Rory
2014-01-01
A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study.
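As a minimal illustration of coefficient interpretation (a sketch with simulated data and NumPy, not the respiratory study's data): in a multiple linear regression, each coefficient is the expected change in the outcome per unit change in that predictor, holding the others fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical predictors: age (years) and a smoking indicator.
age = rng.uniform(20, 70, n)
smoker = rng.integers(0, 2, n).astype(float)

# Simulated outcome: lung_function = 6.0 - 0.03*age - 0.8*smoker + noise.
y = 6.0 - 0.03 * age - 0.8 * smoker + rng.normal(0, 0.2, n)

# Ordinary least squares fit (intercept, age, smoker).
X = np.column_stack([np.ones(n), age, smoker])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[1]: expected change in the outcome per additional year of age,
# holding smoking status fixed; beta[2]: smoker vs non-smoker difference,
# holding age fixed. Estimates recover roughly (6.0, -0.03, -0.8).
print(np.round(beta, 2))
```

For logistic regression the same coefficients live on the log-odds scale, so exponentiating a coefficient yields an odds ratio per unit change in the predictor.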
Common Leadership Responsibilities of Principals of Successful Turnaround Model Schools
ERIC Educational Resources Information Center
Fullwood, Jezelle
2016-01-01
Purpose: The purpose of this qualitative study was to discover which leadership responsibilities, within the domains of trust, communication, learning, and shared leadership, did elementary and middle school principals of successful turnaround schools commonly perceive as most necessary to lead a turnaround intervention model school. Themes were…
Combined Common Person and Common Item Equating of Medical Science Examinations.
ERIC Educational Resources Information Center
Kelley, Paul R.
This equating study of the National Board of Medical Examiners Examinations was a combined common persons and common items equating, using the Rasch model. The 1,000-item test was administered to about 3,000 second-year medical students in seven equal-length subtests: anatomy, physiology, biochemistry, pathology, microbiology, pharmacology, and…
Colvin, Michael E.; Pierce, Clay; Stewart, Timothy W.
2015-01-01
Food web modeling is recognized as fundamental to understanding the complexities of aquatic systems. Ecopath is the most common mass-balance model used to represent food webs and quantify trophic interactions among groups. We constructed annual Ecopath models for four consecutive years during the first half-decade of a zebra mussel invasion in shallow, eutrophic Clear Lake, Iowa, USA, to evaluate changes in relative biomass and total system consumption among food web groups, to evaluate food web impacts of non-native common carp and zebra mussels on food web groups, and to interpret food web impacts in light of on-going lake restoration. Total living biomass increased each year of the study, with the majority of the increase due to a doubling in planktonic blue-green algae, but several other taxa also increased, including a more than two-order-of-magnitude increase in zebra mussels. Common carp accounted for the largest percentage of total fish biomass throughout the study even with on-going harvest. Chironomids, common carp, and zebra mussels were the top three ranking consumer groups. Non-native common carp and zebra mussels accounted for an average of 42% of the total system consumption. Despite the relatively high biomass densities of common carp and zebra mussels, food web impacts were minimal due to excessive benthic and primary production in this eutrophic system. Consumption occurring via benthic pathways dominated system consumption in Clear Lake throughout our study, supporting the argument that benthic food webs are significant in shallow, eutrophic lake ecosystems and must be considered if ecosystem-level understanding is to be obtained.
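The Ecopath mass-balance principle can be illustrated with a toy linear system (a sketch assuming NumPy; the parameter values are hypothetical, not the Clear Lake estimates). For each group i, production is balanced against predation and catch, B_i·(P/B)_i·EE_i = Σ_j B_j·(Q/B)_j·DC_ji + Y_i, which, with rates and diet compositions fixed, is linear in the biomass vector B:

```python
import numpy as np

# Two-group toy food web: prey (0) eaten by a predator (1).
PB = np.array([2.0, 0.4])    # production/biomass per year
QB = np.array([0.0, 5.0])    # consumption/biomass (prey consumes nothing here)
EE = np.array([0.9, 0.5])    # ecotrophic efficiency
DC = np.array([[0.0, 0.0],   # DC[j, i]: fraction of j's diet that is group i
               [0.6, 0.0]])
catch = np.array([0.5, 0.2])  # fishery yield per group

# Mass balance: B_i*PB_i*EE_i - sum_j B_j*QB_j*DC[j, i] = catch_i,
# a linear system A @ B = catch in the unknown biomasses B.
A = np.diag(PB * EE) - (QB[:, None] * DC).T
B = np.linalg.solve(A, catch)
print(np.round(B, 3))  # B ≈ [1.944, 1.0]
```

In practice Ecopath solves for one unknown per group among B, P/B, EE and Q/B, but the balanced-budget logic is the same as in this two-group system.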
NASA Astrophysics Data System (ADS)
Silva, Claudio; Yáñez, Eleuterio; Barbieri, María Angela; Bernal, Claudio; Aranis, Antonio
2015-05-01
Recent studies have demonstrated the effects of climate change on both oceanographic conditions and the relative abundance and distribution of fisheries resources. In this study, we investigated the impacts of climate change on swordfish (Xiphias gladius) and common sardine (Strangomera bentincki) fisheries using predictions of changes from global models (according to the NCAR model and IPCC emissions scenario A2), bioclimate envelope models and satellite-based sea surface temperature (SST) estimates from high-resolution regional models for the simulation period 2015-2065. Predictions of SST from global climate models were regionalised using the Delta statistical downscaling technique. The results show an SST trend of 0.0196 °C per year in the study area, equivalent to 0.98 °C for the simulation horizon and for a high CO2 emission scenario (A2). The bioclimate envelope models were developed using historical (2001-2011) monthly environmental and fisheries data. These data included the local relative abundance index of fish catch per unit effort (CPUE), corresponding to the total catch (kg) by 1000 hooks in a 1° latitude × 1° longitude fishing grid for swordfish and to the total catch (ton) by hold capacity (100 m3) in a 10′ latitude × 10′ longitude grid for common sardine. The environmental data included temporal (month), spatial (latitude) and thermal conditions (SST). In the first step of the bioclimate modelling performed in this study, generalised additive models (GAMs) were used as an exploratory tool to identify the functional relationships between the environmental variables and CPUE. These relationships were then parameterised using general linear models (GLMs) to provide a robust forecasting tool. With this modelling approach, environmental variables explained 58.7% of the variation in the CPUE of swordfish and 60.6% of the variation in the CPUE of common sardine in the final GLMs.
Using IDRISI GIS, these GLMs simulated monthly changes in the relative abundance and distribution of the studied species forced by changes in the regionalised SST projected by the NCAR model under the A2 emission scenario. The simulations predicted a slight decline of 6% (17 kg/1000 hooks) and 7% (3.8 ton/100 m3) for swordfish and common sardine, respectively, in the spatial mean of the potential relative abundance (CPUE) by 2065.
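The GAM-then-GLM workflow described above can be sketched in miniature (assuming NumPy, with simulated data; the predictors and coefficients are hypothetical, not the fitted Chilean models). A log-linear CPUE model with an SST term and a seasonal month term reduces, on the log scale, to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Hypothetical monthly records: SST (deg C) and calendar month.
sst = rng.uniform(12, 20, n)
month = rng.integers(1, 13, n)

# Simulated CPUE with a log-linear SST effect and a seasonal cycle.
log_cpue = 1.0 + 0.15 * sst + 0.3 * np.sin(2 * np.pi * month / 12) \
           + rng.normal(0, 0.2, n)

# A GLM with log link and Gaussian error reduces here to OLS on log(CPUE).
X = np.column_stack([np.ones(n), sst, np.sin(2 * np.pi * month / 12)])
beta, *_ = np.linalg.lstsq(X, log_cpue, rcond=None)

# beta[1] is the proportional change in CPUE per deg C of SST warming;
# the estimates recover roughly (1.0, 0.15, 0.3).
print(np.round(beta, 2))
```

Once such a GLM is fitted, projecting the downscaled SST trend through it yields the simulated CPUE changes the study reports.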
The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...
Judging Alignment of Curriculum-Based Measures in Mathematics and Common Core Standards
ERIC Educational Resources Information Center
Morton, Christopher
2013-01-01
Measurement literature supports the utility of alignment models for application with state standards and large-scale assessments. However, the literature is lacking in the application of these models to curriculum-based measures (CBMs) and common core standards. In this study, I investigate the alignment of CBMs and standards, with specific…
A Comparison of Linking and Concurrent Calibration under the Graded Response Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…
Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model
USDA-ARS?s Scientific Manuscript database
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
Zhan, Xianbao; Wang, Fan; Bi, Yan
2016-01-01
Animal models of pancreatitis are useful for elucidating the pathogenesis of pancreatitis and developing and testing novel interventions. In this review, we aim to summarize the most commonly used animal models, overview their pathophysiology, and discuss their strengths and limitations. We will also briefly describe common animal study procedures and refer readers to more detailed protocols in the literature. Although animal models include pigs, dogs, opossums, and other animals, we will mainly focus on rodent models because of their popularity. Autoimmune pancreatitis and genetically engineered animal models will be reviewed elsewhere. PMID:27418683
Osier, Nicole; Dixon, C Edward
2016-01-01
Controlled cortical impact (CCI) is a commonly used and highly regarded model of brain trauma that uses a pneumatically or electromagnetically controlled piston to induce reproducible and well-controlled injury. The CCI model was originally used in ferrets and it has since been scaled for use in many other species. This chapter will describe the historical development of the CCI model, compare and contrast the pneumatic and electromagnetic models, and summarize key short- and long-term consequences of TBI that have been gleaned using this model. In accordance with the recent efforts to promote high-quality evidence through the reporting of common data elements (CDEs), relevant study details that should be reported in CCI studies will be noted.
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth, while visual motion in the opposite direction to pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions for model errors by an asymmetric distribution. To deal with missingness, we employ an informative missing data model. Joint models are developed that couple a partially linear mixed-effects model for the longitudinal process, a cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
USDA-ARS?s Scientific Manuscript database
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
Testing for Two-Way Interactions in the Multigroup Common Factor Model
ERIC Educational Resources Information Center
van Smeden, Maarten; Hessen, David J.
2013-01-01
In this article, a 2-way multigroup common factor model (MG-CFM) is presented. The MG-CFM can be used to estimate interaction effects between 2 grouping variables on 1 or more hypothesized latent variables. For testing the significance of such interactions, a likelihood ratio test is presented. In a simulation study, the robustness of the…
The potential of large studies for building genetic risk prediction models
NCI scientists have developed a new paradigm to assess hereditary risk prediction in common diseases, such as prostate cancer. This genetic risk prediction concept is based on polygenic analysis—the study of a group of common DNA sequences, known as singl
Modeling the impact of common noise inputs on the network activity of retinal ganglion cells
Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam
2013-01-01
Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
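The common-noise mechanism can be sketched in a few lines (an illustrative NumPy simulation, not the paper's fitted generalized linear model, which also includes stimulus and spike-history filters): two cells with no direct coupling receive a shared Gaussian input through an exponential nonlinearity, and their Poisson spike counts become synchronized.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 20000        # number of time bins
base_rate = 0.2  # baseline expected spikes per bin

# Shared (common) noise input plus independent private noise per cell.
common = rng.normal(0, 1.0, T)
private = rng.normal(0, 0.3, (2, T))

# Conditional intensity of each cell: exponential nonlinearity applied
# to the sum of the common and private inputs (no coupling terms).
rates = base_rate * np.exp(common + private)

# Poisson spike counts per bin for the two uncoupled cells.
spikes = rng.poisson(rates)

# Shared input alone produces correlated (synchronized) spike counts.
r = np.corrcoef(spikes[0], spikes[1])[0, 1]
print(r > 0.1)  # True
```

Because the synchrony arises entirely from the shared input, setting the common-noise amplitude to zero removes the correlation, which is the modeling alternative to direct-coupling terms.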
Port, Russell G; Gandal, Michael J; Roberts, Timothy P L; Siegel, Steven J; Carlson, Gregory C
2014-01-01
Most recent estimates indicate that 1 in 68 children are affected by an autism spectrum disorder (ASD). Though decades of research have uncovered much about these disorders, the pathological mechanism remains unknown. Hampering efforts is the seeming inability to integrate findings across the micro to macro scales of study, from changes in molecular, synaptic and cellular function to large-scale brain dysfunction impacting sensory, communicative, motor and cognitive activity. In this review, we describe how studies focusing on neuronal circuit function provide unique context for identifying common neurobiological disease mechanisms of ASD. We discuss how recent EEG and MEG studies in subjects with ASD have repeatedly shown alterations in ensemble population recordings (both in simple event-related potential latencies and specific frequency subcomponents). Because these disease-associated electrophysiological abnormalities have been recapitulated in rodent models, studying circuit differences in these models may provide access to abnormal circuit function found in ASD. We then identify emerging in vivo and ex vivo techniques, focusing on how these assays can characterize circuit-level dysfunction and determine if these abnormalities underlie abnormal clinical electrophysiology. Such circuit-level study in animal models may help us understand how diverse genetic and environmental risks can produce a common set of EEG, MEG and anatomical abnormalities found in ASD.
Zahnd, Whitney E; McLafferty, Sara L
2017-11-01
There is increasing call for the utilization of multilevel modeling to explore the relationship between place-based contextual effects and cancer outcomes in the United States. To gain a better understanding of how contextual factors are being considered, we performed a systematic review. We reviewed studies published between January 1, 2002 and December 31, 2016 and assessed the following attributes: (1) contextual considerations such as geographic scale and contextual factors used; (2) methods used to quantify contextual factors; and (3) cancer type and outcomes. We searched PubMed, Scopus, and Web of Science and initially identified 1060 studies. One hundred twenty-two studies remained after exclusions. Most studies utilized a two-level structure; census tracts were the most commonly used geographic scale. Socioeconomic factors, health care access, racial/ethnic factors, and rural-urban status were the most common contextual factors addressed in multilevel models. Breast and colorectal cancers were the most common cancer types, and screening and staging were the most common outcomes assessed in these studies. Opportunities for future research include deriving contextual factors using more rigorous approaches, considering cross-classified structures and cross-level interactions, and using multilevel modeling to explore understudied cancers and outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
Standardized Representation of Clinical Study Data Dictionaries with CIMI Archetypes
Sharma, Deepak K.; Solbrig, Harold R.; Prud’hommeaux, Eric; Pathak, Jyotishman; Jiang, Guoqian
2016-01-01
Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionary’s metadata elements presents challenges for their harmonization for similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help to overcome this problem. The Archetype Modeling Language (AML) as developed by the Clinical Information Modeling Initiative (CIMI) can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGAP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with the AML archetype elements. The near complete alignment of data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of the work would help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration. PMID:28269909
Zhan, Xianbao; Wang, Fan; Bi, Yan; Ji, Baoan
2016-09-01
Animal models of pancreatitis are useful for elucidating the pathogenesis of pancreatitis and developing and testing novel interventions. In this review, we aim to summarize the most commonly used animal models, overview their pathophysiology, and discuss their strengths and limitations. We will also briefly describe common animal study procedures and refer readers to more detailed protocols in the literature. Although animal models include pigs, dogs, opossums, and other animals, we will mainly focus on rodent models because of their popularity. Autoimmune pancreatitis and genetically engineered animal models will be reviewed elsewhere. Copyright © 2016 the American Physiological Society.
Systematic review of health-related quality of life models
2012-01-01
Background A systematic literature review was conducted to (a) identify the most frequently used health-related quality of life (HRQOL) models and (b) critique those models. Methods Online search engines were queried using pre-determined inclusion and exclusion criteria. We reviewed titles, abstracts, and then full-text articles for their relevance to this review. Then the most commonly used models were identified, reviewed in tables, and critiqued using published criteria. Results Of 1,602 titles identified, 100 articles from 21 countries met the inclusion criteria. The most frequently used HRQOL models were those of Wilson and Cleary (16%), Ferrans and colleagues (4%), and the World Health Organization (WHO) (5%). Ferrans and colleagues' model was a revision of Wilson and Cleary's model and appeared to have the greatest potential to guide future HRQOL research and practice. Conclusions Recommendations are for researchers to use one of the three common HRQOL models unless there are compelling and clearly delineated reasons for creating new models. Disease-specific models can be derived from one of the three commonly used HRQOL models. We recommend Ferrans and colleagues' model because they added individual and environmental characteristics to the popular Wilson and Cleary model to better explain HRQOL. Using a common HRQOL model across studies will promote a coherent body of evidence that will more quickly advance the science in the area of HRQOL. PMID:23158687
Goodrich, J Marc; Lonigan, Christopher J
2017-08-01
According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying proficiency model for the early literacy skills of Spanish-speaking language-minority children using confirmatory factor analysis. Eight hundred fifty-eight Spanish-speaking language-minority preschoolers (mean age = 60.83 months, 50.2% female) participated in this study. Results indicated that bifactor models that consisted of language-independent as well as language-specific early literacy factors provided the best fits to the data for children's phonological awareness and print knowledge skills. Correlated factors models that only included skills specific to Spanish and English provided the best fits to the data for children's oral language skills. Children's language-independent early literacy skills were significantly related across constructs and to language-specific aspects of early literacy. Language-specific aspects of early literacy skills were significantly related within but not across languages. These findings suggest that language-minority preschoolers have a common underlying proficiency for code-related skills but not language-related skills that may allow them to transfer knowledge across languages.
Stimulating collaboration between human and veterinary health care professionals.
Eussen, Björn G M; Schaveling, Jaap; Dragt, Maria J; Blomme, Robert Jan
2017-06-13
Despite the need to control outbreaks of (emerging) zoonotic diseases and the need for added value in comparative/translational medicine, jointly addressed in the One Health approach [One health Initiative (n.d.a). About the One Health Initiative. http://www.onehealthinitiative.com/about.php . Accessed 13 September 2016], collaboration between human and veterinary health care professionals is limited. This study focuses on the social dilemma experienced by health care professionals and ways in which an interdisciplinary approach could be developed. Based on Gaertner and Dovidio's Common Ingroup Identity Model, a number of questionnaires were designed and tested; with PROGRESS, the relation between collaboration and a common goal was assessed, mediated by decategorization, recategorization, mutual differentiation and knowledge sharing. This study supports the Common Ingroup Identity Model's claim that common goals stimulate collaboration. Decategorization and mutual differentiation proved to be significant in this relationship; recategorization and knowledge sharing mediate this relation. It can be concluded that the Common Ingroup Identity Model theory helps us to understand how health care professionals perceive the One Health initiative and how they can intervene in this process. In the One Health approach, professional associations could adopt a facilitating role.
A Basic Bivariate Structure of Personality Attributes Evident Across Nine Languages.
Saucier, Gerard; Thalmayer, Amber Gayle; Payne, Doris L; Carlson, Robert; Sanogo, Lamine; Ole-Kotikash, Leonard; Church, A Timothy; Katigbak, Marcia S; Somer, Oya; Szarota, Piotr; Szirmák, Zsofia; Zhou, Xinyue
2014-02-01
Here, two studies seek to characterize a parsimonious common-denominator personality structure with optimal cross-cultural replicability. Personality differences are observed in all human populations and cultures, but lexicons for personality attributes contain so many distinctions that parsimony is lacking. Models stipulating the most important attributes have been formulated by experts or by empirical studies drawing on experience in a very limited range of cultures. Factor analyses of personality lexicons of nine languages of diverse provenance (Chinese, Korean, Filipino, Turkish, Greek, Polish, Hungarian, Maasai, and Senoufo) were examined, and their common structure was compared to that of several prominent models in psychology. A parsimonious bivariate model showed evidence of substantial convergence and ubiquity across cultures. Analyses involving key markers of these dimensions in English indicate that they are broad dimensions involving the overlapping content of the interpersonal circumplex, models of communion and agency, and morality/warmth and competence. These "Big Two" dimensions-Social Self-Regulation and Dynamism-provide a common-denominator model involving the two most crucial axes of personality variation, ubiquitous across cultures. The Big Two might serve as an umbrella model serving to link diverse theoretical models and associated research literatures. © 2013 Wiley Periodicals, Inc.
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven. G. McNulty; Ge Sun
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...
Mather, Lisa; Blom, Victoria; Bergström, Gunnar; Svedberg, Pia
2016-12-01
Depression and anxiety are highly comorbid due to shared genetic risk factors, but less is known about whether burnout shares these risk factors. We aimed to examine whether the covariation between major depressive disorder (MDD), generalized anxiety disorder (GAD), and burnout is explained by common genetic and/or environmental factors. This cross-sectional study included 25,378 Swedish twins responding to a survey in 2005-2006. Structural equation models were used to analyze whether the trait variances and covariances were due to additive genetics, non-additive genetics, shared environment, and unique environment. Univariate analyses tested sex limitation models and multivariate analysis tested Cholesky, independent pathway, and common pathway models. The phenotypic correlations were 0.71 (0.69-0.74) between MDD and GAD, 0.58 (0.56-0.60) between MDD and burnout, and 0.53 (0.50-0.56) between GAD and burnout. Heritabilities were 45% for MDD, 49% for GAD, and 38% for burnout; no statistically significant sex differences were found. A common pathway model was chosen as the final model. The common factor was influenced by genetics (58%) and unique environment (42%), and explained 77% of the variation in MDD, 69% in GAD, and 44% in burnout. GAD and burnout had additive genetic factors unique to the phenotypes (11% each), while MDD did not. Unique environment explained 23% of the variability in MDD, 20% in GAD, and 45% in burnout. In conclusion, the covariation was explained by an underlying common factor, largely influenced by genetics. Burnout was to a large degree influenced by unique environmental factors not shared with MDD and GAD.
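The full Cholesky, independent pathway and common pathway models above require structural equation modeling software, but the underlying logic of splitting trait variance into genetic and environmental parts can be sketched with the classic Falconer approximation. This is illustrative only, not the models fitted in the study, and the example correlations are hypothetical:

```python
def falconer_ace(r_mz, r_dz):
    """Back-of-envelope ACE decomposition from monozygotic (MZ) and
    dizygotic (DZ) twin correlations (Falconer's formulas)."""
    a2 = 2 * (r_mz - r_dz)   # A: additive genetic variance share
    c2 = 2 * r_dz - r_mz     # C: shared-environment share
    e2 = 1 - r_mz            # E: unique-environment share
    return a2, c2, e2
```

For instance, hypothetical correlations r_MZ = 0.45 and r_DZ = 0.25 give roughly A = 0.40, C = 0.05, E = 0.55, and the three shares always sum to one by construction.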
May common model biases reduce CMIP5's ability to simulate the recent Pacific La Niña-like cooling?
NASA Astrophysics Data System (ADS)
Luo, Jing-Jia; Wang, Gang; Dommenget, Dietmar
2018-02-01
Over the recent three decades sea surface temperature (SST) in the eastern equatorial Pacific has decreased, which helps reduce the rate of global warming. However, most CMIP5 model simulations with historical radiative forcing do not reproduce this Pacific La Niña-like cooling. Based on the assumption of "perfect" models, previous studies have suggested that errors in simulated internal climate variations and/or external radiative forcing may cause the discrepancy between the multi-model simulations and the observation. But the exact causes remain unclear. Recent studies have suggested that observed SST warming in the other two ocean basins in past decades and the thermostat mechanism in the Pacific in response to increased radiative forcing may also play an important role in driving this La Niña-like cooling. Here, we investigate an alternative hypothesis that common biases of current state-of-the-art climate models may deteriorate the models' ability and can also contribute to this multi-model simulations-observation discrepancy. Our results suggest that underestimated inter-basin warming contrast across the three tropical oceans, overestimated surface net heat flux and underestimated local SST-cloud negative feedback in the equatorial Pacific may favor an El Niño-like warming bias in the models. Effects of the three common model biases do not cancel one another and jointly explain 50% of the total variance of the discrepancies between the observation and individual models' ensemble mean simulations of the Pacific SST trend. Further efforts on reducing common model biases could help improve simulations of the externally forced climate trends and the multi-decadal climate fluctuations.
De Clercq, Etienne
2008-09-01
It is widely accepted that the development of electronic patient records, or even of a common electronic patient record, is one possible way to improve cooperation and data communication between nurses and physicians. Yet, little has been done so far to develop a common conceptual model for both medical and nursing patient records, which is a first challenge that should be met to set up a common electronic patient record. In this paper, we describe a problem-oriented conceptual model and we show how it may suit both nursing and medical perspectives in a hospital setting. We started from existing nursing theory and from an initial model previously set up for primary care. In a hospital pilot site, a multi-disciplinary team refined this model using one large and complex clinical case (retrospective study) and nine ongoing cases (prospective study). An internal validation was performed through hospital-wide multi-professional interviews and through discussions around a graphical user interface prototype. To assess the consistency of the model, a computer engineer specified it. Finally, a Belgian expert working group performed an external assessment of the model. As a basis for a common patient record we propose a simple problem-oriented conceptual model with two levels of meta-information. The model is mapped with current nursing theories and it includes the following concepts: "health care element", "health approach", "health agent", "contact", "subcontact" and "service". These concepts, their interrelationships and some practical rules for using the model are illustrated in this paper. Our results are compatible with ongoing standardization work at the Belgian and European levels. Our conceptual model is potentially a foundation for a multi-professional electronic patient record that is problem-oriented and therefore patient-centred.
Marriage and Family Therapy Students' Experience with Common Factors Training.
Fife, Stephen T; D'Aniello, Carissa; Scott, Sarah; Sullivan, Erin
2018-04-27
With the increased empirical and theoretical support for common factors in the psychotherapy literature, marriage and family therapy (MFT) scholars have begun discussing the inclusion of common factors in MFT training. However, there is very little empirical research on common factors training or how to include common factors in MFT curricula. The purpose of this phenomenological study was to investigate MFT students' experience with common factors training. Seventeen master's degree students who received training in common factors participated in the study. Data comprised participants' journal reflections and focus group interviews on their experience learning about common factors and how this influenced their work with clients. Participants' responses to the training were overwhelmingly positive and highlighted the ways in which studying common factors enhanced their confidence, understanding of MFT models, conceptual abilities, and clinical practice. Additional results and discussion about incorporating common factors in MFT training are presented. © 2018 American Association for Marriage and Family Therapy.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
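For a single binary exposure, the relative risk that both the robust Poisson and the log-binomial model target has a closed form, which makes the Poisson-regression interpretation easy to verify by hand. The 2x2 counts below are hypothetical, not from the simulation in the abstract:

```python
import math

# Hypothetical cohort: a binary exposure and a common binary outcome.
exposed = {"events": 30, "n": 100}     # risk 0.30
unexposed = {"events": 15, "n": 100}   # risk 0.15

# With one binary covariate, the Poisson working model
# log E[y] = b0 + b1 * x has closed-form estimates, and exp(b1) is the
# relative risk that both regression approaches estimate.
b0 = math.log(unexposed["events"] / unexposed["n"])
b1 = math.log(exposed["events"] / exposed["n"]) - b0
relative_risk = math.exp(b1)
```

In practice the two approaches differ in how the standard errors are obtained (robust "sandwich" covariance for the Poisson working model versus binomial maximum likelihood), which is where the efficiency and robustness trade-off discussed above arises.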
Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi
2017-01-01
In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for three different-sized aneurysms, two using measured value-based non-Newtonian models and one using a Newtonian model. The parameters proposed to predict an aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. When the two non-Newtonian models were each compared against the Newtonian NWSS, their difference ratios themselves differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or non-Newtonian viscosity assumption could lead to values different from those of the patient-specific viscosity model for hemodynamic parameters such as NWSS.
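As one concrete example of the kind of non-Newtonian viscosity model compared above, the Carreau model is widely used for blood in CFD studies. The parameter values below are common literature fits for blood, not the volunteer-specific measurements used in this study:

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau viscosity (Pa*s) as a function of shear rate (1/s).
    mu0 is the zero-shear viscosity, mu_inf the infinite-shear plateau,
    lam a relaxation time and n the power-law index; defaults are
    commonly cited blood fits (assumed, not patient-specific)."""
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)
```

At low shear the model returns the zero-shear viscosity mu0, and at high shear it approaches the plateau mu_inf, which is close to the single constant viscosity a Newtonian assumption would impose everywhere in the aneurysm.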
Burke, Emily L; Walvekar, Rohan R; Lin, James; Hagan, Joseph; Kluka, Evelyn A
2009-12-01
To determine the efficacy of common solutions used to dissolve blood clots blocking tympanostomy tubes (TTs) of differing lengths and diameters. An ex vivo experimental study. Ear models were built by the study investigator. Tympanostomy tubes were inserted into the models and blocked with blood clots. Test solutions were applied to the blood clots, and time for clearance was recorded via microscopic visual confirmation. The Richards T-tube had higher odds of unclogging than collar button tubes (odds ratio: 2.37, 95% confidence interval 1.02-5.54, p=0.042). Vinegar and 3% hydrogen peroxide were most effective for Richards T-tubes and collar button tubes, respectively. Common solutions (vinegar and hydrogen peroxide) were more effective than antibiotic drops in clearing blood clots blocking TTs.
Impact of a spring defoliator on common oak
Victor V. Rubtsov; Irina A. Utkina
1991-01-01
We have investigated the population dynamics of some common phyllophagous insects in oak stands of the forest-steppe zone and their impact on common oak (Quercus robur L.). Considerable attention has also been paid to mathematical modeling of the studied processes. All field data represent samples taken from the Tellerman oak grove in the Voronezh...
The Common Core State Standards Initiative: an Overview
ERIC Educational Resources Information Center
Watt, Michael G.
2011-01-01
The purpose of this study was to evaluate decision making in the Common Core State Standards Initiative as the change process moved from research, development and diffusion activities to adoption of the Common Core State Standards by the states. A decision-oriented evaluation model was used to describe the four stages of planning, structuring,…
Previous studies indicate that freshwater mollusks are more sensitive than commonly tested organisms to some chemicals, such as copper and ammonia. Nevertheless, mollusks are generally under-represented in toxicity databases. Studies are needed to generate data with which to comp...
ERIC Educational Resources Information Center
Zuzovsky, Ruth; Donitsa-Schmidt, Smadar
2017-01-01
The purpose of the present study was to examine the effectiveness of two common models of initial teacher education programmes that are prevalent in many countries, including Israel. The two are: the concurrent model, in which disciplinary studies and pedagogical studies are integrated and taught at the same time; and the consecutive model, which…
Commonalities of nurse-designed models of health care.
Mason, Diana J; Jones, Dorothy A; Roy, Callista; Sullivan, Cheryl G; Wood, Laura J
2015-01-01
The American Academy of Nursing has identified examples of care redesign developed by nurses who address the health needs of diverse populations. These models show important clinical and financial outcomes as summarized in the Select Edge Runner Models of Care table included in this article. A study team appointed by the Academy explored the commonalities across these models. Four commonalities emerged: health holistically defined; individual-, family-, and community-centric approaches to care; relationship-based care that enables partnerships and builds patient engagement and activation; and a shift from episodic individual care to continuous group and public health approaches. The policy implications include examining measures of an expanded definition of health, paying for visionary care, and transparency and rewards for community-level engagement. Copyright © 2015 Elsevier Inc. All rights reserved.
2015-02-06
PROTOCOL#: FDG20140008A. DATE: 6 February 2015. PROTOCOL TITLE: A Pilot Study of Common Bile Duct Reconstruction with... obstruction or bile peritonitis; this was reported to the IACUC chair. REDUCTION, REFINEMENT, OR REPLACEMENT OF ANIMAL USE... benefit the DoD/USAF? We developed a porcine model of common bile duct injury and interposition grafting, and gained experience managing these patients.
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to and often poor documentation of these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding compared to arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
Heteroscedastic Latent Trait Models for Dichotomous Data.
Molenaar, Dylan
2015-09-01
Effort has been devoted to account for heteroscedasticity with respect to observed or latent moderator variables in item or test scores. For instance, in the multi-group generalized linear latent trait model, it could be tested whether the observed (polychoric) covariance matrix differs across the levels of an observed moderator variable. In the case that heteroscedasticity arises across the latent trait itself, existing models commonly distinguish between heteroscedastic residuals and a skewed trait distribution. These models have valuable applications in intelligence, personality and psychopathology research. However, existing approaches are limited to continuous and polytomous data, while dichotomous data are common in intelligence and psychopathology research. Therefore, in the present paper, a heteroscedastic latent trait model is presented for dichotomous data. The model is studied in a simulation study, and applied to data pertaining to alcohol use and cognitive ability.
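One way to picture a heteroscedastic item response function for dichotomous data is to let the residual standard deviation vary with the latent trait. The exponential variance function below is an illustrative choice, not necessarily the paper's exact parameterization:

```python
import math

def probit(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(theta, a, b, delta):
    """Illustrative heteroscedastic probit item response function: the
    residual standard deviation exp(delta * theta) changes with the latent
    trait theta. With delta = 0 this reduces to the ordinary two-parameter
    probit model with discrimination a and difficulty b."""
    return probit(a * (theta - b) / math.exp(delta * theta))
```

At theta = b the response probability is 0.5 regardless of delta, while away from b the heteroscedasticity parameter reshapes the curve asymmetrically, which is the kind of effect such models try to separate from trait skewness.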
Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E
2013-07-01
Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
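The zero-inflated and standard count distributions under comparison differ mainly in how they generate mass at zero, which a minimal sketch of the probability mass functions makes explicit (parameter values in the usage note are arbitrary illustrations):

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of observing k events with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: a point mass at zero (weight pi) mixed with
    a Poisson(lam) count distribution (weight 1 - pi)."""
    return pi * (1.0 if k == 0 else 0.0) + (1.0 - pi) * poisson_pmf(k, lam)

def negbin_pmf(k, mu, r):
    """Negative binomial with mean mu and dispersion r; for small means it
    can place substantial mass at zero without an explicit inflation term."""
    p = r / (r + mu)
    coef = math.gamma(k + r) / (math.gamma(r) * math.factorial(k))
    return coef * p ** r * (1.0 - p) ** k
```

For example, with mean 2 the negative binomial with r = 0.5 puts roughly 0.45 probability on zero versus about 0.14 for the Poisson, which illustrates the abstract's point that overdispersion alone, without an inflation component, can account for many zeros.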
Characterizing Touch Using Pressure Data and Auto Regressive Models
Laufer, Shlomi; Pugh, Carla M.; Van Veen, Barry D.
2014-01-01
Palpation plays a critical role in medical physical exams. Despite the wide range of exams, there are several reproducible and subconscious sets of maneuvers that are common to examination by palpation. Previous studies by our group demonstrated the use of manikins and pressure sensors for measuring and quantifying how physicians palpate during different physical exams. In this study we develop mathematical models that describe some of these common maneuvers. Dynamic pressure data was measured using a simplified testbed and different autoregressive models were used to describe the motion of interest. The frequency, direction and type of motion used were identified from the models. We believe these models can provide a better understanding of how humans explore objects in general and, more specifically, give insight into medical physical exams. PMID:25570335
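A minimal version of the autoregressive modelling step, estimating a lag-1 coefficient from a pressure-like time series, can be sketched as follows. The simulated trace stands in for sensor data and is not the study's recordings:

```python
import random

def ar1_estimate(x):
    """Lag-1 autoregressive coefficient from the sample autocovariances
    (the Yule-Walker estimate for an AR(1) model)."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n          # lag-0 autocovariance
    c1 = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1)) / n
    return c1 / c0

# Simulated stand-in for a pressure trace: AR(1) with true coefficient 0.8.
random.seed(42)
trace = [0.0]
for _ in range(4999):
    trace.append(0.8 * trace[-1] + random.gauss(0.0, 1.0))
```

With enough samples the estimate lands close to the true coefficient 0.8; higher-order AR models extend the same idea to capture the frequency and direction content the study extracts from palpation maneuvers.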
NASA Technical Reports Server (NTRS)
Russell, Richard A.; Waiss, Richard D.
1988-01-01
A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.
Determination of the spectral behaviour of atmospheric soot using different particle models
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2017-08-01
In the atmosphere, black carbon aggregates interact with both organic and inorganic matter. In many studies they are modeled using different, less complex, geometries. However, common simplifications can introduce inaccuracies into the subsequent light-scattering simulations. The goal of this study was to compare the spectral behavior of different, commonly used soot particle models. For light scattering simulations, in the visible spectrum, the ADDA algorithm was used. The results show that the relative extinction error δCext can, in some cases, be unexpectedly large. Therefore, before starting excessive simulations, it is important to know what error might occur.
ERIC Educational Resources Information Center
Schramm, David G.; Adler-Baeder, Francesca
2012-01-01
Although economic pressure and family stress models have been examined with samples of men and women in first marriages, previous models have neglected to focus on men and women in stepfamilies and to examine stress sources unique to stepfamilies. This study examines the effect of economic pressure on both common stressors and stepfamily-specific…
Emergence of a Common Modeling Architecture for Earth System Science (Invited)
NASA Astrophysics Data System (ADS)
Deluca, C.
2010-12-01
Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.
Myeloproliferative Neoplasm Animal Models
Mullally, Ann; Lane, Steven W.; Brumme, Kristina; Ebert, Benjamin L.
2012-01-01
Myeloproliferative neoplasm (MPN) animal models accurately recapitulate human disease in mice and have been an important tool for the study of MPN biology and therapy. Transplantation of BCR-ABL transduced bone marrow cells into irradiated syngeneic mice established the field of MPN animal modeling, and the retroviral bone marrow transplantation (BMT) assay has been used extensively since. Genetically engineered MPN animal models have enabled detailed characterization of the effects of specific MPN-associated genetic abnormalities on the hematopoietic stem and progenitor cell (HSPC) compartment, and xenograft models have allowed the study of primary human MPN-propagating cells in vivo. All models have facilitated the pre-clinical development of MPN therapies. JAK2V617F, the most common molecular abnormality in BCR-ABL negative MPN, has been extensively studied using retroviral, transgenic, knock-in and xenograft models. MPN animal models have also been used to investigate additional genetic lesions found in human MPN and to evaluate the bone marrow microenvironment in these diseases. Finally, several genetic lesions, although not common somatically mutated drivers of MPN in humans, induce an MPN phenotype in mice. Future uses for MPN animal models will include modeling compound genetic lesions in MPN and studying myelofibrotic transformation. PMID:23009938
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
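A deliberately simpler stand-in for the quadrivariate model above can be sketched: a fixed-effect, inverse-variance pooling of per-study differences in sensitivity on the risk-difference scale. This is not the quadrivariate generalized linear mixed model of the paper, it ignores the within-study pairing between the two tests that the quadrivariate model is designed to capture, and the study counts below are made up.

```python
import math

def sens_diff_fixed_effect(studies):
    """Fixed-effect pooled difference in sensitivity between test 1 and test 2.
    Each study is (tp1, fn1, tp2, fn2) counted among diseased subjects.
    Returns (pooled difference, 95% confidence interval)."""
    num = den = 0.0
    for tp1, fn1, tp2, fn2 in studies:
        n1, n2 = tp1 + fn1, tp2 + fn2
        p1, p2 = tp1 / n1, tp2 / n2
        # Binomial variances; within-study pairing of the two tests is ignored here.
        var = p1 * (1.0 - p1) / n1 + p2 * (1.0 - p2) / n2
        w = 1.0 / var
        num += w * (p1 - p2)
        den += w
    d = num / den
    se = math.sqrt(1.0 / den)
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical counts from three studies comparing two diagnostic tests.
studies = [(45, 5, 40, 10), (80, 20, 70, 30), (30, 10, 28, 12)]
diff, ci = sens_diff_fixed_effect(studies)
```

The quadrivariate mixed model improves on this by jointly modeling all four responses with random effects, which is what allows correct accounting of the correlations across and within studies.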
Port, Russell G.; Gandal, Michael J.; Roberts, Timothy P. L.; Siegel, Steven J.; Carlson, Gregory C.
2014-01-01
Most recent estimates indicate that 1 in 68 children are affected by an autism spectrum disorder (ASD). Though decades of research have uncovered much about these disorders, the pathological mechanism remains unknown. Hampering efforts is the seeming inability to integrate findings over the micro to macro scales of study, from changes in molecular, synaptic and cellular function to large-scale brain dysfunction impacting sensory, communicative, motor and cognitive activity. In this review, we describe how studies focusing on neuronal circuit function provide unique context for identifying common neurobiological disease mechanisms of ASD. We discuss how recent EEG and MEG studies in subjects with ASD have repeatedly shown alterations in ensemble population recordings (both in simple event-related potential latencies and specific frequency subcomponents). Because these disease-associated electrophysiological abnormalities have been recapitulated in rodent models, studying circuit differences in these models may provide access to abnormal circuit function found in ASD. We then identify emerging in vivo and ex vivo techniques, focusing on how these assays can characterize circuit level dysfunction and determine if these abnormalities underlie abnormal clinical electrophysiology. Such circuit level study in animal models may help us understand how diverse genetic and environmental risks can produce a common set of EEG, MEG and anatomical abnormalities found in ASD. PMID:25538564
Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity
NASA Astrophysics Data System (ADS)
Lin, Y. Y.; Lapusta, N.
2017-12-01
Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in stress drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered.
We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
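The standard Brune-type analysis that the abstract contrasts against can be sketched as follows. This is a hedged illustration with synthetic numbers (the seismic moment, shear-wave speed, spectrum, and search grid are all assumptions): fit the omega-squared spectral shape to recover a corner frequency, then convert it to a stress drop via the Brune (1970) source-radius relation.

```python
import numpy as np

def brune_spectrum(f, omega0, fc, n=2.0):
    """Far-field displacement amplitude spectrum of the omega-squared model."""
    return omega0 / (1.0 + (f / fc) ** n)

def fit_corner_frequency(f, amp, fc_grid):
    """Grid-search fit of (omega0, fc) by least squares in log amplitude."""
    best = None
    for fc in fc_grid:
        shape = 1.0 / (1.0 + (f / fc) ** 2)
        # Geometric-mean level given this corner frequency.
        omega0 = np.exp(np.mean(np.log(amp) - np.log(shape)))
        misfit = np.sum((np.log(amp) - np.log(omega0 * shape)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, omega0, fc)
    return best[1], best[2]

def brune_stress_drop(moment, fc, beta):
    """Brune (1970): source radius r = 2.34*beta/(2*pi*fc), stress drop = 7*M0/(16*r^3)."""
    r = 2.34 * beta / (2.0 * np.pi * fc)
    return 7.0 * moment / (16.0 * r ** 3)

f = np.logspace(-1, 2, 200)                       # frequency band, Hz
amp = brune_spectrum(f, 1e-6, 5.0)                # synthetic spectrum with fc = 5 Hz
omega0, fc = fit_corner_frequency(f, amp, np.logspace(-0.5, 1.5, 200))
dsigma = brune_stress_drop(1e15, fc, 3500.0)      # M0 in N*m, beta in m/s -> Pa
```

The abstract's point is precisely that this recipe can be badly biased for ring-like sources, whose fall-off rate and corner-frequency distribution do not match the assumed omega-squared shape.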
Kivimäki, Mika; Lawlor, Debbie A; Singh-Manoux, Archana; Batty, G David; Ferrie, Jane E; Shipley, Martin J; Nabi, Hermann; Sabia, Séverine; Marmot, Michael G; Jokela, Markus
2009-10-06
To examine potential reciprocal associations between common mental disorders and obesity, and to assess whether dose-response relations exist. Prospective cohort study with four measures of common mental disorders and obesity over 19 years (Whitehall II study). Civil service departments in London. 4363 adults (28% female, mean age 44 years at baseline). Common mental disorder defined as general health questionnaire "caseness;" overweight and obesity based on World Health Organization definitions. In models adjusted for age, sex, and body mass index at baseline, odds ratios for obesity at the fourth screening were 1.33 (95% confidence interval 1.00 to 1.77), 1.64 (1.13 to 2.36), and 2.01 (1.21 to 3.34) for participants with common mental disorder at one, two, or three preceding screenings compared with people free from common mental disorder (P for trend<0.001). The corresponding mean differences in body mass index at the most recent screening were 0.20, 0.31, and 0.50 (P for trend<0.001). These associations remained after adjustment for baseline characteristics related to mental health and exclusion of participants who were obese at baseline. In addition, obesity predicted future risk of common mental disorder, again with evidence of a dose-response relation (P for trend=0.02, multivariable model). However, this association was lost when people with common mental disorder at baseline were excluded (P for trend=0.33). These findings suggest that in British adults the direction of association between common mental disorders and obesity is from common mental disorder to increased future risk of obesity. This association is cumulative such that people with chronic or repeat episodes of common mental disorder are particularly at risk of weight gain.
Measurements of Student and Teacher Perceptions of Co-Teaching Models
ERIC Educational Resources Information Center
Keeley, Randa G.
2015-01-01
Co-teaching is an accepted teaching model for inclusive classrooms. This study measured the perceptions of both students and teachers regarding the five most commonly used co-teaching models (i.e., One Teach/One Assist, Station Teaching, Alternative Teaching, Parallel Teaching, and Team Teaching). Additionally, this study compared student…
Shannon C.K. Straub; Mark Fishbein; Tatyana Livshult; Zachary Foster; Matthew Parks; Kevin Weitemier; Richard C. Cronn; Aaron Liston
2011-01-01
Milkweeds (Asclepias L.) have been extensively investigated in diverse areas of evolutionary biology and ecology; however, there are few genetic resources available to facilitate and complement these studies. This study explored how low coverage genome sequencing of the common milkweed (Asclepias syriaca L.) could be useful in...
Analyzing and improving surface texture by dual-rotation magnetorheological finishing
NASA Astrophysics Data System (ADS)
Wang, Yuyue; Zhang, Yun; Feng, Zhijing
2016-01-01
The main advantages of magnetorheological finishing (MRF) are its high convergence rate of surface error, its ability to polish aspheric surfaces, and the near absence of subsurface damage. However, common MRF produces directional surface texture due to the constant flow direction of the magnetorheological (MR) polishing fluid. This paper studies the mechanism of surface texture formation by texture modeling. Dual-rotation magnetorheological finishing (DRMRF) is presented to suppress directional surface texture after analyzing the results of the texture model for common MRF. The results of the surface texture model for DRMRF and the proposed quantitative method based on mathematical statistics indicate the effective suppression of directional surface texture. An experimental setup is developed, and experiments show directional surface texture in common MRF and no directional surface texture in DRMRF. As a result, the surface roughness of DRMRF is 0.578 nm (root-mean-square value), which is lower than the 1.109 nm of common MRF.
Model Selection Indices for Polytomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung
2009-01-01
This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…
Monogenic Mouse Models of Autism Spectrum Disorders: Common Mechanisms and Missing Links
Hulbert, Samuel W.; Jiang, Yong-hui
2016-01-01
Autism Spectrum Disorders (ASDs) present unique challenges in the fields of genetics and neurobiology because of the clinical and molecular heterogeneity underlying these disorders. Genetic mutations found in ASD patients provide opportunities to dissect the molecular and circuit mechanisms underlying autistic behaviors using animal models. Ongoing studies of genetically modified models have offered critical insight into possible common mechanisms arising from different mutations, but links between molecular abnormalities and behavioral phenotypes remain elusive. The challenges encountered in modeling autism in mice demand a new analytic paradigm that integrates behavioral analysis with circuit-level analysis in genetically modified models with strong construct validity. PMID:26733386
Transforming a High School Media Center into a Library Learning Commons
ERIC Educational Resources Information Center
Chiara, Nancy A.
2014-01-01
This study outlines a planned action based research project focused on studying the transformation of an urban high school media center to a learning commons model. This study includes a descriptive account as well as the impact of steps taken to match the media center to the needs of the 21st century learner. The research focuses on shifting…
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though such models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
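Two of the daily degree-day calculation methods discussed above can be sketched for the simplest case of a single lower developmental threshold (the temperatures and threshold below are made-up values, and the single-sine formula follows the standard Baskerville-Emin construction, not any specific parameterization from the review).

```python
import math

def dd_average(tmin, tmax, base):
    """Daily average method: degree-days above a lower threshold."""
    return max(0.0, (tmin + tmax) / 2.0 - base)

def dd_single_sine(tmin, tmax, base):
    """Single-sine method (Baskerville & Emin, 1969), lower threshold only.
    Integrates a sinusoidal daily temperature curve above the threshold."""
    mean = (tmin + tmax) / 2.0
    amp = (tmax - tmin) / 2.0
    if tmax <= base:
        return 0.0                # temperature never exceeds the threshold
    if tmin >= base:
        return mean - base        # above the threshold all day
    # Threshold intersects the sine curve: integrate only the portion above it.
    theta = math.asin((base - mean) / amp)
    return ((mean - base) * (math.pi / 2.0 - theta) + amp * math.cos(theta)) / math.pi

# The two methods agree when the threshold lies outside [tmin, tmax] ...
same = (dd_average(10.0, 20.0, 5.0), dd_single_sine(10.0, 20.0, 5.0))
# ... and diverge when it falls between tmin and tmax, exactly the sensitivity
# to threshold placement that the review highlights.
diverge = (dd_average(0.0, 20.0, 12.0), dd_single_sine(0.0, 20.0, 12.0))
```

This illustrates the review's point: the averaging method reports zero accumulation whenever the daily mean sits below the threshold, while the sine method still credits the warm hours of the afternoon.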
Model systems for the study of Enterococcal colonization and infection
Goh, H. M. Sharon; Yong, M. H. Adeline; Chong, Kelvin Kian Long
2017-01-01
Enterococcus faecalis and Enterococcus faecium are common inhabitants of the human gastrointestinal tract, as well as frequent opportunistic pathogens. Enterococci cause a range of infections including, most frequently, infections of the urinary tract, catheterized urinary tract, bloodstream, wounds and surgical sites, and heart valves in endocarditis. Enterococcal infections are often biofilm-associated, polymicrobial in nature, and resistant to antibiotics of last resort. Understanding Enterococcal mechanisms of colonization and pathogenesis is important for identifying new ways to manage and intervene with these infections. We review vertebrate and invertebrate model systems applied to study the most common E. faecalis and E. faecium infections, with emphasis on recent findings examining Enterococcal-host interactions using these models. We discuss strengths and shortcomings of each model, propose future animal models not yet applied to study mono- and polymicrobial infections involving E. faecalis and E. faecium, and comment on the significance of anti-virulence strategies derived from a fundamental understanding of host-pathogen interactions in model systems. PMID:28102784
De Brún, Aoife; McCarthy, Mary; McKenzie, Kenneth; McGloin, Aileen
2015-01-01
This study examined the Irish media discourse on obesity by employing the Common Sense Model of Illness Representations. A media sample of 368 transcripts was compiled from newspaper articles (n = 346), radio discussions (n = 5), and online news articles (n = 17) on overweight and obesity from the years 2005, 2007, and 2009. Using the Common Sense Model and framing theory to guide the investigation, a thematic analysis was conducted on the media sample. Analysis revealed that the behavioral dimensions of diet and activity levels were the most commonly cited causes of and interventions in obesity. The advertising industry was blamed for obesity, and there were calls for increased government action to tackle the issue. Physical illness and psychological consequences of obesity were prevalent in the sample, and analysis revealed that the economy, regardless of its state, was blamed for obesity. These results are discussed in terms of expectations of audience understandings of the issue and the implications of these dominant portrayals and framings on public support for interventions. The article also outlines the value of a qualitative analytical framework that combines the Common Sense Model and framing theory in the investigation of illness narratives.
Müller, Jochen; Bühner, Markus; Ellgring, Heiner
2003-12-01
The 20-item Toronto Alexithymia Scale (TAS-20) is the most widely used instrument for measuring alexithymia. However, different studies did not always yield identical factor structures of this scale. The present study aims at clarifying some discrepant results. Maximum likelihood confirmatory factor analyses of a German version of the TAS-20 were conducted on data from a clinical sample (N=204) and a sample of normal adults (N=224). Five different models with one to four factors were compared. A four-factor model with the factors (F1) "Difficulty identifying feelings", (F2) "Difficulty describing feelings", (F3) "Low importance of emotion" and (F4) "Pragmatic thinking", and a three-factor model with the combined factor "Difficulties in identifying and describing feelings", described the data best. Factors related to "externally oriented thinking" provided no acceptable level of reliability. Results from the present and other studies indicate that the factorial structure of the TAS-20 may vary across samples. Whether factor structures different from the common three-factor structure are an exception in some mainly clinical populations or a common phenomenon outside student populations has still to be determined. For a further exploration of the factor structure of the TAS-20 in different populations, it would be important not only to test the fit of the common three-factor model, but also to consider other competing solutions like the models of the present study.
ERIC Educational Resources Information Center
El-Banna, Adel I.; Naeem, Marwa A.
2016-01-01
This research work aimed at making use of Machine Translation to help students avoid some syntactic, semantic and pragmatic common errors in translation from English into Arabic. Participants were a hundred and five freshmen who studied the "Translation Common Errors Remedial Program" prepared by the researchers. A testing kit that…
Sample Invariance of the Structural Equation Model and the Item Response Model: A Case Study.
ERIC Educational Resources Information Center
Breithaupt, Krista; Zumbo, Bruno D.
2002-01-01
Evaluated the sample invariance of item discrimination statistics in a case study using real data, responses of 10 random samples of 500 people to a depression scale. Results lend some support to the hypothesized superiority of a two-parameter item response model over the common form of structural equation modeling, at least when responses are…
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
Olsen, Espen
2010-09-01
The aim of the present study was to explore the possibility of identifying general safety climate concepts in health care and petroleum sectors, as well as develop and test the possibility of a common cross-industrial structural model. Self-completion questionnaire surveys were administered in two organisations and sectors: (1) a large regional hospital in Norway that offers a wide range of hospital services, and (2) a large petroleum company that produces oil and gas worldwide. In total, 1919 and 1806 questionnaires were returned from the hospital and petroleum organisation, with response rates of 55 percent and 52 percent, respectively. Using a split-sample procedure, principal factor analysis and confirmatory factor analysis revealed six identical cross-industrial measurement concepts in independent samples: five measures of safety climate and one of safety behaviour. The factors' psychometric properties were explored with satisfactory internal consistency and concept validity. Thus, a common cross-industrial structural model was developed and tested using structural equation modelling (SEM). SEM revealed that a cross-industrial structural model could be identified among health care workers and offshore workers in the North Sea. The most significant contributing variables in the model testing stemmed from organisational management support for safety and supervisor/manager expectations and actions promoting safety. These variables indirectly enhanced safety behaviour (stop working in dangerous situations) through transitions and teamwork across units, and teamwork within units as well as learning, feedback, and improvement. Two new safety climate instruments were validated as part of the study: (1) Short Safety Climate Survey (SSCS) and (2) Hospital Survey on Patient Safety Culture-short (HSOPSC-short).
Based on development of measurements and structural model assessment, this study supports the possibility of a common safety climate structural model across health care and the offshore petroleum industry. 2010 Elsevier Ltd. All rights reserved.
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for spatial autocorrelation among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE
The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...
Chaotic Dynamics and Application of LCR Oscillators Sharing Common Nonlinearity
NASA Astrophysics Data System (ADS)
Jeevarekha, A.; Paul Asir, M.; Philominathan, P.
2016-06-01
This paper addresses the problem of sharing common nonlinearity among nonautonomous and autonomous oscillators. By choosing a suitable common nonlinear element with the driving point characteristics capable of bringing out chaotic motion in a combined system, we obtain identical chaotic states. The dynamics of the coupled system is explored through numerical and experimental studies. Employing the concept of common nonlinearity, a simple chaotic communication system is modeled and its performance is verified through Multisim simulation.
The most common friend first immunization
NASA Astrophysics Data System (ADS)
Nian, Fu-Zhong; Hu, Cha-Sheng
2016-12-01
In this paper, a standard susceptible-infected-recovered-susceptible (SIRS) epidemic model based on the Watts-Strogatz (WS) small-world network model and the Barabási-Albert (BA) scale-free network model is established, and a new immunization scheme — "the most common friend first immunization" — is proposed, in which the node that is the most common friend is immunized first in the second-layer protection of complex networks. The propagation under three different immunization schemes — random immunization, high-risk immunization, and the most common friend first immunization — is studied. At the same time, the dynamic behaviors are also studied on the WS small-world and the BA scale-free networks. Moreover, the analytic and simulated results indicate that the immune effect of the most common friend first immunization is better than that of random immunization, but slightly worse than that of high-risk immunization. However, high-risk immunization still has some limitations. For example, it is difficult to accurately define who a direct neighbor in real life is. Compared with traditional immunization strategies and their shortcomings, the most common friend first immunization is effective and consistent with actual situations. Project supported by the National Natural Science Foundation of China (Grant No. 61263019), the Program for International Science and Technology Cooperation Projects of Gansu Province, China (Grant No. 144WCGA166), and the Program for Longyuan Young Innovation Talents and the Doctoral Foundation of Lanzhou University of Technology, China.
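The target-selection idea can be sketched under one plausible reading of the abstract, since the paper's exact definition is not given here: score each node by the number of node pairs for which it is a common friend, i.e. pairs of its own neighbours, which equals C(deg(v), 2). The toy adjacency list below is an assumption for illustration, not data from the study.

```python
def common_friend_scores(adj):
    """Score each node v by the number of pairs (i, j) for which v is a common
    friend, i.e. v is adjacent to both: C(deg(v), 2)."""
    return {v: len(nbrs) * (len(nbrs) - 1) // 2 for v, nbrs in adj.items()}

def immunize_most_common_friends(adj, k):
    """Pick the k nodes to immunize first, highest common-friend score first."""
    scores = common_friend_scores(adj)
    return sorted(adj, key=lambda v: scores[v], reverse=True)[:k]

# Toy undirected graph as an adjacency dict: node 0 is a hub, so it is the
# common friend of every pair of its leaves and should be immunized first.
adj = {
    0: {1, 2, 3, 4},
    1: {0, 2},
    2: {0, 1},
    3: {0},
    4: {0},
}
targets = immunize_most_common_friends(adj, 1)
```

In a full SIRS simulation on a WS or BA network, these targets would simply be moved to the recovered/immune compartment before seeding the infection; the paper's finding is that this ranking outperforms random immunization while approaching high-risk immunization.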
Plants as models for the study of human pathogenesis.
Guttman, David S
2004-05-01
There are many common disease mechanisms used by bacterial pathogens of plants and humans. They use common means of attachment, secretion and genetic regulation. They share many virulence factors, such as extracellular polysaccharides and some type III secreted effectors. Plant and human innate immune systems also share many similarities. Many of these shared bacterial virulence mechanisms are homologous, but even more appear to have independently converged on a common function. This combination of homologous and analogous systems reveals conserved and critical steps in the disease process. Given these similarities, and the many experimental advantages of plant biology, including ease of replication, stringent genetic and reproductive control, and high throughput with low cost, it is proposed that plants would make excellent models for the study of human pathogenesis.
Themes Found in High Performing Schools: The CAB Model
ERIC Educational Resources Information Center
Sanders, Brenda
2010-01-01
This study examines the CAB [Cooperativeness, Accountability, and Boundlessness] model of high performing schools by developing case studies of two Portland, Oregon area schools. In pursuing this purpose, this study answers the following three research questions: 1) To what extent is the common correlate cooperativeness demonstrated or absent in…
Induced Pathogen Resistance in Bean Plants: A Model for Studying "Vaccination" in the Classroom.
ERIC Educational Resources Information Center
Goetsch, Emily; Mathias, Christine; Mosley, Sydnie; Shull, Meredith; Brock, David L.
2002-01-01
Shows how the tobacco mosaic virus can be used in conjunction with the common bean plant Phaseolus vulgaris to provide a discernable, experimental model that students can use to study induced resistance. (Contains 17 references.) (DDR)
Palmer, Rohan H C; McGeary, John E; Heath, Andrew C; Keller, Matthew C; Brick, Leslie A; Knopik, Valerie S
2015-12-01
Genetic studies of alcohol dependence (AD) have identified several candidate loci and genes, but most observed effects are small and difficult to reproduce. A plausible explanation for inconsistent findings may be a violation of the assumption that genetic factors contributing to each of the seven DSM-IV criteria point to a single underlying dimension of risk. Given that recent twin studies suggest that the genetic architecture of AD is complex and probably involves multiple discrete genetic factors, the current study employed common single nucleotide polymorphisms in two multivariate genetic models to examine the assumption that the genetic risk underlying DSM-IV AD is unitary. AD symptoms and genome-wide single nucleotide polymorphism (SNP) data from 2596 individuals of European descent from the Study of Addiction: Genetics and Environment were analyzed using genomic-relatedness-matrix restricted maximum likelihood. DSM-IV AD symptom covariance was described using two multivariate genetic factor models. Common SNPs explained 30% (standard error=0.136, P=0.012) of the variance in AD diagnosis. Additive genetic effects varied across AD symptoms. The common pathway model approach suggested that symptoms could be described by a single latent variable that had a SNP heritability of 31% (0.130, P=0.008). Similarly, the exploratory genetic factor model approach suggested that the genetic variance/covariance across symptoms could be represented by a single genetic factor that accounted for at least 60% of the genetic variance in any one symptom. Additive genetic effects on DSM-IV alcohol dependence criteria overlap. The assumption of common genetic effects across alcohol dependence symptoms appears to be a valid assumption. © 2015 Society for the Study of Addiction.
Support System Effects on the NASA Common Research Model
NASA Technical Reports Server (NTRS)
Rivers, S. Melissa B.; Hunter, Craig A.
2012-01-01
An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and the NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations were collected, a large difference in moment values was seen between the experimental data and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.
ESPC Common Model Architecture
2014-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ESPC Common Model Architecture Earth System Modeling... Operational Prediction Capability (NUOPC) was established between NOAA and Navy to develop common software architecture for easy and efficient... development under a common model architecture and other software-related standards in this project. OBJECTIVES: NUOPC proposes to accelerate
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…
Osier, Nicole; Dixon, C. Edward
2017-01-01
Controlled cortical impact (CCI) is a commonly used and highly regarded model of brain trauma that uses a pneumatically or electromagnetically controlled piston to induce reproducible and well-controlled injury. The CCI model was originally used in ferrets and it has since been scaled for use in many other species. This chapter will describe the historical development of the CCI model, compare and contrast the pneumatic and electromagnetic models, and summarize key short- and long-term consequences of TBI that have been gleaned using this model. In accordance with the recent efforts to promote high-quality evidence through the reporting of common data elements (CDEs), relevant study details—that should be reported in CCI studies—will be noted. PMID:27604719
An animal model that reflects human disease: the common marmoset (Callithrix jacchus).
Carrion, Ricardo; Patterson, Jean L
2012-06-01
The common marmoset is a new world primate belonging to the Callitrichidae family weighing between 350 and 400 g. The marmoset has been shown to be an outstanding model for studying aging, reproduction, neuroscience, toxicology, and infectious disease. With regard to their susceptibility to infectious agents, they are exquisite NHP models for viral, protozoan and bacterial agents, as well as prions. The marmoset provides the advantages of a small animal model in high containment coupled with the immunological repertoire of a nonhuman primate and susceptibility to wild type, non-adapted viruses. Copyright © 2012 Elsevier B.V. All rights reserved.
A new concept in seismic landslide hazard analysis for practical application
NASA Astrophysics Data System (ADS)
Lee, Chyi-Tyi
2017-04-01
A seismic landslide hazard model can be constructed using a deterministic approach (Jibson et al., 2000) or a statistical approach (Lee, 2014). Both approaches yield the spatial probability of landsliding under an earthquake of a given return period. In the statistical approach, our recent study found that there are common patterns among different landslide susceptibility models of the same region. This common susceptibility reflects the relative stability of slopes in a region; higher susceptibility indicates lower stability. Using the common susceptibility together with an earthquake-event landslide inventory and a map of topographically corrected Arias intensity, we can build a relationship among the probability of failure, Arias intensity, and susceptibility. This relationship can immediately be used to construct a seismic landslide hazard map for the region in which the empirical relationship was built. If the common susceptibility model is further normalized and the empirical relationship is built with the normalized susceptibility, then the relationship may be applied in practice to other regions with similar tectonic environments and climate conditions. This is feasible when a region has no existing earthquake-induced landslide data with which to train a susceptibility model and build the relationship. It is worth mentioning that a rain-induced landslide susceptibility model has a common pattern similar to that of the earthquake-induced landslide susceptibility of the same region, and can be used to build the relationship with an earthquake-event landslide inventory and a map of Arias intensity. These points will be illustrated with examples in the meeting.
Common Warming Pattern Emerges Irrespective of Forcing Location
NASA Astrophysics Data System (ADS)
Kang, Sarah M.; Park, Kiwoong; Jin, Fei-Fei; Stuecker, Malte F.
2017-10-01
The Earth's climate is changing due to the existence of multiple radiative forcing agents. It is under question whether different forcing agents perturb the global climate in a distinct way. Previous studies have demonstrated the existence of similar climate response patterns in response to aerosol and greenhouse gas (GHG) forcings. In this study, the sensitivity of tropospheric temperature response patterns to surface heating distributions is assessed by forcing an atmospheric general circulation model coupled to an aquaplanet slab ocean with a wide range of possible forcing patterns. We show that a common climate pattern emerges in response to localized forcing at different locations. This pattern, characterized by enhanced warming in the tropical upper troposphere and the polar lower troposphere, resembles the historical trends from observations and models as well as the future projections. Atmospheric dynamics in combination with thermodynamic air-sea coupling are primarily responsible for shaping this pattern. Identifying this common pattern strengthens our confidence in the projected response to GHG and aerosols in complex climate models.
Development of Metabolic Function Biomarkers in the Common Marmoset, Callithrix jacchus
Ziegler, Toni E.; Colman, Ricki J.; Tardif, Suzette D.; Sosa, Megan E.; Wegner, Fredrick H.; Wittwer, Daniel J.; Shrestha, Hemanta
2013-01-01
Metabolic assessment of a nonhuman primate model of metabolic syndrome and obesity requires the necessary biomarkers specific to the species. While the rhesus monkey has a number of specific assays for assessing metabolic syndrome, the marmoset does not. Furthermore, the common marmoset (Callithrix jacchus) has a small blood volume that necessitates using a single blood volume for multiple analyses. The common marmoset holds a great potential as an alternative primate model for the study of human disease but assay methods need to be developed and validated for the biomarkers of metabolic syndrome. Here we report on the adaptation, development and validation of commercially available immunoassays for common marmoset samples in small volumes. We have performed biological validations for insulin, adiponectin, leptin, and ghrelin to demonstrate the use of these biomarkers in examining metabolic syndrome and other related diseases in the common marmoset. PMID:23447060
Gilbreath, Jeremy J.; Cody, William L.; Merrell, D. Scott; Hendrixson, David R.
2011-01-01
Summary: Microbial evolution and subsequent species diversification enable bacterial organisms to perform common biological processes by a variety of means. The epsilonproteobacteria are a diverse class of prokaryotes that thrive in diverse habitats. Many of these environmental niches are labeled as extreme, whereas other niches include various sites within human, animal, and insect hosts. Some epsilonproteobacteria, such as Campylobacter jejuni and Helicobacter pylori, are common pathogens of humans that inhabit specific regions of the gastrointestinal tract. As such, the biological processes of pathogenic Campylobacter and Helicobacter spp. are often modeled after those of common enteric pathogens such as Salmonella spp. and Escherichia coli. While many exquisite biological mechanisms involving biochemical processes, genetic regulatory pathways, and pathogenesis of disease have been elucidated from studies of Salmonella spp. and E. coli, these paradigms often do not apply to the same processes in the epsilonproteobacteria. Instead, these bacteria often display extensive variation in common biological mechanisms relative to those of other prototypical bacteria. In this review, five biological processes of commonly studied model bacterial species are compared to those of the epsilonproteobacteria C. jejuni and H. pylori. Distinct differences in the processes of flagellar biosynthesis, DNA uptake and recombination, iron homeostasis, interaction with epithelial cells, and protein glycosylation are highlighted. Collectively, these studies support a broader view of the vast repertoire of biological mechanisms employed by bacteria and suggest that future studies of the epsilonproteobacteria will continue to provide novel and interesting information regarding prokaryotic cellular biology. PMID:21372321
Tissue and Animal Models of Sudden Cardiac Death
Sallam, Karim; Li, Yingxin; Sager, Philip T.; Houser, Steven R.; Wu, Joseph C.
2015-01-01
Sudden cardiac death (SCD) is a common cause of death in patients with structural heart disease, genetic mutations or acquired disorders affecting cardiac ion channels. A wide range of platforms exist to model and study disorders associated with SCD. Human clinical studies are cumbersome and are limited by the extent of investigation that can be performed on human subjects. Animal models are limited by their degree of homology to human cardiac electrophysiology, including ion channel expression. The most commonly used cellular models are transfection models, which mimic the expression of a single ion channel and thus offer incomplete insight into changes of the action potential profile. Induced pluripotent stem cell-derived cardiomyocytes (iPSC-CMs) resemble, but are not identical to, adult human cardiomyocytes, and provide a new platform for studying arrhythmic disorders leading to SCD. A variety of platforms exist to phenotype cellular models, including conventional and automated patch clamp, multi-electrode array, and computational modeling. iPSC-CMs have been used to study long QT syndrome, catecholaminergic polymorphic ventricular tachycardia, hypertrophic cardiomyopathy and other hereditary cardiac disorders. Although iPSC-CMs are distinct from adult cardiomyocytes, they provide a robust platform to advance the science and clinical care of SCD. PMID:26044252
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
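The inflated-mixture estimation this abstract describes can be illustrated with the simplest member of that model class, a zero-inflated Poisson fitted by EM. This is a minimal sketch: the simulated data, sample size, and iteration count are hypothetical, and the paper's framework also covers semi-continuous data, covariates, and correlated observations, none of which appear here.

```python
import math
import random

def fit_zip_em(y, iters=200):
    """EM for a zero-inflated Poisson: each observation is a structural
    zero with probability pi, otherwise drawn from Poisson(lam)."""
    n = len(y)
    pi, lam = 0.5, max(sum(y) / n, 1e-6)
    for _ in range(iters):
        # E-step: posterior probability that each observed zero is structural
        z = [pi / (pi + (1 - pi) * math.exp(-lam)) if yi == 0 else 0.0
             for yi in y]
        # M-step: update the mixing weight and the Poisson mean
        pi = sum(z) / n
        lam = sum((1 - zi) * yi for zi, yi in zip(z, y)) / (n - sum(z))
    return pi, lam

def rpois(lam, rng):
    """Poisson sampler (Knuth's multiplication method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(1)
# simulate: 30% structural zeros, remainder Poisson with mean 2.0
y = [0 if rng.random() < 0.3 else rpois(2.0, rng) for _ in range(5000)]
pi_hat, lam_hat = fit_zip_em(y)   # should land near pi=0.3, lam=2.0
```

The E-step responsibilities are exactly the quantities a generalized Louis method would reuse when computing standard errors after optimization, which is why the article treats the EM and quasi-Newton routes within one framework.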
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia
2017-07-27
Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.
Conceptual Change Texts in Chemistry Teaching: A Study on the Particle Model of Matter
ERIC Educational Resources Information Center
Beerenwinkel, Anne; Parchmann, Ilka; Grasel, Cornelia
2011-01-01
This study explores the effect of a conceptual change text on students' awareness of common misconceptions on the particle model of matter. The conceptual change text was designed based on principles of text comprehensibility, of conceptual change instruction and of instructional approaches how to introduce the particle model. It was evaluated in…
Accounting for control mislabeling in case-control biomarker studies.
Rantalainen, Mattias; Holmes, Chris C
2011-12-02
In biomarker discovery studies, uncertainty associated with case and control labels is often overlooked. When label uncertainty is not taken into account, model parameters and the predictive risk can become biased, sometimes severely. The most common situation is when the control set contains an unknown number of undiagnosed, or future, cases. This has a marked impact in situations where the model needs to be well calibrated, e.g., when the prediction performance of a biomarker panel is evaluated. Failing to account for class label uncertainty may lead to underestimation of classification performance and bias in parameter estimates, which can further affect meta-analyses combining evidence from multiple studies. Using a simulation study, we outline how conventional statistical models can be modified to address class label uncertainty, leading to well-calibrated prediction performance estimates and reduced bias in meta-analysis. We focus on the problem of mislabeled control subjects in case-control studies, i.e., when some of the control subjects are undiagnosed cases, although the procedures we report are generic. Uncertainty in control status is particularly common in biomarker discovery studies in genomic and molecular epidemiology, where control subjects are typically sampled from the general population with an established expected disease incidence rate.
Non-steroidal anti-inflammatory drugs for the common cold.
Kim, Soo Young; Chang, Yoon-Jung; Cho, Hye Min; Hwang, Ye-Won; Moon, Yoo Sun
2013-06-04
Non-steroidal anti-inflammatory drugs (NSAIDs) have been widely used for the treatment of pain and fever associated with the common cold. However, there is no systematic review to assess the effects of NSAIDs in treating the common cold. To determine the effects of NSAIDs versus placebo (and other treatments) on signs and symptoms of the common cold, and to determine any adverse effects of NSAIDs in people with the common cold. We searched CENTRAL (The Cochrane Library 2013, Issue 1), MEDLINE (January 1966 to April week 4, 2013), EMBASE (January 1980 to April 2013), CINAHL (January 1982 to April 2013) and ProQuest Digital Dissertations (January 1938 to April 2013). Randomised controlled trials (RCTs) of NSAIDs in adults or children with the common cold. Four review authors extracted data. We subdivided trials into placebo-controlled RCTs and head-to-head comparisons of NSAIDs. We extracted and summarised data on global efficacies of analgesic effects (such as reduction of headache and myalgia), non-analgesic effects (such as reduction of nasal symptoms, cough, sputum and sneezing) and side effects. We expressed dichotomous outcomes as risk ratios (RR) with 95% confidence intervals (CI) and continuous data as mean differences (MD) or standardised mean differences (SMD). We pooled data using the fixed- and random-effects models. We included nine RCTs with 1069 participants, describing 37 comparisons: six were NSAIDs versus placebo and three were NSAIDs versus NSAIDs. The overall risk of bias in the included studies was mixed. In a pooled analysis, NSAIDs did not significantly reduce the total symptom score (SMD -0.40, 95% CI -1.03 to 0.24, three studies, random-effects model), or duration of colds (MD -0.23, 95% CI -1.75 to 1.29, two studies, random-effects model).
For respiratory symptoms, cough did not improve (SMD -0.05, 95% CI -0.66 to 0.56, two studies, random-effects model) but the sneezing score significantly improved (SMD -0.44, 95% CI -0.75 to -0.12, two studies, random-effects model). For outcomes related to the analgesic effects of NSAIDs (headache, ear pain, and muscle and joint pain) the treatment produced significant benefits. The risk of adverse effects was not high with NSAIDs (RR 2.94, 95% CI 0.51 to 17.03, two studies, random-effects model) and it is difficult to conclude that such drugs are not different from placebo. NSAIDs are somewhat effective in relieving discomfort caused by a cold but there is no clear evidence of their effect in easing respiratory symptoms. The balance of benefit and harms needs to be considered when using NSAIDs for colds.
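Pooled estimates like those quoted in this review (an effect size with a 95% CI under a random-effects model) are conventionally computed with the DerSimonian-Laird method. A minimal sketch follows; the two study effects and variances are made up for illustration and are not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird): estimate the
    between-study variance tau^2 from Cochran's Q, then pool with
    inverse-variance weights 1/(v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    # fixed-effect estimate, needed for Cochran's Q
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, tau2

# hypothetical mean differences and their variances from two studies
pooled, lo, hi, tau2 = dersimonian_laird([-0.8, 0.3], [0.4, 0.5])
```

When the studies disagree more than their within-study variances can explain, tau^2 grows, the weights flatten, and the confidence interval widens, which is why a two-study pooled interval such as the MD for cold duration above can comfortably straddle zero.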
Development of a Training Model for Laparoscopic Common Bile Duct Exploration
Rodríguez, Omaira; Benítez, Gustavo; Sánchez, Renata; De la Fuente, Liliana
2010-01-01
Background: Training and experience of the surgical team are fundamental for the safety and success of complex surgical procedures, such as laparoscopic common bile duct exploration. Methods: We describe an inert, simple, very low-cost, and readily available training model. Created using a “black box” and basic medical and surgical material, it allows training in the fundamental steps necessary for laparoscopic biliary tract surgery, namely, (1) intraoperative cholangiography, (2) transcystic exploration, and (3) laparoscopic choledochotomy and T-tube insertion. Results: The proposed model has allowed for the development of the skills necessary for partaking in said procedures, contributing to their development and diminishing surgery time as the trainee advances down the learning curve. Further studies are directed towards objectively determining the impact of the model on skill acquisition. Conclusion: The described model is simple and readily available, allowing for accurate reproduction of the main steps and maneuvers that take place during laparoscopic common bile duct exploration, with the purpose of reducing failure and complications. PMID:20529526
NASA Astrophysics Data System (ADS)
Laramie, Sydney M.; Milshtein, Jarrod D.; Breault, Tanya M.; Brushett, Fikile R.; Thompson, Levi T.
2016-09-01
Non-aqueous redox flow batteries (NAqRFBs) have recently received considerable attention as promising high energy density, low cost grid-level energy storage technologies. Despite these attractive features, NAqRFBs are still at an early stage of development and innovative design techniques are necessary to improve performance and decrease costs. In this work, we investigate multi-electron transfer, common ion exchange NAqRFBs. Common ion systems decrease the supporting electrolyte requirement, which subsequently improves active material solubility and decreases electrolyte cost. Voltammetric and electrolytic techniques are used to study the electrochemical performance and chemical compatibility of model redox active materials, iron(II) tris(2,2′-bipyridine) tetrafluoroborate (Fe(bpy)3(BF4)2) and ferrocenylmethyl dimethyl ethyl ammonium tetrafluoroborate (Fc1N112-BF4). These results help disentangle complex cycling behavior observed in flow cell experiments. Further, a simple techno-economic model demonstrates the cost benefits of employing common ion exchange NAqRFBs, afforded by decreasing the salt and solvent contributions to total chemical cost. This study highlights two new concepts, common ion exchange and multi-electron transfer, for NAqRFBs through a demonstration flow cell employing model active species. In addition, the compatibility analysis developed for asymmetric chemistries can apply to other promising species, including organics, metal coordination complexes (MCCs) and mixed MCC/organic systems, enabling the design of low cost NAqRFBs.
Parker, Dawn C.; Entwisle, Barbara; Rindfuss, Ronald R.; Vanwey, Leah K.; Manson, Steven M.; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P.; Linderman, Marc; Rizi, S. Mohammad Mussavi; Malanson, George
2009-01-01
Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process. PMID:19960107
Promoting Model-based Definition to Establish a Complete Product Definition
Ruemler, Shawn P.; Zimmerman, Kyle E.; Hartman, Nathan W.; Hedberg, Thomas; Feeny, Allison Barnard
2016-01-01
The manufacturing industry is evolving and starting to use 3D models as the central knowledge artifact for product data and product definition, or what is known as Model-based Definition (MBD). The Model-based Enterprise (MBE) uses MBD as a way to transition away from using traditional paper-based drawings and documentation. As MBD grows in popularity, it is imperative to understand what information is needed in the transition from drawings to models so that models represent all the relevant information needed for processes to continue efficiently. Finding this information can help define what data is common amongst different models in different stages of the lifecycle, which could help establish a Common Information Model. The Common Information Model is a source that contains common information from domain specific elements amongst different aspects of the lifecycle. To help establish this Common Information Model, information about how models are used in industry within different workflows needs to be understood. To retrieve this information, a survey mechanism was administered to industry professionals from various sectors. Based on the results of the survey a Common Information Model could not be established. However, the results gave great insight that will help in further investigation of the Common Information Model. PMID:28070155
NASA Integrated Network Monitor and Control Software Architecture
NASA Technical Reports Server (NTRS)
Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick
2012-01-01
The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade space study, a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. This trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper will describe the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model-based trade space representation and analysis.
The Mechanism of Covalent Bonding: Analysis within the Hückel Model of Electronic Structure
ERIC Educational Resources Information Center
Nordholm, Sture; Back, Andreas; Bacskay, George B.
2007-01-01
The commonly used Hückel model of electronic structure is employed to study the mechanisms of covalent bonding, a quantum effect related to electron dynamics. The model also fully accounts for the conjugation and aromaticity of planar hydrocarbon molecules.
Common Characteristics of Models in Present-Day Scientific Practice
ERIC Educational Resources Information Center
Van Der Valk, Ton; Van Driel, Jan H.; De Vos, Wobbe
2007-01-01
Teaching the use of models in scientific research requires a description, in general terms, of how scientists actually use models in their research activities. This paper aims to arrive at defining common characteristics of models that are used in present-day scientific research. Initially, a list of common features of models and modelling, based…
McVicar, Andrew
2016-03-01
To identify core antecedents of job stress and job satisfaction, and to explore the potential of stress interventions to improve job satisfaction. Decreased job satisfaction for nurses is strongly associated with increased job stress. Stress management strategies might have the potential to improve job satisfaction. Comparative scoping review of studies (2000-2013) and location of their outcomes within the 'job demands-job resources' (JD-R) model of stress to identify commonalities and trends. Many, but not all, antecedents of both phenomena appeared consistently, suggesting they are common mediators. Others were more variable, but the appearance of 'emotional demands' as a common antecedent in later studies suggests an evolving influence of the changing work environment. The occurrence of 'shift work' as a common issue in later studies points to further implications for nurses' psychosocial well-being. Job satisfaction problems in nursing might be co-responsive to stress management intervention. Improving the buffering effectiveness of increased resilience and of prominent perceived job resource issues is urgently required. Participatory, psychosocial methods have the potential to raise job resources but will require high-level collaboration by stakeholders, and participative leadership and facilitation by managers to enable better decision-latitude, support for action planning and responsive changes. © 2015 John Wiley & Sons Ltd.
Common Cause Failure Modeling: Aerospace Versus Nuclear
NASA Technical Reports Server (NTRS)
Stott, James E.; Britton, Paul; Ring, Robert W.; Hark, Frank; Hatfield, G. Spencer
2010-01-01
Aggregate nuclear plant failure data is used to produce generic common-cause factors that are specifically for use in the common-cause failure models of NUREG/CR-5485. Furthermore, the models presented in NUREG/CR-5485 are specifically designed to incorporate two significantly distinct assumptions about the methods of surveillance testing from which this aggregate failure data came. What are the implications of using these NUREG generic factors to model the common-cause failures of aerospace systems? Herein, the implications of using the NUREG generic factors in the modeling of aerospace systems are investigated in detail and strong recommendations for modeling the common-cause failures of aerospace systems are given.
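To make the parameterization concrete, here is a minimal sketch of the beta-factor model, the simplest of the parametric common-cause models treated in guidance such as NUREG/CR-5485. The failure probability and beta value below are purely illustrative assumptions, not the NUREG generic factors discussed above.

```python
# Beta-factor model: a fraction beta of a component's total failure
# probability is assumed to fail all redundant trains simultaneously.
# Both numbers below are assumed for illustration, not NUREG data.
q_total = 1e-4    # total failure probability per demand (assumed)
beta = 0.1        # common-cause fraction (assumed generic value)

q_ccf = beta * q_total            # all trains fail together
q_indep = (1 - beta) * q_total    # independent failure of a single train

# A 2-train redundant system fails via two independent failures or one CCF;
# the CCF term dominates, which is why the choice of beta matters so much.
p_system = q_indep ** 2 + q_ccf
```

Note how the common-cause term, not the squared independent term, controls the system result; this is exactly why importing generic factors from a different industry deserves the scrutiny the abstract calls for.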
Chun, Seokjoon; Harris, Alexa; Carrion, Margely; Rojas, Elizabeth; Stark, Stephen; Lejuez, Carl; Lechner, William V.; Bornovalova, Marina A.
2016-01-01
The comorbidity between Borderline Personality Disorder (BPD) and Antisocial Personality Disorder (ASPD) is well-established, and the two disorders share many similarities. However, there are also differences across disorders: most notably, BPD is diagnosed more frequently in females and ASPD in males. We investigated whether a) comorbidity between BPD and ASPD is attributable to two discrete disorders or the expression of common underlying processes, and b) the model of comorbidity holds across sex. Using a clinical sample of 1400 drug users in residential substance abuse treatment, we tested three competing models to explore whether the comorbidity of ASPD and BPD should be represented by a single common factor, two correlated factors, or a bifactor structure involving a general factor and disorder-specific factors. Next, we tested whether our resulting model was meaningful by examining its relationship with criterion variables previously reported to be associated with BPD and ASPD. The bifactor model provided the best fit and was invariant across sex. Overall, the general factor of the bifactor model significantly accounted for a large percentage of the variance in criterion variables, whereas the BPD and AAB (adult antisocial behavior) specific factors added little to the models. The association of the general and specific factors with all criterion variables was equal for males and females. Our results suggest that a common underlying vulnerability accounts for the comorbidity between BPD and AAB across sex, and that this common vulnerability drives the association with other psychopathology and maladaptive behavior. This in turn has implications for diagnostic classification systems and treatment. General scientific summary: This study found that, for both males and females, borderline and antisocial personality disorders show a large degree of overlap and little uniqueness. The commonality between BPD and ASPD mainly accounted for associations with criterion variables. 
This suggests that BPD and ASPD show a large common core that accounts for their comorbidity. PMID:27808543
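The bifactor structure favoured by this study can be illustrated through the covariance matrix it implies: every item loads on a general factor plus one orthogonal disorder-specific factor, so cross-disorder item covariance comes only from the general factor. The loadings below are hypothetical numbers for illustration, not estimates from the study.

```python
import numpy as np

# Hypothetical loadings for 6 items: 3 BPD-type, 3 ASPD-type.
# Every item loads on the general factor; specific factors are orthogonal.
lam_g = np.array([0.7, 0.6, 0.7, 0.6, 0.7, 0.6])      # general factor
lam_s = np.array([
    [0.4, 0.5, 0.4, 0.0, 0.0, 0.0],                    # BPD-specific
    [0.0, 0.0, 0.0, 0.4, 0.5, 0.4],                    # ASPD-specific
])

# Model-implied covariance: Sigma = L_g L_g' + sum_s L_s L_s' + Theta,
# with unique variances chosen so every item has unit variance.
theta = np.diag(1 - lam_g**2 - (lam_s**2).sum(axis=0))
sigma = np.outer(lam_g, lam_g) + lam_s.T @ lam_s + theta

cross = sigma[0, 3]    # a BPD item with an ASPD item: general factor only
within = sigma[0, 1]   # two BPD items: general + specific contributions
```

The cross-disorder covariance (0.7 x 0.6 = 0.42 here) is carried entirely by the general factor, which is the structural sense in which a "common core" accounts for the comorbidity.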
CUNNER(TAUTOGOLABRUS ADSPERSUS) AS A MODEL FISH FOR REPRODUCTIVE STUDIES IN THE LABORATORY
Cunner (Tautogolabrus adspersus) are being studied at our laboratory as a model species to determine the effects of endocrine disrupting chemicals (EDCs) on estuarine fish populations. Cunner was selected because this species is common in estuarine areas, is easily obtainable, an...
Seeking a Multi-Construct Model of Morality
ERIC Educational Resources Information Center
McDaniel, Brenda L.; Grice, James W.; Eason, E. Allen
2010-01-01
The present study explored a multi-construct model of moral development. Variables commonly seen in the moral development literature, such as family interactions, spiritual life, ascription to various sources of moral authority, empathy, shame, guilt and moral judgement competence, were investigated. Results from the current study support previous…
Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model
Martins Carvalho, Fernando; de Araújo, Tânia Maria
2016-01-01
Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the measurement performance of the SRQ-20 in occupational groups. This study aimed to describe manifestation patterns of common mental disorder symptoms among worker populations, using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using the SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I best met the criteria for suspicion of common mental disorders. Class II discriminated workers with intermediate probability of positive answers to the items of anxiety, sadness, and energy decrease that configure common mental disorders. Class III was composed of subgroups of workers with low probability of responding positively to questions screening for common mental disorders. Conclusions. Three patterns of common mental disorder symptoms were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999
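Latent class analysis of binary, SRQ-20-style items is typically fitted by expectation-maximization. The sketch below runs a compact EM on synthetic data with two well-separated classes; the item probabilities, class sizes, sample size, and two-class structure are all invented for illustration (the study itself found three classes using dedicated LCA software).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic SRQ-like data: 2 latent classes, 5 binary symptom items.
# Class 0 ("low symptoms") endorses rarely; class 1 endorses often.
true_p = np.array([[0.1, 0.1, 0.2, 0.1, 0.2],
                   [0.8, 0.7, 0.9, 0.8, 0.7]])
true_pi = np.array([0.7, 0.3])
z = rng.choice(2, size=2000, p=true_pi)
X = (rng.random((2000, 5)) < true_p[z]).astype(float)

# EM for a latent class model: the E-step computes class responsibilities,
# the M-step re-estimates class sizes and item-endorsement probabilities.
pi = np.array([0.5, 0.5])
p = np.array([[0.3] * 5, [0.6] * 5])
for _ in range(200):
    ll_k = X @ np.log(p.T) + (1 - X) @ np.log(1 - p.T) + np.log(pi)
    ll_k -= ll_k.max(axis=1, keepdims=True)      # numerical stabilization
    resp = np.exp(ll_k)
    resp /= resp.sum(axis=1, keepdims=True)      # E-step: responsibilities
    pi = resp.mean(axis=0)                       # M-step: class sizes
    p = (resp.T @ X) / resp.sum(axis=0)[:, None] # M-step: item probabilities
```

With well-separated classes the recovered class proportions track the generating values; in real SRQ-20 data the number of classes would be chosen by comparing fit indices across candidate models.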
Segre, Lisa S.; McCabe, Jennifer E.; Chuffo-Siewert, Rebecca; O’Hara, Michael W.
2014-01-01
Background Mothers of infants hospitalized in the neonatal intensive care unit (NICU) are at risk for clinically significant levels of depression and anxiety symptoms; however, the maternal/infant characteristics that predict risk have been difficult to determine. Previous studies have conceptualized depression and anxiety symptoms separately, ignoring their comorbidity. Moreover, risk factors for these symptoms have not been assessed together in one study sample. Objectives The primary aim of this study was to determine whether a diagnostic classification approach or a common-factor model better explained the pattern of symptoms reported by NICU mothers, including depression, generalized anxiety, panic, and trauma. A secondary aim was to assess risk factors of aversive emotional states in NICU mothers based on the supported conceptual model. Method In this cross-sectional study, a nonprobability convenience sample of 200 NICU mothers completed questionnaires assessing maternal demographic and infant health characteristics, as well as maternal depression and anxiety symptoms. Structural equation modeling was used to test a diagnostic classification model, and a common-factor model of aversive emotional states and the risk factors of aversive emotional states in mothers in the NICU. Results Maximum likelihood estimates indicated that examining symptoms of depression and anxiety disorders as separate diagnostic classifications did not fit the data well, whereas examining the common factor of negative emotionality rendered an adequate fit to the data, and identified a history of depression, infant illness, and infant prematurity as significant risk factors. Discussion This study supports a multidimensional view of depression, and should guide both clinical practice and future research with NICU mothers. PMID:25171558
Scheuhammer, A M; Lord, S I; Wayland, M; Burgess, N M; Champoux, L; Elliott, J E
2016-03-01
We investigated mercury (Hg) concentrations in small fish (mainly yellow perch, Perca flavescens; ∼60% of fish collected) and in blood of common loons (Gavia immer) that prey upon them during the breeding season on lakes in 4 large, widely separated study areas in Canada (>13 lakes per study area; total number of lakes = 93). Although surface sediments from lakes near a base metal smelter in Flin Flon, Manitoba had the highest Hg concentrations, perch and other small fish and blood of common loon chicks sampled from these same lakes had low Hg concentrations similar to those from uncontaminated reference lakes. Multiple regression modeling with AIC analysis indicated that lake pH was by far the most important single factor influencing perch Hg concentrations in lakes across the four study areas (R(2) = 0.29). The best model was a three-variable model (pH + alkalinity + sediment Se; Wi = 0.61, R(2) = 0.85). A single-variable model (fish Hg) best explained among-lake variability in loon chick blood Hg (Wi = 0.17; R(2) = 0.53). From a toxicological risk perspective, all lakes posing a potential Hg health risk for perch and possibly other small pelagic fish species (where mean fish muscle Hg concentrations exceeded 2.4 μg/g dry wt.), and for breeding common loons (where mean fish muscle Hg concentrations exceeded 0.8 μg/g dry wt., and loon chick blood Hg exceeded 1.4 μg/g dry wt.) had pH < 6.7 and were located in eastern Canada. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
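The AIC-based multiple-regression comparison described above can be sketched with synthetic data. The covariates, effect sizes, and noise level below are hypothetical; only the sample size (93 lakes) and the candidate-model structure echo the abstract. Akaike weights (the Wi quoted above) are computed from AIC differences in the standard way.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 93  # one observation per lake, matching the study's 93 lakes

# Hypothetical lake covariates and effects, for illustration only:
# fish Hg declines with pH, with smaller alkalinity and Se effects.
pH = rng.uniform(4.5, 8.5, n)
alkalinity = rng.uniform(0.0, 50.0, n)
sediment_se = rng.uniform(0.1, 2.0, n)
log_hg = (3.0 - 0.35 * pH - 0.01 * alkalinity - 0.2 * sediment_se
          + rng.normal(0.0, 0.3, n))

def ols_aic(X, y):
    """Least-squares fit; AIC = n*ln(RSS/n) + 2k, with k counting the
    intercept, slopes, and error variance."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]
    rss = float(((y - X1 @ beta) ** 2).sum())
    k = X1.shape[1] + 1
    return len(y) * np.log(rss / len(y)) + 2 * k

models = {
    "pH": ols_aic(pH[:, None], log_hg),
    "pH+alk": ols_aic(np.column_stack([pH, alkalinity]), log_hg),
    "pH+alk+Se": ols_aic(np.column_stack([pH, alkalinity, sediment_se]),
                         log_hg),
}

# Akaike weights (Wi): relative support for each candidate model
best = min(models.values())
raw = {m: np.exp(-(a - best) / 2) for m, a in models.items()}
weights = {m: float(v / sum(raw.values())) for m, v in raw.items()}
```

The weights sum to one and the lowest-AIC model receives the largest weight, mirroring how the study ranked its single- and three-variable models.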
Effect of Bypass Capacitor in Common-mode Noise Reduction Technique for Automobile PCB
NASA Astrophysics Data System (ADS)
Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atushi
In this letter, we study a common-mode noise reduction technique for in-vehicle electronic equipment, each unit comprising a large-scale integrated circuit (LSI), a printed circuit board (PCB), wiring harnesses, and a ground plane. We have improved the model circuit for the common-mode noise that flows into the wire harness so that it includes the effect of bypass capacitors located near the LSI.
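A first-order sense of why a bypass capacitor near the LSI suppresses common-mode noise: the capacitor acts as a shunt impedance that shorts high-frequency noise current before it reaches the harness. The common-mode path impedance and capacitor value below are assumed, and the single-shunt insertion-loss formula is a textbook approximation, not the authors' model circuit.

```python
import numpy as np

# First-order estimate of bypass-capacitor attenuation: a shunt capacitor
# of impedance Zc = 1/(j*w*C) in a path of assumed common-mode impedance Z0.
Z0 = 150.0        # assumed common-mode path impedance, ohms (illustrative)
C = 1e-9          # 1 nF bypass capacitor (illustrative)
f = np.logspace(6, 9, 4)          # 1 MHz .. 1 GHz
Zc = 1 / (2j * np.pi * f * C)

# Insertion loss of a single shunt element: IL = 20*log10(|1 + Z0/(2*Zc)|)
il_db = 20 * np.log10(np.abs(1 + Z0 / (2 * Zc)))
```

Attenuation grows with frequency as the capacitor's impedance falls, which is why capacitor placement and parasitics dominate the high-frequency behaviour the letter models.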
Animal models of post-traumatic epilepsy.
Ostergard, Thomas; Sweet, Jennifer; Kusyk, Dorian; Herring, Eric; Miller, Jonathan
2016-10-15
Post-traumatic epilepsy (PTE) is defined as the development of unprovoked seizures in a delayed fashion after traumatic brain injury (TBI). PTE lies at the intersection of two distinct fields of study, epilepsy and neurotrauma. TBI is associated with a myriad of both focal and diffuse anatomic injuries, and an ideal animal model of epilepsy after TBI must mimic the characteristics of human PTE. The three most commonly used models of TBI are lateral fluid percussion, controlled cortical injury, and weight drop. Much of what is known about PTE has resulted from use of these models. In this review, we describe the most commonly used animal models of TBI with special attention to their advantages and disadvantages with respect to their use as a model of PTE. Copyright © 2016 Elsevier B.V. All rights reserved.
Sloppy-model universality class and the Vandermonde matrix.
Waterfall, Joshua J; Casey, Fergal P; Gutenkunst, Ryan N; Brown, Kevin S; Myers, Christopher R; Brouwer, Piet W; Elser, Veit; Sethna, James P
2006-10-13
In a variety of contexts, physicists study complex, nonlinear models with many unknown or tunable parameters to explain experimental data. We explain why such systems so often are sloppy: the system behavior depends only on a few "stiff" combinations of the parameters and is unchanged as other "sloppy" parameter combinations vary by orders of magnitude. We observe that the eigenvalue spectra for the sensitivity of sloppy models have a striking, characteristic form with a density of logarithms of eigenvalues which is roughly constant over a large range. We suggest that the common features of sloppy models indicate that they may belong to a common universality class. In particular, we motivate focusing on a Vandermonde ensemble of multiparameter nonlinear models and show in one limit that they exhibit the universal features of sloppy models.
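The Vandermonde ensemble and the characteristic eigenvalue spectrum are easy to reproduce: fitting a polynomial y(t) = Σ θ_k t^k gives residuals whose Jacobian with respect to θ is a Vandermonde matrix, and the eigenvalues of JᵀJ span many decades, roughly uniformly in log. The basis size and sample grid below are arbitrary illustrative choices.

```python
import numpy as np

# Fit y(t) = sum_k theta_k * t**k on [0, 1]; the Jacobian of the residuals
# with respect to theta is a Vandermonde matrix, as in the ensemble above.
t = np.linspace(0, 1, 200)
J = np.vander(t, N=8, increasing=True)       # columns t^0 .. t^7

# Sensitivity (Fisher-information-like) matrix; its eigenvalues measure how
# strongly each parameter combination is constrained by the data.
H = J.T @ J
eigs = np.sort(np.linalg.eigvalsh(H))[::-1]

# "Sloppiness": eigenvalues span many orders of magnitude, roughly evenly
# spaced in log, so most parameter combinations are barely constrained.
decades = np.log10(eigs[0] / eigs[-1])
```

Even this 8-parameter toy problem shows an eigenvalue range of roughly ten decades, the hallmark spectrum the paper identifies as universal.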
Hey, Jody
2010-01-01
The divergence of bonobos and three subspecies of the common chimpanzee was examined under a multipopulation isolation-with-migration (IM) model with data from 73 loci drawn from the literature. A benefit of having a full multipopulation model, relative to conducting multiple pairwise analyses between sampled populations, is that a full model can reveal historical gene flow involving ancestral populations. An example of this was found in which gene flow is indicated between the western common chimpanzee subspecies and the ancestor of the central and the eastern common chimpanzee subspecies. The results of a full analysis on all four populations are strongly consistent with analyses on pairs of populations and generally similar to results from previous studies. The basal split between bonobos and common chimpanzees was estimated at 0.93 Ma (0.68–1.54 Ma, 95% highest posterior density interval), with the split among the ancestor of three common chimpanzee populations at 0.46 Ma (0.35–0.65), and the most recent split between central and eastern common chimpanzee populations at 0.093 Ma (0.041–0.157). Population size estimates mostly fell in the range from 5,000 to 10,000 individuals. The exceptions are the size of the ancestor of the common chimpanzee and the bonobo, at 17,000 (8,000–28,000) individuals, and the central common chimpanzee and its immediate ancestor with the eastern common chimpanzee, which have effective size estimates at 27,000 (16,000–44,000) and 32,000 (19,000–54,000) individuals, respectively. PMID:19955478
Analysis of NASA Common Research Model Dynamic Data
NASA Technical Reports Server (NTRS)
Balakrishna, S.; Acheson, Michael J.
2011-01-01
Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment, surface pressures and wideband wing-root dynamic strain/wing Kulite data from continuous sweep pitch polars. The dynamic data sets, acquired at 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.
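Buffet onset is commonly inferred from the growth of wing-root dynamic strain RMS with angle of attack. The sketch below applies a simple threshold rule to a synthetic strain signal; the sweep, onset angle, and threshold rule are invented for illustration and are not NASA's analysis method (only the 12,800 Hz sampling rate comes from the abstract).

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 12_800                      # sampling rate used in the study, Hz
alpha = np.linspace(0, 6, 25)    # hypothetical angle-of-attack sweep, deg
alpha_onset = 4.0                # synthetic "true" buffet onset

# Synthetic wing-root strain: broadband noise whose RMS grows past onset
rms = np.array([
    np.std(rng.normal(0.0, 1.0 + 3.0 * max(a - alpha_onset, 0.0), fs // 10))
    for a in alpha
])

# Simple onset criterion: first alpha where RMS exceeds the low-alpha
# baseline by a fixed multiple (a rule of thumb, not the NTF procedure)
baseline = rms[alpha < 2].mean()
onset_idx = int(np.argmax(rms > 1.5 * baseline))
onset_alpha = alpha[onset_idx]
```

On real tunnel data the same idea would be applied per Mach number and dynamic pressure, with the threshold chosen against the facility's background vibration level.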
USDA-ARS?s Scientific Manuscript database
Crop growth simulation models can address a variety of agricultural problems, but their use to directly assist in-season irrigation management decisions is less common. Confidence in model reliability can be increased if models are shown to provide improved in-season management recommendations, whi...
A Linear Variable-θ Model for Measuring Individual Differences in Response Precision
ERIC Educational Resources Information Center
Ferrando, Pere J.
2011-01-01
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
ACCUMULATION OF PBDE-47 IN PRIMARY CULTURES OF RAT NEOCORTICAL CELLS.
Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...
Neighboring and Urbanism: Commonality versus Friendship.
ERIC Educational Resources Information Center
Silverman, Carol J.
1986-01-01
Examines a dimension of neighboring that need not assume friendship as the role model. When the model assumes only a sense of connectedness as defining neighboring, then the residential correlation, shown in many studies between urbanism and neighboring, disappears. Theories of neighboring, study variables, methods, and analysis are discussed.…
The Analysis of Measurement Equivalence in International Studies Using the Rasch Model
ERIC Educational Resources Information Center
Schulz, Wolfram; Fraillon, Julian
2011-01-01
When comparing data derived from tests or questionnaires in cross-national studies, researchers commonly assume measurement invariance in their underlying scaling models. However, different cultural contexts, languages, and curricula can have powerful effects on how students respond in different countries. This article illustrates how the…
Psychometric Measurement Models and Artificial Neural Networks
ERIC Educational Resources Information Center
Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.
2004-01-01
The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…
Eggo, Rosalind M; Scott, James G; Galvani, Alison P; Meyers, Lauren Ancel
2016-02-23
Asthma exacerbations exhibit a consistent annual pattern, closely mirroring the school calendar. Although respiratory viruses--the "common cold" viruses--are implicated as a principal cause, there is little evidence to link viral prevalence to seasonal differences in risk. We jointly fit a common cold transmission model and a model of biological and environmental exacerbation triggers to estimate effects on hospitalization risk. Asthma hospitalization rate, influenza prevalence, and air quality measures are available, but common cold circulation is not; therefore, we generate estimates of viral prevalence using a transmission model. Our deterministic multivirus transmission model includes transmission rates that vary when school is closed. We jointly fit the two models to 7 y of daily asthma hospitalizations in adults and children (66,000 events) in eight metropolitan areas. For children, we find that daily viral prevalence is the strongest predictor of asthma hospitalizations, with transmission reduced by 45% (95% credible interval =41-49%) during school closures. We detect a transient period of nonspecific immunity between infections lasting 19 (17-21) d. For adults, hospitalizations are more variable, with influenza driving wintertime peaks. Neither particulate matter nor ozone was an important predictor, perhaps because of the large geographic area of the populations. The school calendar clearly and predictably drives seasonal variation in common cold prevalence, which results in the "back-to-school" asthma exacerbation pattern seen in children and indirectly contributes to exacerbation risk in adults. This study provides a framework for anticipating the seasonal dynamics of common colds and the associated risks for asthmatics.
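The school-calendar forcing can be sketched with a toy SIRS-type model whose transmission rate drops when school is closed. The 45% closure reduction and the roughly 19-day nonspecific immunity echo the estimates quoted above; every other number is illustrative, and the study's actual model is a multivirus transmission model fitted jointly with an exacerbation-trigger model.

```python
import numpy as np

days = 365
beta_open = 0.5                  # transmission rate in term time (assumed)
closure_factor = 0.55            # 45% transmission reduction when closed
gamma = 1 / 5                    # 5-day infectious period (assumed)
wane = 1 / 19                    # ~19-day nonspecific immunity
closed = np.zeros(days, dtype=bool)
closed[180:245] = True           # a hypothetical 65-day school break

S, I, R = 0.6, 0.01, 0.39        # susceptible / infectious / immune
prev = np.empty(days)
for t in range(days):            # forward-Euler integration, 1-day step
    beta = beta_open * (closure_factor if closed[t] else 1.0)
    dS = wane * R - beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I - wane * R
    S, I, R = S + dS, I + dI, R + dR
    prev[t] = I
```

Because the effective reproduction number falls below one while school is out, prevalence decays during the break and rebounds at reopening, the mechanism behind the "back-to-school" exacerbation pattern.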
Goodbred, Steven L.; Patino, Reynaldo; Orsak, Erik; Sharma, Prakash; Ruessler, Shane
2013-01-01
During a 2008 study to assess endocrine and reproductive health of common carp (Cyprinus carpio) in Lake Mead, Nevada (U.S.A.) we identified two fish, one male and one female, as hybrids with goldfish (Carassius auratus) based on morphology, lateral line scale count, and lack of anterior barbels. Gross examination of the female hybrid ovaries indicated presence of vitellogenic ovarian follicles; whereas histological evaluation of the male hybrid testes showed lobule-like structures with open lumens but without germ cells, suggesting it was sterile. Because common carp/goldfish hybrids are more susceptible to gonadal tumors and may have different endocrine profiles than common carp, researchers using common carp as a model for endocrine/reproductive studies should be aware of the possible presence of hybrids.
Use of Network Inference to Elucidate Common and Chemical-specific Effects on Steroidogenesis
Microarray data is a key source for modeling gene regulatory interactions. Regulatory network models based on multiple datasets are potentially more robust and can provide greater confidence. In this study, we used network modeling on microarray data generated by exposing the fat...
A Realization of Bias Correction Method in the GMAO Coupled System
NASA Technical Reports Server (NTRS)
Chang, Yehui; Koster, Randal; Wang, Hailan; Schubert, Siegfried; Suarez, Max
2018-01-01
Over the past several decades, a tremendous effort has been made to improve model performance in the simulation of the climate system. The cold or warm sea surface temperature (SST) bias in the tropics is still a problem common to most coupled ocean-atmosphere general circulation models (CGCMs). The precipitation biases in CGCMs are also accompanied by SST and surface wind biases. The deficiencies and biases over the equatorial oceans, through their influence on the Walker circulation, likely contribute to the precipitation biases over land surfaces. In this study, we introduce an approach to correcting model biases in CGCMs. This approach utilizes the history of the model's short-term forecasting errors and their seasonal dependence to modify the model's tendency term and to minimize its climate drift. The study shows that such an approach removes most of the model's climate biases. A number of other aspects of the model simulation (e.g., extratropical transient activity) are also improved considerably due to the imposed pre-processed initial 3-hour model drift corrections. Because many regional biases in the GEOS-5 CGCM are common amongst other current models, our approach and findings are applicable to these other models as well.
Prediction of breast cancer risk by genetic risk factors, overall and by hormone receptor status.
Hüsing, Anika; Canzian, Federico; Beckmann, Lars; Garcia-Closas, Montserrat; Diver, W Ryan; Thun, Michael J; Berg, Christine D; Hoover, Robert N; Ziegler, Regina G; Figueroa, Jonine D; Isaacs, Claudine; Olsen, Anja; Viallon, Vivian; Boeing, Heiner; Masala, Giovanna; Trichopoulos, Dimitrios; Peeters, Petra H M; Lund, Eiliv; Ardanaz, Eva; Khaw, Kay-Tee; Lenner, Per; Kolonel, Laurence N; Stram, Daniel O; Le Marchand, Loïc; McCarty, Catherine A; Buring, Julie E; Lee, I-Min; Zhang, Shumin; Lindström, Sara; Hankinson, Susan E; Riboli, Elio; Hunter, David J; Henderson, Brian E; Chanock, Stephen J; Haiman, Christopher A; Kraft, Peter; Kaaks, Rudolf
2012-09-01
There is increasing interest in adding common genetic variants identified through genome-wide association studies (GWAS) to breast cancer risk prediction models. First results from such models showed modest benefits in terms of risk discrimination. Heterogeneity of breast cancer as defined by hormone-receptor status has not been considered in this context. In this study we investigated the predictive capacity of 32 GWAS-detected common variants for breast cancer risk, alone and in combination with classical risk factors, and for tumours with different hormone-receptor status. Within the Breast and Prostate Cancer Cohort Consortium, we analysed 6009 invasive breast cancer cases and 7827 matched controls of European ancestry, with data on classical breast cancer risk factors and 32 common gene variants identified through GWAS. Discriminatory ability with respect to breast cancer of specific hormone-receptor status was assessed with the age-adjusted and cohort-adjusted concordance statistic (AUROC(a)). Absolute risk scores were calculated with external reference data. Integrated discrimination improvement was used to measure improvements in risk prediction. We found a small but steady increase in discriminatory ability with increasing numbers of genetic variants included in the model (difference in AUROC(a) going from 2.7% to 4%). Discriminatory ability for all models varied strongly by hormone-receptor status. Adding information on common polymorphisms provides small but statistically significant improvements in the quality of breast cancer risk prediction models. We consistently observed better performance for receptor-positive cases, but the gain in discriminatory quality is not sufficient for clinical application.
A General Model for Estimating and Correcting the Effects of Nonindependence in Meta-Analysis.
ERIC Educational Resources Information Center
Strube, Michael J.
A general model is described which can be used to represent the four common types of meta-analysis: (1) estimation of effect size by combining study outcomes; (2) estimation of effect size by contrasting study outcomes; (3) estimation of statistical significance by combining study outcomes; and (4) estimation of statistical significance by…
Non-steroidal anti-inflammatory drugs for the common cold.
Kim, Soo Young; Chang, Yoon-Jung; Cho, Hye Min; Hwang, Ye-Won; Moon, Yoo Sun
2015-09-21
Non-steroidal anti-inflammatory drugs (NSAIDs) have been widely used for the treatment of pain and fever associated with the common cold. To determine the effects of NSAIDs versus placebo (and other treatments) on signs and symptoms of the common cold, and to determine any adverse effects of NSAIDs in people with the common cold. We searched CENTRAL (2015, Issue 4, April), (January 1966 to April week 3, 2015), EMBASE (January 1980 to April 2015), CINAHL (January 1982 to April 2015) and ProQuest Digital Dissertations (January 1938 to April 2015). Randomised controlled trials (RCTs) of NSAIDS in adults or children with the common cold. Four review authors extracted data. We subdivided trials into placebo-controlled RCTs and head-to-head comparisons of NSAIDs. We extracted and summarised data on global analgesic effects (such as reduction of headache and myalgia), non-analgesic effects (such as reduction of nasal symptoms, cough, sputum and sneezing) and side effects. We expressed dichotomous outcomes as risk ratios (RR) with 95% confidence intervals (CI) and continuous data as mean differences (MD) or standardised mean differences (SMD). We pooled data using the fixed-effect and random-effects models. We included nine RCTs with 1069 participants, describing 37 comparisons: six were NSAIDs versus placebo and three were NSAIDs versus NSAIDs. The overall risk of bias in the included studies was mixed. In a pooled analysis, NSAIDs did not significantly reduce the total symptom score (SMD -0.40, 95% CI -1.03 to 0.24, three studies, random-effects model), or duration of colds (MD -0.23, 95% CI -1.75 to 1.29, two studies, random-effects model). For respiratory symptoms, cough did not improve (SMD -0.05, 95% CI -0.66 to 0.56, two studies, random-effects model) but the sneezing score significantly improved (SMD -0.44, 95% CI -0.75 to -0.12, two studies, random-effects model). 
For outcomes related to the analgesic effects of NSAIDs (headache, ear pain, and muscle and joint pain) the treatment produced significant benefits. The risk of adverse effects was not high with NSAIDs (RR 2.94, 95% CI 0.51 to 17.03, two studies, random-effects model) but it is difficult to conclude that such drugs are no different from placebo. The quality of the evidence may be estimated as 'moderate' because of imprecision. The major limitations of this review are that the results of the studies are quite diverse and the number of studies for one result is quite small. NSAIDs are somewhat effective in relieving the discomfort caused by a cold but there is no clear evidence of their effect in easing respiratory symptoms. The balance of benefit and harms needs to be considered when using NSAIDs for colds.
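The pooling described above (fixed-effect and random-effects models of standardised mean differences) can be sketched with the DerSimonian-Laird estimator. The effect sizes and variances below are invented for illustration, not the review's data:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes."""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical SMDs and variances from three small trials
pooled, ci = random_effects_pool([-0.5, -0.3, -0.4], [0.04, 0.05, 0.06])
```

When the between-study heterogeneity estimate tau^2 is zero, the random-effects result collapses to the fixed-effect one, which is why reviews often report both.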
An Analysis of Machine- and Human-Analytics in Classification.
Tam, Gary K L; Kothari, Vivek; Chen, Min
2017-01-01
In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the "bag of features" approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics.
Peer-to-peer communication, cancer prevention, and the internet
Ancker, Jessica S.; Carpenter, Kristen M.; Greene, Paul; Hoffmann, Randi; Kukafka, Rita; Marlow, Laura A.V.; Prigerson, Holly G.; Quillin, John M.
2013-01-01
Online communication among patients and consumers through support groups, discussion boards, and knowledge resources is becoming more common. In this paper, we discuss key methods through which such web-based peer-to-peer communication may affect health promotion and disease prevention behavior (exchanges of information, emotional and instrumental support, and establishment of group norms and models). We also discuss several theoretical models for studying online peer communication, including social theory, health communication models, and health behavior models. Although online peer communication about health and disease is very common, research evaluating effects on health behaviors, mediators, and outcomes is still relatively sparse. We suggest that future research in this field should include formative evaluation and studies of effects on mediators of behavior change, behaviors, and outcomes. It will also be important to examine spontaneously emerging peer communication efforts to see how they can be integrated with theory-based efforts initiated by researchers. PMID:19449267
Cellular signaling identifiability analysis: a case study.
Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo
2010-05-21
Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling, the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as of yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Osier, Nicole D.; Carlson, Shaun W.; DeSana, Anthony
2015-01-01
The purpose of this review is to survey the use of experimental animal models for studying the chronic histopathological and behavioral consequences of traumatic brain injury (TBI). The strategies employed to study the long-term consequences of TBI are described, along with a summary of the evidence available to date from common experimental TBI models: fluid percussion injury; controlled cortical impact; blast TBI; and closed-head injury. For each model, evidence is organized according to outcome. Histopathological outcomes included are gross changes in morphology/histology, ventricular enlargement, gray/white matter shrinkage, axonal injury, cerebrovascular histopathology, inflammation, and neurogenesis. Behavioral outcomes included are overall neurological function, motor function, cognitive function, frontal lobe function, and stress-related outcomes. A brief discussion is provided comparing the most common experimental models of TBI and highlighting the utility of each model in understanding specific aspects of TBI pathology. The majority of experimental TBI studies collect data in the acute postinjury period, but few continue into the chronic period. Available evidence from long-term studies suggests that many of the experimental TBI models can lead to progressive changes in histopathology and behavior. The studies described in this review contribute to our understanding of chronic TBI pathology. PMID:25490251
Ullén, Fredrik; Mosing, Miriam A; Madison, Guy
2015-03-01
Music performance depends critically on precise processing of time. A common model behavior in studies of motor timing is isochronous serial interval production (ISIP), that is, hand/finger movements with a regular beat. ISIP accuracy is related to both music practice and intelligence. Here we present a study of these associations in a large twin cohort, demonstrating that the effects of music practice and intelligence on motor timing are additive, with no significant multiplicative (interaction) effect. Furthermore, the association between music practice and motor timing was analyzed with the use of a co-twin control design using intrapair differences. These analyses revealed that the phenotypic association disappeared when all genetic and common environmental factors were controlled. This suggests that the observed association may not reflect a causal effect of music practice on ISIP performance but rather reflect common influences (e.g., genetic effects) on both outcomes. The relevance of these findings for models of practice and expert performance is discussed. © 2014 New York Academy of Sciences.
Predicting recreational water quality advisories: A comparison of statistical methods
Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.
2016-01-01
Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to "nowcast" the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
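The ordinary-least-squares nowcast described above can be sketched in a few lines. The surrogate variables (turbidity, rainfall), coefficients, and the action value below are all invented placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical environmental surrogates: turbidity and 24 h rainfall
n = 200
turbidity = rng.uniform(0, 50, n)
rainfall = rng.uniform(0, 30, n)
# simulated log(FIB) concentration driven by the surrogates plus noise
log_fib = 1.0 + 0.05 * turbidity + 0.08 * rainfall + rng.normal(0, 0.5, n)

# ordinary least squares fit of log(FIB) on the surrogates
X = np.column_stack([np.ones(n), turbidity, rainfall])
beta, *_ = np.linalg.lstsq(X, log_fib, rcond=None)

# nowcast: post an advisory when predicted log(FIB) exceeds the action value
action_value = 2.0  # hypothetical beach action value on the log scale
predicted = X @ beta
advisory = predicted > action_value
```

The random forest and adaptive LASSO models the study found most accurate would replace the `lstsq` step, but the decision rule (predicted concentration versus action value) stays the same.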
A Penalized Robust Method for Identifying Gene-Environment Interactions
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge
2015-01-01
In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model mis-specification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt the rank-based estimation, which is much less sensitive to model specification than some of the existing methods and includes several commonly encountered data and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computation feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those using the alternatives. Some of the identified genes have important implications. PMID:24616063
Hazelden's model of treatment and its outcome.
Stinchfield, R; Owen, P
1998-01-01
Although the Minnesota Model of treatment for alcohol and drug addiction is a common treatment approach, there are few published reports of its effectiveness. This study describes the Minnesota Model treatment approach as practiced at Hazelden, a private residential alcohol and drug abuse treatment center located in Center City, Minnesota (a founding program of the Minnesota Model) and presents recent outcome results from this program. This study includes 1,083 male and female clients admitted to Hazelden for treatment of a psychoactive substance-use disorder between 1989 and 1991. The outcome study is a one group pretest/posttest design. Data collection occurred at admission to treatment and at 1-month, 6-month, and 12-month posttreatment. At 1-year follow-up, 53% reported that they remained abstinent during the year following treatment and an additional 35% had reduced their alcohol and drug use. These results are similar to those reported by other private treatment programs. The Minnesota Model has consistently yielded satisfactory outcome results, and future research needs to focus on the therapeutic process of this common treatment approach.
ERIC Educational Resources Information Center
Sgammato, Adrienne N.
2009-01-01
This study examined the applicability of a relatively new unidimensional, unfolding item response theory (IRT) model called the generalized graded unfolding model (GGUM; Roberts, Donoghue, & Laughlin, 2000). A total of four scaling methods were applied. Two commonly used cumulative IRT models for polytomous data, the Partial Credit Model and…
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Applying Hierarchical Model Calibration to Automatically Generated Items.
ERIC Educational Resources Information Center
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.
This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…
Teachers' Conceptions of Mathematical Modeling
ERIC Educational Resources Information Center
Gould, Heather
2013-01-01
The release of the "Common Core State Standards for Mathematics" in 2010 resulted in a new focus on mathematical modeling in United States curricula. Mathematical modeling represents a way of doing and understanding mathematics new to most teachers. The purpose of this study was to determine the conceptions and misconceptions held by…
TinkerPlots™ Model Construction Approaches for Comparing Two Groups: Student Perspectives
ERIC Educational Resources Information Center
Noll, Jennifer; Kirin, Dana
2017-01-01
Teaching introductory statistics using curricula focused on modeling and simulation is becoming increasingly common in introductory statistics courses and touted as a more beneficial approach for fostering students' statistical thinking. Yet, surprisingly little research has been conducted to study the impact of modeling and simulation curricula…
Human attribute concepts: relative ubiquity across twelve mutually isolated languages.
Saucier, Gerard; Thalmayer, Amber Gayle; Bel-Bahar, Tarik S
2014-07-01
It has been unclear which human-attribute concepts are most universal across languages. To identify common-denominator concepts, we used dictionaries for 12 mutually isolated languages (Maasai, Supyire Senoufo, Khoekhoe, Afar, Mara Chin, Hmong, Wik-Mungkan, Enga, Fijian, Inuktitut, Hopi, and Kuna) representing diverse cultural characteristics and language families, from multiple continents. A composite list of every person-descriptive term in each lexicon was closely examined to determine the content (in terms of English translation) most ubiquitous across languages. Study 1 identified 28 single-word concepts used to describe persons in all 12 languages, as well as 41 additional terms found in 11 of 12. Results indicated that attribute concepts related to morality and competence appear to be as cross-culturally ubiquitous as basic-emotion concepts. Formulations of universal-attribute concepts from Osgood and Wierzbicka were well-supported. Study 2 compared lexically based personality models on the relative ubiquity of key associated terms, finding that 1- and 2-dimensional models draw on markedly more ubiquitous terms than do 5- or 6-factor models. We suggest that ubiquitous attributes reflect common cultural as well as common biological processes.
Strong regularities in world wide web surfing
Huberman; Pirolli; Pitkow; Lukose
1998-04-03
One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
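The threshold model described above can be sketched as a simple simulation: a page's perceived value follows a random walk, and the user keeps surfing while the value stays above a threshold. All parameter values here are illustrative, not fitted to the study's data:

```python
import random

def pages_visited(start_value=1.0, threshold=0.0, step_sd=0.5,
                  max_pages=10_000, rng=None):
    """Pages a user views before the page value first drops below threshold."""
    rng = rng or random.Random()
    value, pages = start_value, 1
    while value > threshold and pages < max_pages:
        value += rng.gauss(0.0, step_sd)  # value of the next page: a random walk
        pages += 1
    return pages

rng = random.Random(42)
depths = [pages_visited(rng=rng) for _ in range(5000)]
# the resulting first-passage times are heavy-tailed, consistent with the
# Zipf-like page-hit distributions the abstract mentions
mean_depth = sum(depths) / len(depths)
```

The number of pages visited is a first-passage time of the random walk, which is why most sessions are short while a few are very long.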
Brinjikji, Waleed; Ding, Yong H; Kallmes, David F; Kadirvel, Ramanathan
2016-01-01
Pre-clinical studies are important in helping practitioners and device developers improve techniques and tools for endovascular treatment of intracranial aneurysms. Thus, an understanding of the major animal models used in such studies is important. The New Zealand rabbit elastase-induced arterial aneurysm of the common carotid artery is one of the most commonly used models in testing the safety and efficacy of new endovascular devices. In this review we discuss (1) various techniques used to create the aneurysm, (2) complications of aneurysm creation, (3) the natural history of the arterial aneurysm, (4) histopathologic and hemodynamic features of the aneurysm, (5) devices tested using this model and (6) weaknesses of the model. We demonstrate how pre-clinical studies using this model are applied in the treatment of intracranial aneurysms in humans. The model has hemodynamic, morphological and histologic characteristics similar to those of human aneurysms and demonstrates similar healing responses to coiling. Despite these strengths, however, the model does have many weaknesses, including the fact that it does not emulate the complex inflammatory processes affecting growing and ruptured aneurysms. Furthermore, the model's extracranial location affects its ability to be used in preclinical safety assessments of new devices. We conclude that the rabbit elastase model has characteristics that make it a simple and effective model for preclinical studies on the endovascular treatment of intracranial aneurysms; however, further work is needed to develop aneurysm models that simulate the histopathologic and morphologic characteristics of growing and ruptured aneurysms. PMID:25904642
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Computer simulation modeling of recreation use: Current status, case studies, and future directions
David N. Cole
2005-01-01
This report compiles information about recent progress in the application of computer simulation modeling to planning and management of recreation use, particularly in parks and wilderness. Early modeling efforts are described in a chapter that provides an historical perspective. Another chapter provides an overview of modeling options, common data input requirements,...
Exploring the Full-Information Bifactor Model in Vertical Scaling with Construct Shift
ERIC Educational Resources Information Center
Li, Ying; Lissitz, Robert W.
2012-01-01
To address the lack of attention to construct shift in item response theory (IRT) vertical scaling, a multigroup, bifactor model was proposed to model the common dimension for all grades and the grade-specific dimensions. Bifactor model estimation accuracy was evaluated through a simulation study with manipulated factors of percentage of common…
USDA-ARS?s Scientific Manuscript database
Process based and distributed watershed models possess a large number of parameters that are not directly measured in the field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practice...
A Common Factors Approach to Supporting University Students Experiencing Psychological Distress
ERIC Educational Resources Information Center
Surette, Tanya E.; Shier, Micheal L.
2017-01-01
This study empirically assessed the applicability of the common factors model to students accessing university-based counseling (n = 102). Participants rated symptoms of depression, anxiety, and somatization at intake and discharge. Therapists kept detailed session notes on client factors and therapy process variables. Data were analyzed utilizing…
USDA-ARS?s Scientific Manuscript database
Soil surface roughness is commonly identified as one of the dominant factors governing runoff and interrill erosion. Yet, because of difficulties in acquiring the data, most studies pay little attention to soil surface roughness. This is particularly true for soil erosion models which commonly don't...
Patterns and Consequences of in ovo Exposure to Methylmercury in Common Loons, poster presentation
A critical component of a common loon/mercury (Hg) risk assessment model under development is the determination of the concentration of Hg in eggs that poses a population level risk. We conducted a field study to (1) characterize in ovo methylmercury (MeHg) exposure in Wisconsin...
Experimental anti-GBM nephritis as an analytical tool for studying spontaneous lupus nephritis.
Du, Yong; Fu, Yuyang; Mohan, Chandra
2008-01-01
Systemic lupus erythematosus (SLE) is an autoimmune disease that results in immune-mediated damage to multiple organs. Among these, kidney involvement is the most common and fatal. Spontaneous lupus nephritis (SLN) in mouse models has provided valuable insights into the underlying mechanisms of human lupus nephritis. However, SLN in mouse models takes 6-12 months to manifest; hence there is clearly the need for a mouse model that can be used to unveil the pathogenic processes that lead to immune nephritis over a shorter time frame. In this article more than 25 different molecules are reviewed that have been studied both in the anti-glomerular basement membrane (anti-GBM) model and in SLN and it was found that these molecules influence both diseases in a parallel fashion, suggesting that the two disease settings share common molecular mechanisms. Based on these observations, the authors believe the experimental anti-GBM disease model might be one of the best tools currently available for uncovering the downstream molecular mechanisms leading to SLN.
Wu, Sheng-Hui; Ozaki, Koken; Reed, Terry; Krasnow, Ruth E; Dai, Jun
2017-07-01
This study examined genetic and environmental influences on the lipid concentrations of 1028 male twins using the novel univariate non-normal structural equation modeling (nnSEM) ADCE and ACE models. In the best fitting nnSEM ADCE model that was also better than the nnSEM ACE model, additive genetic factors (A) explained 4%, dominant genetic factors (D) explained 17%, and common (C) and unique (E) environmental factors explained 47% and 33% of the total variance of high-density lipoprotein cholesterol (HDL-C). The percentage of variation explained for other lipids was 0% (A), 30% (D), 34% (C) and 37% (E) for low-density lipoprotein cholesterol (LDL-C); 30, 0, 31 and 39% for total cholesterol; and 0, 31, 12 and 57% for triglycerides. It was concluded that additive and dominant genetic factors simultaneously affected HDL-C concentrations but not other lipids. Common and unique environmental factors influenced concentrations of all lipids.
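Variance decompositions like the one reported above are fitted by structural equation modeling, but the underlying ACE logic can be illustrated with Falconer's formulas applied to monozygotic and dizygotic twin correlations. The correlations below are invented for illustration, not taken from the study:

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE variance components from MZ and DZ twin correlations.

    Falconer's formulas: A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ.
    MZ twins share all genes, DZ twins half, so a larger MZ correlation
    points to additive genetic variance (A); the remainder splits into
    common (C) and unique (E) environment.
    """
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# hypothetical twin correlations for a lipid trait
a2, c2, e2 = falconer_ace(r_mz=0.60, r_dz=0.45)
```

Full SEM fitting (and extensions like the non-normal ADCE model used in the study) relaxes the strong assumptions behind these closed-form estimates, such as the absence of dominance when C is estimated.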
Bayes Factor Covariance Testing in Item Response Models.
Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip
2017-12-01
Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
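The compound symmetry structure referred to above (a single common variance on the diagonal and a single common covariance everywhere else) is easy to write down directly. A minimal sketch, with illustrative values:

```python
import numpy as np

def compound_symmetry(k, variance, covariance):
    """k x k compound-symmetry covariance matrix: 'variance' on the
    diagonal and one common 'covariance' for every off-diagonal cell."""
    return covariance * np.ones((k, k)) + (variance - covariance) * np.eye(k)

# hypothetical structure for k = 4 items
sigma = compound_symmetry(k=4, variance=1.0, covariance=0.3)
# positive definite whenever -variance/(k-1) < covariance < variance
eigvals = np.linalg.eigvalsh(sigma)
```

Testing hypotheses about this structure, as the paper does with Bayes factors, amounts to testing the covariance components: for example, local independence corresponds to the common covariance component capturing all between-item dependence.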
The short-lived African turquoise killifish: an emerging experimental model for ageing
Kim, Yumi; Nam, Hong Gil; Valenzano, Dario Riccardo
2016-01-01
Human ageing is a fundamental biological process that leads to functional decay, increased risk for various diseases and, ultimately, death. Some of the basic biological mechanisms underlying human ageing are shared with other organisms; thus, animal models have been invaluable in providing key mechanistic and molecular insights into the common bases of biological ageing. In this Review, we briefly summarise the major applications of the most commonly used model organisms adopted in ageing research and highlight their relevance in understanding human ageing. We compare the strengths and limitations of different model organisms and discuss in detail an emerging ageing model, the short-lived African turquoise killifish. We review the recent progress made in using the turquoise killifish to study the biology of ageing and discuss potential future applications of this promising animal model. PMID:26839399
High pressure common rail injection system modeling and control.
Wang, H P; Zheng, D; Tian, Y
2016-07-01
In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the HPCRIS, which contains three sub-models (a high pressure pump sub-model, a common rail sub-model and an injector sub-model), is a relatively complicated nonlinear system. The mathematical model is validated in Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time delay estimation based iPI controller. Finally, to demonstrate the performance of the proposed controller, the ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
TIME AND CONCENTRATION DEPENDENT ACCUMULATION OF [3H]-DELTAMETHRIN IN XENOPUS LAEVIS OOCYTES.
Cell culture models are often used in mechanistic studies of toxicant action. However, one area of uncertainty is the extrapolation of dose from the in vitro model to the in vivo tissue. A common assumption of in vitro studies is that media concentration is a predictive marker of...
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background: Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results: Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion: This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
The adoption of an interdisciplinary instructional model in secondary education
NASA Astrophysics Data System (ADS)
Misicko, Martin W.
This study describes the experiences of a secondary high school involved in the adoption of an interdisciplinary curriculum. An interdisciplinary curriculum is defined here as the precalculus and physics curricula taught collaboratively throughout the school year. The students' academic performances were analyzed to gauge the success of the interdisciplinary model. The four-year study compared students taught precalculus in a traditional discipline-based classroom versus those facilitated in an interdisciplinary precalculus/physics model. It also documents the administrative changes necessary in restructuring a high school to an interdisciplinary team teaching model. All of the students in both pedagogical models received instruction from the same teacher, and were given identical assessment materials. Additionally, the curriculum guidelines and standards of learning were duplicated for both models. The primary difference between the two models was the application of mathematics in the physics curriculum. Prerequisite information was compared in both models to ensure that the students in the study had comparable qualifications prior to the facilitation of the precalculus curriculum. Common trends were analyzed and discussed from the students' performance data. The students enrolled in the interdisciplinary model appeared to outperform the discipline-based students on common evaluative assessments. The themes and outcomes described in this study provide discussion topics for further investigation by other school districts. Further study is necessary to determine whether scheduling changes may have influenced student performances, and to examine whether other content areas may experience similar results.
Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns
Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain
2015-01-01
Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
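The Novelty Search idea behind the Pattern Space Exploration method can be sketched in a few lines: instead of optimising a fitness value, keep the candidate whose model output pattern is farthest from patterns already seen. The toy model, the 1-D behaviour descriptor, and all parameter values below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of Novelty Search for exploring a model's pattern space.
import random

def model(x):
    """Toy model: maps a parameter to a 1-D behaviour descriptor in [0, 7)."""
    return (x * x) % 7.0  # arbitrary nonlinear toy behaviour

def novelty(b, archive, k=3):
    """Mean distance from behaviour b to its k nearest archived behaviours."""
    if not archive:
        return float("inf")
    dists = sorted(abs(b - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(0)
archive = []
for _ in range(200):
    # propose a batch of candidate parameters, keep the most novel behaviour
    batch = [random.uniform(0, 10) for _ in range(10)]
    best = max(batch, key=lambda x: novelty(model(x), archive))
    archive.append(model(best))

# The archive should spread across the model's reachable behaviour range,
# rather than clustering around any single "optimal" pattern.
print(min(archive), max(archive))
```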
Fishing for causes and cures of motor neuron disorders
Patten, Shunmoogum A.; Armstrong, Gary A. B.; Lissouba, Alexandra; Kabashi, Edor; Parker, J. Alex; Drapeau, Pierre
2014-01-01
Motor neuron disorders (MNDs) are a clinically heterogeneous group of neurological diseases characterized by progressive degeneration of motor neurons, and share some common pathological pathways. Despite remarkable advances in our understanding of these diseases, no curative treatment for MNDs exists. To better understand the pathogenesis of MNDs and to help develop new treatments, the establishment of animal models that can be studied efficiently and thoroughly is paramount. The zebrafish (Danio rerio) is increasingly becoming a valuable model for studying human diseases and in screening for potential therapeutics. In this Review, we highlight recent progress in using zebrafish to study the pathology of the most common MNDs: spinal muscular atrophy (SMA), amyotrophic lateral sclerosis (ALS) and hereditary spastic paraplegia (HSP). These studies indicate the power of zebrafish as a model to study the consequences of disease-related genes, because zebrafish homologues of human genes have conserved functions with respect to the aetiology of MNDs. Zebrafish also complement other animal models for the study of pathological mechanisms of MNDs and are particularly advantageous for the screening of compounds with therapeutic potential. We present an overview of their potential usefulness in MND drug discovery, which is just beginning and holds much promise for future therapeutic development. PMID:24973750
Conditioning of FRF measurements for use with frequency based substructuring
NASA Astrophysics Data System (ADS)
Nicgorski, Dana; Avitabile, Peter
2010-02-01
Frequency based substructuring approaches have been used for the generation of system models from component data. While numerical models show successful results, there have been many difficulties with actual measurements in many instances. Previous work has identified some of these typical problems using simulated data to incorporate specific measurement difficulties commonly observed along with approaches to overcome some of these difficulties. This paper presents the results using actual measured data for a laboratory structure subjected to both analytical and experimental studies. Various commonly used approaches are shown to illustrate some of the difficulties with measured data. A new approach to better condition the measured functions and purge commonly found data measurement contaminants is utilized to provide dramatically improved results. Several cases are explored to show the difficulties commonly observed as well as the improved conditioning of the measured data to obtain acceptable results.
Orbital Noise in the Earth System is a Common Cause of Climate and Greenhouse-Gas Fluctuation
NASA Technical Reports Server (NTRS)
Liu, H. S.; Kolenkiewicz, R.; Wade, C., Jr.; Smith, David E. (Technical Monitor)
2002-01-01
The mismatch between fossil isotopic data and climate models known as the cool tropics paradox implies that either the data are flawed or we understand very little about climate models of greenhouse warming. Here we question the validity of the climate models against the scientific background of orbital noise in the Earth system. Our study shows that the insolation pulsation induced by orbital noise is a common cause of climate change and of atmospheric concentrations of carbon dioxide and methane. In addition, we find that the intensity of the insolation pulses depends on latitude. Thus, orbital noise is the key to understanding the troubling paradox in climate models.
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract but representative test specimen system was created as the system to be modeled.
Fungal Biofilms: In vivo models for discovery of anti-biofilm drugs
Nett, Jeniel E.; Andes, David
2015-01-01
SUMMARY During infection, fungi frequently transition to a biofilm lifestyle, proliferating as communities of surface-adherent aggregates of cells. Phenotypically, cells in a biofilm are distinct from free-floating cells. Their high tolerance of antifungals and ability to withstand host defenses are two characteristics that foster resilience. Biofilm infections are particularly difficult to eradicate and most available antifungals have minimal activity. Therefore, the discovery of novel compounds and innovative strategies to treat fungal biofilms is of great interest. Although many fungi have been observed to form biofilms, the most well-studied is Candida albicans. Animal models have been developed to simulate common Candida device-associated infections, including those involving vascular catheters, dentures, urinary catheters, and subcutaneous implants. Models have also reproduced the most common mucosal biofilm infections, oropharyngeal and vaginal candidiasis. These models incorporate the anatomical site, immune components, and fluid dynamics of clinical niches and have been instrumental in the study of drug resistance and investigation of novel therapies. This chapter describes the significance of fungal biofilm infections, the animal models developed for biofilm study, and how these models have contributed to development of new strategies for eradication of fungal biofilm infections. PMID:26397003
Fungal Biofilms: In Vivo Models for Discovery of Anti-Biofilm Drugs.
Nett, Jeniel E; Andes, David R
2015-06-01
During infection, fungi frequently transition to a biofilm lifestyle, proliferating as communities of surface-adherent aggregates of cells. Phenotypically, cells in a biofilm are distinct from free-floating cells. Their high tolerance of antifungals and ability to withstand host defenses are two characteristics that foster resilience. Biofilm infections are particularly difficult to eradicate, and most available antifungals have minimal activity. Therefore, the discovery of novel compounds and innovative strategies to treat fungal biofilms is of great interest. Although many fungi have been observed to form biofilms, the most well-studied is Candida albicans. Animal models have been developed to simulate common Candida device-associated infections, including those involving vascular catheters, dentures, urinary catheters, and subcutaneous implants. Models have also reproduced the most common mucosal biofilm infections: oropharyngeal and vaginal candidiasis. These models incorporate the anatomical site, immune components, and fluid dynamics of clinical niches and have been instrumental in the study of drug resistance and investigation of novel therapies. This chapter describes the significance of fungal biofilm infections, the animal models developed for biofilm study, and how these models have contributed to the development of new strategies for the eradication of fungal biofilm infections.
Evaluating a common semi-mechanistic mathematical model of gene-regulatory networks
2015-01-01
Modeling and simulation of gene-regulatory networks (GRNs) has become an important aspect of modern systems biology investigations into mechanisms underlying gene regulation. A key challenge in this area is the automated inference (reverse-engineering) of dynamic, mechanistic GRN models from gene expression time-course data. Common mathematical formalisms for representing such models capture two aspects simultaneously within a single parameter: (1) whether or not a gene is regulated, and if so, the type of regulator (activator or repressor), and (2) the strength of influence of the regulator (if any) on the target or effector gene. To accommodate both roles, "generous" boundaries or limits for possible values of this parameter are commonly allowed in the reverse-engineering process. This approach has several important drawbacks. First, in the absence of good guidelines, there is no consensus on what limits are reasonable. Second, because the limits may vary greatly among different reverse-engineering experiments, the concrete values obtained for the models may differ considerably, and thus it is difficult to compare models. Third, if high values are chosen as limits, the search space of the model inference process becomes very large, adding unnecessary computational load to the already complex reverse-engineering process. In this study, we demonstrate that restricting the limits to the [−1, +1] interval is sufficient to represent the essential features of GRN systems and offers a reduction of the search space without loss of quality in the resulting models. To show this, we have carried out reverse-engineering studies on data generated from artificial GRN systems and from experimentally determined real GRN systems. PMID:26356485
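The formalism under evaluation, a single bounded parameter encoding both the regulator's type (sign) and strength (magnitude), can be sketched as a generic discrete-time sigmoidal GRN update. The dynamics below are an assumed illustration, not the paper's exact equations:

```python
# Hedged sketch: one interaction weight w[i][j] in [-1, +1] encodes whether
# gene j regulates gene i (w != 0), the type (sign: activator/repressor),
# and the strength (magnitude). The update rule is a generic sigmoidal
# response chosen for illustration.
import math

def step(x, w, gain=10.0):
    """One synchronous update of all gene expression levels in [0, 1]."""
    new = []
    for i in range(len(x)):
        net = sum(w[i][j] * x[j] for j in range(len(x)))
        new.append(1.0 / (1.0 + math.exp(-gain * net)))  # sigmoid response
    return new

# Two-gene toggle: each gene represses the other, weights within [-1, +1].
w = [[0.0, -1.0],
     [-1.0, 0.0]]
x = [0.9, 0.1]  # gene 0 starts high, gene 1 low
for _ in range(50):
    x = step(x, w)

print(x)  # gene 0 stays well above gene 1: toggle-switch behaviour
```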
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals, each with its benefits and limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, and the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMMs and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies, as well as for estimating pollution inputs. PMID:22235246
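The mass-balance equation at the core of every SIMM, which IsotopeR extends with hierarchical Bayesian machinery (measurement error, concentration dependence, individual-level inference), can be sketched deterministically for the simplest case of one isotope and two sources. The numbers are invented for illustration:

```python
# Hedged sketch of the linear mixing equation underlying SIMMs: a consumer's
# isotope value is a proportion-weighted mixture of its sources, each
# corrected by a trophic discrimination factor (tdf).

def diet_proportion(d_consumer, d_source_a, d_source_b, tdf=0.0):
    """Fraction of the diet from source A (one isotope, two sources).

    Solves d_consumer = f*(d_source_a + tdf) + (1 - f)*(d_source_b + tdf).
    """
    a, b = d_source_a + tdf, d_source_b + tdf
    return (d_consumer - b) / (a - b)

# Toy d13C values (per mil): plant source at -26, animal source at -20,
# consumer tissue at -21.5, with a +1 per-mil discrimination factor.
f_animal = diet_proportion(-21.5, d_source_a=-20.0, d_source_b=-26.0, tdf=1.0)
print(round(f_animal, 3))  # → 0.583
```

A Bayesian SIMM replaces this point solve with posterior distributions over the proportions, which is what makes underdetermined many-source problems tractable.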
2016-03-07
Planing hulls: plan for Grant N62909-15-1-2052 collaboration studies between NSWC and KRISO, Washington DC. Work to be carried out in MASK's facilities. Common interests on planing hulls were discussed, and plans made for collaboration studies between NSWC and KRISO on hull forms satisfying the requirements of the project. Model tests and analyses are required to assess the maneuvering and seakeeping performance.
A local structure model for network analysis
Casleton, Emily; Nordman, Daniel; Kaiser, Mark
2017-04-01
The statistical analysis of networks is a popular research topic with ever widening applications. Exponential random graph models (ERGMs), which specify a model through interpretable, global network features, are common for this purpose. In this study we introduce a new class of models for network analysis, called local structure graph models (LSGMs). In contrast to an ERGM, a LSGM specifies a network model through local features and allows for an interpretable and controllable local dependence structure. In particular, LSGMs are formulated by a set of full conditional distributions for each network edge, e.g., the probability of edge presence/absence, depending on neighborhoods of other edges. Additional model features are introduced to aid in specification and to help alleviate a common issue (occurring also with ERGMs) of model degeneracy. Finally, the proposed models are demonstrated on a network of tornadoes in Arkansas where a LSGM is shown to perform significantly better than a model without local dependence.
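The contrast drawn above, local full conditionals per edge rather than a global ERGM likelihood, can be sketched with a toy Gibbs sampler. The logistic form, the parameter names `eta` and `kappa`, and the toy graph are illustrative assumptions, not the authors' specification:

```python
# Hedged sketch of the LSGM idea: each edge gets a full conditional
# distribution given a local neighbourhood of other edges, and the network
# is simulated by Gibbs sampling those conditionals.
import itertools
import math
import random

random.seed(1)
nodes = range(6)
edges = list(itertools.combinations(nodes, 2))  # complete graph on 6 nodes
state = {e: 0 for e in edges}                   # start with no edges present

def neighbours(e):
    """Edges sharing an endpoint with e: the local dependence neighbourhood."""
    return [f for f in edges if f != e and set(f) & set(e)]

def gibbs_sweep(eta=-1.0, kappa=0.3):
    """One pass: resample each edge from its full conditional (logistic)."""
    for e in edges:
        s = sum(state[f] for f in neighbours(e))
        p = 1.0 / (1.0 + math.exp(-(eta + kappa * s)))  # P(edge on | rest)
        state[e] = 1 if random.random() < p else 0

for _ in range(100):
    gibbs_sweep()

density = sum(state.values()) / len(edges)
print(density)
```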
Noisy Spins and the Richardson-Gaudin Model
NASA Astrophysics Data System (ADS)
Rowlands, Daniel A.; Lamacraft, Austen
2018-03-01
We study a system of spins (qubits) coupled to a common noisy environment, each precessing at its own frequency. The correlated noise experienced by the spins implies long-lived correlations that relax only due to the differing frequencies. We use a mapping to a non-Hermitian integrable Richardson-Gaudin model to find the exact spectrum of the quantum master equation in the high-temperature limit and, hence, determine the decay rate. Our solution can be used to evaluate the effect of inhomogeneous splittings on a system of qubits coupled to a common bath.
Light deflection in gadolinium molybdate ferroelastic crystals
NASA Astrophysics Data System (ADS)
Staniorowski, Piotr; Bornarel, Jean
2000-02-01
The deflection of a He-Ne light beam by polydomain gadolinium molybdate (GMO) crystals has been studied with respect to incidence angle αi on the sample at room temperature. The A and B deflected beams do not cross each other during the αi variation, in contrast to results and calculations previously published. The model using the Fresnel equation confirms this result. The model presented is more accurate for numerical calculation than that using the Huygens construction.
Sallam, Karim; Li, Yingxin; Sager, Philip T; Houser, Steven R; Wu, Joseph C
2015-06-05
Sudden cardiac death is a common cause of death in patients with structural heart disease, genetic mutations, or acquired disorders affecting cardiac ion channels. A wide range of platforms exist to model and study disorders associated with sudden cardiac death. Human clinical studies are cumbersome and are limited by the extent of investigation that can be performed on human subjects. Animal models are limited by their degree of homology to human cardiac electrophysiology, including ion channel expression. The most commonly used cellular models are transfection models, which mimic the expression of a single ion channel, offering incomplete insight into changes of the action potential profile. Induced pluripotent stem cell-derived cardiomyocytes resemble, but are not identical to, adult human cardiomyocytes and provide a new platform for studying arrhythmic disorders leading to sudden cardiac death. A variety of platforms exist to phenotype cellular models, including conventional and automated patch clamp, multielectrode array, and computational modeling. Induced pluripotent stem cell-derived cardiomyocytes have been used to study long QT syndrome, catecholaminergic polymorphic ventricular tachycardia, hypertrophic cardiomyopathy, and other hereditary cardiac disorders. Although induced pluripotent stem cell-derived cardiomyocytes are distinct from adult cardiomyocytes, they provide a robust platform to advance the science and clinical care of sudden cardiac death. © 2015 American Heart Association, Inc.
Fonseca, Antonio F B DA; Scheffer, Jussara P; Coelho, Barbara P; Aiello, Graciane; Guimarães, Arthur G; Gama, Carlos R B; Vescovini, Victor; Cabral, Paula G A; Oliveira, André L A
2016-09-01
The most common causes of spinal cord injury are high-impact traumas, which often result in some degree of motor, sensory or autonomic impairment in the areas distal to the level of trauma. In terms of survival and complications due to sequelae, veterinary patients have an unfavorable prognosis. This justifies the study of experimental models of spinal cord injury production that could provide more support to research into potential treatments for spinal cord injuries in human and veterinary medicine. Preclinical studies of acute spinal cord injury require an easily reproducible experimental animal model. The most common experimental animal is the rat, with several techniques for producing a spinal cord injury. The objective of this study was to describe and evaluate the effectiveness of a technique for producing acute spinal cord injury through inflation of a Fogarty® catheter, using rabbits as the experimental model: a species with fewer conclusive publications that nevertheless meets the main requirements of a model, namely low cost, handling convenience, reproducibility and uniformity. The technique was adequate for performing preclinical studies in the neurotraumatology area, effectively leading to degeneration and necrosis of the nervous tissue and the emergence of acute paraplegia.
Modeling a Common-Source Amplifier Using a Ferroelectric Transistor
NASA Technical Reports Server (NTRS)
Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.
2010-01-01
This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.
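At a fixed operating point, any common-source model, FeFET-based or conventional MOS, must reduce to the familiar low-frequency small-signal gain. The sketch below uses that textbook relation with illustrative component values; the paper's empirical FeFET model additionally captures frequency, load-resistance, and gate-to-source-voltage dependence:

```python
# Hedged sketch: low-frequency small-signal voltage gain of a common-source
# stage, gain = -gm * (RD || ro). Component values are illustrative.

def cs_gain(gm, rd, ro=float("inf")):
    """Voltage gain of a common-source amplifier at low frequency."""
    r_par = rd if ro == float("inf") else (rd * ro) / (rd + ro)
    return -gm * r_par

print(cs_gain(gm=2e-3, rd=10e3))              # → -20.0 (ideal output resistance)
print(cs_gain(gm=2e-3, rd=10e3, ro=50e3))     # finite ro reduces the gain
```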
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, R.
Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. For that reason, this report deals with the analyses and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding, for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon and combinations of both types of rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties in the predictive models.
Representation of Reserves Through a Brownian Motion Model
NASA Astrophysics Data System (ADS)
Andrade, M.; Ferreira, M. A. M.; Filipe, J. A.
2012-11-01
The Brownian Motion is commonly used as an approximation for some Random Walks and also for the Classic Risk Process. As Random Walks and the Classic Risk Process are frequently used as stochastic models to represent reserves, it is natural to consider the Brownian Motion for the same purpose. In this study a model based on the Brownian Motion is presented to represent reserves, and the Brownian Motion is used to estimate the ruin probability of a fund. Models of this kind are often considered in the study of pension funds.
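The ruin-probability calculation the record describes can be sketched with a drifted Brownian reserve R(t) = u + mu*t + sigma*W(t). For mu > 0 the infinite-horizon ruin probability has the known closed form exp(-2*mu*u/sigma^2); the Monte Carlo below checks a finite horizon with illustrative parameters, so it should land at or below that bound:

```python
# Hedged sketch: estimate the ruin probability of a Brownian-motion reserve
# by simulating discretised paths. All parameter values are illustrative.
import math
import random

def ruin_probability_mc(u, mu, sigma, horizon=5.0, dt=0.01, n_paths=1000):
    """Fraction of simulated reserve paths that hit 0 before the horizon."""
    random.seed(42)
    steps = int(horizon / dt)
    ruined = 0
    for _ in range(n_paths):
        r = u
        for _ in range(steps):
            r += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
            if r <= 0.0:
                ruined += 1
                break
    return ruined / n_paths

analytic = math.exp(-2 * 1.0 * 2.0 / 1.0**2)  # infinite-horizon closed form
estimate = ruin_probability_mc(u=2.0, mu=1.0, sigma=1.0)
print(estimate, analytic)
```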
Latent spatial models and sampling design for landscape genetics
Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...
How Programming Can Make a Difference for Gifted Students--A Multi-Methods Model.
ERIC Educational Resources Information Center
Hall, Eleanor G.
A multimethod model of educating gifted and talented students was based on graduate students' study of 14 eminent self actualized individuals. Common environmental elements of these individuals were found in parent background, birth order, relationship with family, education, task commitment, personality traits, and interests. The model was…
Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test
ERIC Educational Resources Information Center
Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.
2006-01-01
A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…
A Study of Collaborative Software Development Using Groupware Tools
ERIC Educational Resources Information Center
Defranco-Tommarello, Joanna; Deek, Fadi P.
2005-01-01
The experimental results of a collaborative problem solving and program development model that takes into consideration the cognitive and social activities that occur during software development is presented in this paper. This collaborative model is based on the Dual Common Model that focuses on individual cognitive aspects of problem solving and…
A Model Plant for a Biology Curriculum: Spider Flower ("Cleome Hasslerana L.")
ERIC Educational Resources Information Center
Marquard, Robert D.; Steinback, Rebecca
2009-01-01
Major advances in fundamental science are developed using model systems. Classic examples of model systems include Mendel's work with the common garden pea ("Pisum sativum"), classic inheritance work by Morgan with the fruit fly ("Drosophila"), developmental studies with the nematode ("C. elegans"), and transposable elements in maize ("Zea…
Striking a Balance: Students' Tendencies to Oversimplify or Overcomplicate in Mathematical Modeling
ERIC Educational Resources Information Center
Gould, Heather; Wasserman, Nicholas H.
2014-01-01
With the adoption of the "Common Core State Standards for Mathematics" (CCSSM), the process of mathematical modeling has been given increased attention in mathematics education. This article reports on a study intended to inform the implementation of modeling in classroom contexts by examining students' interactions with the process of…
Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)
ERIC Educational Resources Information Center
Yavuz, Guler; Hambleton, Ronald K.
2017-01-01
Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
ERIC Educational Resources Information Center
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…
Animal models for studying female genital tract infection with Chlamydia trachomatis.
De Clercq, Evelien; Kalmar, Isabelle; Vanrompay, Daisy
2013-09-01
Chlamydia trachomatis is a Gram-negative obligate intracellular bacterial pathogen. It is the leading cause of bacterial sexually transmitted disease in the world, with more than 100 million new cases of genital tract infections with C. trachomatis occurring each year. Animal models are indispensable for the study of C. trachomatis infections and the development and evaluation of candidate vaccines. In this paper, the most commonly used animal models to study female genital tract infections with C. trachomatis will be reviewed, namely, the mouse, guinea pig, and nonhuman primate models. Additionally, we will focus on the more recently developed pig model.
A remark on the GNSS single difference model with common clock scheme for attitude determination
NASA Astrophysics Data System (ADS)
Chen, Wantong
2016-09-01
GNSS-based attitude determination is an important field of study, in which two schemes can be used to construct the actual system: the common clock scheme and the non-common clock scheme. Compared with the non-common clock scheme, the common clock scheme can strongly improve both reliability and accuracy. However, to gain these advantages, specific care must be taken in the implementation. These considerations are discussed, based on the technique for generating carrier phase measurements in GNSS receivers. A qualitative assessment of potential phase bias contributions is also carried out. Possible technical difficulties are pointed out for the development of single-board multi-antenna GNSS attitude systems with a common clock.
Recursive formulae and performance comparisons for first mode dynamics of periodic structures
NASA Astrophysics Data System (ADS)
Hobeck, Jared D.; Inman, Daniel J.
2017-05-01
Periodic structures are growing in popularity, especially in the energy harvesting and metastructures communities. Common types of these unique structures are referred to in the literature as zigzag, orthogonal spiral, fan-folded, and longitudinal zigzag structures. Many of the studies on periodic structures have two competing goals in common: (a) minimizing natural frequency, and (b) minimizing mass or volume. These goals suggest that no single design is best for all applications; therefore, there is a need for design optimization and comparison tools, which first require efficient, easy-to-implement models. The available structural dynamics models for these types of structures do provide exact analytical solutions; however, they are complex, requiring tedious implementation and providing more information than is necessary for practical applications, making them computationally inefficient. This paper presents experimentally validated recursive models that are able to very accurately and efficiently predict the dynamics of the four most common types of periodic structures. The proposed modeling technique employs a combination of static deflection formulae and Rayleigh's Quotient to estimate the first mode shape and natural frequency of periodic structures having any number of beams. Also included in this paper are the results of an extensive experimental validation study, which show excellent agreement between model prediction and measurement. Lastly, the proposed models are used to evaluate the performance of each type of structure. Results of this performance evaluation reveal key advantages and disadvantages associated with each type of structure.
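The proposed recipe, an assumed static deflection shape plugged into Rayleigh's Quotient, can be sketched on the simplest case of a uniform cantilever rather than the zigzag or spiral geometries of the paper; EI, m, and L are set to 1 for illustration:

```python
# Hedged sketch: first natural frequency from Rayleigh's Quotient with the
# static deflection curve (uniform load) as the assumed mode shape.
# omega^2 = int EI*(psi'')^2 dx / int m*psi^2 dx, evaluated by trapezoid rule.

def rayleigh_first_frequency(n=2000, EI=1.0, m=1.0, L=1.0):
    """Rayleigh estimate of the first cantilever frequency (rad/s)."""
    dx = L / n
    num = den = 0.0
    for i in range(n + 1):
        x = i * dx
        psi = x**4 - 4*L*x**3 + 6*L**2*x**2  # static deflection, uniform load
        d2psi = 12*(x - L)**2                # its second derivative
        wt = 0.5 if i in (0, n) else 1.0     # trapezoid end weights
        num += wt * EI * d2psi**2 * dx
        den += wt * m * psi**2 * dx
    return (num / den) ** 0.5

omega = rayleigh_first_frequency()
# The exact first cantilever eigenvalue is 3.5160*sqrt(EI/(m*L^4)); Rayleigh's
# Quotient with the static shape gives a close upper bound, about 3.53.
print(omega)
```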
Load Modeling and Calibration Techniques for Power System Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, Forrest S.; Mayhorn, Ebony T.; Elizondo, Marcelo A.
2011-09-23
Load modeling is the most uncertain area in power system simulations. Having an accurate load model is important for power system planning and operation. Here, a review of load modeling and calibration techniques is given. This paper is not comprehensive, but covers some of the techniques most commonly found in the literature. The advantages and disadvantages of each technique are outlined.
Morrison, Kathryn T; Shaddick, Gavin; Henderson, Sarah B; Buckeridge, David L
2016-08-15
This paper outlines a latent process model for forecasting multiple health outcomes arising from a common environmental exposure. Traditionally, surveillance models in environmental health do not link health outcome measures, such as morbidity or mortality counts, to measures of exposure, such as air pollution. Moreover, different measures of health outcomes are treated as independent, while it is known that they are correlated with one another over time as they arise in part from a common underlying exposure. We propose modelling an environmental exposure as a latent process, and we describe the implementation of such a model within a hierarchical Bayesian framework and its efficient computation using integrated nested Laplace approximations. Through a simulation study, we compare distinct univariate models for each health outcome with a bivariate approach. The bivariate model outperforms the univariate models in bias and coverage of parameter estimation, in forecast accuracy and in computational efficiency. The methods are illustrated with a case study using healthcare utilization and air pollution data from British Columbia, Canada, 2003-2011, where seasonal wildfires produce high levels of air pollution, significantly impacting population health. Copyright © 2016 John Wiley & Sons, Ltd.
Discovery of cancer common and specific driver gene sets
2017-01-01
Abstract Cancer is known as a disease mainly caused by gene alterations. Discovery of mutated driver pathways or gene sets is becoming an important step to understand molecular mechanisms of carcinogenesis. However, systematically investigating commonalities and specificities of driver gene sets among multiple cancer types is still a great challenge, but this investigation will undoubtedly benefit deciphering cancers and will be helpful for personalized therapy and precision medicine in cancer treatment. In this study, we propose two optimization models to de novo discover common driver gene sets among multiple cancer types (ComMDP) and specific driver gene sets of one certain or multiple cancer types to other cancers (SpeMDP), respectively. We first apply ComMDP and SpeMDP to simulated data to validate their efficiency. Then, we further apply these methods to 12 cancer types from The Cancer Genome Atlas (TCGA) and obtain several biologically meaningful driver pathways. As examples, we construct a common cancer pathway model for BRCA and OV, infer a complex driver pathway model for BRCA carcinogenesis based on common driver gene sets of BRCA with eight cancer types, and investigate specific driver pathways of the liquid cancer lymphoblastic acute myeloid leukemia (LAML) versus other solid cancer types. In these processes more candidate cancer genes are also found. PMID:28168295
Teachers' Understanding of and Concerns about Mathematical Modeling in the Common Core Standards
ERIC Educational Resources Information Center
Wolf, Nancy Butler
2013-01-01
Educational reform is most likely to be successful when teachers are knowledgeable about the intended reform, and when their concerns about the reform are understood and addressed. The Common Core State Standards (CCSS) is an effort to establish a set of nationwide expectations for students and teachers. This study examined teacher understanding…
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2018-01-01
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
The Effect of Attending Tutoring on Course Grades in Calculus I
ERIC Educational Resources Information Center
Rickard, Brian; Mills, Melissa
2018-01-01
Tutoring centres are common in universities in the United States, but there are few published studies that statistically examine the effects of tutoring on student success. This study utilizes multiple regression analysis to model the effect of tutoring attendance on final course grades in Calculus I. Our model predicted that every three visits to…
The Use of an Expectancy-Value Model in Studying a University's Image. AIR Forum 1982 Paper.
ERIC Educational Resources Information Center
Muffo, John A.; Whipple, Thomas W.
The use of an expectancy-value model, common to consumer marketing studies, in analyzing the market position of Cleveland State University was investigated. Attention was focused on showing how consumer attitude concepts and methodologies can be used in developing a strategic marketing plan. Six populations were identified as groups important to…
ERIC Educational Resources Information Center
Martin, Neilson C.; Levy, Florence; Pieka, Jan; Hay, David A.
2006-01-01
Attention Deficit Hyperactivity Disorder (ADHD) commonly co-occurs with Oppositional Defiant Disorder, Conduct Disorder and Reading Disability. Twin studies are an important approach to understanding and modelling potential causes of such comorbidity. Univariate and bivariate genetic models were fitted to maternal report data from 2040 families of…
NASA Astrophysics Data System (ADS)
Wang, Qinpeng; Yang, Jianguo; Xin, Dong; He, Yuhai; Yu, Yonghua
2018-05-01
In this paper, based on a characteristic analysis of the mechanical fuel injection system of a marine medium-speed diesel engine, a sectional high-pressure common rail fuel injection system is designed, with a rated rail pressure of 160 MPa. A simulation model of the system is built and the performance of the high-pressure common rail fuel injection system is analyzed; the results provide a technical foundation for the engineering development of the system.
A Study of a Mechanical Swimming Dolphin
NASA Astrophysics Data System (ADS)
Fang, Lilly; Maass, Daniel; Leftwich, Megan; Smits, Alexander
2007-11-01
A one-third scale dolphin model was constructed to investigate dolphin swimming hydrodynamics. Design and construction of the model were achieved using body coordinate data from the common dolphin (Delphinus delphis) to ensure geometric similarity. The front two-thirds of the model are rigid and stationary, while an external mechanism drives the rear third. This motion mimics the kinematics of dolphin swimming. Planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) are used to study the hydrodynamics of the wake and to develop a vortex skeleton model.
The short-lived African turquoise killifish: an emerging experimental model for ageing.
Kim, Yumi; Nam, Hong Gil; Valenzano, Dario Riccardo
2016-02-01
Human ageing is a fundamental biological process that leads to functional decay, increased risk for various diseases and, ultimately, death. Some of the basic biological mechanisms underlying human ageing are shared with other organisms; thus, animal models have been invaluable in providing key mechanistic and molecular insights into the common bases of biological ageing. In this Review, we briefly summarise the major applications of the most commonly used model organisms adopted in ageing research and highlight their relevance in understanding human ageing. We compare the strengths and limitations of different model organisms and discuss in detail an emerging ageing model, the short-lived African turquoise killifish. We review the recent progress made in using the turquoise killifish to study the biology of ageing and discuss potential future applications of this promising animal model. © 2016. Published by The Company of Biologists Ltd.
A test of inflated zeros for Poisson regression models.
He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan
2017-01-01
Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, when compared with the Vuong test, our approach is not only better at controlling the type I error rate, but also yields more power.
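A score-type test in the spirit the abstract describes — testing for inflated zeros without ever fitting a ZIP model — can be sketched as follows. The statistic below is the classical van den Broek-style comparison of observed and expected zero counts under a fitted Poisson, used here as a stand-in rather than the authors' exact procedure:

```python
import numpy as np
from scipy import stats

def zero_inflation_score(y):
    """Score-type statistic comparing observed zeros with those expected
    under a fitted Poisson; approximately chi-square(1) under H0."""
    y = np.asarray(y)
    n = len(y)
    lam = y.mean()                 # Poisson MLE of the rate
    p0 = np.exp(-lam)              # expected probability of a zero
    n0 = np.sum(y == 0)            # observed zero count
    stat = (n0 - n * p0) ** 2 / (n * p0 * (1 - p0) - n * lam * p0 ** 2)
    return stat, stats.chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
y_pois = rng.poisson(2.0, size=2000)                  # no zero inflation
y_zip = np.where(rng.random(2000) < 0.3, 0,           # 30% structural zeros
                 rng.poisson(2.0, size=2000))
_, p_pois = zero_inflation_score(y_pois)
_, p_zip = zero_inflation_score(y_zip)
```

On the zero-inflated sample the p-value is vanishingly small, while on the plain Poisson sample it is not, which is the behaviour any valid test of inflated zeros must show.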
Bradberry, Trent J; Metman, Leonard Verhagen; Contreras-Vidal, José L; van den Munckhof, Pepijn; Hosey, Lara A; Thompson, Jennifer L W; Schulz, Geralyn M; Lenz, Fredrick; Pahwa, Rajesh; Lyons, Kelly E; Braun, Allen R
2012-10-01
Dopamine agonist therapy and deep brain stimulation (DBS) of the subthalamic nucleus (STN) are antiparkinsonian treatments that act on a different part of the basal ganglia-thalamocortical motor circuitry, yet produce similar symptomatic improvements. The purpose of this study was to identify common and unique brain network features of these standard treatments. We analyzed images produced by H(2)(15)O positron emission tomography (PET) of patients with Parkinson's disease (PD) at rest. Nine patients were scanned before and after injection of apomorphine, and 11 patients were scanned while bilateral stimulators were off and while they were on. Both treatments produced common deactivations of the neocortical sensorimotor areas, including the supplementary motor area, precentral gyrus, and postcentral gyrus, and in subcortical structures, including the putamen and cerebellum. We observed concomitant activations of the superior parietal lobule and the midbrain in the region of the substantia nigra/STN. We also detected unique, treatment-specific changes with possible motor-related consequences in the basal ganglia, thalamus, neocortical sensorimotor cortex, and posterolateral cerebellum. Unique changes in nonmotor regions may reflect treatment-specific effects on verbal fluency and limbic functions. Many of the common effects of these treatments are consistent with the standard pathophysiologic model of PD. However, the common effects in the cerebellum are not readily explained by the model. Consistent deactivation of the cerebellum is interesting in light of recent reports of synaptic pathways directly connecting the cerebellum and basal ganglia, and may warrant further consideration for incorporation into the model. Published by Elsevier Inc.
Development of CCHE2D embankment break model
USDA-ARS?s Scientific Manuscript database
Earthen embankment breach often results in detrimental impact on downstream residents and infrastructure, especially those located in the flooding zone. Embankment failures are most commonly caused by overtopping or internal erosion. This study is to develop a practical numerical model for simulat...
Development of stable isotope mixing models in ecology - Dublin
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Historical development of stable isotope mixing models in ecology
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Perth
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Fremantle
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
Development of stable isotope mixing models in ecology - Sydney
More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
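In its simplest deterministic form, the mixing model described in the abstracts above reduces to a linear system: with n sources and n−1 isotope tracers plus the mass-balance constraint, the source proportions are solved exactly. A sketch with three hypothetical sources and two tracers — all signature values are invented for illustration:

```python
import numpy as np

# Hypothetical delta-13C and delta-15N signatures (per mil) of three sources
sources = np.array([[-28.0,  4.0],   # source 1
                    [-20.0, 12.0],   # source 2
                    [-12.0,  6.0]])  # source 3
mixture = np.array([-22.4, 6.8])     # consumer tissue signature

# Solve sources.T @ p = mixture subject to the constraint sum(p) = 1
A = np.vstack([sources.T, np.ones(3)])
b = np.append(mixture, 1.0)
p = np.linalg.solve(A, b)            # proportional contributions
# p ≈ [0.5, 0.3, 0.2]
```

Modern Bayesian mixing models generalize this exact-solution case to more sources than tracers, fractionation corrections, and uncertainty in the signatures.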
ERIC Educational Resources Information Center
Dolan, Conor V.; Colom, Roberto; Abad, Francisco J.; Wicherts, Jelte M.; Hessen, David J.; van de Sluis, Sophie
2006-01-01
We investigated sex effects and the effects of educational attainment (EA) on the covariance structure of the WAIS-III in a subsample of the Spanish standardization data. We fitted both first order common factor models and second order common factor models. The latter include general intelligence ("g") as a second order common factor.…
Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator
Gopal, Atul
2015-01-01
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether these correlations represent an interaction between otherwise independent eye and hand systems (interactive model); or whether these correlations arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has been previously used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture could best explain the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances, despite significant difference in the means of the eye and hand reaction time (RT) distributions, which we tested. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar, despite much larger hand RTs (∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:26084906
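The common command model's signature prediction — a large mean difference between eye and hand RTs with near-identical variances and tight trial-by-trial coupling — falls out of a single accumulator read out at two times. A minimal sketch; the drift, noise, threshold and delay values are arbitrary, chosen only to make the point:

```python
import numpy as np

rng = np.random.default_rng(42)

def common_command_rts(n_trials=300, drift=0.3, noise=1.0,
                       threshold=30.0, hand_delay=90.0):
    """One noisy accumulator per trial; the eye moves at threshold
    crossing and the hand follows after a fixed efferent delay."""
    eye_rt = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0
        while x < threshold:
            x += drift + noise * rng.standard_normal()
            t += 1
        eye_rt[i] = t
    hand_rt = eye_rt + hand_delay   # same command, later effector
    return eye_rt, hand_rt

eye_rt, hand_rt = common_command_rts()
```

Because all trial-to-trial variability lives in the shared accumulator, hand RT variance equals eye RT variance despite the ~90 ms mean offset; independent or interactive architectures would instead add effector-specific noise and inflate the hand variance.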
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
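The low mean problem is easy to reproduce: the method-of-moments estimate of the Poisson-gamma (NB2) dispersion, alpha_hat = (s² − ȳ)/ȳ², becomes erratic when the sample mean and sample size are both small. A sketch under invented parameter values, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def mom_dispersion(y):
    """Method-of-moments dispersion for the Poisson-gamma (NB2) model,
    derived from Var(Y) = mu + alpha * mu**2."""
    m, v = y.mean(), y.var(ddof=1)
    return (v - m) / m ** 2

def simulate_poisson_gamma(mu, alpha, size):
    """Poisson counts whose rates are Gamma distributed (NB2 mixture)."""
    lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=size)
    return rng.poisson(lam)

true_alpha = 0.5
# Moderate mean, large sample: estimates cluster near the true value
est_easy = [mom_dispersion(simulate_poisson_gamma(5.0, true_alpha, 2000))
            for _ in range(200)]
# Low mean, small sample (the LMP): estimates scatter wildly
est_lmp = [mom_dispersion(simulate_poisson_gamma(0.5, true_alpha, 50))
           for _ in range(200)]
```

The spread of the low-mean, small-sample estimates dwarfs that of the easy case, which is exactly why EB weights and confidence intervals built on such a dispersion estimate become unreliable.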
Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad
2017-01-01
The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its widespread adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details.
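For reference, the base quarter vehicle model is a two-degree-of-freedom system: sprung and unsprung masses coupled by a spring-damper, with the tire as a second spring. A minimal sketch with invented passenger-car parameters (the study's enhanced models add road, tire and suspension detail on top of this):

```python
import numpy as np

# Illustrative parameters, not taken from the paper
ms, mu = 300.0, 40.0        # sprung / unsprung mass [kg]
ks, cs = 20_000.0, 1_500.0  # suspension stiffness [N/m], damping [N s/m]
kt = 180_000.0              # tire stiffness [N/m]

def simulate(road, dt=1e-4):
    """Semi-implicit Euler integration of the 2-DOF quarter-car model."""
    zs = zu = vs = vu = 0.0          # body/wheel displacement and velocity
    body = np.empty(len(road))
    for i, zr in enumerate(road):
        f_susp = ks * (zu - zs) + cs * (vu - vs)   # suspension force on body
        f_tire = kt * (zr - zu)                    # tire force on wheel
        a_s, a_u = f_susp / ms, (f_tire - f_susp) / mu
        vs += a_s * dt; vu += a_u * dt             # update velocities first...
        zs += vs * dt;  zu += vu * dt              # ...then positions
        body[i] = zs
    return body

t = np.arange(0.0, 2.0, 1e-4)
road = np.where(t > 0.1, 0.02, 0.0)   # 2 cm step road input
body = simulate(road)                 # body settles at the step height
```

With these numbers the body is underdamped (damping ratio ≈ 0.3), so it overshoots the 2 cm step before settling — the kind of response whose fidelity the study probes by refining the road input and damping details.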
Wu, Anqi; Dong, Qiaoxiang; Gao, Hui; Shi, Yuanshuo; Chen, Yuanhong; Zhang, Fuchuang; Bandyopadhyay, Abhik; Wang, Danhan; Gorena, Karla M; Huang, Changjiang; Tardif, Suzette; Nathanielsz, Peter W; Sun, Lu-Zhe
2016-08-25
Age is the number one risk factor for breast cancer, yet the underlying mechanisms are unexplored. Age-associated mammary stem cell (MaSC) dysfunction is thought to play an important role in breast cancer carcinogenesis. Non-human primates with their close phylogenetic relationship to humans provide a powerful model system to study the effects of aging on human MaSC. In particular, the common marmoset monkey (Callithrix jacchus) with a relatively short life span is an ideal model for aging research. In the present study, we characterized for the first time the mammary epithelial stem/progenitor cells in the common marmoset. The MaSC-enriched cells formed four major types of morphologically distinct colonies when cultured on plates pre-seeded with irradiated NIH3T3 fibroblasts, and were also capable of forming mammospheres in suspension culture and subsequent formation of 3D organoids in Matrigel culture. Most importantly, these 3D organoids were found to contain stem/progenitor cells that can undergo self-renewal and multi-lineage differentiation both in vitro and in vivo. We also observed a significant decrease of luminal-restricted progenitors with age. Our findings demonstrate that common marmoset mammary stem/progenitor cells can be isolated and quantified with established in vitro and in vivo assays used for mouse and human studies.
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods — the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE — the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
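The common framework referred to above — fit a cheap surrogate to a few expensive runs, optimize the surrogate, evaluate the true model at the proposed point, refit — can be caricatured in a few lines. The quadratic surrogate and the 1-D toy objective below are stand-ins for the more capable surrogates and dynamic models actually used in ASMO-style methods:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly dynamic-model run (toy 1-D objective)."""
    return (x - 1.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(5)
X = list(rng.uniform(-2.0, 4.0, size=5))   # small initial design
Y = [expensive_model(x) for x in X]

for _ in range(15):                        # adaptive surrogate loop
    coeffs = np.polyfit(X, Y, deg=2)       # cheap quadratic surrogate
    grid = np.linspace(-2.0, 4.0, 601)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]
    X.append(x_new)                        # one new *true* evaluation
    Y.append(expensive_model(x_new))

best_x = X[int(np.argmin(Y))]              # total cost: 20 model runs
```

The point of the adaptive loop is the budget: the optimum is located with tens of true-model evaluations rather than the thousands a direct search would need.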
Nogueira, Katia T; Lopes, Claudia S; Faerstein, Eduardo
2007-07-01
This study investigates the association between history of asthma and common mental disorders among employees at a public university in the State of Rio de Janeiro, Brazil. Phase 1 cross-sectional data from a cohort study (the Pró-Saúde Study) were collected from 4,030 employees. Asthma was ascertained by self-reported medical diagnosis, and the occurrence of common mental disorders was based on the General Health Questionnaire (GHQ-12). Generalized linear models were used to calculate prevalence rates. Asthma prevalence was 11% (444), of whom 39.7% (176) presented common mental disorders. History of asthma was associated with higher income (p = 0.01) and female gender (p = 0.01). The analysis adjusted by gender, age, and per capita income revealed an association between asthma and common mental disorders (PR = 1.37; 95%CI: 1.22-1.55). Employees with less than 10 years since their asthma diagnosis showed a higher prevalence of common mental disorders (PR = 1.88; 95%CI: 1.32-2.70). These findings suggest that multidisciplinary teams should consider emotional aspects of asthma patients, especially those recently diagnosed.
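The prevalence-ratio arithmetic behind results such as PR = 1.37 (95%CI: 1.22-1.55) can be sketched directly. Note the unexposed counts below are hypothetical — the abstract reports only the asthma arm (176/444) — and the abstract's PRs are additionally adjusted for gender, age and income, which this crude calculation is not:

```python
import math

def prevalence_ratio(a, n1, b, n2, z=1.96):
    """Crude prevalence ratio (a/n1)/(b/n2) with a Wald 95% CI
    computed on the log scale."""
    pr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return pr, pr * math.exp(-z * se), pr * math.exp(z * se)

# 176/444 asthmatics with common mental disorders vs a hypothetical
# 900/3586 among employees without asthma
pr, lo, hi = prevalence_ratio(176, 444, 900, 3586)
```

In practice the adjusted ratios would come from a generalized linear model (e.g. Poisson regression with robust variance), as the study describes.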
1985-11-01
Integrated Information Support System (IISS). Volume V: Common Data Model Subsystem. Part 2: CDMP Test Case Report. General Electric Company, Production Resources Consulting, One River Road, Schenectady, NY.
The BTBR mouse model of idiopathic autism – current view on mechanisms
Meyza, K. Z.; Blanchard, D. C.
2017-01-01
Autism spectrum disorder (ASD) is the most commonly diagnosed neurodevelopmental disorder, with current estimates of more than 1% of affected children across nations. The patients form a highly heterogeneous group with only the behavioral phenotype in common. The genetic heterogeneity is reflected in a plethora of animal models representing multiple mutations found in families of affected children. Despite many years of scientific effort, for the majority of cases the genetic cause remains elusive. It is therefore crucial to include well-validated models of idiopathic autism in studies searching for potential therapeutic agents. One of these models is the BTBR T+Itpr3tf/J mouse. The current review summarizes data gathered in recent research on potential molecular mechanisms responsible for the autism-like behavioral phenotype of this strain. PMID:28167097
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.
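The additive-genetic (A), common-environment (C) and unique-environment (E) proportions that these biometric models estimate reduce, in the classical approximation, to Falconer's formulas on the MZ and DZ twin correlations; the likelihood-based models in the paper generalize this to non-normal count phenotypes. A sketch with invented correlations:

```python
def falconer_ace(r_mz, r_dz):
    """Classical Falconer decomposition from twin-pair correlations:
    A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Hypothetical twin correlations for a count phenotype
a2, c2, e2 = falconer_ace(r_mz=0.6, r_dz=0.4)  # → (0.4, 0.2, 0.4)
```

The simulation result in the abstract is precisely that these variance proportions are recovered poorly when an L-shaped count distribution is forced into Normal, Lognormal or Ordinal models, but well by the discrete models.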
Reliability of four models for clinical gait analysis.
Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P
2017-05-01
Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths, and is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.
Patient-Specific Computational Modeling of Human Phonation
NASA Astrophysics Data System (ADS)
Xue, Qian; Zheng, Xudong; University of Maine Team
2013-11-01
Phonation is a common biological process resulting from the complex nonlinear coupling between glottal aerodynamics and vocal fold vibrations. In the past, simplified symmetric straight geometric models were commonly employed for experimental and computational studies. The shapes of the laryngeal lumen and vocal folds are in fact highly three-dimensional, and the complex realistic geometry has profound impacts on both glottal flow and vocal fold vibrations. To elucidate the effect of geometric complexity on voice production and improve the fundamental understanding of human phonation, a full flow-structure interaction simulation is carried out on a patient-specific larynx model. To the best of our knowledge, this is the first patient-specific flow-structure interaction study of human phonation. The simulation results compare well with established human data. The effects of realistic geometry on glottal flow and vocal fold dynamics are investigated. Both glottal flow and vocal fold dynamics are found to differ markedly from those of the previous simplified models. This study is also an important step toward the development of computer models for voice disease diagnosis and surgical planning. The project described was supported by Grant Number R01DC007125 from the National Institute on Deafness and Other Communication Disorders (NIDCD).
Psychophysiological correlates of aggression and violence: an integrative review.
Patrick, Christopher J
2008-08-12
This paper reviews existing psychophysiological studies of aggression and violent behaviour including research employing autonomic, electrocortical and neuroimaging measures. Robust physiological correlates of persistent aggressive behaviour evident in this literature include low baseline heart rate, enhanced autonomic reactivity to stressful or aversive stimuli, enhanced EEG slow wave activity, reduced P300 brain potential response and indications from structural and functional neuroimaging studies of dysfunction in frontocortical and limbic brain regions that mediate emotional processing and regulation. The findings are interpreted within a conceptual framework that draws on two integrative models in the literature. The first is a recently developed hierarchical model of impulse control (externalizing) problems, in which various disinhibitory syndromes including aggressive and addictive behaviours of different kinds are seen as arising from common as well as distinctive aetiologic factors. This model represents an approach to organizing these various interrelated phenotypes and investigating their common and distinctive aetiologic substrates. The other is a neurobiological model that posits impairments in affective regulatory circuits in the brain as a key mechanism for impulsive aggressive behaviour. This model provides a perspective for integrating findings from studies employing different measures that have implicated varying brain structures and physiological systems in violent and aggressive behaviour.
Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.
Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno
2018-01-01
Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue to live in their preferred environments. Applying trained models from health data is challenging because personalized environments can differ significantly from the ones that provided the training data. This paper investigates the effects on activity recognition accuracy, using a single accelerometer, of personalized models compared to models built on the general population. In addition, we propose a collaborative filtering based approach that provides a balance between fully personalized models and generic models. The results show that accuracy could be improved to 95% with fully personalized models, and up to 91.6% with collaborative filtering based models, both significantly better than common models, which exhibit an accuracy of 85.1%. The collaborative filtering approach appears to provide highly personalized models with substantial accuracy, while overcoming the cold start problem that is common for fully personalized models.
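The gap between generic and personalized activity-recognition models can be illustrated with a toy sketch. The feature values, the per-user sensor offset, and the 1-nearest-neighbour classifier below are all illustrative assumptions, not the paper's actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accelerometer features (mean, std of magnitude) for two
# activities; each user has a personal offset, mimicking sensor placement
# and gait differences (all values illustrative).
def user_data(offset, n=40):
    walk = rng.normal([1.0 + offset, 0.5], 0.1, size=(n, 2))
    sit = rng.normal([0.2 + offset, 0.1], 0.1, size=(n, 2))
    X = np.vstack([walk, sit])
    y = np.array([1] * n + [0] * n)
    return X, y

def nn_accuracy(X_train, y_train, X_test, y_test):
    # 1-nearest-neighbour classifier as a simple stand-in classifier
    d = np.linalg.norm(X_test[:, None] - X_train[None, :], axis=2)
    pred = y_train[d.argmin(axis=1)]
    return (pred == y_test).mean()

Xg, yg = user_data(offset=0.0)   # "general population" training data
Xp, yp = user_data(offset=0.6)   # target user's own training data
Xt, yt = user_data(offset=0.6)   # held-out data from the same user

generic = nn_accuracy(Xg, yg, Xt, yt)
personal = nn_accuracy(Xp, yp, Xt, yt)
print(generic, personal)
```

Because the target user's sitting features resemble the general population's walking features, the generic model misclassifies heavily while the personalized model separates the classes cleanly, mirroring the accuracy gap the paper reports.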
NASA Astrophysics Data System (ADS)
Turner, Andrew
2014-05-01
In this study we examine monsoon onset characteristics in 20th century historical and AMIP integrations of the CMIP5 multi-model database. We use a period of 1979-2005, common to both the AMIP and historical integrations. While all available observed boundary conditions, including sea-surface temperature (SST), are prescribed in the AMIP integrations, the historical integrations feature ocean-atmosphere models that generate SSTs via air-sea coupled processes. The onset of Indian monsoon rainfall is shown to be systematically earlier in the AMIP integrations when comparing groups of models that provide both experiments, and in the multi-model ensemble means for each experiment in turn. We also test some common circulation indices of the monsoon onset including the horizontal shear in the lower troposphere and wind kinetic energy. Since AMIP integrations are forced by observed SSTs and CMIP5 models are known to have large cold SST biases in the northern Arabian Sea during winter and spring that limit their monsoon rainfall, we relate the delayed onset in the coupled historical integrations to cold Arabian Sea SST biases. This study provides further motivation for solving cold SST biases in the Arabian Sea in coupled models.
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
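As a minimal illustration of the linear mixing system such models solve, the sketch below recovers three source proportions from two isotope signatures plus the mass-balance constraint. All δ13C/δ15N values are invented for the example; with more sources than tracers plus one, the system becomes underdetermined, which is exactly the "too many sources" problem the abstract describes.

```python
import numpy as np

# Hypothetical delta-13C and delta-15N signatures for three sources
# and one mixture (values illustrative, not from any study).
sources = np.array([
    [-28.0, 2.0],   # source A
    [-20.0, 8.0],   # source B
    [-12.0, 5.0],   # source C
])
mixture = np.array([-20.0, 5.5])

# Linear mixing system: each isotope of the mixture is a weighted
# average of the sources, and the proportions must sum to one.
A = np.vstack([sources.T, np.ones(3)])   # 3 equations x 3 unknowns
b = np.append(mixture, 1.0)
fractions = np.linalg.solve(A, b)
print(fractions.round(3))   # → [0.278 0.444 0.278]
```

With two tracers, three sources is the largest exactly-determined case; a fourth source would require the alternative (e.g. combined-source or probabilistic) methods the title refers to.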
Evaluating the Predictive Value of Growth Prediction Models
ERIC Educational Resources Information Center
Murphy, Daniel L.; Gaertner, Matthew N.
2014-01-01
This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…
Self-noise models of five commercial strong-motion accelerometers
Ringler, Adam; Evans, John R.; Hutt, Charles R.
2015-01-01
To better characterize the noise of a number of commonly deployed accelerometers in a standardized way, we conducted noise measurements on five different models of strong‐motion accelerometers. Our study was limited to traditional accelerometers (Fig. 1) and is in no way exhaustive.
A scoping review of malaria forecasting: past work and future directions
Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L
2012-01-01
Objectives There is a growing body of literature on malaria forecasting methods and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies. 
Conclusions Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables, including transmission reducing interventions and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505
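The evaluation pattern the review describes, reserving a portion of the data and scoring it with measures such as mean-squared error and correlation, can be sketched as follows. The monthly case counts and the naive lag-1 persistence forecast are illustrative, not from any reviewed study.

```python
import numpy as np

# Hypothetical monthly malaria case counts (illustrative only).
observed = np.array(
    [120, 135, 160, 210, 260, 240, 190, 150, 130, 125, 140, 170], float)

# Reserve the final 4 months for evaluation, as many reviewed studies do.
test_idx = np.arange(8, 12)

# Naive lag-1 persistence forecast: next month equals the current month.
forecast = observed[test_idx - 1]
actual = observed[test_idx]

# Two accuracy measures commonly reported in the review:
mse = ((forecast - actual) ** 2).mean()
corr = np.corrcoef(forecast, actual)[0, 1]
print(round(mse, 1), round(corr, 2))
```

Reporting both measures on the same reserved window is one way to make models comparable across studies, which is the standardization the conclusions call for.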
Advantages and disadvantages of the animal models v. in vitro studies in iron metabolism: a review.
García, Y; Díaz-Castro, J
2013-10-01
Iron deficiency is the most common nutritional deficiency in the world. Special molecules have evolved for iron acquisition, transport and storage in soluble, nontoxic forms. Studies of the effects of iron on health focus on iron metabolism or on nutrition to prevent or treat iron deficiency and anemia, with two main aspects: (1) basic studies to elucidate iron metabolism and (2) nutritional studies to evaluate the efficacy of iron supplementation in preventing or treating iron deficiency and anemia. This paper reviews the advantages and disadvantages of the experimental models commonly used, as well as the methods most used in iron-related studies. In vitro studies have used different parts of the gut. In vivo studies are done in humans and in animals such as mice, rats, pigs and monkeys. Iron metabolism is a complex process that includes interactions at the systemic level. In vitro studies, despite physiological differences from humans, are useful to increase knowledge of this essential micronutrient. Isotopic techniques are the most recommended in iron-related studies, but their high cost and logistical requirements make them difficult to use. Depletion-repletion of hemoglobin is a method commonly used in animal studies. Three depletion-repletion techniques are most used: hemoglobin regeneration efficiency, relative biological value (RBV) and metabolic balance, which are official methods of the Association of Official Analytical Chemists. These techniques are well validated for iron-related studies and their results can be extrapolated to humans. Knowledge of the main advantages and disadvantages of the in vitro and animal models, and of the methods used in these studies, could increase researchers' confidence in the experimental results at lower cost.
NASA Astrophysics Data System (ADS)
Gu, En-Guo
In this paper, we formulate a dynamical model of a common fishery resource harvested by multiple agents with heterogeneous strategies: profit maximizers and gradient learners. Special attention is paid to the problem of heterogeneity of strategic behaviors. We mainly study the existence and local stability of non-negative equilibria for the model through mathematical analysis. We analyze local bifurcations and complex dynamics, such as coexisting attractors, by numerical simulation. We also study the local and global dynamics of exclusive gradient learners as a special case of the model. We find that when the adjustment speed is relatively high, an increasing ratio of gradient learners may destabilize the fixed point and drive the system into complicated dynamics such as a quasiperiodic or chaotic attractor. The results reveal that gradient learners with high adjustment speed may ultimately be more harmful to the sustainable use of the fish stock than profit maximizers.
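The destabilizing role of the adjustment speed can be sketched with a single gradient learner. The linear inverse demand, cost, and parameter values below are hypothetical, not the paper's specification; the point is only that the same gradient-adjustment map converges for slow adjustment and loses stability for fast adjustment.

```python
# Minimal sketch of one gradient learner adjusting harvest effort x.
# Profit: pi(x) = (a - b*x)*x - c*x, so d(pi)/dx = a - c - 2*b*x, and
# the gradient-adjustment map is x_{t+1} = x_t + k * x_t * d(pi)/dx.
a, b, c = 3.0, 0.5, 1.0              # hypothetical demand/cost parameters
x_star = (a - c) / (2 * b)           # interior equilibrium, here x* = 2

def orbit(k, x0=0.5, n=400):
    x = x0
    for _ in range(n):
        x = x + k * x * (a - c - 2 * b * x)
    return x

slow = orbit(k=0.4)   # low adjustment speed: converges to x*
fast = orbit(k=1.2)   # high adjustment speed: fixed point loses stability
print(slow, fast)
```

Local stability requires |1 - k(a - c)| < 1; for these parameters the fixed point is stable for k < 1, and at k = 1.2 the orbit settles on a period-2 cycle instead, a first step on the route to the complicated dynamics the paper reports.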
Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective
Mattout, Jérémie
2012-01-01
A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291
Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu
2018-05-01
Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) data for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines the advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with the sine-cosine reference signals again. An experimental study was implemented to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
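The building block shared by all three layers, CCA against sine-cosine references, can be sketched with plain numpy. The synthetic 10 Hz "EEG" signal, sampling rate, and candidate frequencies below are illustrative assumptions, and the MsetCCA reference-learning layers of MCM are omitted.

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

# Synthetic single-channel "EEG": 1 s at 250 Hz, oscillating at 10 Hz
# plus noise (illustrative only).
t = np.arange(250) / 250
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * rng.standard_normal((250, 1))

def reference(f):
    # Sine-cosine reference set at frequency f and its second harmonic
    return np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                            np.sin(2 * np.pi * 2 * f * t), np.cos(2 * np.pi * 2 * f * t)])

freqs = [8, 10, 12, 15]
scores = [canonical_corr(eeg, reference(f)) for f in freqs]
detected = freqs[int(np.argmax(scores))]
print(detected)   # → 10
```

Recognition picks the stimulus frequency whose reference set yields the largest canonical correlation; MCM replaces the fixed references in the middle layer with ones learned across trials via MsetCCA.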
Zebrafish: an animal model for research in veterinary medicine.
Nowik, N; Podlasz, P; Jakimiuk, A; Kasica, N; Sienkiewicz, W; Kaleczyc, J
2015-01-01
The zebrafish (Danio rerio) has become known as an excellent model organism for studies of vertebrate biology, vertebrate genetics, embryonic development, diseases and drug screening. Nevertheless, there is still a lack of detailed reports on the use of the zebrafish as a model in veterinary medicine. Compared with other vertebrates, zebrafish can lay hundreds of eggs at weekly intervals, and externally fertilized zebrafish embryos are accessible to observation and manipulation at all stages of their development, which makes it possible to simplify research techniques such as fate mapping, fluorescent tracer time-lapse lineage analysis and single-cell transplantation. Although zebrafish are only 2.5 cm long, they are easy to maintain. Intraperitoneal and intracerebroventricular injections, blood sampling and measurement of food intake can all be carried out in adult zebrafish. Danio rerio is a useful animal model for neurobiology, developmental biology, drug research, virology, microbiology and genetics. Many diseases for which the zebrafish is an excellent model organism affect aquatic animals. For some of them, such as those caused by Mycobacterium marinum or Pseudoloma neurophilia, Danio rerio is a natural host, but the zebrafish is also susceptible to most fish diseases, including ich, spring viraemia of carp, and infectious spleen and kidney necrosis. The zebrafish is commonly used in research on bacterial virulence. The zebrafish embryo allows rapid, non-invasive and real-time analysis of bacterial infections in a vertebrate host. Many common pathogens can be examined using the zebrafish model: Streptococcus iniae, Vibrio anguillarum or Listeria monocytogenes. Steps are also being taken to use the zebrafish in fungal research, especially work dealing with Candida albicans and Cryptococcus neoformans.
Although the zebrafish is commonly used as an animal model to study diseases caused by external agents, it is also useful in studies of metabolic disorders, including fatty liver disease and diabetes. The zebrafish is also a valuable model in behavioral studies of feeding, predator evasion, habituation and memory, and lateralized control of behavior. The aim of the present article is to familiarize the reader with the possibilities of Danio rerio as an experimental model for veterinary medicine.
ERIC Educational Resources Information Center
Xu, Ruifang
2010-01-01
Service-learning as a popular term refers to an educational model that combines academic study with social activism and civic service. However, some countries, such as China, use different terms. This article explores the differences and commonalities between service-learning in the USA and social practice in China in the following areas:…
ERIC Educational Resources Information Center
Myers, Mandy F.
2010-01-01
Graduate teaching assistants are common fixtures on college campuses, and their roles encompass a wide range of duties, including supervising labs, working alongside mentors, and teaching a variety of beginner courses to students. It is common practice in the field of composition and rhetoric, for example, to employ second year master's students…
Comparing estimates of climate change impacts from process-based and statistical crop models
NASA Astrophysics Data System (ADS)
Lobell, David B.; Asseng, Senthold
2017-01-01
The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. 
At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally requiring fewer resources to produce robust estimates, especially when applied to crops beyond the major grains.
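The statistical approach the comparison describes can be sketched in a few lines: regress historical log yields on growing-season temperature, then read off the sensitivity to a uniform +2 °C warming. The yield record and coefficients below are synthetic and illustrative, not drawn from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical historical record: 30 seasons of mean growing-season
# temperature (deg C) and log yield (illustrative numbers only).
temp = rng.uniform(18, 26, 30)
log_yield = 2.0 - 0.05 * temp + 0.02 * rng.standard_normal(30)

# Statistical crop model: fit log(yield) ~ temperature, then convert the
# slope into a percentage impact of a uniform +2 C warming.
slope, intercept = np.polyfit(temp, log_yield, 1)
pct_per_2C = (np.exp(2 * slope) - 1) * 100
print(round(pct_per_2C, 1))
```

A process-based model would instead simulate phenology, water and carbon dynamics under the warmer climate; the paper's finding is that, up to +2 °C, the two routes tend to give similar sensitivities.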
Commonly dysregulated genes in murine APL cells
Yuan, Wenlin; Payton, Jacqueline E.; Holt, Matthew S.; Link, Daniel C.; Watson, Mark A.; DiPersio, John F.; Ley, Timothy J.
2007-01-01
To identify genes that are commonly dysregulated in a murine model of acute promyelocytic leukemia (APL), we first defined gene expression patterns during normal murine myeloid development; serial gene expression profiling studies were performed with primary murine hematopoietic progenitors that were induced to undergo myeloid maturation in vitro with G-CSF. Many genes were reproducibly expressed in restricted developmental “windows,” suggesting a structured hierarchy of expression that is relevant for the induction of developmental fates and/or differentiated cell functions. We compared the normal myeloid developmental transcriptome with that of APL cells derived from mice expressing PML-RARα under control of the murine cathepsin G locus. While many promyelocyte-specific genes were highly expressed in all APL samples, 116 genes were reproducibly dysregulated in many independent APL samples, including Fos, Jun, Egr1, Tnf, and Vcam1. However, this set of commonly dysregulated genes was expressed normally in preleukemic, early myeloid cells from the same mouse model, suggesting that dysregulation occurs as a “downstream” event during disease progression. These studies suggest that the genetic events that lead to APL progression may converge on common pathways that are important for leukemia pathogenesis. PMID:17008535
Schofield, Thomas; Beaumont, Kelly; Widaman, Keith; Jochem, Rachel; Robins, Richard; Conger, Rand
2013-01-01
The current study tested elements of the theoretical model of Portes and Rumbaut (1996), which proposes that parent–child differences in English fluency in immigrant families affect various family processes that, in turn, relate to changes in academic success. The current study of 674 Mexican-origin families provided support for the model in that parent–child fluency in a common language was associated with several dimensions of the parent–child relationship, including communication, role reversal, and conflict. In turn, these family processes predicted child academic performance, school problems, and academic aspirations and expectations. The current findings extend the Portes and Rumbaut (1996) model, however, inasmuch as joint fluency in either English or Spanish was associated with better parent–child relationships. The findings have implications for educational and human service issues involving Mexican Americans and other immigrant groups. PMID:23244454
Two-echelon competitive integrated supply chain model with price and credit period dependent demand
NASA Astrophysics Data System (ADS)
Pal, Brojeswar; Sankar Sana, Shib; Chaudhuri, Kripasindhu
2016-04-01
This study considers a two-echelon competitive supply chain consisting of two rival retailers and one common supplier with a trade credit policy. The retailers hope to enhance their market demand by offering a credit period to customers, and the supplier likewise offers a credit period to the retailers. We assume that the market demand for one retailer's product depends not only on its own market price and the credit period it offers to customers, but also on the market price and credit period of the other retailer. The supplier supplies the product at a common wholesale price and offers the same credit period to both retailers. We study the model under a centralised (integrated) case and a decentralised (Vertical Nash) case and compare them numerically. Finally, we investigate the model with collected numerical data.
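A grid-search sketch of the centralised case: both retailers charge a common price p and offer credit period m, with the cross-price and cross-credit effects folded into net coefficients and diminishing returns to the credit period assumed. Every parameter value and the functional form are hypothetical, not the paper's model.

```python
import numpy as np

# Hypothetical symmetric demand faced by each retailer: net price
# sensitivity (own minus cross) and sqrt-shaped returns to credit period.
a, bp, d = 100.0, 1.2, 8.0   # base demand, net price effect, credit effect
c, ic = 10.0, 2.0            # unit cost and per-period cost of granting credit

def chain_profit(p, m):
    demand = max(a - bp * p + d * np.sqrt(m), 0.0)
    return 2 * demand * (p - c - ic * m)   # two identical retailers

# Centralised case: pick (p, m) maximizing total chain profit by grid search.
best_profit, best_p, best_m = max(
    (chain_profit(p, m), p, m)
    for p in np.linspace(10, 70, 601)
    for m in np.linspace(0, 10, 101)
)
print(round(best_profit), round(best_p, 1), round(best_m, 1))
```

With these parameters the optimum offers a strictly positive but interior credit period: credit stimulates demand, but its per-period cost eventually dominates, which is the trade-off the model studies.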
A re-evaluation of a case-control model with contaminated controls for resource selection studies
Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski
2013-01-01
A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...
Tan, Chuen Seng; Støer, Nathalie C; Chen, Ying; Andersson, Marielle; Ning, Yilin; Wee, Hwee-Lin; Khoo, Eric Yin Hao; Tai, E-Shyong; Kao, Shih Ling; Reilly, Marie
2017-01-01
The control of confounding is an area of extensive epidemiological research, especially in the field of causal inference for observational studies. Matched cohort and case-control study designs are commonly implemented to control for confounding effects without specifying the functional form of the relationship between the outcome and confounders. This paper extends the commonly used regression models in matched designs for binary and survival outcomes (i.e. conditional logistic and stratified Cox proportional hazards) to studies of continuous outcomes through a novel interpretation and application of logit-based regression models from the econometrics and marketing research literature. We compare the performance of the maximum likelihood estimators using simulated data and propose a heuristic argument for obtaining the residuals for model diagnostics. We illustrate our proposed approach with two real data applications. Our simulation studies demonstrate that our stratification approach is robust to model misspecification and that the distribution of the estimated residuals provides a useful diagnostic when the strata are of moderate size. In our applications to real data, we demonstrate that parity and menopausal status are associated with percent mammographic density, and that the mean level and variability of inpatient blood glucose readings vary between medical and surgical wards within a national tertiary hospital. Our work highlights how the same class of regression models, available in most statistical software, can be used to adjust for confounding in the study of binary, time-to-event and continuous outcomes.
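The intuition behind the stratified approach, removing stratum-level confounding without modelling its functional form, can be sketched for a continuous outcome by within-stratum centring, a fixed-effects analogue of the logit-based models the paper actually develops. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic matched design: 200 strata (e.g. wards), each with its own
# confounding intercept; the true exposure effect on the outcome is 2.0.
n_strata, per = 200, 5
stratum = np.repeat(np.arange(n_strata), per)
alpha = rng.normal(0, 5, n_strata)[stratum]   # stratum-level confounder
x = rng.normal(alpha / 5, 1)                  # exposure correlated with stratum
y = alpha + 2.0 * x + rng.normal(0, 1, n_strata * per)

# Naive pooled regression is biased by the unmodelled stratum effects:
naive = np.polyfit(x, y, 1)[0]

# Stratified analysis: centre x and y within each stratum, then regress.
def demean(v):
    means = np.bincount(stratum, weights=v) / per
    return v - means[stratum]

within = np.polyfit(demean(x), demean(y), 1)[0]
print(round(naive, 2), round(within, 2))
```

The within-stratum estimate recovers the true effect without ever specifying how the confounder enters, which is the property the matched-design regressions in the paper extend to continuous outcomes.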
Fishing for causes and cures of motor neuron disorders.
Patten, Shunmoogum A; Armstrong, Gary A B; Lissouba, Alexandra; Kabashi, Edor; Parker, J Alex; Drapeau, Pierre
2014-07-01
Motor neuron disorders (MNDs) are a clinically heterogeneous group of neurological diseases characterized by progressive degeneration of motor neurons, and share some common pathological pathways. Despite remarkable advances in our understanding of these diseases, no curative treatment for MNDs exists. To better understand the pathogenesis of MNDs and to help develop new treatments, the establishment of animal models that can be studied efficiently and thoroughly is paramount. The zebrafish (Danio rerio) is increasingly becoming a valuable model for studying human diseases and in screening for potential therapeutics. In this Review, we highlight recent progress in using zebrafish to study the pathology of the most common MNDs: spinal muscular atrophy (SMA), amyotrophic lateral sclerosis (ALS) and hereditary spastic paraplegia (HSP). These studies indicate the power of zebrafish as a model to study the consequences of disease-related genes, because zebrafish homologues of human genes have conserved functions with respect to the aetiology of MNDs. Zebrafish also complement other animal models for the study of pathological mechanisms of MNDs and are particularly advantageous for the screening of compounds with therapeutic potential. We present an overview of their potential usefulness in MND drug discovery, which is just beginning and holds much promise for future therapeutic development. © 2014. Published by The Company of Biologists Ltd.
Gu, Zhan; Qi, Xiuzhong; Zhai, Xiaofeng; Lang, Qingbo; Lu, Jianying; Ma, Changping; Liu, Long; Yue, Xiaoqiang
2015-01-01
Primary liver cancer (PLC) is one of the most common malignant tumors, with high incidence and high mortality. Traditional Chinese medicine (TCM) plays an active role in the treatment of PLC. As the most important part of the TCM system, syndrome differentiation based on the clinical manifestations gathered by the traditional four diagnostic methods has faced great challenges and questions owing to the lack of statistical validation. In this study, we provide evidence for TCM syndrome differentiation of PLC using latent structure model analysis of clinical data, thus providing a basis for establishing TCM syndrome criteria. We also obtain the common syndromes of PLC and their typical clinical manifestations.
Wrosch, Carsten; Dunne, Erin; Scheier, Michael F; Schulz, Richard
2006-06-01
This article addresses the role played by adaptive self-regulation in protecting older adults' psychological and physical health. A theoretical model is outlined illustrating how common age-related challenges (i.e., physical challenges and life regrets) can influence older adults' health. In addition, the proposed model suggests that older adults can avoid the adverse health effects of encountering these problems if they engage in adaptive self-regulation. Finally, this article reviews recent studies that examined the adaptive value of self-regulation processes for managing physical challenges and life regrets in the elderly. The findings from cross-sectional, longitudinal, and experimental studies document the importance of adaptive self-regulation for maintaining older adults' health.
Prenatal Alcohol Exposure in Rodents As a Promising Model for the Study of ADHD Molecular Basis
Rojas-Mayorquín, Argelia E.; Padilla-Velarde, Edgar; Ortuño-Sahagún, Daniel
2016-01-01
A physiological parallelism, or even a causal relationship, can be deduced from an analysis of the main characteristics of the “Alcohol Related Neurodevelopmental Disorders” (ARND) derived from prenatal alcohol exposure (PAE) and the behavioral presentation of Attention-deficit/hyperactivity disorder (ADHD). These two clinically distinct disease entities exhibit many common features. They affect shared neurological pathways and related neurotransmitter systems. We briefly review these parallelisms here, with their common and uncommon characteristics, and with an emphasis on the molecular mechanisms underlying the behavioral manifestations, which lead us to propose that PAE in rats can be considered a suitable model for the study of ADHD. PMID:28018163
Svenning, J.-C.; Engelbrecht, B.M.J.; Kinner, D.A.; Kursar, T.A.; Stallard, R.F.; Wright, S.J.
2006-01-01
We used regression models and information-theoretic model selection to assess the relative importance of environment, local dispersal and historical contingency as controls of the distributions of 26 common plant species in tropical forest on Barro Colorado Island (BCI), Panama. We censused eighty-eight 0.09-ha plots scattered across the landscape. Environmental control, local dispersal and historical contingency were represented by environmental variables (soil moisture, slope, soil type, distance to shore, old-forest presence), a spatial autoregressive parameter (ρ), and four spatial trend variables, respectively. We built regression models, representing all combinations of the three hypotheses, for each species. The probability that the best model included the environmental variables, spatial trend variables and ρ averaged 33%, 64% and 50% across the study species, respectively. The environmental variables, spatial trend variables, ρ, and a simple intercept model received the strongest support for 4, 15, 5 and 2 species, respectively. Comparing the model results to information on species traits showed that species with strong spatial trends produced few, heavy diaspores, while species with strong soil moisture relationships were particularly drought-sensitive. In conclusion, history and local dispersal appeared to be the dominant controls of the distributions of common plant species on BCI. Copyright © 2006 Cambridge University Press.
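An information-theoretic comparison of candidate regression models, the core of the study's design, can be sketched as below. The synthetic plot data, the Gaussian AIC formula, and the two-predictor candidate set are illustrative stand-ins for the study's actual species models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data for 88 plots: abundance driven by soil moisture but not
# slope (values illustrative, not the BCI census data).
n = 88
moisture = rng.uniform(0, 1, n)
slope = rng.uniform(0, 30, n)
abundance = 1.0 + 3.0 * moisture + rng.normal(0, 0.5, n)

def aic(X, y):
    # OLS fit; AIC = n*log(RSS/n) + 2k (Gaussian likelihood, up to a constant)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

candidates = {
    "moisture": aic(moisture[:, None], abundance),
    "slope": aic(slope[:, None], abundance),
    "moisture+slope": aic(np.column_stack([moisture, slope]), abundance),
}
best = min(candidates, key=candidates.get)
print(best, {k: round(v, 1) for k, v in candidates.items()})
```

Model weights derived from these AIC values (rather than a single "best" model) underlie the reported per-hypothesis support probabilities.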
Student Conceptions of Ionic Bonding: Patterns of thinking across three European contexts
NASA Astrophysics Data System (ADS)
Taber, Keith S.; Tsaparlis, Georgios; Nakiboğlu, Canan
2012-12-01
Previous research has reported that students commonly develop alternative conceptions in the core topic of chemical bonding. Research in England has reported that students there commonly demonstrate an alternative 'molecular' conceptual framework for thinking about ionic bonding: in terms of the formation, through electron transfer, of molecule-like ion pairs that are internally bonded but not bonded to other ions. The present study reports the use of translated versions of a diagnostic instrument to elicit conceptions of bonding in NaCl (commonly used as the teaching example of an ionic compound) from two samples of students beginning university courses in Greece and Turkey. The study reports that students in these two contexts displayed high levels of support for statements based upon the alternative conceptual framework identified in the English context. Students commonly develop similar alternative conceptions of ionic bonding in these three different educational contexts. The study also found some quite large differences in the specific response patterns across the three contexts, some of which could reflect specific features of the different curriculum contexts. The study reinforces the cross-national nature of the challenge of effectively teaching the abstract models of chemistry at the submicroscopic level. It also provides intriguing suggestions that close study of the interactions between specific curriculum contexts and specific patterns in students' thinking offers much potential for identifying particular aspects of subject pedagogy that either support or impede the learning of accepted scientific models.
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
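Scoring new respondents directly with published common-metric item parameters, instead of reestimating and linking a model, can be sketched as follows. The generalized partial credit model (GPCM) parameters, the responses, and the grid-based EAP estimator below are illustrative and are not the actual PHQ-9 calibration.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for one item.
    a: discrimination; b: step difficulties (categories 0..len(b))."""
    steps = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(b)))])
    e = np.exp(steps - steps.max())
    return e / e.sum()

# Hypothetical item parameters standing in for a published common metric
# (NOT the actual PHQ-9 calibration).
items = [
    (1.2, [-0.5, 0.4, 1.3]),   # (a, step difficulties), 4 response categories
    (1.0, [-0.2, 0.6, 1.5]),
    (1.5, [-0.8, 0.2, 1.1]),
]
responses = [2, 1, 2]          # observed category per item

# EAP estimate of latent depression severity with a standard-normal prior,
# evaluated on a theta grid.
grid = np.linspace(-4, 4, 161)
post = np.exp(-grid ** 2 / 2)
for (a, b), x in zip(items, responses):
    post = post * np.array([gpcm_probs(t, a, b)[x] for t in grid])
theta_eap = (grid * post).sum() / post.sum()
print(round(theta_eap, 2))
```

Because the item parameters are fixed, two instruments calibrated on the same metric yield directly comparable theta scores, which is the practical payoff the paper argues for.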
Castro-Guerrero, Norma A; Isidra-Arellano, Mariel C; Mendoza-Cozatl, David G; Valdés-López, Oswaldo
2016-01-01
Common bean (Phaseolus vulgaris) was domesticated ∼8000 years ago in the Americas and today is a staple food worldwide. Besides caloric intake, common bean is also an important source of protein and micronutrients, and it is widely appreciated in developing countries for its affordability (compared to animal protein) and its long storage life. As a legume, common bean also has the economic and environmental benefit of associating with nitrogen-fixing bacteria, thus reducing the use of synthetic fertilizers, which is key for sustainable agriculture. Despite significant advances in the plant nutrition field, the mechanisms underlying the adaptation of common bean to low nutrient input remain largely unknown. The recent release of the common bean genome offers, for the first time, the possibility of applying techniques and approaches that have been exclusive to model plants to study the adaptive responses of common bean to challenging environments. In this review, we discuss the hallmarks of common bean domestication and subsequent distribution around the globe. We also discuss recent advances in phosphate, iron, and zinc homeostasis, as these nutrients often limit plant growth, development, and yield. In addition, iron and zinc are major targets of crop biofortification to improve human nutrition. Developing common bean varieties able to thrive under nutrient-limiting conditions will have a major impact on human nutrition, particularly in countries where dry beans are the main source of carbohydrates, protein, and minerals.
Supervisory Behaviors of Cooperating Agricultural Education Teachers
ERIC Educational Resources Information Center
Thobega, Moreetsi; Miller, Greg
2007-01-01
The purpose of this study was to determine the extent to which cooperating agricultural education teachers used selected supervision models. The relationships between maturity characteristics of the cooperating teachers and their choices of a supervision model were also examined. Results showed that cooperating teachers commonly used clinical,…
Information and complexity measures for hydrologic model evaluation
USDA-ARS?s Scientific Manuscript database
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.; Davis, John
2016-03-01
In this work, we apply CFD to model airflow and particulate transport. This modeling is then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.
Improved heat transfer modeling of the eye for electromagnetic wave exposures.
Hirata, Akimasa
2007-05-01
This study proposed an improved heat transfer model of the eye for exposure to electromagnetic (EM) waves. Particular attention was paid to the difference from the simplified heat transfer model commonly used in this field. From our computational results, the temperature elevation in the eye calculated with the simplified heat transfer model was largely influenced by the EM absorption outside the eyeball, but not when we used our improved model.
Heterogeneity in perinatal depression: how far have we come? A systematic review.
Santos, Hudson; Tan, Xianming; Salomon, Rebecca
2017-02-01
Despite perinatal depression (PND) being a common mental disorder affecting pregnant women and new mothers, limited attention has been paid to the heterogeneous nature of this disorder. We examined heterogeneity in PND symptom profiles and symptom trajectories. Literature searches revealed 247 studies, 23 of which were included in the final review. The most common statistical approaches used to explore symptom and trajectory heterogeneity were latent class models and growth mixture models. All but one study examined PND symptom trajectories and provided collective evidence of at least three heterogeneous patterns: low, medium, or chronic-high symptom levels. Social and psychological risk factors were the most common group of predictors related to a higher burden (higher summed score) of depressive symptoms. These studies were consistent in reporting poorer health outcomes for children of mothers assigned to high-burden symptom trajectories. Only one study explored heterogeneity in symptom profile, and it was the only one to describe the specific constellations of depressive symptoms related to the identified PND patterns. Therefore, there is limited evidence on the specific symptoms and symptom configurations that make up PND heterogeneity. We suggest directions for future research to further clarify PND heterogeneity and its related mechanisms.
Fezeu, Léopold K; Batty, G David; Batty, David G; Gale, Catharine R; Kivimaki, Mika; Hercberg, Serge; Czernichow, Sebastien
2015-01-01
The direction of the association between mental health and adiposity is poorly understood. Our objective was to empirically examine this link in a UK study. This is a prospective cohort study of 3388 men and women aged ≥ 18 years at study induction who participated in both the UK Health and Lifestyle Survey at baseline (HALS-1, 1984/1985) and the re-survey (HALS-2, 1991/1992). At both survey examinations, body mass index, waist circumference and self-reported common mental disorder (the 30-item General Health Questionnaire, GHQ) were measured. Logistic regression models were used to compute odds ratios (OR) and accompanying 95% confidence intervals (CI) for the associations between (1) baseline common mental disorder (GHQ score > 4) and subsequent general and abdominal obesity and (2) baseline general and abdominal obesity and common mental disorder at re-survey. After controlling for a range of covariates, participants with common mental disorder at baseline experienced greater odds of subsequently becoming overweight (women, OR: 1.30, 1.03-1.64; men, 1.05, 0.81-1.38) and obese (women, 1.26, 0.82-1.94; men, OR: 2.10, 1.23-3.55) than those who were free of common mental disorder. Similarly, baseline common mental disorder was also related to a greater risk of developing moderate (1.57, 1.21-2.04) and severe (1.48, 1.09-2.01) abdominal obesity (women only). Baseline general or abdominal obesity was not associated with the risk of future common mental disorder. These findings suggest that the direction of association runs from common mental disorder to increased future risk of adiposity rather than the converse.
Bastos, João Luiz; Barros, Aluisio J D; Celeste, Roger Keller; Paradies, Yin; Faerstein, Eduardo
2014-01-01
Although research on discrimination and health has progressed significantly, it has tended to focus on racial discrimination and US populations. This study explored different types of discrimination, their interactions and associations with common mental disorders among Brazilian university students, in Rio de Janeiro in 2010. Associations between discrimination and common mental disorders were examined using multiple logistic regression models, adjusted for confounders. Interactions between discrimination and socio-demographics were tested. Discrimination attributed to age, class and skin color/race were the most frequently reported. In a fully adjusted model, discrimination attributed to skin color/race and class were both independently associated with increased odds of common mental disorders. The simultaneous reporting of skin color/race, class and age discrimination was associated with the highest odds ratio. No significant interactions were found. Skin color/race and class discrimination were important, but their simultaneous reporting, in conjunction with age discrimination, was associated with the highest occurrence of common mental disorders.
Patounakis, George; Hill, Micah J
2018-06-01
The purpose of the current review is to describe the common pitfalls in design and statistical analysis of reproductive medicine studies. It serves to guide both authors and reviewers toward reducing the incidence of spurious statistical results and erroneous conclusions. The large amount of data gathered in IVF cycles leads to problems with multiplicity, multicollinearity, and overfitting of regression models. Furthermore, the use of the word 'trend' to describe nonsignificant results has increased in recent years. Finally, methods to accurately account for female age in infertility research models are becoming more common and necessary. The pitfalls of study design and analysis reviewed provide a framework for authors and reviewers to approach clinical research in the field of reproductive medicine. By providing a more rigorous approach to study design and analysis, the literature in reproductive medicine will have more reliable conclusions that can stand the test of time.
Pauci ex tanto numero: reduce redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-08-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
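As a toy illustration of the redundancy problem described above (a sketch of the general idea, not one of the member-selection methods the authors evaluate), one can greedily retain only ensemble members whose output series are weakly correlated with those already kept:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_members(members, max_corr=0.9):
    """Greedy selection: keep a member only if its correlation with
    every already-selected member stays below max_corr."""
    selected = []
    for name, series in members.items():
        if all(abs(pearson(series, members[s])) < max_corr for s in selected):
            selected.append(name)
    return selected

# Toy ensemble: m2 is nearly a copy of m1 (redundant), m3 is independent
members = {
    "m1": [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0],
    "m2": [1.1, 2.1, 3.0, 4.2, 5.1, 3.9, 3.1, 2.0],
    "m3": [2.0, 1.0, 4.0, 1.5, 3.0, 5.0, 0.5, 4.0],
}
print(select_members(members))  # → ['m1', 'm3'], m2 dropped as redundant
```

As the abstract cautions, low mutual correlation alone does not guarantee a skilful ensemble; skill and independence would still have to be assessed separately.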
Pauci ex tanto numero: reducing redundancy in multi-model ensembles
NASA Astrophysics Data System (ADS)
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-02-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models will produce a biased ensemble; it is therefore essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
Common modeling system for digital simulation
NASA Technical Reports Server (NTRS)
Painter, Rick
1994-01-01
The Joint Modeling and Simulation System is a tri-service investigation into a common modeling framework for the development of digital models. The basis for the success of this framework is an X-window-based, open-systems architecture, object-based/oriented methodology, standard interface approach to digital model construction, configuration, execution, and post-processing. For years Department of Defense (DOD) agencies have produced various weapon systems/technologies and typically digital representations of those systems/technologies. These digital representations (models) have also been developed for other reasons such as studies and analysis, Cost and Operational Effectiveness Analysis (COEA) tradeoffs, etc. Unfortunately, there have been no Modeling and Simulation (M&S) standards, guidelines, or efforts towards commonality in DOD M&S. The typical scenario is that an organization hires a contractor to build hardware and, in doing so, a digital model may be constructed. Until recently, this model was not even obtained by the organization. Even if it was procured, it was on a unique platform, in a unique language, with unique interfaces, with the result being unique maintenance requirements. Additionally, the constructors of the model expended more effort in writing the 'infrastructure' of the model/simulation (e.g. user interface, database/database management system, data journaling/archiving, graphical presentations, environment characteristics, other components in the simulation, etc.) than in producing the model of the desired system. Other side effects include: duplication of effort; varying assumptions; lack of credibility/validation; and decentralization in policy and execution. J-MASS provides the infrastructure, standards, toolset, and architecture to permit M&S developers and analysts to concentrate on their area of interest.
Navarro, Albert; Casanovas, Georgina; Alvarado, Sergio; Moriña, David
Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well common-baseline hazard models perform in estimating the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in the presence of event dependence and when the history of prior episodes is unknown or is not taken into account. Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, mean confidence-interval length, and compliance with the assumption of proportional hazards. Results indicate that the bias worsens as event dependence increases, leading to a considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, effects to be estimated, and sample sizes. Common-baseline hazard models cannot be recommended when we analyse recurrent events in the presence of event dependence. It is important to have access to each subject's history of prior episodes, as this permits better estimation of the effects of the exposures. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
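The overestimation mechanism reported above can be reproduced with a small simulation. This is an illustrative sketch with made-up parameters, not the authors' simulation design: event dependence is induced by letting a subject's hazard grow with each prior episode, and a crude rate ratio that pools all episodes (a common-baseline analogue) is compared with one computed within the first-episode stratum only.

```python
import random

def simulate_arm(n, rate0, hr, dep, follow_up, rng):
    """Simulate one exposure arm. The hazard for a subject's (k+1)-th
    episode is rate0 * hr * dep**min(k, 6); the cap on k avoids an
    explosive process. Returns per-subject event counts plus
    first-episode events and first-episode person-time."""
    counts, first_events, first_time = [], 0, 0.0
    for _ in range(n):
        t, k = 0.0, 0
        while True:
            gap = rng.expovariate(rate0 * hr * dep ** min(k, 6))
            if k == 0:
                first_time += min(gap, follow_up)
            if t + gap > follow_up:
                break
            t += gap
            if k == 0:
                first_events += 1
            k += 1
        counts.append(k)
    return counts, first_events, first_time

rng = random.Random(1)
n, T, true_hr = 4000, 10.0, 1.5
exp_counts, e1, t1 = simulate_arm(n, 0.1, true_hr, 2.0, T, rng)  # exposed
unx_counts, e0, t0 = simulate_arm(n, 0.1, 1.0, 2.0, T, rng)      # unexposed

# Common-baseline analogue: crude rate ratio pooling all episodes
crude_rr = (sum(exp_counts) / (n * T)) / (sum(unx_counts) / (n * T))
# Episode-specific analogue: rate ratio within the first-episode stratum
strat_rr = (e1 / t1) / (e0 / t0)
print(round(crude_rr, 2), round(strat_rr, 2))  # crude exceeds the true 1.5
```

Exposed subjects reach the high-hazard later episodes sooner, so pooling all episodes inflates the apparent exposure effect, while the first-episode stratum recovers a value near the true hazard ratio.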
Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.
2017-01-01
Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
Preparation of a New Oligolamellar Stratum Corneum Lipid Model.
Mueller, Josefin; Schroeter, Annett; Steitz, Roland; Trapp, Marcus; Neubert, Reinhard H H
2016-05-10
In this study, we present a preparation method for a new stratum corneum (SC) model system, which is closer to natural SC than the commonly used multilayer models. The complex setup of the native SC lipid matrix was mimicked by a ternary lipid mixture of ceramide [AP], cholesterol, and stearic acid. A spin-coating procedure was applied to realize oligo-layered samples. The influence of lipid concentration, rotation speed, polyethylenimine, methanol content, cholesterol fraction, and annealing on the molecular arrangement of the new SC model was investigated by X-ray reflectivity measurements. The new oligo-SC model is closer to native SC in the total number of lipid membranes found between corneocytes. The reduction in thickness provides the opportunity to study the effects of drugs and/or hydrophilic penetration enhancers on the structure of SC in full detail by X-ray or neutron reflectivity. In addition, the oligo-lamellar system allows one to infer not only the lamellar spacing but also the total thickness of the oligo-SC model, and to monitor changes thereof. This improvement is most helpful for understanding transdermal drug administration on the nanoscale. The results are compared to the commonly used multilamellar lipid model systems, and the advantages and disadvantages of both models are discussed.
Modeling abundance effects in distance sampling
Royle, J. Andrew; Dawson, D.K.; Bates, S.
2004-01-01
Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
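The model structure described above — Poisson local abundance with a log-linear covariate, thinned by a detection function so that local abundance integrates out of the likelihood — can be sketched as follows. This is a simplified illustration, not the paper's Ovenbird analysis: the half-normal scale is treated as known, the parameter values are arbitrary, and the fit is a crude grid search.

```python
import math, random

def rpois(lam, rng):
    """Knuth's Poisson sampler (adequate for small means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def pbar(sigma, w, steps=200):
    """Mean half-normal detection probability over distances 0..w."""
    return sum(math.exp(-((i + 0.5) * w / steps) ** 2 / (2 * sigma ** 2))
               for i in range(steps)) / steps

rng = random.Random(7)
sigma, w = 20.0, 50.0
b0_true, b1_true = 1.0, 0.8   # log-abundance intercept and covariate effect
p = pbar(sigma, w)

# Simulate units: local abundance N_i ~ Poisson(exp(b0 + b1*x_i)); each
# animal is detected with probability p, so by Poisson thinning the count
# y_i is marginally Poisson(exp(b0 + b1*x_i) * p) -- local abundance has
# been integrated out of the likelihood.
data = []
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    n_i = rpois(math.exp(b0_true + b1_true * x), rng)
    y_i = sum(1 for _ in range(n_i) if rng.random() < p)
    data.append((x, y_i))

def loglik(b0, b1):
    ll = 0.0
    for x, y in data:
        mu = math.exp(b0 + b1 * x) * p
        ll += y * math.log(mu) - mu   # Poisson log-likelihood, y! dropped
    return ll

# Crude grid-search MLE for the abundance parameters
best = max(((loglik(0.5 + 0.05 * i, 0.2 + 0.05 * j),
             0.5 + 0.05 * i, 0.2 + 0.05 * j)
            for i in range(21) for j in range(25)), key=lambda t: t[0])
_, b0_hat, b1_hat = best
print(b0_hat, b1_hat)  # should land near the true (1.0, 0.8)
```

The thinning identity is what makes the integrated likelihood tractable here: with Poisson abundance, the detected counts are themselves Poisson with mean scaled by the average detection probability.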
Standard Information Models for Representing Adverse Sensitivity Information in Clinical Documents.
Topaz, M; Seger, D L; Goss, F; Lai, K; Slight, S P; Lau, J J; Nandigam, H; Zhou, L
2016-01-01
Adverse sensitivity (e.g., allergy and intolerance) information is a critical component of any electronic health record system. While several standards exist for structured entry of adverse sensitivity information, many clinicians record this data as free text. This study aimed to 1) identify and compare the existing common adverse sensitivity information models, and 2) evaluate the coverage of the adverse sensitivity information models for representing allergy information on a subset of inpatient and outpatient adverse sensitivity clinical notes. We compared four common adverse sensitivity information models: the Health Level 7 Allergy and Intolerance Domain Analysis Model (HL7-DAM); the Fast Healthcare Interoperability Resources (FHIR); the Consolidated Continuity of Care Document (C-CDA); and openEHR, and evaluated their coverage on a corpus of inpatient and outpatient notes (n = 120). We found that allergy specialists' notes had the highest frequency of adverse sensitivity attributes per note, whereas emergency department notes had the fewest attributes. Overall, the models had many similarities in the central attributes, which covered between 75% and 95% of adverse sensitivity information contained within the notes. However, representations of some attributes (especially the value sets) were not well aligned between the models, which is likely to present an obstacle for achieving data interoperability. Also, adverse sensitivity exceptions were not well represented among the information models. Although we found that common adverse sensitivity models cover a significant portion of relevant information in the clinical notes, our results highlight areas that need to be reconciled between the standards for data interoperability.
Edwardson, Matthew A.; Wang, Ximing; Liu, Brent; Ding, Li; Lane, Christianne J.; Park, Caron; Nelsen, Monica A.; Jones, Theresa A; Wolf, Steven L; Winstein, Carolee J; Dromerick, Alexander W.
2017-01-01
Background: Stroke patients with mild-moderate upper extremity (UE) motor impairments and minimal sensory and cognitive deficits provide a useful model to study recovery and improve rehabilitation. Laboratory-based investigators use lesioning techniques for similar goals. Objective: Determine whether stroke lesions in a UE rehabilitation trial cohort match lesions from the preclinical stroke recovery models used to drive translational research. Methods: Clinical neuroimages from 297 participants enrolled in the Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) study were reviewed. Images were characterized based on lesion type (ischemic or hemorrhagic), volume, vascular territory, depth (cortical gray matter, cortical white matter, subcortical), old strokes, and leukoaraiosis. Lesions were compared with those of preclinical stroke models commonly used to study upper limb recovery. Results: Among the ischemic stroke participants, median infarct volume was 1.8 mL, with most lesions confined to subcortical structures (61%), including the anterior choroidal artery territory (30%) and the pons (23%). Of ICARE participants, <1% had lesions resembling proximal MCA or surface vessel occlusion models. Preclinical models of subcortical white matter injury best resembled the ICARE population (33%). Intracranial hemorrhage participants had small (median 12.5 mL) lesions that best matched the capsular hematoma preclinical model. Conclusions: ICARE subjects are not representative of all stroke patients, but they represent a clinically and scientifically important subgroup. Compared to lesions in general stroke populations and widely studied animal models of recovery, ICARE participants had smaller, more subcortically based strokes. Improved preclinical-clinical translational efforts may require better alignment of lesions between preclinical and human stroke recovery models. PMID:28337932
A Negative Binomial Regression Model for Accuracy Tests
ERIC Educational Resources Information Center
Hung, Lai-Fa
2012-01-01
Rasch used a Poisson model to analyze errors and speed in reading tests. An important property of the Poisson distribution is that the mean and variance are equal. However, in social science research, it is very common for the variance to be greater than the mean (i.e., the data are overdispersed). This study embeds the Rasch model within an…
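The overdispersion motivating the negative binomial model is easy to demonstrate: generated as a gamma-Poisson mixture, a negative binomial count has variance mu + mu^2/size, which exceeds the Poisson's variance of mu. A minimal sketch with arbitrary parameters:

```python
import math, random

def rpois(lam, rng):
    """Knuth's Poisson sampler (fine for modest means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def rnegbin(mu, size, rng):
    """Negative binomial as a gamma-Poisson mixture:
    lam ~ Gamma(shape=size, scale=mu/size), then X ~ Poisson(lam),
    so Var(X) = mu + mu**2/size > mu (overdispersion)."""
    return rpois(rng.gammavariate(size, mu / size), rng)

rng = random.Random(3)
mu, size, n = 3.0, 1.5, 20000
pois = [rpois(mu, rng) for _ in range(n)]
nb = [rnegbin(mu, size, rng) for _ in range(n)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

mp, vp = mean_var(pois)   # variance ~ mean, the Poisson property
mn, vn = mean_var(nb)     # variance ~ mu + mu**2/size = 9, well above the mean
print(round(vp / mp, 2), round(vn / mn, 2))
```

A Rasch-type model built on the negative binomial therefore relaxes the equal mean-variance restriction while keeping a count-data likelihood.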
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
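Although the abstract's approach is implemented in R, the underlying Monte Carlo idea is language-agnostic: simulate data under the model of interest at candidate sample sizes and record how often the effect is detected. A minimal sketch (simple linear regression, normal-approximation z test, illustrative effect size):

```python
import math, random

def sim_power(n, beta, n_sims, rng):
    """Fraction of simulated studies in which the OLS slope of a simple
    linear regression is significant (|z| > 1.96, normal approximation)."""
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [beta * x + rng.gauss(0.0, 1.0) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        rss = sum((y - my - b * (x - mx)) ** 2 for x, y in zip(xs, ys))
        se = math.sqrt(rss / (n - 2) / sxx)
        if abs(b / se) > 1.96:
            hits += 1
    return hits / n_sims

rng = random.Random(42)
powers = {n: sim_power(n, 0.3, 2000, rng) for n in (20, 40, 80, 160)}
print(powers)  # power rises with n; pick the smallest n reaching the target
```

Unlike closed-form formulae, the simulation can be adapted to skewed predictors, non-normal errors, or more complex models simply by changing the data-generating step.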
LAVA Simulations for the 3rd AIAA CFD High Lift Prediction Workshop with Body Fitted Grids
NASA Technical Reports Server (NTRS)
Jensen, James C.; Stich, Gerrit-Daniel; Housman, Jeffrey A.; Denison, Marie; Kiris, Cetin C.
2018-01-01
In response to the 3rd AIAA CFD High Lift Prediction Workshop, the workshop cases were analyzed using Reynolds-averaged Navier-Stokes flow solvers within the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework. For the workshop cases the advantages and limitations of both overset-structured and unstructured polyhedral meshes were assessed. The workshop included three cases: a 2D airfoil validation case, a mesh convergence study using the High Lift Common Research Model, and a nacelle/pylon integration study using the JAXA (Japan Aerospace Exploration Agency) Standard Model. The 2D airfoil case from the workshop is used to verify the implementation of the Spalart-Allmaras turbulence model along with some of its variants within the solver. The High Lift Common Research Model case is used to assess solver performance and accuracy at varying mesh resolutions, as well as identify the minimum mesh fidelity required for LAVA on this class of problem. The JAXA Standard Model case is used to assess the solver's sensitivity to the turbulence model and to compare the structured and unstructured mesh paradigms. These workshop cases have helped establish best practices for high lift flow configurations for the LAVA solver.
Comparison of experimental models for predicting laser-tissue interaction from 3.8-micron lasers
NASA Astrophysics Data System (ADS)
Williams, Piper C. M.; Winston, Golda C. H.; Randolph, Don Q.; Neal, Thomas A.; Eurell, Thomas E.; Johnson, Thomas E.
2004-07-01
The purpose of this study was to evaluate the laser-tissue interactions of engineered human skin and in-vivo pig skin following exposure to a single 3.8 micron laser light pulse. The goal of the study was to determine if these tissues shared common histologic features following laser exposure that might prove useful in developing in-vitro and in-vivo experimental models to predict the bioeffects of human laser exposure. The minimum exposure required to produce gross morphologic changes following a four-microsecond pulsed skin exposure was determined for both models. Histology was used to compare the cellular responses of the experimental models following laser exposure. Eighteen engineered skin equivalents (in-vitro model) were exposed to 3.8 micron laser light and the tissue responses compared to equivalent exposures made on five Yorkshire pigs (in-vivo model). Representative biopsies of pig skin were taken for histologic evaluation from various body locations immediately, one hour, and 24 hours following exposure. The pattern of epithelial changes seen following in-vitro laser exposure of the engineered human skin and in-vivo exposure of pig skin indicated a common histologic response for this particular combination of laser parameters.
The Use of Object-Oriented Analysis Methods in Surety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.
1999-05-01
Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.
USDA-ARS?s Scientific Manuscript database
Insecticide resistance is the most broadly recognized and well studied ecological problem resulting from intensive insecticide use, which also provides useful evolutionary models of newly adapted phenotypes to changing environments. Two common assumptions in such population-oriented models are the e...
Spectral nudging – a scale-selective interior constraint technique – is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonst...
Item Screening in Graphical Loglinear Rasch Models
ERIC Educational Resources Information Center
Kreiner, Svend; Christensen, Karl Bang
2011-01-01
In behavioural sciences, local dependence and DIF are common, and purification procedures that eliminate items with these weaknesses often result in short scales with poor reliability. Graphical loglinear Rasch models (Kreiner & Christensen, in "Statistical Methods for Quality of Life Studies," ed. by M. Mesbah, F.C. Cole & M.T.…
Structure and Etiology of Co-Occurring Internalizing and Externalizing Disorders in Adolescents
ERIC Educational Resources Information Center
Cosgrove, Victoria E.; Rhee, Soo H.; Gelhorn, Heather L.; Boeldt, Debra; Corley, Robin C.; Ehringer, Marissa A.; Young, Susan E.; Hewitt, John K.
2011-01-01
Several studies suggest that a two-factor model positing internalizing and externalizing factors explains the interrelationships among psychiatric disorders. However, it is unclear whether the covariation between internalizing and externalizing disorders is due to common genetic or environmental influences. We examined whether a model positing two…
Using Covariation Reasoning to Support Mathematical Modeling
ERIC Educational Resources Information Center
Jacobson, Erik
2014-01-01
For many students, making connections between mathematical ideas and the real world is one of the most intriguing and rewarding aspects of the study of mathematics. In the Common Core State Standards for Mathematics (CCSSI 2010), mathematical modeling is highlighted as a mathematical practice standard for all grades. To engage in mathematical…
History of Physics and Conceptual Constructions: The Case of Magnetism
ERIC Educational Resources Information Center
Voutsina, Lambrini; Ravanis, Konstantinos
2011-01-01
This study documents the mental representations of magnetism constructed by students aged 15-17 and attempts to investigate whether these display the characteristics of models with an inner cohesiveness and constancy; whether they share common features with typical historical models of the Sciences; and whether they evolve through conventional…
Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
Challenging Conventional Wisdom for Multivariate Statistical Models with Small Samples
ERIC Educational Resources Information Center
McNeish, Daniel
2017-01-01
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Distance Education Strategy: Mental Models and Strategic Choices
ERIC Educational Resources Information Center
Adams, John C.; Seagren, Alan T.
2004-01-01
What issues do distance education (DE) leaders believe will influence the future of DE? What are their colleges' DE strategies? This qualitative study compares DE strategic thinking and strategic choices at three community colleges. Two propositions are investigated: (1) each college's DE leaders use common strategic mental models (ways of…
USDA-ARS's Scientific Manuscript database
Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...
Explanatory models in patients with first episode depression: a study from north India.
Grover, Sandeep; Kumar, Vineet; Chakrabarti, Subho; Hollikatti, Prabhakar; Singh, Pritpal; Tyagi, Shikha; Kulhara, Parmanand; Avasthi, Ajit
2012-09-01
The purpose of this work was to study the explanatory models of patients with first-episode depression presenting to a tertiary care hospital located in North-western India. One hundred sixty-four consecutive patients with a diagnosis of first-episode depression (except severe depression with psychotic symptoms) according to the International Classification of Diseases-10th Revision (ICD-10) and ≥18 years of age were evaluated for their explanatory models using the causal models section of the Explanatory Model Interview Catalogue (EMIC). The most common explanations given fell into the Karma-deed-heredity category (77.4%), followed by psychological explanations (62.2%), weakness (50%) and social causes (40.2%). Among the various specific causes, the explanations reported by at least one-fourth of the sample, in decreasing order, were: will of god (51.2%), fate/chance (40.9%), weakness of nerves (37.8%), general weakness (34.7%), bad deeds (26.2%), evil eye (24.4%) and family problems (21.9%). There was some influence of sociodemographic features on the explanations given by the patients. From the study, it can be concluded that patients with first-episode depression have multiple explanatory models for their symptoms of depression, which differ slightly from those reported in previous studies from other parts of India. Understanding these multiple explanatory models can have important treatment implications. Copyright © 2012 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Pegrum, Mark; Oakley, Grace; Faulkner, Robert
2013-01-01
This paper reports on the adoption of mobile handheld technologies in ten Western Australian independent schools, based on interviews with staff conducted in 2011. iPads were the most popular device, followed by iPod Touches and iPhones. Class sets were common at lower levels, with 1:1 models becoming increasingly common at higher levels. Mobile…
Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick
2013-01-01
Introduction Cost effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods A systematic review of studies was conducted with particular focus on study quality and the variability in the inputs of the models used to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of observed differences in model inputs taken from the studies identified. Results Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors choose to present and interpret study results. When the IGRA versus TST test strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies that assessed the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed in order to help authors standardize modelling approaches, inputs, assumptions and how results are presented and interpreted. PMID:23505412
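As a hedged illustration of the kind of common decision-analysis model described above, the sketch below compares two screening strategies on expected cost and effectiveness. Every probability, cost, and QALY value is an invented placeholder, not a figure from the reviewed studies, and the function is a simplification of a full decision tree:

```python
# Hypothetical two-arm decision-analysis comparison (IGRA vs. TST) for LTBI
# screening. All inputs are illustrative placeholders.

def expected_outcomes(sensitivity, specificity, prevalence,
                      cost_test, cost_treatment, qaly_gain_treated):
    """Expected cost and effectiveness per screened person."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    treated = true_pos + false_pos          # everyone testing positive is treated
    cost = cost_test + treated * cost_treatment
    effect = true_pos * qaly_gain_treated   # only true positives benefit
    return cost, effect

# Illustrative inputs (sensitivity, specificity, prevalence, costs, QALYs)
igra = expected_outcomes(0.85, 0.95, 0.10, 60.0, 400.0, 0.05)
tst  = expected_outcomes(0.75, 0.70, 0.10, 10.0, 400.0, 0.05)

# Incremental cost-effectiveness ratio of IGRA over TST; with these made-up
# inputs the higher specificity avoids enough unnecessary treatment that IGRA
# is both cheaper and more effective.
icer = (igra[0] - tst[0]) / (igra[1] - tst[1])
```

Sensitivity of the conclusion to each input can then be probed by varying one argument at a time, which is exactly where the reviewed studies diverged.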
Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce
2014-01-01
Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...
Common carp disrupt ecosystem structure and function through middle-out effects
Kaemingk, Mark A.; Jolley, Jeffrey C.; Paukert, Craig P.; Willis, David W.; Henderson, Kjetil R.; Holland, Richard S.; Wanner, Greg A.; Lindvall, Mark L.
2016-01-01
Middle-out effects or a combination of top-down and bottom-up processes create many theoretical and empirical challenges in the realm of trophic ecology. We propose using specific autecology or species trait (i.e. behavioural) information to help explain and understand trophic dynamics that may involve complicated and non-unidirectional trophic interactions. The common carp (Cyprinus carpio) served as our model species for whole-lake observational and experimental studies; four trophic levels were measured to assess common carp-mediated middle-out effects across multiple lakes. We hypothesised that common carp could influence aquatic ecosystems through multiple pathways (i.e. abiotic and biotic foraging, early life feeding, nutrient). Both studies revealed most trophic levels were affected by common carp, highlighting strong middle-out effects likely caused by common carp foraging activities and abiotic influence (i.e. sediment resuspension). The loss of water transparency, submersed vegetation and a shift in zooplankton dynamics were the strongest effects. Trophic levels furthest from direct pathway effects were also affected (fish life history traits). The present study demonstrates that common carp can exert substantial effects on ecosystem structure and function. Species capable of middle-out effects can greatly modify communities through a variety of available pathways and are not confined to traditional top-down or bottom-up processes.
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models are used in MBT. If the type of model changes from one to another, all functions of a search technique must be reimplemented, because the model types differ even when the same search technique is applied. It takes too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce this redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
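The model-independent idea described above (one search core, pluggable model-specific functions) can be sketched as follows. The `hill_climb` function and the toy fitness landscape are illustrative inventions, not the paper's actual framework API:

```python
# Sketch of a model-independent search core: the metaheuristic is written
# once, and each model type plugs in its own fitness/neighbourhood functions.
import random

def hill_climb(initial, fitness, neighbours, iterations=500, seed=0):
    """Generic local search; knows nothing about the model representation.
    Changing the model type only means supplying new callables."""
    rng = random.Random(seed)
    best, best_fit = initial, fitness(initial)
    for _ in range(iterations):
        candidate = rng.choice(neighbours(best))
        cand_fit = fitness(candidate)
        if cand_fit > best_fit:          # accept only improvements
            best, best_fit = candidate, cand_fit
    return best, best_fit

# Toy "model": find the integer test input closest to a target behaviour at 42.
best, fit = hill_climb(0, lambda x: -abs(x - 42), lambda x: [x - 1, x + 1])
```

Swapping in a state-machine or UML model would change only the two lambdas, which is the reuse the framework's design patterns aim for.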
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
Björck-Åkesson, Eva; Wilder, Jenny; Granlund, Mats; Pless, Mia; Simeonsson, Rune; Adolfsson, Margareta; Almqvist, Lena; Augustine, Lilly; Klang, Nina; Lillvist, Anne
2010-01-01
Early childhood intervention and habilitation services for children with disabilities operate on an interdisciplinary basis. This requires a common language between professionals and a shared framework for intervention goals and intervention implementation. The International Classification of Functioning, Disability and Health (ICF) and the version for children and youth (ICF-CY) may serve as this common framework and language. This overview of studies implemented by our research group is based on three research questions: Does the ICF-CY conceptual model have valid content, and is it logically coherent when investigated empirically? Is the ICF-CY classification useful for documenting child characteristics in services? What difficulties and benefits are related to using the ICF-CY model as a basis for intervention when it is implemented in services? A series of studies undertaken by the CHILD researchers is analysed. The analysis is based on data sets from published studies or master theses. Results and conclusions show that the ICF-CY has useful content and is logically coherent at the model level. Professionals find it useful for documenting children's body functions and activities. Guidelines for separating activity and participation are needed. The ICF-CY is a complex classification, and implementing it in services is a long-term project.
MODELING SNAKE MICROHABITAT FROM RADIOTELEMETRY STUDIES USING POLYTOMOUS LOGISTIC REGRESSION
Multivariate analysis of snake microhabitat has historically used techniques that were derived under assumptions of normality and common covariance structure (e.g., discriminant function analysis, MANOVA). In this study, polytomous logistic regression (PLR), which does not require ...
Modeling Local Item Dependence Due to Common Test Format with a Multidimensional Rasch Model
ERIC Educational Resources Information Center
Baghaei, Purya; Aryadoust, Vahid
2015-01-01
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
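The conditional-independence issue can be made concrete with the dichotomous Rasch model, P(X=1) = exp(θ − b) / (1 + exp(θ − b)). One way a multidimensional extension can absorb a shared-format effect is a testlet-style extra person dimension; the `theta_format` term below is an illustrative simplification, not the authors' exact parameterization:

```python
# Sketch: dichotomous Rasch probability, optionally with an extra person
# dimension representing the common test format (testlet-style extension).
import math

def p_correct(theta, b, theta_format=0.0):
    """P(correct) for ability theta, item difficulty b, and an optional
    person-specific format dimension shared by items of the same format."""
    logit = theta + theta_format - b
    return 1.0 / (1.0 + math.exp(-logit))

# Two items sharing a format: a unidimensional model treats their residual
# association as noise; the extra dimension models it explicitly.
p_plain = p_correct(theta=0.5, b=0.0)
p_with_format = p_correct(theta=0.5, b=0.0, theta_format=0.8)
```

Ignoring the format dimension when it is present is what violates local independence for the item subsets that share a stimulus or method.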
NASA Technical Reports Server (NTRS)
Hunt, Mitchell; Sayyah, Rana; Mitchell, Cody; Laws, Crystal; MacLeod, Todd C.; Ho, Fat D.
2013-01-01
Mathematical models of the common-source and common-gate amplifiers using metal-ferroelectric-semiconductor field effect transistors (MFSFETs) are developed in this paper. The models are compared against data collected with MFSFETs of varying channel lengths and widths, and circuit parameters such as biasing conditions are varied as well. Considerations are made for the capacitance formed by the ferroelectric layer present between the gate and substrate of the transistors. Comparisons between the modeled and measured data are presented in depth, as well as differences and advantages relative to the performance of each circuit using a conventional MOSFET.
Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.
Greenberg, Ido; Shkolnisky, Yoel
2017-11-01
We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow assessment of the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20Å or better, even from class averages consisting of as few as three raw images per class. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Laben, Joyce
2012-01-01
With the implementation of RTI, educators are attempting to find models that are the best fit for their schools. The problem solving and standard protocol models are the two most common. This study of 65 students examines a new model, the dynamic skills protocol implemented in an elementary school starting in their fourth quarter of kindergarten…
Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship-Mathematical Modeling.
Barnes, Sean L; Kasaie, Parastu; Anderson, Deverick J; Rubin, Michael
2016-11-01
Mathematical modeling is a valuable methodology used to study healthcare epidemiology and antimicrobial stewardship, particularly when more traditional study approaches are infeasible, unethical, costly, or time consuming. We focus on 2 of the most common types of mathematical modeling, namely compartmental modeling and agent-based modeling, which provide important advantages over observational and experimental approaches, such as shorter developmental timelines and opportunities for extensive experimentation. We summarize these advantages and disadvantages via specific examples and highlight recent advances in the methodology. A checklist is provided to serve as a guideline in the development of mathematical models in healthcare epidemiology and antimicrobial stewardship. Infect Control Hosp Epidemiol 2016;1-7.
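A compartmental model in this setting can be as simple as an SIR system integrated forward in time. The sketch below uses invented parameter values and plain forward-Euler steps purely for illustration; it is not taken from the reviewed methodology:

```python
# Minimal compartmental (SIR) sketch of transmission dynamics.
# beta = transmission rate, gamma = recovery rate; values are illustrative.

def sir(S0, I0, R0, beta, gamma, days, dt=0.1):
    """Forward-Euler integration of the SIR equations."""
    S, I, R = float(S0), float(I0), float(R0)
    N = S + I + R
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt   # S -> I transitions this step
        new_rec = gamma * I * dt          # I -> R transitions this step
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
    return S, I, R

S, I, R = sir(S0=990, I0=10, R0=0, beta=0.3, gamma=0.1, days=100)
```

Agent-based models replace these aggregate compartments with explicitly simulated individuals, trading the short development time shown here for heterogeneity and contact structure.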
Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques
NASA Astrophysics Data System (ADS)
Roshni, T.; Pagliara, S.
2018-05-01
Aeration is improved in rivers by the turbulence created in the flow over macro and intermediate roughness conditions. Macro and intermediate roughness flow conditions are generated by flows over block ramps or rock chutes. The measurements are taken in the uniform flow region. Applications of soft computing methods to modeling such hydraulic parameters are not yet common. In this study, the modeling efficiencies of the MPMR model and the FFNN model are evaluated for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for the training and testing phases. The potential of the MPMR and FFNN models for estimating air concentration is demonstrated through this study.
Absar, Syeda Mariya; Preston, Benjamin L.
2015-05-25
The exploration of alternative socioeconomic futures is an important aspect of understanding the potential consequences of climate change. While socioeconomic scenarios are common and, at times essential, tools for the impact, adaptation and vulnerability and integrated assessment modeling research communities, their approaches to scenario development have historically been quite distinct. However, increasing convergence of impact, adaptation and vulnerability and integrated assessment modeling research in terms of scales of analysis suggests there may be value in the development of a common framework for socioeconomic scenarios. The Shared Socioeconomic Pathways represents an opportunity for the development of such a common framework. However, the scales at which these global storylines have been developed are largely incommensurate with the sub-national scales at which impact, adaptation and vulnerability, and increasingly integrated assessment modeling, studies are conducted. Our objective for this study was to develop sub-national and sectoral extensions of the global SSP storylines in order to identify future socioeconomic challenges for adaptation for the U.S. Southeast. A set of nested qualitative socioeconomic storyline elements, integrated storylines, and accompanying quantitative indicators were developed through an application of the Factor-Actor-Sector framework. Finally, in addition to revealing challenges and opportunities associated with the use of the SSPs as a basis for more refined scenario development, this study generated sub-national storyline elements and storylines that can subsequently be used to explore the implications of alternative subnational socioeconomic futures for the assessment of climate change impacts and adaptation.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
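The final pSIMS step listed above, aggregating gridded outputs to arbitrary spatial units, can be sketched in miniature. The cells, region labels, and yield values below are invented for illustration and do not use the pSIMS data formats:

```python
# Sketch: averaging per-cell model output within named regions, the kind of
# aggregation pSIMS performs over administrative or environmental demarcations.

def aggregate(cell_values, cell_to_region):
    """Mean of cell values grouped by the region each cell maps to."""
    sums, counts = {}, {}
    for cell, value in cell_values.items():
        region = cell_to_region[cell]
        sums[region] = sums.get(region, 0.0) + value
        counts[region] = counts.get(region, 0) + 1
    return {r: sums[r] / counts[r] for r in sums}

# Toy grid of simulated crop yields keyed by (row, col) cell index
yields = {(0, 0): 2.0, (0, 1): 4.0, (1, 0): 6.0}
regions = {(0, 0): "north", (0, 1): "north", (1, 0): "south"}
by_region = aggregate(yields, regions)
```

In the real framework this mapping comes from geospatial demarcation files and the aggregation is area-weighted; the dictionary version only shows the shape of the operation.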
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
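The discrete-hazard setup described above is commonly fit by expanding the data into person-period format, where each subject contributes one binary row per interval at risk and a binary regression on those rows estimates the discrete-time hazard. A minimal sketch of the expansion, with invented records:

```python
# Sketch: person-period expansion for discrete-time hazard models.
# Each subject observed through interval t yields rows for intervals 1..t,
# with event = 1 only in the interval of failure (0 everywhere if censored).

def person_period(records):
    """records: list of (subject_id, observed_interval, failed) tuples."""
    rows = []
    for sid, t, failed in records:
        for k in range(1, t + 1):
            event = 1 if (failed and k == t) else 0
            rows.append((sid, k, event))
    return rows

# Subject "a" fails in interval 3; subject "b" is censored after interval 2.
rows = person_period([("a", 3, True), ("b", 2, False)])
```

A logistic (or complementary log-log) regression of `event` on interval indicators and treatment over these rows gives the discrete hazard model whose marginal treatment effect the manuscript studies.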
Summary of Data from the Sixth AIAA CFD Drag Prediction Workshop: CRM Cases 2 to 5
NASA Technical Reports Server (NTRS)
Tinoco, Edward N.; Brodersen, Olaf P.; Keye, Stefan; Laflin, Kelly R.; Feltrop, Edward; Vassberg, John C.; Mani, Mori; Rider, Ben; Wahls, Richard A.; Morrison, Joseph H.;
2017-01-01
Results from the Sixth AIAA CFD Drag Prediction Workshop Common Research Model Cases 2 to 5 are presented. As with past workshops, numerical calculations are performed using industry-relevant geometry, methodology, and test cases. Cases 2 to 5 focused on force/moment and pressure predictions for the NASA Common Research Model wing-body and wing-body-nacelle-pylon configurations, including Case 2 - a grid refinement study and nacelle-pylon drag increment prediction study; Case 3 - an angle-of-attack buffet study; Case 4 - an optional wing-body grid adaptation study; and Case 5 - an optional wing-body coupled aero-structural simulation. The Common Research Model geometry differed from previous workshops in that it was deformed to the appropriate static aeroelastic twist and deflection at each specified angle-of-attack. The grid refinement study used a common set of overset and unstructured grids, as well as user-created multiblock structured, unstructured, and Cartesian-based grids. For the supplied common grids, six levels of refinement were created, resulting in grids ranging from 7x10(exp 6) to 208x10(exp 6) cells. This study (Case 2) showed further reduced scatter from previous workshops, and very good prediction of the nacelle-pylon drag increment. Case 3 studied buffet onset at M=0.85 using the Medium grid (20 to 40x10(exp 6) nodes) from the above-described sequence. The prescribed alpha sweep used finely spaced intervals through the zone where wing separation was expected to begin. Although the use of the prescribed aeroelastic twist and deflection at each angle-of-attack greatly improved the wing pressure distribution agreement with test data, many solutions still exhibited premature flow separation. The remaining solutions exhibited a significant spread of lift and pitching moment at each angle-of-attack, much of which can be attributed to excessive aft pressure loading and shock location variation. Four Case 4 grid adaptation solutions were submitted.
Starting with grids less than 2x10(exp 6) grid points, two solutions showed a rapid convergence to an acceptable solution. Four Case 5 coupled aerostructural solutions were submitted. Both showed good agreement with experimental data. Results from this workshop highlight the continuing need for CFD improvement, particularly for conditions with significant flow separation. These comparisons also suggest the need for improved experimental diagnostics to guide future CFD development.
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
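A toy version of the common/innovative split can be sketched with an element-wise median standing in for the jointly estimated common frame; the paper solves this jointly via a compressed-sensing formulation, so the median here is only an illustrative stand-in. Static pixels then yield zero, i.e. sparse, innovative residuals:

```python
# Sketch: decompose a segment's frames into a common frame plus per-frame
# innovative residuals. Frames are flat lists of pixel intensities (invented).

def decompose(frames):
    """Return (common, innovative) where common is the element-wise median."""
    n = len(frames)
    common = [sorted(px)[n // 2] for px in zip(*frames)]
    innovative = [[p - c for p, c in zip(f, common)] for f in frames]
    return common, innovative

# Three frames of one scene: two static pixels and one changing pixel.
frames = [[10, 10, 200], [10, 10, 201], [10, 10, 50]]
common, innov = decompose(frames)
```

The residuals are nonzero only where the scene actually changes, which is why a sparsity prior on the innovative frames is a natural fit and why a jump in residual energy can flag a scene-change boundary.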
Animal models of pancreatitis: Can it be translated to human pain study?
Zhao, Jing-Bo; Liao, Dong-Hua; Nissen, Thomas Dahl
2013-01-01
Chronic pancreatitis affects many individuals around the world, and the study of the underlying mechanisms, leading to better treatment possibilities, is an important task. Therefore, animal models are needed for basic studies of pancreatitis. Recently, animal models of acute and chronic pancreatitis have been thoroughly reviewed, but few reviews address the important aspect of translating animal studies to human studies. It is well known that pancreatitis is associated with epigastric pain, but the understanding of the mechanisms and appropriate treatment of this pain is still unclear. Using animal models to study pancreatitis-associated visceral pain is difficult; however, these models are a unique way to reveal the mechanisms behind it. In this review, animal models of acute, chronic and uncommon pancreatitis are briefly outlined, and animal models related to pancreatitis-associated visceral pain are also addressed. PMID:24259952
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions.
NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
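The core idea above — motor units as inhomogeneous Poisson processes driven by one shared latent input — can be sketched as follows. This is a simplified illustration, not the authors' estimator: a moving average stands in for the state-space smoother, and all rates and gains are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_units = 0.001, 5.0, 10              # 1 ms bins, 5 s, 10 motor units
t = np.arange(0.0, T, dt)

# Hypothetical common input: a slowly modulated drive shared by all units
common_input = 8.0 + 4.0 * np.sin(2 * np.pi * 0.5 * t)    # spikes/s
gains = rng.uniform(0.8, 1.2, (n_units, 1))               # per-unit sensitivity
rates = gains * common_input[None, :]

# Inhomogeneous Poisson spiking: one Bernoulli draw per unit per bin
spikes = rng.random((n_units, t.size)) < rates * dt

# Crude latent-trajectory estimate: pooled spike counts, smoothed over 200 ms
win = int(0.2 / dt)
latent_hat = np.convolve(spikes.sum(axis=0),
                         np.ones(win) / win, mode="same") / (n_units * dt)
```

A state-space estimator would replace the moving average with a filtered or smoothed posterior over the latent drive, which also yields its variance — the synaptic-noise component the abstract refers to.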
2004-06-01
Situation Understanding) Common Operational Pictures, Planning & Decision Support Capabilities, Message & Order Processing, Common Languages & Data Models, Modeling & Simulation Domain
Clinical models are inaccurate in predicting bile duct stones in situ for patients with gallbladder stones.
Topal, B; Fieuws, S; Tomczyk, K; Aerts, R; Van Steenbergen, W; Verslype, C; Penninckx, F
2009-01-01
The probability that a patient has common bile duct stones (CBDS) is a key factor in determining diagnostic and treatment strategies. This prospective cohort study evaluated the accuracy of clinical models in predicting CBDS for patients who will undergo cholecystectomy for lithiasis. From October 2005 until September 2006, 335 consecutive patients with symptoms of gallstone disease underwent cholecystectomy. Statistical analysis was performed on prospective patient data obtained at the time of first presentation to the hospital. Demonstrable CBDS at the time of endoscopic retrograde cholangiopancreatography (ERCP) or intraoperative cholangiography (IOC) was considered the gold standard for the presence of CBDS. Common bile duct stones were demonstrated in 53 patients. For 35 patients, ERCP was performed, with successful stone clearance in 24 of 30 patients who had proven CBDS. In 29 patients, IOC showed CBDS, which were managed successfully via laparoscopic common bile duct exploration, with stone extraction at the time of cholecystectomy. Prospective validation of the existing model for CBDS resulted in a predictive accuracy rate of 73%. The new model showed a predictive accuracy rate of 79%. Clinical models are inaccurate in predicting CBDS in patients with cholelithiasis. Management strategies should be based on the local availability of therapeutic expertise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.
The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
Unifying error structures in commonly used biotracer mixing models.
Stock, Brian C; Semmens, Brice X
2016-10-01
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
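A minimal sketch of the estimation problem these mixing models solve: a grid posterior over the proportion of one source, with a single tracer, two sources, and a fixed Gaussian residual error. All tracer values are invented, and the fixed residual standard deviation stands in for the richer error structures (process vs. residual error) that the paper unifies.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical delta-13C source means and a consumer mixture (made-up values)
source_means = np.array([-25.0, -12.0])          # e.g. C3 vs C4 plants
true_p = 0.7
mix_obs = true_p * source_means[0] + (1 - true_p) * source_means[1] \
          + rng.normal(0.0, 0.5, size=20)        # 20 consumer samples

# Grid posterior over the proportion of source 1: flat prior,
# Gaussian residual error (the piece the error structures differ on)
p_grid = np.linspace(0.0, 1.0, 501)
pred = p_grid[:, None] * source_means[0] + (1 - p_grid[:, None]) * source_means[1]
loglik = (-0.5 * ((mix_obs[None, :] - pred) / 0.5) ** 2).sum(axis=1)
post = np.exp(loglik - loglik.max())
post /= post.sum()
p_hat = (p_grid * post).sum()                    # posterior-mean diet proportion
```

MixSIR- and SIAR-style models extend this with Dirichlet priors over many sources, multiple tracers, and different placements of the error term.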
Experimental Observation of Dispersion Phenomenon for Non-Newtonian flow in Porous Media
NASA Astrophysics Data System (ADS)
Bowers, C.; Schultz, P. B.; Fowler, C. P.; McClure, J. E.; Miller, C. T.
2017-12-01
The EPA has identified over 100 toxic species commonly found in hydraulic fracturing fluids, leading to concerns about their movement into endangered water supplies through spills and accelerated geological pathways. Before these concerns can be allayed, detailed study of the transport of dissolved species in non-Newtonian fluids is required. Until now, most research into non-Newtonian flow has focused on two-parameter models, such as the power-law model; however, these models have been found to be insufficient for hydraulic fracturing applications, owing to high-pressure flow through thin fractures and pore throats. This work is focused on the Cross model, a four-parameter model that has been found to accurately represent the flow of fracturing fluids. A series of one-dimensional flow-through tracer tests have been conducted using a tritiated water tracer and an aqueous guar gum solution, a non-Newtonian fluid commonly used in the fracturing process, to investigate the effects of dispersion on species transport. These tests are compared to modeling results, and may be used to develop macroscale models for Cross model non-Newtonian fluids.
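The four-parameter Cross model mentioned above has a standard closed form: viscosity interpolates between a zero-shear plateau and an infinite-shear plateau. A sketch with illustrative (not fitted) parameters for a shear-thinning guar solution:

```python
import numpy as np

def cross_viscosity(gdot, mu0, mu_inf, lam, n):
    """Cross model: mu(gdot) = mu_inf + (mu0 - mu_inf) / (1 + (lam * gdot)**n).
    mu0/mu_inf are the zero- and infinite-shear viscosities, lam a time
    constant, n the rate index (all four parameters, vs. two for power law)."""
    return mu_inf + (mu0 - mu_inf) / (1.0 + (lam * gdot) ** n)

# Illustrative parameters (assumed, not measured values for any real fluid)
gdot = np.logspace(-2, 4, 7)                  # shear rates, 1/s
mu = cross_viscosity(gdot, mu0=1.2, mu_inf=0.001, lam=0.5, n=0.8)
```

The two extra parameters are what let the model keep the low- and high-shear plateaus that a two-parameter power-law fit misses in thin fractures and pore throats.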
Beyond attributions: Understanding public stigma of mental illness with the common sense model.
Mak, Winnie W S; Chong, Eddie S K; Wong, Celia C Y
2014-03-01
The present study applied the common sense model (i.e., cause, controllability, timeline, consequences, and illness coherence) to understand public attitudes toward mental illness and help-seeking intention, and to examine the mediating role of perceived controllability between causal attributions and both public attitudes and help seeking. Based on a randomized household sample of 941 Chinese community adults in Hong Kong, results of the structural equation modeling demonstrated that people who endorsed cultural lay beliefs tended to perceive the course of mental illness as less controllable, whereas those with psychosocial attributions saw its course as more controllable. The more people perceived the course of mental illness as uncontrollable, chronic, and incomprehensible, the lower their acceptance and the greater the mental illness stigma. Furthermore, those who perceived mental illness as having dire consequences were more likely to feel greater stigma and social distance. Conversely, when people were more accepting, they were more likely to seek psychological services and reported a shorter social distance. The common sense model provides a multidimensional framework for understanding the public's mental illness perceptions and stigma. Not only should biopsychosocial determinants of mental illness be communicated to the public, but cultural myths about mental illness must also be debunked.
ERIC Educational Resources Information Center
Davidson, J. Cody
2016-01-01
Mathematics is the most common subject area of remedial need, and the majority of remedial math students never pass a college-level credit-bearing math class. The majority of studies that investigate this phenomenon are conducted at community colleges and use some type of regression model; however, none have used a continuation ratio model. The…
Air-water analogy and the study of hydraulic models
NASA Technical Reports Server (NTRS)
Supino, Giulio
1953-01-01
The author first sets forth some observations about the theory of models. He then establishes certain general criteria for the construction of dynamically similar models in water and in air, through reference to the perfect-fluid equations and to those pertaining to viscous flow. It is pointed out, in addition, that there are more cases in which the analogy is possible than is commonly supposed.
Uncovering Local Trends in Genetic Effects of Multiple Phenotypes via Functional Linear Models.
Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Barondess, David A; Tong, Xiaoren; Jadhav, Sneha; Lu, Qing
2016-04-01
Recent technological advances equipped researchers with capabilities that go beyond traditional genotyping of loci known to be polymorphic in a general population. Genetic sequences of study participants can now be assessed directly. This capability removed technology-driven bias toward scoring predominantly common polymorphisms and let researchers reveal a wealth of rare and sample-specific variants. Although the relative contributions of rare and common polymorphisms to trait variation are being debated, researchers are faced with the need for new statistical tools for simultaneous evaluation of all variants within a region. Several research groups demonstrated flexibility and good statistical power of the functional linear model approach. In this work we extend previous developments to allow inclusion of multiple traits and adjustment for additional covariates. Our functional approach is unique in that it provides a nuanced depiction of effects and interactions for the variables in the model by representing them as curves varying over a genetic region. We demonstrate flexibility and competitive power of our approach by contrasting its performance with commonly used statistical tools and illustrate its potential for discovery and characterization of genetic architecture of complex traits using sequencing data from the Dallas Heart Study. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Levi, Benjamin; Brugman, Samantha; Wong, Victor W; Grova, Monica; Longaker, Michael T
2011-01-01
Cleft palate represents the second most common birth defect and carries substantial physiologic and social challenges for affected patients, as they often require multiple surgical interventions during their lifetime. A number of genes have been identified to be associated with the cleft palate phenotype, but etiology in the majority of cases remains elusive. In order to better understand cleft palate and both surgical and potential tissue engineering approaches for repair, we have performed an in-depth literature review into cleft palate development in humans and mice, as well as into molecular pathways underlying these pathologic developments. We summarize the multitude of pathways underlying cleft palate development, with the transforming growth factor β superfamily being the most commonly studied. Furthermore, the majority of cleft palate studies, including those focusing on tissue engineering, rely heavily on mouse models. A paucity of human randomized controlled studies exists for cleft palate repair, and so far, tissue engineering approaches are limited. In this review, we discuss the development of the palate, explain the basic science behind normal and pathologic palate development in humans as well as mouse models and elaborate on how these studies may lead to future advances in palatal tissue engineering and cleft palate treatments. PMID:21964245
Functional linear models for association analysis of quantitative traits.
Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao
2013-11-01
Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. 
© 2013 WILEY PERIODICALS, INC.
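The dimension-reduction step at the heart of a functional linear model can be sketched in a few lines: each subject's genotype sequence is treated as a function of position and projected onto a small smooth basis before regression. This toy uses a Fourier basis and simulated data purely for illustration; the paper's models typically use B-splines or functional principal components and add formal F-tests.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 200, 50, 5                     # subjects, variants, basis functions
pos = np.linspace(0.0, 1.0, m)           # normalized variant positions
G = rng.binomial(2, 0.3, (n, m))         # genotype matrix (0/1/2 allele counts)

# Project each genotype "function" onto a small Fourier basis
basis = np.column_stack(
    [np.ones(m)]
    + [f(2 * np.pi * (j + 1) * pos) for j in range(k // 2)
       for f in (np.sin, np.cos)])
X = G @ basis / m                        # functional scores per subject

beta_true = rng.normal(0.0, 0.2, X.shape[1])
y = X @ beta_true + rng.normal(0.0, 1.0, n)   # simulated quantitative trait

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # fixed-effect fit
```

Because the basis is smooth in position, the fitted coefficient function varies over the genetic region, which is how these models exploit linkage and LD structure that marker-by-marker kernels ignore.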
Fischer, H Felix; Rose, Matthias
2016-10-19
Recently, a growing number of Item Response Theory (IRT) models have been published which allow estimation of a common latent variable from data derived from different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables, but it requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application (http://www.common-metrics.org), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application allows estimation using six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood estimation (WLE) and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of the latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
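The EAP estimation mentioned above reduces to numerical quadrature over the latent trait. A minimal sketch under a 2PL model with a standard normal prior; the item parameters here are made up for illustration, not calibrated values from any of the instruments named.

```python
import numpy as np

def eap_2pl(responses, a, b, n_nodes=81):
    """EAP estimate of the latent trait under a 2PL IRT model with a
    standard normal prior, via a fixed quadrature grid over theta."""
    theta = np.linspace(-4.0, 4.0, n_nodes)        # quadrature nodes
    prior = np.exp(-0.5 * theta ** 2)              # N(0,1) up to a constant
    # Item response probabilities at every node (rows) for every item (cols)
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    lik = np.prod(np.where(responses[None, :] == 1, p, 1.0 - p), axis=1)
    post = lik * prior
    return float((theta * post).sum() / post.sum())  # posterior mean

a = np.array([1.2, 0.8, 1.5])      # discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.7])     # difficulties (illustrative)
theta_hat = eap_2pl(np.array([1, 1, 0]), a, b)
```

Sum-score EAP tables, as served by the application, are obtained by averaging such pattern-level posteriors over all patterns sharing a sum score.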
Synthetic Biology Outside the Cell: Linking Computational Tools to Cell-Free Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Daniel D.; Department of Biomedical Engineering, University of California Davis, Davis, CA; Villarreal, Fernando D.
As mathematical models become more commonly integrated into the study of biology, a common language for describing biological processes is manifesting. Many tools have emerged for the simulation of in vivo synthetic biological systems, with only a few examples of prominent work done on predicting the dynamics of cell-free synthetic systems. At the same time, experimental biologists have begun to study dynamics of in vitro systems encapsulated by amphiphilic molecules, opening the door for the development of a new generation of biomimetic systems. In this review, we explore both in vivo and in vitro models of biochemical networks with a special focus on tools that could be applied to the construction of cell-free expression systems. We believe that quantitative studies of complex cellular mechanisms and pathways in synthetic systems can yield important insights into what makes cells different from conventional chemical systems.
Synthetic Biology Outside the Cell: Linking Computational Tools to Cell-Free Systems
Lewis, Daniel D.; Villarreal, Fernando D.; Wu, Fan; Tan, Cheemeng
2014-01-01
As mathematical models become more commonly integrated into the study of biology, a common language for describing biological processes is manifesting. Many tools have emerged for the simulation of in vivo synthetic biological systems, with only a few examples of prominent work done on predicting the dynamics of cell-free synthetic systems. At the same time, experimental biologists have begun to study dynamics of in vitro systems encapsulated by amphiphilic molecules, opening the door for the development of a new generation of biomimetic systems. In this review, we explore both in vivo and in vitro models of biochemical networks with a special focus on tools that could be applied to the construction of cell-free expression systems. We believe that quantitative studies of complex cellular mechanisms and pathways in synthetic systems can yield important insights into what makes cells different from conventional chemical systems. PMID:25538941
Automatic classification of animal vocalizations
NASA Astrophysics Data System (ADS)
Clemins, Patrick J.
2005-11-01
Bioacoustics, the study of animal vocalizations, has begun to use increasingly sophisticated analysis techniques in recent years. Some common tasks in bioacoustics are repertoire determination, call detection, individual identification, stress detection, and behavior correlation. Each research study, however, uses a wide variety of different measured variables, called features, and classification systems to accomplish these tasks. The well-established field of human speech processing has developed a number of different techniques to perform many of the aforementioned bioacoustics tasks. Mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction (PLP) coefficients are two popular feature sets. The hidden Markov model (HMM), a statistical model similar to a finite automaton, is the most commonly used supervised classification model and is capable of modeling both temporal and spectral variations. This research designs a framework that applies models from human speech processing for bioacoustic analysis tasks. The development of the generalized perceptual linear prediction (gPLP) feature extraction model is one of the more important novel contributions of the framework. Perceptual information from the species under study can be incorporated into the gPLP feature extraction model to represent the vocalizations as the animals might perceive them. By including this perceptual information and modifying parameters of the HMM classification system, this framework can be applied to a wide range of species. The effectiveness of the framework is shown by analyzing African elephant and beluga whale vocalizations. The features extracted from the African elephant data are used as input to a supervised classification system and compared to results from traditional statistical tests. The gPLP features extracted from the beluga whale data are used in an unsupervised classification system and the results are compared to labels assigned by experts.
The development of a framework from which to build animal vocalization classifiers will provide bioacoustics researchers with a consistent platform to analyze and classify vocalizations. A common framework will also allow studies to compare results across species and institutions. In addition, the use of automated classification techniques can speed analysis and uncover behavioral correlations not readily apparent using traditional techniques.
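The HMM named above as the most common supervised classifier scores a feature sequence with the forward algorithm. A minimal discrete-observation sketch with toy probabilities (not fitted to any vocalization data):

```python
import numpy as np

def hmm_forward(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under a
    discrete HMM. pi: initial state probs, A: state transition matrix,
    B[state, symbol]: emission probabilities."""
    alpha = pi * B[:, obs[0]]            # initialize with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then absorb emission
    return alpha.sum()

# Toy 2-state, 2-symbol model (illustrative numbers only)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
lik = hmm_forward([0, 1, 0], pi, A, B)
```

In a call classifier, one HMM is trained per call type and a vocalization's MFCC or gPLP frame sequence is assigned to the model with the highest likelihood.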
ERIC Educational Resources Information Center
Strickland, Amanda M.; Kraft, Adam; Bhattacharyya, Gautam
2010-01-01
As part of our investigations into the development of representational competence, we report results from a study in which we elicited sixteen graduate students' expressed mental models of commonly-used terms for describing organic reactions--functional group, nucleophile/electrophile, acid/base--and for diagrams of transformations and their…
Testing DRAINMOD-FOREST for predicting evapotranspiration in a mid-rotation pine plantation
Shiying Tian; Mohamed A. Youssef; Ge Sun; George M. Chescheir; Asko Noormets; Devendra M. Amatya; R. Wayne Skaggs; John S. King; Steve McNulty; Michael Gavazzi; Guofang Miao; Jean-Christophe Domec
2015-01-01
Evapotranspiration (ET) is a key component of the hydrologic cycle in terrestrial ecosystems and accurate description of ET processes is essential for developing reliable ecohydrological models. This study investigated the accuracy of ET prediction by the DRAINMOD-FOREST after its calibration/validation for predicting commonly measured hydrological variables. The model...
ERIC Educational Resources Information Center
Li, Spencer D.
2011-01-01
Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
Barriers to Self-Management Behaviors in College Students with Food Allergies
ERIC Educational Resources Information Center
Duncan, Sarah E.; Annunziato, Rachel A.
2018-01-01
Objective: This study examined barriers to engagement in self-management behaviors among food-allergic college students (1) within the frameworks of the health belief model (HBM) and common sense self-regulation model (CS-SRM) and (2) in the context of overall risky behaviors. Participants: Undergraduate college students who reported having a…
Model Based Usability Heuristics for Constructivist E-Learning
ERIC Educational Resources Information Center
Katre, Dinesh S.
2007-01-01
Many e-learning applications and games have been studied to identify the common interaction models of constructivist learning, namely: 1. Move the object to appropriate location; 2. Place objects in appropriate order and location(s); 3. Click to identify; 4. Change the variable factors to observe the effects; and 5. System personification and…
Cognitive Desegregation: Unmasking Human Sexuality in the US Military
2010-06-01
Cognitive Segregation, Model 1 (Sexuality); Cognitive Segregation, Model 2 (Sexuality and Gender). Figure 1: Independent Concepts of Gender and Sexuality; Figure 2: Common Alignment of Gender and Sexuality...human sexuality studies should suffice. Sexual orientation is an individual's pattern of sexual and emotional attraction based on the gender of his or
Fitting Meta-Analytic Structural Equation Models with Complex Datasets
ERIC Educational Resources Information Center
Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.
2016-01-01
A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…
ERIC Educational Resources Information Center
Cameron, Lindsey; Rutland, Adam; Brown, Rupert; Douch, Rebecca
2006-01-01
The present research evaluated an intervention, derived from the "extended contact hypothesis," which aimed to change children's intergroup attitudes toward refugees. The study (n=253) tested 3 models of extended contact among 5- to 11-year-old children: dual identity, common ingroup identity, and decategorization. Children read friendship stories…
To understand the combined health effects of exposure to ambient air pollutant mixtures, it is becoming more common to include multiple pollutants in epidemiologic models. However, the complex spatial and temporal pattern of ambient pollutant concentrations and related exposures ...
Implicit and Explicit Preference Structures in Models of Labor Supply.
ERIC Educational Resources Information Center
Dickinson, Jonathan
The study of labor supply is directed to a theoretical methodology under which the choice of the general functional form of the income-leisure preference structure may be regarded as an empirical question. The author has reviewed the common functional forms employed in empirical labor supply models and has characterized the inherent preference…
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
In risk assessment there is a need to accelerate toxicological evaluation of vast numbers of chemicals. New programs focus on identifying common modes of action and on model systems for rapid screening. In this study we address both these issues. Oxidative stress is a good can...
ERIC Educational Resources Information Center
Guevara, Porfirio
2014-01-01
This article identifies elements and connections that seem to be relevant to explain persistent aggregate behavioral patterns in educational systems when using complex dynamical systems modeling and simulation approaches. Several studies have shown what factors are at play in educational fields, but confusion still remains about the underlying…
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
ERIC Educational Resources Information Center
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
A Comparison of Exposure Control Procedures in CATs Using the 3PL Model
ERIC Educational Resources Information Center
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.
2013-01-01
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
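The three-parameter logistic (3PL) model underlying this comparison has a standard closed form: a guessing floor plus a scaled logistic curve. A sketch with an illustrative item (parameters invented):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL item response function: probability of a correct response given
    ability theta, discrimination a, difficulty b, and guessing parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative item: moderate discrimination, average difficulty, 20% guessing
probs = p_3pl(np.linspace(-3, 3, 7), a=1.0, b=0.0, c=0.2)
```

Exposure control procedures such as PR-SE or Sympson-Hetter decide how often a CAT may administer items like this one; the 3PL model itself only supplies the response probabilities used for item selection and ability estimation.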
RTEL1 and TERT polymorphisms are associated with astrocytoma risk in the Chinese Han population.
Jin, Tian-Bo; Zhang, Jia-Yi; Li, Gang; Du, Shu-Li; Geng, Ting-Ting; Gao, Jing; Liu, Qian-Ping; Gao, Guo-Dong; Kang, Long-Li; Chen, Chao; Li, Shan-Qu
2013-12-01
Common variants of multiple genes play a role in glioma onset. However, research related to astrocytoma, the most common primary brain neoplasm, is rare. In this study, we chose 21 tagging SNPs (tSNPs), previously reported to be associated with glioma risk in a Chinese case-control study from Xi'an, China, and identified their contributions to astrocytoma susceptibility. We found an association with astrocytoma susceptibility for two tSNPs (rs6010620 and rs2853676) in two different genes: regulator of telomere elongation helicase 1 (RTEL1) and telomerase reverse transcriptase (TERT), respectively. We confirmed our results using recessive, dominant, and additive models. In the recessive model, we found two tSNPs (rs2297440 and rs6010620) associated with increased astrocytoma risk. In the dominant model, we found that rs2853676 was associated with increased astrocytoma risk. In the additive model, all three tSNPs (rs2297440, rs2853676, and rs6010620) were associated with increased astrocytoma risk. Our results demonstrate, for the first time, the potential roles of RTEL1 and TERT in astrocytoma development.
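The recessive, dominant, and additive models referred to above differ only in how the minor-allele count is recoded before association testing. A minimal sketch (generic recoding, not the study's analysis pipeline):

```python
import numpy as np

def recode(genotypes, model):
    """Recode minor-allele counts (0, 1, 2) under a genetic model before
    regression: additive keeps the count, dominant contrasts carriers vs.
    non-carriers, recessive contrasts homozygous-minor vs. the rest."""
    g = np.asarray(genotypes)
    if model == "additive":
        return g                        # 0, 1, 2 copies
    if model == "dominant":
        return (g >= 1).astype(int)     # carrier vs non-carrier
    if model == "recessive":
        return (g == 2).astype(int)     # homozygous minor vs other
    raise ValueError(f"unknown model: {model}")
```

A tSNP can therefore reach significance under one coding but not another, which is why results such as rs2853676 (dominant) and rs6010620 (recessive) are reported per model.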
Wu, Zheyang; Zhao, Hongyu
2012-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.
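The two power definitions above can be illustrated by simulation for the simplest strategy, marginal (single-marker) search with Bonferroni control. Everything here — marker count, number of true signals, effect size — is invented for the example, and independence is assumed (no LD), unlike the analytical framework in the paper.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
m, m1, n_sim, alpha = 100, 5, 2000, 0.05   # markers, true signals, sims, level
effect = 3.0                               # z-scale effect at each true marker

z = rng.normal(0.0, 1.0, (n_sim, m))
z[:, :m1] += effect                        # first m1 markers are associated

# Bonferroni-corrected two-sided threshold for the marginal search
thresh = NormalDist().inv_cdf(1.0 - alpha / (2 * m))
hits = np.abs(z[:, :m1]) > thresh
power_any = hits.any(axis=1).mean()        # find at least one associated marker
power_all = hits.all(axis=1).mean()        # find all associated markers
```

As the paper's analytical results formalize, the "at least one" power is far more forgiving than the "find all" power under the same error control, and the gap widens with the number of true signals.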
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendon, Vrushali V.; Taylor, Zachary T.
Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features like geometry, configuration, HVAC systems, etc. Researchers solved this problem in a novel way by creating a simulation structure capable of creating fully-functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.
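A scalable template system of this kind can be sketched in a few lines. The category names and the `idf_stub` fragment below are hypothetical placeholders, not the actual PNNL prototype definitions; the point is only that 2 building types × 4 foundations × 4 heating systems expand to the thirty-two prototype combinations mentioned above:

```python
from itertools import product
from string import Template

# hypothetical category names -- the real prototypes use PNNL's own definitions
building_types = ["single-family", "multifamily"]
foundations = ["slab", "crawlspace", "heated-basement", "unheated-basement"]
heating_systems = ["gas-furnace", "electric-resistance", "heat-pump", "oil-furnace"]

# a stand-in for a scalable EnergyPlus (IDF) template fragment
idf_stub = Template("! prototype: $btype / $found / $heat\n"
                    "Building, $btype $found $heat;\n")

# expand the template over every combination to get a batch of model inputs
batch = [idf_stub.substitute(btype=b, found=f, heat=h)
         for b, f, h in product(building_types, foundations, heating_systems)]
```

In practice each combination would be expanded into a full EnergyPlus input file and dispatched to the simulation engine, but the combinatorial expansion is the core of the batch structure.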
Development of a Common Research Model for Applied CFD Validation Studies
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Dehaan, Mark A.; Rivers, S. Melissa; Wahls, Richard A.
2008-01-01
The development of a wing/body/nacelle/pylon/horizontal-tail configuration for a common research model is presented, with focus on the aerodynamic design of the wing. Here, a contemporary transonic supercritical wing design is developed with aerodynamic characteristics that are well behaved and of high performance for configurations with and without the nacelle/pylon group. The horizontal tail is robustly designed for dive Mach number conditions and is suitably sized for typical stability and control requirements. The fuselage is representative of a wide-body commercial transport aircraft; it includes a wing-body fairing, as well as a scrubbing seal for the horizontal tail. The nacelle is a single-cowl, high by-pass-ratio, flow-through design with an exit area sized to achieve a natural unforced mass-flow-ratio typical of commercial aircraft engines at cruise. The simplicity of this un-bifurcated nacelle geometry will facilitate grid generation efforts of subsequent CFD validation exercises. Detailed aerodynamic performance data have been generated for this model; however, this information is presented in such a manner as to not bias CFD predictions planned for the fourth AIAA CFD Drag Prediction Workshop, which incorporates this common research model into its blind test cases. The CFD results presented include wing pressure distributions with and without the nacelle/pylon, ML/D trend lines, and drag-divergence curves; the design point for the wing/body configuration is within 1% of its max-ML/D. Plans to test the common research model in the National Transonic Facility and the Ames 11-ft wind tunnels are also discussed.
ERIC Educational Resources Information Center
Teo, Timothy; Tan, Lynde
2012-01-01
This study applies the theory of planned behavior (TPB), a theory that is commonly used in commercial settings, to the educational context to explain pre-service teachers' technology acceptance. It is also interested in examining its validity when used for this purpose. It has found evidence that the TPB is a valid model to explain pre-service…
NASA Astrophysics Data System (ADS)
Uno, Takanori; Ichikawa, Kouji; Mabuchi, Yuichi; Nakamura, Atsushi; Okazaki, Yuji; Asai, Hideki
In this paper, we studied a common-mode noise reduction technique for in-vehicle electronic equipment in an actual instrument design. We improved the circuit model of the common-mode noise that flows into the wire harness by adding the effect of a bypass capacitor located near the LSI. We analyzed the improved circuit model with a circuit simulator and verified the effectiveness of the noise reduction condition derived from it. It was also confirmed that offsetting the impedance mismatch in the PCB section requires a larger circuit constant than offsetting the mismatch in the LSI section. An evaluation circuit board incorporating an automotive microcomputer was prototyped to test the common-mode noise reduction, and the experiments confirmed the expected effect. They also revealed that the degree of impedance mismatch in the LSI section can be estimated using a PCB of known impedance. We further examined the optimization of impedance parameters, which is difficult for actual products at present. To satisfy the noise reduction condition, which involves numerous parameters, we proposed a design method combining an optimization algorithm with an electromagnetic field simulator, and confirmed its effectiveness.
Design and modeling balloon-expandable coronary stent for manufacturability
NASA Astrophysics Data System (ADS)
Suryawan, D.; Suyitno
2017-02-01
Coronary artery disease (CAD) is a disease caused by narrowing of the coronary artery. The narrowing is usually caused by a cholesterol-containing deposit (plaque), which can lead to a heart attack. CAD is the most common cause of mortality in Indonesia. The most common CAD treatment uses a stent to open or widen the narrowed coronary artery. In this study, the stent design is optimized for manufacturability. Modeling is used to determine the free stent expansion under pressure applied to the inner surface of the stent. The stress distribution, change in outer diameter, and dogboning phenomenon are investigated in the simulation. The modeling and simulation results were analyzed and used to optimize the stent design before it is manufactured using EDM (Electric Discharge Machine) in the next research.
The Common Factors Discrimination Model: An Integrated Approach to Counselor Supervision
ERIC Educational Resources Information Center
Crunk, A. Elizabeth; Barden, Sejal M.
2017-01-01
Numerous models of clinical supervision have been developed; however, there is little empirical support indicating that any one model is superior. Therefore, common factors approaches to supervision integrate essential components that are shared among counseling and supervision models. The purpose of this paper is to present an innovative model of…
Mouratiadou, Ioanna; Russell, Graham; Topp, Cairistiona; Louhichi, Kamel; Moran, Dominic
2010-01-01
Selecting cost-effective measures to regulate agricultural water pollution to conform to the Water Framework Directive presents multiple challenges. A bio-economic modelling approach is presented that has been used to explore the water quality and economic effects of the 2003 Common Agricultural Policy Reform and to assess the cost-effectiveness of input quotas and emission standards against nitrate leaching, in a representative case study catchment in Scotland. The approach combines a biophysical model (NDICEA) with a mathematical programming model (FSSIM-MP). The results indicate only small changes due to the Reform, with the main shifts in farmers' decision making and in the associated economic and water quality indicators depending on crop price changes. They also suggest targeting fertilisation to crop and soil requirements, as opposed to measures aimed at farm total or average nitrogen use.
NASA Common Research Model Test Envelope Extension With Active Sting Damping at NTF
NASA Technical Reports Server (NTRS)
Rivers, Melissa B.; Balakrishna, S.
2014-01-01
The NASA Common Research Model (CRM) high Reynolds number transonic wind tunnel testing program was established to generate an experimental database for applied Computational Fluid Dynamics (CFD) validation studies. During transonic wind tunnel tests, the CRM encounters large sting vibrations when the angle of attack approaches the second pitching moment break, which can sometimes become divergent. CRM transonic test data analysis suggests that sting divergent oscillations are related to negative net sting damping episodes associated with flow separation instability. The National Transonic Facility (NTF) has been addressing remedies to extend polar testing up to and beyond the second pitching moment break point of the test articles using an active piezoceramic damper system for both ambient and cryogenic temperatures. This paper reviews CRM test results to gain understanding of sting dynamics with a simple model describing the mechanics of a sting-model system and presents the performance of the damper under cryogenic conditions.
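The divergence mechanism described above can be illustrated with a minimal single-degree-of-freedom sting model. The frequency, damping ratios, and time step below are assumed values, not NTF data; the sketch only shows that a negative net damping ratio turns a decaying oscillation into a growing one:

```python
import numpy as np

def final_amplitude(zeta, omega=2 * np.pi * 5.0, t_end=2.0, dt=1e-3):
    """Semi-implicit Euler for x'' + 2*zeta*omega*x' + omega**2 * x = 0,
    started from unit displacement; returns the final oscillation amplitude."""
    x, v = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (-2.0 * zeta * omega * v - omega**2 * x)
        x += dt * v
    return np.hypot(x, v / omega)   # amplitude of the (x, v/omega) phasor

decaying = final_amplitude(zeta=+0.02)   # positive net damping: oscillation dies out
divergent = final_amplitude(zeta=-0.02)  # negative net damping: oscillation grows
```

An active piezoceramic damper effectively adds a positive increment to the net damping ratio, pulling the system back to the decaying regime even when the aerodynamic contribution goes negative.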
ERIC Educational Resources Information Center
Wei, Youhua; Morgan, Rick
2016-01-01
As an alternative to common-item equating when common items do not function as expected, the single-group growth model (SGGM) scaling uses common examinees or repeaters to link test scores on different forms. The SGGM scaling assumes that, for repeaters taking adjacent administrations, the conditional distribution of scale scores in later…
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.
2018-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.
Life Support Baseline Values and Assumptions Document
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.
2015-01-01
The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.
Kalman filter techniques for accelerated Cartesian dynamic cardiac imaging.
Feng, Xue; Salerno, Michael; Kramer, Christopher M; Meyer, Craig H
2013-05-01
In dynamic MRI, spatial and temporal parallel imaging can be exploited to reduce scan time. Real-time reconstruction enables immediate visualization during the scan. Commonly used view-sharing techniques suffer from limited temporal resolution, and many of the more advanced reconstruction methods are either retrospective, time-consuming, or both. A Kalman filter model capable of real-time reconstruction can be used to increase the spatial and temporal resolution in dynamic MRI reconstruction. The original study describing the use of the Kalman filter in dynamic MRI was limited to non-Cartesian trajectories because of a limitation intrinsic to the dynamic model used in that study. Here the limitation is overcome, and the model is applied to the more commonly used Cartesian trajectory with fast reconstruction. Furthermore, a combination of the Kalman filter model with Cartesian parallel imaging is presented to further increase the spatial and temporal resolution and signal-to-noise ratio. Simulations and experiments were conducted to demonstrate that the Kalman filter model can increase the temporal resolution of the image series compared with view-sharing techniques and decrease the spatial aliasing compared with TGRAPPA. The method requires relatively little computation, and thus is suitable for real-time reconstruction.
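The predict/update cycle behind such a reconstruction can be sketched with a scalar Kalman filter. This is not the paper's Cartesian k-space model: it tracks a synthetic random-walk signal with assumed noise variances, purely to show why the filter suppresses measurement noise causally, one sample at a time, which is what makes real-time use possible:

```python
import numpy as np

rng = np.random.default_rng(1)

# random-walk state with noisy measurements (assumed toy model, not k-space data)
q, r = 0.01, 0.25                      # process / measurement noise variances
truth = np.cumsum(rng.normal(0.0, np.sqrt(q), 400))
z = truth + rng.normal(0.0, np.sqrt(r), 400)

x, p = 0.0, 1.0                        # state estimate and its variance
est = []
for zk in z:
    p = p + q                          # predict: variance grows by process noise
    k = p / (p + r)                    # Kalman gain
    x = x + k * (zk - x)               # update with the measurement innovation
    p = (1.0 - k) * p
    est.append(x)
est = np.array(est)
```

Each sample needs only a handful of arithmetic operations, in line with the paper's observation that the method "requires relatively little computation."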
Darcet, Flavie; Gardier, Alain M.; Gaillard, Raphael; David, Denis J.; Guilloux, Jean-Philippe
2016-01-01
Major Depressive Disorder (MDD) is the most common psychiatric disease, affecting millions of people worldwide. In addition to the well-defined depressive symptoms, patients suffering from MDD consistently complain about cognitive disturbances, significantly exacerbating the burden of this illness. Among cognitive symptoms, impairments in attention, working memory, learning and memory or executive functions are often reported. However, available data about the heterogeneity of MDD patients and magnitude of cognitive symptoms through the different phases of MDD remain difficult to summarize. Thus, the first part of this review briefly overviews clinical studies, focusing on the cognitive dysfunctions depending on the MDD type. As animal models are essential translational tools for probing the mechanisms of cognitive deficits in MDD, the second part of this review synthesizes preclinical studies observing cognitive deficits in different rodent models of anxiety/depression. For each cognitive domain, we determined whether deficits could be shared across models. Particularly, we established whether specific stress-related procedures or unspecific criteria (such as species, sex or age) could segregate common cognitive alteration across models. Finally, the role of adult hippocampal neurogenesis in rodents in cognitive dysfunctions during MDD state was also discussed. PMID:26901205
Developmental Associations Between Adolescent Alcohol Use and Dating Aggression
Reyes, H. Luz McNaughton; Foshee, Vangie A.; Bauer, Daniel J.; Ennett, Susan T.
2012-01-01
While numerous studies have established a link between alcohol use and partner violence in adulthood, little research has examined this relation during adolescence. The current study used multivariate growth models to examine relations between alcohol use and dating aggression across grades 8 through 12 controlling for shared risk factors (common causes) that predict both behaviors. Associations between trajectories of alcohol use and dating aggression were reduced substantially when common causes were controlled. Concurrent associations between the two behaviors were significant across nearly all grades but no evidence was found for prospective connections from prior alcohol use to subsequent dating aggression or vice versa. Findings suggest that prevention efforts should target common causes of alcohol use and dating aggression. PMID:23589667
Ota, Miho; Ogawa, Shintaro; Kato, Koichi; Masuda, Chiaki; Kunugi, Hiroshi
2015-12-01
Previous studies have demonstrated that patients with schizophrenia show greater sensitivity to psychostimulants than healthy subjects. Sensitization to psychostimulants and resultant alteration of dopaminergic neurotransmission in rodents has been suggested as a useful model of schizophrenia. This study sought to examine the use of methylphenidate as a psychostimulant to induce dopamine release and that of [(18)F]fallypride as a radioligand to quantify the release in a primate model of schizophrenia. Four common marmosets were scanned by positron emission tomography twice, before and after methylphenidate challenge, to evaluate dopamine release. Four other marmosets were sensitized by repeated methamphetamine (MAP) administration. Then, they were scanned twice, before and after methylphenidate challenge, to evaluate whether MAP-sensitization induced greater sensitivity to methylphenidate. We revealed a main effect of the methylphenidate challenge but not the MAP pretreatment on the striatal binding potential. These results suggest that methylphenidate-induced striatal dopamine release in the common marmoset could be evaluated by [(18)F]fallypride.
Yurttas, Veysel; Şereflican, Murat; Erkoçoğlu, Mustafa; Terzi, Elçin Hakan; Kükner, Aysel; Oral, Mesut
2015-08-01
Allergic rhinitis is one of the most common health problems and has a major effect on quality of life. Although new-generation antihistamines and nasal steroids are the main treatment options, complete resolution cannot be obtained in some patients. Besides common side effects such as nasal irritation and epistaxis, the use of these drugs is controversial in some patients, such as pregnant or breastfeeding women. These findings highlight the need for new treatment options. Although phototherapy has been successfully used in the treatment of atopic dermatitis, which is an IgE-mediated disease and shares several common pathogenic features with allergic rhinitis, there are limited studies about its role in the treatment of allergic rhinitis. In this study, we aimed to evaluate and compare the histopathological effects of intranasal phototherapy (Rhinolight) and nasal corticosteroid treatment on the nasal mucosa in allergic rhinitis in a rabbit model. We found that both treatment options significantly reduced inflammation in the nasal mucosa without increasing apoptosis of mucosal cells.
Impact of excipient interactions on solid dosage form stability.
Narang, Ajit S; Desai, Divyakant; Badawy, Sherif
2012-10-01
Drug-excipient interactions in solid dosage forms can affect drug product stability in physical aspects such as organoleptic changes and dissolution slowdown, or chemically by causing drug degradation. Recent research has allowed a distinction between chemical instability resulting from direct drug-excipient interactions and that resulting from drug interactions with excipient impurities. A review of chemical instability in solid dosage forms highlights common mechanistic themes applicable to multiple degradation pathways. These common themes include the role of water and microenvironmental pH. In addition, special aspects of solid-state reactions with excipients and/or excipient impurities add to the complexity in understanding and modeling reaction pathways. This paper discusses the mechanistic basis of known drug-excipient interactions with case studies and provides an overview of common underlying themes. Recent developments in the understanding of degradation pathways further impact methodologies used in the pharmaceutical industry for prospective stability assessment. This paper discusses these emerging aspects in terms of limitations of drug-excipient compatibility studies, emerging paradigms in accelerated stability testing, and application of mathematical modeling for prediction of drug product stability.
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
Tuu, Ho Huy; Olsen, Svein Ottar; Thao, Duong Tri; Anh, Nguyen Thi Kim
2008-11-01
The purpose of this study is to apply the conceptual framework of the theory of planned behavior (TPB) to explain the consumption of a common food (fish) in Vietnam. We seek to understand the role of norms in explaining intention to consume, and descriptive norms are included as an extension of traditional constructs such as attitude, social norms, and perceived behavioral control. The data were derived from a cross-sectional sample of 612 consumers. Structural equation modeling was applied to test the relationships between constructs, and evaluate the reliability and the validity of the constructs. The results indicate that the models fit the data well. Attitude, social norms, descriptive norms and behavioral control all had significant positive effects on behavioral intention. Finally, both intention and perceived behavioral control were highly associated with the frequency of consumption of the common food investigated.
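The structural part of such a TPB model reduces, in sketch form, to regressing intention on the four predictors. The data below are synthetic with assumed effect sizes (only the sample size n = 612 is taken from the study); a real analysis would use structural equation modeling with latent constructs, not plain least squares on observed scores:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 612                                 # sample size taken from the study
# synthetic standardized predictors: attitude, social norms,
# descriptive norms, perceived behavioral control
X = rng.normal(size=(n, 4))
true_b = np.array([0.40, 0.20, 0.15, 0.25])   # illustrative effect sizes only
intention = X @ true_b + rng.normal(0.0, 0.5, n)

# ordinary least squares with an intercept column
Xd = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(Xd, intention, rcond=None)[0]
```

Recovering all four slopes as positive mirrors the paper's finding that every TPB construct had a significant positive effect on intention.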
Zhu, Chuankun; Tong, Jingou; Yu, Xiaomu; Guo, Wenjie
2015-08-01
Comparative mapping provides an efficient method to connect genomes of non-model and model fishes. In this study, we used flanking sequences of the 659 microsatellites on a genetic map of bighead carp (Aristichthys nobilis) to comprehensively study syntenic relationships between bighead carp and nine model and non-model fishes. Of the five model and two food fishes with whole-genome data, Cyprinus carpio showed the highest rate of positive BLAST hits (95.3 %) with the bighead carp map, followed by Danio rerio (70.9 %), Oreochromis niloticus (21.7 %), Tetraodon nigroviridis (6.4 %), Gasterosteus aculeatus (5.2 %), Oryzias latipes (4.7 %) and Fugu rubripes (3.5 %). Chromosomal syntenic analyses showed that inversion was the basic chromosomal rearrangement during genomic evolution of cyprinids, and the extent of inversions and translocations was found to be positively correlated with evolutionary relationships among the fishes studied. Among the five investigated cyprinids, linkage groups (LGs) of bighead carp, Hypophthalmichthys molitrix and Ctenopharyngodon idella exhibited a one-to-one relationship. In addition, LG 9 of bighead carp and the homologous LGs of silver carp and grass carp all corresponded to chromosomes 10 and 22 of zebrafish, suggesting that chromosomal fission may have occurred in the ancestor of zebrafish. On the other hand, LGs of bighead carp and common carp showed an approximate one-to-two relationship with extensive translocations, confirming the occurrence of a fourth whole-genome duplication in common carp. This study provides insights into the understanding of genome evolution among cyprinids and would aid in transferring positional and functional information of genes from model fish like zebrafish to non-model fish like bighead carp.
Karasz, Alison; Patel, Viraj; Kabita, Mahbhooba; Shimu, Parvin
2015-01-01
Background Though common mental disorder (CMD) is highly prevalent among South Asian immigrant women, they rarely seek mental health treatment. This may be due in part to the lack of conceptual synchrony between medical models of mental disorder and the social models of distress common in South Asian communities. Furthermore, common mental health screening and diagnostic measures may not adequately capture distress in this group. Community-based participatory research (CBPR) is ideally suited to help address measurement issues in CMD as well as develop culturally appropriate treatment models. Objectives To use participatory methods to identify an appropriate, culturally specific mental health syndrome and develop an instrument to measure this syndrome. Methods We formed a partnership between researchers, clinicians, and community members. The partnership selected a culturally specific model of emotional distress/illness, “Tension,” as a focus for further study. Partners developed a scale to measure Tension and tested the new scale on 162 Bangladeshi immigrant women living in the Bronx. Results The 24-item “Tension Scale” had high internal consistency (alpha = 0.83). In bivariate analysis, the scale significantly correlated in the expected direction with depression as measured by the PHQ-2, age, education, self-rated health, having seen a physician in the past year, and other variables. Conclusions Using participatory techniques, we created a new measure designed to assess common mental disorder in an isolated immigrant group. The new measure shows excellent psychometric properties and will be helpful in the implementation of a community-based, culturally synchronous intervention for depression. We describe a useful strategy for the rapid development and field testing of culturally appropriate measures of mental distress and disorder. PMID:24375184
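The internal-consistency figure reported above (alpha = 0.83) is Cronbach's alpha, which is straightforward to compute from the item-response matrix. A minimal sketch, checked against a toy dataset of two perfectly parallel items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# two perfectly parallel items -> alpha == 1.0
alpha_perfect = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

For the real 24-item Tension Scale, `items` would be the 162 × 24 matrix of responses.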
Chan, King-Pan; Chan, Kwok-Hung; Wong, Wilfred Hing-Sang; Peiris, J. S. Malik; Wong, Chit-Ming
2011-01-01
Background Reliable estimates of disease burden associated with respiratory viruses are key to the deployment of preventive strategies such as vaccination and resource allocation. Such estimates are particularly needed in tropical and subtropical regions where some methods commonly used in temperate regions are not applicable. While a number of alternative approaches to assess the influenza associated disease burden have been recently reported, none of these models have been validated with virologically confirmed data. Even fewer methods have been developed for other common respiratory viruses such as respiratory syncytial virus (RSV), parainfluenza and adenovirus. Methods and Findings We had recently conducted a prospective population-based study of virologically confirmed hospitalization for acute respiratory illnesses in persons <18 years residing in Hong Kong Island. Here we used this dataset to validate two commonly used models for estimation of influenza disease burden, namely the rate difference model and Poisson regression model, and also explored the applicability of these models to estimate the disease burden of other respiratory viruses. The Poisson regression models with different link functions all yielded estimates well correlated with the virologically confirmed influenza associated hospitalization, especially in children older than two years. The disease burden estimates for RSV, parainfluenza and adenovirus were less reliable with wide confidence intervals. The rate difference model was not applicable to RSV, parainfluenza and adenovirus and grossly underestimated the true burden of influenza associated hospitalization. Conclusion The Poisson regression model generally produced satisfactory estimates in calculating the disease burden of respiratory viruses in a subtropical region such as Hong Kong. PMID:21412433
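The rate difference model evaluated above is simple to state: excess burden equals the difference between epidemic-period and baseline-period hospitalization rates, multiplied by the epidemic duration. A sketch with assumed weekly counts (not Hong Kong data) shows the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# assumed weekly hospitalization rates, for illustration only
base_rate, excess_rate = 20.0, 15.0
base_weeks, epi_weeks = 40, 12
baseline = rng.poisson(base_rate, base_weeks)
epidemic = rng.poisson(base_rate + excess_rate, epi_weeks)

# rate difference model: excess burden = (epidemic rate - baseline rate) * duration
excess_est = (epidemic.mean() - baseline.mean()) * epi_weeks   # true value here: 180
```

The method works only when the virus circulates in well-defined epidemic periods against a clean baseline, which is why the paper finds it inapplicable to viruses like RSV that circulate year-round in subtropical settings.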
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
2017-01-01
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling. The Homogeneous Relaxation Model is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) have been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
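The Homogeneous Relaxation Model closes the phase-change rate with a single relaxation law, dx/dt = (x_eq - x)/theta, where x is the instantaneous vapor quality, x_eq the equilibrium quality, and theta the relaxation time scale. In full HRM, theta itself depends on pressure and void fraction through an empirical correlation; the sketch below holds it constant for illustration and checks an explicit-Euler integration against the exponential analytic solution:

```python
import numpy as np

def hrm_quality(x0, x_eq, theta, t_end, dt=1e-6):
    """Explicit-Euler integration of the HRM relaxation law
    dx/dt = (x_eq - x) / theta, with x the instantaneous vapor quality."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * (x_eq - x) / theta
    return x

theta, t_end = 5e-5, 2e-4        # illustrative relaxation time scale [s]
x_num = hrm_quality(0.0, 0.3, theta, t_end)
x_exact = 0.3 * (1.0 - np.exp(-t_end / theta))   # analytic solution
```

The sensitivity the paper documents enters through the empirical coefficients inside theta: a shorter relaxation time drives the mixture toward equilibrium faster and predicts more vigorous flashing.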
Cross-cultural perspectives on physician and lay models of the common cold.
Baer, Roberta D; Weller, Susan C; de Alba García, Javier García; Rocha, Ana L Salcedo
2008-06-01
We compare physicians and laypeople within and across cultures, focusing on similarities and differences across samples, to determine whether cultural differences or lay-professional differences have a greater effect on explanatory models of the common cold. Data on explanatory models for the common cold were collected from physicians and laypeople in South Texas and Guadalajara, Mexico. Structured interview materials were developed on the basis of open-ended interviews with samples of lay informants at each locale. A structured questionnaire was used to collect information from each sample on causes, symptoms, and treatments for the common cold. Consensus analysis was used to estimate the cultural beliefs for each sample. Instead of systematic differences between samples based on nationality or level of professional training, all four samples largely shared a single explanatory model of the common cold, with some differences on subthemes, such as the role of hot and cold forces in the etiology of the common cold. An evaluation of our findings indicates that, although there has been conjecture about whether cultural or lay-professional differences are of greater importance in understanding variation in explanatory models of disease and illness, systematic data collected on community and professional beliefs indicate that such differences may be a function of the specific illness. Further generalizations about lay-professional differences need to be based on detailed data for a variety of illnesses, to discern patterns that may be present. Finally, a systematic approach indicates that agreement across individual explanatory models is sufficient to allow for a community-level explanatory model of the common cold.
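Consensus analysis, used above to estimate cultural beliefs, can be sketched for the simplest case of true/false items: inter-informant agreement, corrected for chance, factors into the product of informant "competences," which can then be recovered from triads of informants. Everything below (item count, competence values, the triad estimator) is a simplified illustration of the formal model, not the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(7)

L = 400                                        # simulated true/false items
d_true = np.array([0.9, 0.7, 0.5, 0.8])        # informant "competences"
key = rng.integers(0, 2, L)                    # hidden culturally correct answers

# each informant gives the key with probability d, otherwise flips a coin
answers = np.array([np.where(rng.random(L) < d, key, rng.integers(0, 2, L))
                    for d in d_true])

n = len(d_true)
m = np.array([[np.mean(answers[i] == answers[j]) for j in range(n)]
              for i in range(n)])
a = 2.0 * m - 1.0          # chance-corrected agreement: a_ij ~ d_i * d_j

# recover each competence from triads: d_i = sqrt(a_ij * a_ik / a_jk)
d_hat = np.zeros(n)
for i in range(n):
    vals = [np.sqrt(max(a[i, j] * a[i, k], 0.0) / a[j, k])
            for j in range(n) for k in range(n)
            if len({i, j, k}) == 3 and a[j, k] > 0]
    d_hat[i] = np.mean(vals)
```

High agreement dominated by a single factor of competences is exactly the "one culture" condition that lets a sample's answers be aggregated into a community-level explanatory model.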
How important are rare variants in common disease?
Saint Pierre, Aude; Génin, Emmanuelle
2014-09-01
Genome-wide association studies have uncovered hundreds of common genetic variants involved in complex diseases. However, for most complex diseases, these common genetic variants only marginally contribute to disease susceptibility. It is now argued that rare variants located in different genes could in fact play a more important role in disease susceptibility than common variants. These rare genetic variants were not captured by genome-wide association studies using single nucleotide polymorphism-chips but with the advent of next-generation sequencing technologies, they have become detectable. It is now possible to study their contribution to common disease by resequencing samples of cases and controls or by using new genotyping exome arrays that cover rare alleles. In this review, we address the question of the contribution of rare variants in common disease by taking the examples of different diseases for which some resequencing studies have already been performed, and by summarizing the results of simulation studies conducted so far to investigate the genetic architecture of complex traits in human. So far, empirical data have not allowed the exclusion of many models except the most extreme ones involving only a small number of rare variants with large effects contributing to complex disease. To unravel the genetic architecture of complex disease, case-control data will not be sufficient, and alternative study designs need to be proposed together with methodological developments. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
A View of Studies on Bibliometrics and Related Subjects in Japan.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki; And Others
1989-01-01
Surveys studies on bibliometrics and related subjects in Japan, classifying them into studies on bibliometrics and applications of bibliometrics. Examines applications of fuzzy set theory to document retrieval using bibliometric techniques. Emphasizes the models and methods used in common between bibliometrics and other fields of science. (SR)
Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.
Pournasr, Behshad; Duncan, Stephen A
2017-11-01
Inborn errors of hepatic metabolism are commonly caused by deficiencies of a single enzyme resulting from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.
Rodent models of glaucoma and their applicability for drug discovery.
Agarwal, Renu; Agarwal, Puneet
2017-03-01
Rodents have widely been used to represent glaucomatous changes both in the presence and absence of elevated intraocular pressure (IOP) as they offer clear advantages over other animal species. IOP elevation is commonly achieved by creating an obstruction in the aqueous outflow pathways, consequently leading to retinal ganglion cell and optic nerve (ON) damage, the hallmark of glaucoma. These changes may also be achieved in the absence of elevated IOP by directly inflicting injury to the retina or ON. Areas covered: This paper presents a summary of currently used rodent models of glaucoma. The characteristics of these models from several studies are summarized. The benefits and shortcomings of these models are also discussed. Expert opinion: The choice of an animal model that closely represents human disease is key to the successful translation of preclinical research into clinical practice. Rodent models of rapid IOP elevation are likely to be least representative, whereas models such as steroid-induced glaucoma models more closely resemble the trabecular meshwork changes seen in glaucomatous human eyes. However, this model needs further characterization. Rodent models based on direct retinal and ON injury are also useful tools to investigate molecular mechanisms involved at the site of final common pathology and neuroprotective strategies.
Hempel, Annemarie; Kühl, Michael
2016-01-01
The African clawed frog, Xenopus, is a valuable non-mammalian model organism to investigate vertebrate heart development and to explore the underlying molecular mechanisms of human congenital heart defects (CHDs). In this review, we outline the similarities between Xenopus and mammalian cardiogenesis, and provide an overview of well-studied cardiac genes in Xenopus, which have been associated with congenital heart conditions. Additionally, we highlight advantages of modeling candidate genes derived from genome wide association studies (GWAS) in Xenopus and discuss commonly used techniques. PMID:29367567
The Future of Planetary Climate Modeling and Weather Prediction
NASA Technical Reports Server (NTRS)
Del Genio, A. D.; Domagal-Goldman, S. D.; Kiang, N. Y.; Kopparapu, R. K.; Schmidt, G. A.; Sohl, L. E.
2017-01-01
Modeling of planetary climate and weather has followed the development of tools for studying Earth, with lags of a few years. Early Earth climate studies were performed with 1-dimensional radiative-convective models, which were soon followed by similar models for the climates of Mars and Venus and eventually by similar models for exoplanets. 3-dimensional general circulation models (GCMs) became common in Earth science soon after and within several years were applied to the meteorology of Mars, but it was several decades before a GCM was used to simulate extrasolar planets. Recent trends in Earth weather and climate modeling serve as a useful guide to how modeling of Solar System and exoplanet weather and climate will evolve in the coming decade.
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has widely been adopted in chemometrics and econometrics) in RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
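The LOO-versus-MCCV contrast described in the abstract can be sketched with ordinary least squares standing in for the paper's Generalised Least Squares Regression (a simplifying assumption); the synthetic data, split fraction, and repeat count below are illustrative choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "regional" data: response linear in two catchment descriptors.
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.4, size=n)

def ols_predict(X_tr, y_tr, X_te):
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

# Leave-one-out (LOO): each site is left out exactly once.
loo_err = []
for i in range(n):
    mask = np.arange(n) != i
    pred = ols_predict(X[mask], y[mask], X[i:i + 1])
    loo_err.append((y[i] - pred[0]) ** 2)
loo_mse = np.mean(loo_err)

# Monte Carlo cross-validation (MCCV): repeatedly leave out a random subset,
# here roughly a third of the sites, over many random splits.
mccv_err = []
for _ in range(200):
    test = rng.choice(n, size=n // 3, replace=False)
    train = np.setdiff1d(np.arange(n), test)
    pred = ols_predict(X[train], y[train], X[test])
    mccv_err.extend((y[test] - pred) ** 2)
mccv_mse = np.mean(mccv_err)

print(loo_mse, mccv_mse)
```

Because MCCV trains on a smaller fraction of the data each time, it penalizes overfitted models more heavily, which is why it tends to select the more parsimonious model.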
Zebrafish and Streptococcal Infections.
Saralahti, A; Rämet, M
2015-09-01
Streptococcal bacteria are a versatile group of gram-positive bacteria capable of infecting several host organisms, including humans and fish. Streptococcal species are common colonizers of the human respiratory and gastrointestinal tract, but they also cause some of the most common life-threatening, invasive infections in humans and aquaculture. With its unique characteristics and efficient tools for genetic and imaging applications, the zebrafish (Danio rerio) has emerged as a powerful vertebrate model for infectious diseases. Several zebrafish models introduced so far have shown that zebrafish are suitable models for both zoonotic and human-specific infections. Recently, several zebrafish models mimicking human streptococcal infections have also been developed. These models show great potential in providing novel information about the pathogenic mechanisms and host responses associated with human streptococcal infections. Here, we review the zebrafish infection models for the most relevant streptococcal species: the human-specific Streptococcus pneumoniae and Streptococcus pyogenes, and the zoonotic Streptococcus iniae and Streptococcus agalactiae. The recent success and the future potential of these models for the study of host-pathogen interactions in streptococcal infections are also discussed. © 2015 The Foundation for the Scandinavian Journal of Immunology.
NASA Astrophysics Data System (ADS)
Hall, Michael L.; Doster, J. Michael
1990-03-01
The dynamic behavior of liquid metal heat pipe models is strongly influenced by the choice of evaporation and condensation modeling techniques. Classic kinetic theory descriptions of the evaporation and condensation processes are often inadequate for real situations; empirical accommodation coefficients are commonly utilized to reflect nonideal mass transfer rates. The complex geometries and flow fields found in proposed heat pipe systems cause considerable deviation from the classical models. The THROHPUT code, which has been described in previous works, was developed to model transient liquid metal heat pipe behavior from frozen startup conditions to steady state full power operation. It is used here to evaluate the sensitivity of transient liquid metal heat pipe models to the choice of evaporation and condensation accommodation coefficients. Comparisons are made with experimental liquid metal heat pipe data. It is found that heat pipe behavior can be predicted with the proper choice of the accommodation coefficients. However, the common assumption of spatially constant accommodation coefficients is found to be a limiting factor in the model.
Common EEG features for behavioral estimation in disparate, real-world tasks.
Touryan, Jon; Lance, Brent J; Kerick, Scott E; Ries, Anthony J; McDowell, Kaleb
2016-02-01
In this study we explored the potential for capturing the behavioral dynamics observed in real-world tasks from concurrent measures of EEG. In doing so, we sought to develop models of behavior that would enable the identification of common cross-participant and cross-task EEG features. To accomplish this we had participants perform both simulated driving and guard duty tasks while we recorded their EEG. For each participant we developed models to estimate their behavioral performance during both tasks. Sequential forward floating selection was used to identify the montage of independent components for each model. Linear regression was then used on the combined power spectra from these independent components to generate a continuous estimate of behavior. Our results show that oscillatory processes, evidenced in EEG, can be used to successfully capture slow fluctuations in behavior in complex, multi-faceted tasks. The average correlation coefficients between the actual and estimated behavior were 0.548 ± 0.117 and 0.701 ± 0.154 for the driving and guard duty tasks respectively. Interestingly, through a simple clustering approach we were able to identify a number of common components, both neural and eye-movement related, across participants and tasks. We used these component clusters to quantify the relative influence of common versus participant-specific features in the models of behavior. These findings illustrate the potential for estimating complex behavioral dynamics from concurrent measures of EEG using a finite library of universal features. Published by Elsevier B.V.
Evaluation and intercomparison of five major dry deposition ...
Dry deposition of various pollutants needs to be quantified in air quality monitoring networks as well as in chemical transport models. The inferential method is the most commonly used approach in which the dry deposition velocity (Vd) is empirically parameterized as a function of meteorological and biological conditions and pollutant species’ chemical properties. Earlier model intercomparison studies suggested that existing dry deposition algorithms produce quite different Vd values, e.g., up to a factor of 2 for monthly to annual average values for ozone, and sulfur and nitrogen species (Flechard et al., 2011; Schwede et al., 2011; Wu et al., 2011). To further evaluate model discrepancies using available flux data, this study compared the five dry deposition algorithms commonly used in North America and evaluated the models using five-year Vd(O3) and Vd(SO2) data generated from concentration gradient measurements above a temperate mixed forest in Canada. The five algorithms include: (1) the one used in the Canadian Air and Precipitation Monitoring Network (CAPMoN) and several Canadian air quality models based on Zhang et al. (2003), (2) the one used in the US Clean Air Status and Trends Network (CASTNET) based on Meyers et al. (1998), (3) the one used in the Community Multiscale Air Quality (CMAQ) model described in Pleim and Ran (2011), (4) the Noah land surface model coupled with a photosynthesis-based Gas Exchange Model (Noah-GEM) described in Wu et a
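The inferential method the abstract refers to typically parameterizes the dry deposition velocity with a serial-resistance analogy, Vd = 1/(Ra + Rb + Rc), where Ra is the aerodynamic resistance, Rb the quasi-laminar boundary-layer resistance, and Rc the surface (canopy) resistance. The sketch below uses this standard form with purely illustrative resistance values, not parameters from any of the five algorithms compared in the study.

```python
def deposition_velocity(ra, rb, rc):
    """Dry deposition velocity (m/s) from three serial resistances (s/m)."""
    return 1.0 / (ra + rb + rc)

# Order-of-magnitude daytime values over a forest (illustrative only):
vd_day = deposition_velocity(ra=30.0, rb=20.0, rc=100.0)

# At night, stable stratification and closed stomata raise the resistances,
# which lowers Vd -- one reason algorithms diverge most at such times.
vd_night = deposition_velocity(ra=200.0, rb=50.0, rc=400.0)

print(vd_day, vd_night)
```

Model intercomparisons largely come down to how each algorithm parameterizes Rc (stomatal, cuticular, and ground components) as a function of meteorology, vegetation, and the pollutant's chemical properties.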
3D Printed Surgical Simulation Models as educational tool by maxillofacial surgeons.
Werz, S M; Zeichner, S J; Berg, B-I; Zeilhofer, H-F; Thieringer, F
2018-02-26
The aim of this study was to evaluate whether inexpensive 3D models can be suitable for training surgical skills in dental students or oral and maxillofacial surgery residents. Furthermore, we wanted to know which of the most common filament materials, acrylonitrile butadiene styrene (ABS) or polylactic acid (PLA), can better simulate human bone according to surgeons' subjective perceptions. Upper and lower jaw models were produced with common 3D desktop printers, ABS and PLA filament, and silicone rubber for soft tissue simulation. Those models were given to 10 blinded, experienced maxillofacial surgeons to perform sinus lift and wisdom teeth extraction. Evaluation was made using a questionnaire. Because of slightly different densities and filament prices, each silicone-covered model cost between 1.40 and 1.60 USD (ABS) or 1.80 and 2.00 USD (PLA) based on 2017 material costs. Ten experienced raters took part in the study. All raters deemed the models suitable for surgical education. No significant differences between ABS and PLA were found, with both having distinct advantages. The study demonstrated that 3D printing with inexpensive printing filaments is a promising method for training oral and maxillofacial surgery residents or dental students in selected surgical procedures. With a simple and cost-efficient manufacturing process, models of actual patient cases can be produced on a small scale, simulating many kinds of surgical procedures. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Dimensions of service quality in healthcare: a systematic review of literature.
Fatima, Iram; Humayun, Ayesha; Iqbal, Usman; Shafiq, Muhammad
2018-06-13
Various dimensions of healthcare service quality have been used and discussed in the literature across the globe. This study presents an updated, meaningful review of the extensive research that has been conducted on measuring dimensions of healthcare service quality. The systematic review method in the current study is based on PRISMA guidelines. We searched for literature using databases such as Google, Google Scholar, PubMed and the Social Science Citation Index. In this study, we screened 1921 identified papers using search terms/phrases. Snowball strategies were adopted to extract published articles from January 1997 to December 2016. Two hundred and fourteen papers were identified as relevant for data extraction, which was completed by two researchers and double-checked by two others to resolve discrepancies. In total, 74 studies fulfilled our pre-defined inclusion and exclusion criteria for data analysis. Service quality is mainly measured as technical and functional, incorporating many sub-dimensions. We synthesized the information about dimensions of healthcare service quality with reference to developed and developing countries. 'Tangibility' is found to be the most common contributing factor, whereas SERVQUAL is the most commonly used model to measure healthcare service quality. There are core dimensions of healthcare service quality that are commonly found in all models used in the reviewed studies. We found little difference in these core dimensions between developed and developing countries, as SERVQUAL is mostly used as the basic model, either to generate a new one or to add further contextual dimensions. The current study ranked the contributing factors based on their frequency in the literature. If these factors are addressed according to these priorities, irrespective of context, they may contribute to improving healthcare quality and may provide important information for evidence-informed decision-making.
Students' mental models on the solubility and solubility product concept
NASA Astrophysics Data System (ADS)
Rahmi, Chusnur; Katmiati, Siti; Wiji, Mulyani, Sri
2017-05-01
This study aims to obtain information regarding the profile of students' mental models of the solubility and solubility product concept. A descriptive qualitative method was employed. The participants were grade XI students of a senior high school in Bandung. To collect the data, a diagnostic test of mental models via prediction, observation, explanation (TDM-POE) was employed. The results revealed that, for the concept of precipitate formation in a reaction, 30% of students were unable to explain precipitate formation at either the submicroscopic or symbolic level, although the microscopic level had been shown; 26% of students were able to explain precipitate formation based on the relation between Qsp and Ksp, but were unable to explain the interaction of the particles involved in the reaction or to calculate Qsp; 26% were able to explain precipitate formation based on the relation between Qsp and Ksp and determine the particles involved, but had no knowledge of the interactions that occurred and were incapable of calculating Qsp; and 18% were able to explain precipitate formation based on the relation between Qsp and Ksp and determine the interactions of the particles involved in the reactions, but were unable to calculate Qsp. Regarding the effect of adding common ions and decreasing pH on solubility, 96% of students were unable to explain these effects at either the submicroscopic or symbolic level, although the microscopic level had been shown; 4% of students were only able to explain the effect of adding common ions on solubility based on chemical equilibrium shifts and to predict the effect of decreasing pH on solubility.
However, they were not able to calculate the solubility before and after adding common ions, nor to explain it at the submicroscopic level either in terms of the shift of the solubility equilibrium or by comparing solubility calculations before and after decreasing pH. Overall, the present study showed that most students hold an incomplete mental model of the solubility and solubility product concept. Based on these findings, it is recommended that teachers improve students' learning activities.
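The Qsp-versus-Ksp reasoning the students were asked to apply can be illustrated with a short script. The AgCl system, its commonly tabulated Ksp, and the concentrations below are hypothetical worked-example values, not data from the study.

```python
import math

KSP_AGCL = 1.8e-10  # commonly tabulated Ksp of AgCl at 25 degC

def will_precipitate(conc_ag, conc_cl, ksp=KSP_AGCL):
    """A precipitate forms when the ionic product Q = [Ag+][Cl-] exceeds Ksp."""
    q = conc_ag * conc_cl
    return q > ksp

# After mixing: [Ag+] = [Cl-] = 1e-4 M gives Q = 1e-8 > Ksp, so AgCl precipitates.
print(will_precipitate(1e-4, 1e-4))

# Common-ion effect: the solubility s of AgCl satisfies s * (s + c_cl) = Ksp,
# so adding extra Cl- (e.g. from NaCl) shifts the equilibrium and lowers s.
def solubility(c_cl, ksp=KSP_AGCL):
    # Positive root of s^2 + c_cl*s - ksp = 0.
    return (-c_cl + math.sqrt(c_cl ** 2 + 4 * ksp)) / 2

s_pure = solubility(0.0)    # ~1.34e-5 M in pure water
s_common = solubility(0.1)  # ~1.8e-9 M with 0.1 M Cl- already present
print(s_pure, s_common)
```

Comparing `s_pure` and `s_common` is exactly the before/after calculation the abstract reports students could not perform.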
Common quandaries and their practical solutions in Bayesian network modeling
Bruce G. Marcot
2017-01-01
Use and popularity of Bayesian network (BN) modeling have greatly expanded in recent years, but many common problems remain. Here, I summarize key problems in BN model construction and interpretation, along with suggested practical solutions. Problems in BN model construction include parameterizing probability values, variable definition, complex network structures,...
Fitting Residual Error Structures for Growth Models in SAS PROC MCMC
ERIC Educational Resources Information Center
McNeish, Daniel
2017-01-01
In behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat small samples common with longitudinal data. Although Mplus is becoming an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
NASA Astrophysics Data System (ADS)
Cotic, M.; Chiu, A. W. L.; Jahromi, S. S.; Carlen, P. L.; Bardakjian, B. L.
2011-08-01
To study cell-field dynamics, physiologists simultaneously record local field potentials and the activity of individual cells from animals performing cognitive tasks, during various brain states or under pathological conditions. However, apart from spike shape and spike timing analyses, few studies have focused on elucidating the common time-frequency structure of local field activity relative to surrounding cells across different periods of phenomena. We have used two algorithms, multi-window time-frequency analysis and wavelet phase coherence (WPC), to study common intracellular-extracellular (I-E) spectral features in spontaneous seizure-like events (SLEs) from rat hippocampal slices in a low-magnesium epilepsy model. Both algorithms were applied to 'pairs' of simultaneously observed I-E signals from slices in the CA1 hippocampal region. Analyses were performed over a frequency range of 1-100 Hz. I-E spectral commonality varied in frequency and time. Higher commonality was observed from 1 to 15 Hz, and lower commonality was observed in the 15-100 Hz frequency range. WPC was lower in the non-SLE region compared to SLE activity; however, there was no statistical difference in the 30-45 Hz band between SLE and non-SLE modes. This work provides evidence of strong commonality in various frequency bands of I-E SLEs in the rat hippocampus, not only during SLEs but also immediately before and after.
A novel co-occurrence-based approach to predict pure associative and semantic priming.
Roelke, Andre; Franke, Nicole; Biemann, Chris; Radach, Ralph; Jacobs, Arthur M; Hofmann, Markus J
2018-03-15
The theoretical "difficulty in separating association strength from [semantic] feature overlap" has resulted in inconsistent findings of either the presence or absence of "pure" associative priming in recent literature (Hutchison, 2003, Psychonomic Bulletin & Review, 10(4), p. 787). The present study used co-occurrence statistics of words in sentences to provide a full factorial manipulation of direct association (strong/no) and the number of common associates (many/no) of the prime and target words. These common associates were proposed to serve as semantic features for a recent interactive activation model of semantic processing (i.e., the associative read-out model; Hofmann & Jacobs, 2014). With stimulus onset asynchrony (SOA) as an additional factor, our findings indicate that associative and semantic priming are indeed dissociable. Moreover, the effect of direct association was strongest at a long SOA (1,000 ms), while many common associates facilitated lexical decisions primarily at a short SOA (200 ms). This response pattern is consistent with previous performance-based accounts and suggests that associative and semantic priming can be evoked by computationally determined direct and common associations.
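The two factors manipulated above, direct association and number of common associates, can both be derived from sentence-level co-occurrence counts. The sketch below uses a toy corpus with invented words; it illustrates the kind of statistic involved, not the authors' actual corpus or thresholds.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each "sentence" is modeled as a set of content words.
sentences = [
    {"doctor", "nurse", "hospital"},
    {"doctor", "patient", "hospital"},
    {"nurse", "patient", "ward"},
    {"bread", "butter", "kitchen"},
    {"bread", "knife", "kitchen"},
]

# Count how often each word pair co-occurs within a sentence.
cooc = defaultdict(int)
for s in sentences:
    for a, b in combinations(sorted(s), 2):
        cooc[(a, b)] += 1

def direct_association(w1, w2):
    """Direct association: number of sentences containing both words."""
    a, b = sorted((w1, w2))
    return cooc[(a, b)]

def common_associates(w1, w2):
    """Words that co-occur (in some sentence) with both w1 and w2."""
    with_w1 = {w for s in sentences if w1 in s for w in s} - {w1, w2}
    with_w2 = {w for s in sentences if w2 in s for w in s} - {w1, w2}
    return with_w1 & with_w2

print(direct_association("doctor", "nurse"))  # shared-sentence count
print(common_associates("doctor", "nurse"))   # shared neighbors
```

A "strong/no direct association" by "many/no common associates" design crosses these two quantities, treating the common associates as shared semantic features.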
V. V. Rubtsov; I. A. Utkina
2003-01-01
Long-term monitoring followed by mathematical modeling was used to describe the population dynamics of the green oak leaf roller Tortrix viridana L. over a period of 30 years and to study reactions of oak stands to different levels of defoliation. The mathematical model allows us to forecast the population dynamics of the green oak leaf roller and...
A comment on priors for Bayesian occupancy models.
Northrup, Joseph M; Gerber, Brian D
2018-01-01
Understanding patterns of species occurrence and the processes underlying these patterns is fundamental to the study of ecology. One of the more commonly used approaches to investigate species occurrence patterns is occupancy modeling, which can account for imperfect detection of a species during surveys. In recent years, there has been a proliferation of Bayesian modeling in ecology, which includes fitting Bayesian occupancy models. The Bayesian framework is appealing to ecologists for many reasons, including the ability to incorporate prior information through the specification of prior distributions on parameters. While ecologists almost exclusively intend to choose priors so that they are "uninformative" or "vague", such priors can easily be unintentionally highly informative. Here we report on how the specification of a "vague" normally distributed (i.e., Gaussian) prior on coefficients in Bayesian occupancy models can unintentionally influence parameter estimation. Using both simulated data and empirical examples, we illustrate how this issue likely compromises inference about species-habitat relationships. While the extent to which these informative priors influence inference depends on the data set, researchers fitting Bayesian occupancy models should conduct sensitivity analyses to ensure intended inference, or employ less commonly used priors that are less informative (e.g., logistic or t prior distributions). We provide suggestions for addressing this issue in occupancy studies, and an online tool for exploring this issue under different contexts.
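The core point, that a "vague" Gaussian prior on the logit scale is unintentionally informative on the probability scale, can be checked with a quick simulation. The prior standard deviations below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def implied_probability_prior(sd, n_draws=100_000):
    """Draw logit-scale coefficients ~ Normal(0, sd) and map them through
    the inverse-logit to see the implied prior on occupancy probability."""
    beta = rng.normal(0.0, sd, size=n_draws)
    return 1.0 / (1.0 + np.exp(-beta))

p_vague = implied_probability_prior(sd=10.0)  # "uninformative" on the logit scale
p_mild = implied_probability_prior(sd=1.5)    # much closer to flat on probability

# With sd = 10, most prior mass piles up near 0 and 1 (a U-shaped prior),
# so the "vague" prior actually asserts extreme occupancy probabilities.
extreme_vague = np.mean((p_vague < 0.01) | (p_vague > 0.99))
extreme_mild = np.mean((p_mild < 0.01) | (p_mild > 0.99))
print(extreme_vague, extreme_mild)
```

This is the kind of prior sensitivity check the authors recommend before interpreting species-habitat coefficients from a Bayesian occupancy fit.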
Incorporating advanced language models into the P300 speller using particle filtering
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C. W.; Deshpande, A.; Knall, J.; Pouratian, N.
2015-08-01
Objective. The P300 speller is a common brain-computer interface (BCI) application designed to communicate language by detecting event related potentials in a subject’s electroencephalogram signal. Information about the structure of natural language can be valuable for BCI communication, but attempts to use this information have thus far been limited to rudimentary n-gram models. While more sophisticated language models are prevalent in natural language processing literature, current BCI analysis methods based on dynamic programming cannot handle their complexity. Approach. Sampling methods can overcome this complexity by estimating the posterior distribution without searching the entire state space of the model. In this study, we implement sequential importance resampling, a commonly used particle filtering (PF) algorithm, to integrate a probabilistic automaton language model. Main result. This method was first evaluated offline on a dataset of 15 healthy subjects, which showed significant increases in speed and accuracy when compared to standard classification methods as well as a recently published approach using a hidden Markov model (HMM). An online pilot study verified these results as the average speed and accuracy achieved using the PF method was significantly higher than that using the HMM method. Significance. These findings strongly support the integration of domain-specific knowledge into BCI classification to improve system performance.
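Sequential importance resampling can be sketched independently of the P300 application. The bootstrap particle filter below tracks a toy 1-D random walk observed in Gaussian noise rather than the authors' probabilistic-automaton language model; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# State-space model: x_t = x_{t-1} + process noise, y_t = x_t + obs noise.
T, n_particles = 50, 1000
process_sd, obs_sd = 0.5, 1.0

# Simulate a hidden trajectory and its noisy observations.
true_x = np.cumsum(rng.normal(0, process_sd, size=T))
obs = true_x + rng.normal(0, obs_sd, size=T)

particles = rng.normal(0, 1, size=n_particles)
estimates = []
for y in obs:
    # 1. Propagate particles through the state-transition model.
    particles = particles + rng.normal(0, process_sd, size=n_particles)
    # 2. Weight each particle by the observation likelihood.
    w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # 3. Resample (the "sequential importance resampling" step) so that
    #    weights do not degenerate onto a handful of particles.
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(rmse)
```

The appeal for BCI classification is step 2: any likelihood, including one from a complex language model over letter sequences, can be plugged in without the state-space enumeration that dynamic programming requires.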
The famous five factors in teamwork: a case study of fratricide.
Rafferty, Laura A; Stanton, Neville A; Walker, Guy H
2010-10-01
The purpose of this paper is to propose foundations for a theory of errors in teamwork based upon analysis of a case study of fratricide alongside a review of the existing literature. This approach may help to promote a better understanding of interactions within complex systems and help in the formulation of hypotheses and predictions concerning errors in teamwork, particularly incidents of fratricide. It is proposed that a fusion of concepts drawn from error models, with common causal categories taken from teamwork models, could allow for an in-depth exploration of incidents of fratricide. It is argued that such a model has the potential to explore the core causal categories identified as present in an incident of fratricide. This view marks fratricide as a process of errors occurring throughout the military system as a whole, particularly due to problems in teamwork within this complex system. Implications of this viewpoint for the development of a new theory of fratricide are offered. STATEMENT OF RELEVANCE: This article provides an insight into the fusion of existing error and teamwork models for the analysis of an incident of fratricide. Within this paper, a number of commonalities among models of teamwork have been identified allowing for the development of a model.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
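The contrast between a conventional OLS fit and a robust fit under outliers can be sketched as follows. The Huber-type iteratively reweighted least squares implementation and the simulated dosage-expression data are illustrative, not the authors' actual method or data.

```python
import numpy as np

rng = np.random.default_rng(3)

# eQTL-like toy data: expression ~ allelic dosage (genotypes 0/1/2),
# Gaussian noise, plus a few large outliers within the dosage-2 genotype.
n = 200
dosage = rng.integers(0, 3, size=n).astype(float)
expr = 2.0 + 0.5 * dosage + rng.normal(0, 0.5, size=n)
expr[np.flatnonzero(dosage == 2.0)[:5]] += 8.0  # atypical observations

X = np.column_stack([np.ones(n), dosage])

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def huber_irls(X, y, k=1.345, n_iter=50):
    """Huber-type robust regression via iteratively reweighted least squares."""
    beta = ols(X, y)
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # MAD estimate of scale
        u = np.abs(r) / scale
        w = np.where(u <= k, 1.0, k / u)               # Huber weight function
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

b_ols = ols(X, expr)
b_rob = huber_irls(X, expr)
print(b_ols[1], b_rob[1])  # the robust slope sits closer to the true 0.5
```

Because the outliers all land in one genotype group, OLS inflates the dosage effect, while the Huber weights downweight those points, which is the type of error-rate behavior the abstract describes.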
NASA Astrophysics Data System (ADS)
Cilip, Christopher Michael
Development of a noninvasive vasectomy technique may eliminate male fear of complications (incision, bleeding, infection, and scrotal pain) and result in a more popular procedure. These studies build on previous work reporting the ability to thermally target tissue substructures with near-infrared laser radiation while maintaining a healthy superficial layer of tissue through active surface cooling. Initial studies showed the ability to increase the working depth compared to that of common dermatological procedures and the translation into an ex vivo canine model targeting the vas deferens in a noninvasive laser vasectomy. Laser and cooling parameter optimization was required to determine the best possible wavelength for a safe transition to an in vivo canine model. Optical clearing agents were investigated as a mechanism to decrease tissue scattering during in vivo procedures, to increase optical penetration depth, and to reduce the overall power required. Optical and thermal computer models were developed to determine the efficacy of a transition into a human model. Common clinical imaging modalities (ultrasound, high-frequency ultrasound, and optical coherence tomography) were tested as possible candidates for real-time imaging feedback to determine surgical success. Finally, a noninvasive laser vasectomy prototype clamp incorporating laser, cooling, and control in a single package was designed and tested in vivo. Occlusion of the canine vas deferens able to withstand physiological burst pressures, measured postoperatively, was shown during acute and chronic studies. This procedure is ready for azoospermia and recanalization studies in a clinical setting.
Fezeu, Léopold K.; Batty, David G.; Gale, Catharine R.; Kivimaki, Mika; Hercberg, Serge; Czernichow, Sebastien
2015-01-01
The direction of the association between mental health and adiposity is poorly understood. Our objective was to examine this link empirically in a UK study. This is a prospective cohort study of 3,388 men and women aged ≥ 18 years at study induction who participated in both the UK Health and Lifestyle Survey at baseline (HALS-1, 1984/1985) and the re-survey (HALS-2, 1991/1992). At both survey examinations, body mass index, waist circumference and self-reported common mental disorder (the 30-item General Health Questionnaire, GHQ) were measured. Logistic regression models were used to compute odds ratios (OR) and accompanying 95% confidence intervals (CI) for the associations between (1) baseline common mental disorder (GHQ score > 4) and subsequent general and abdominal obesity and (2) baseline general and abdominal obesity and re-survey common mental disorder. After controlling for a range of covariates, participants with common mental disorder at baseline experienced greater odds of subsequently becoming overweight (women, OR: 1.30, 1.03 – 1.64; men, 1.05, 0.81 – 1.38) and obese (women, 1.26, 0.82 – 1.94; men, OR: 2.10, 1.23 – 3.55) than those who were free of common mental disorder. Similarly, baseline common mental disorder was also related to a greater risk of developing moderate (1.57, 1.21 – 2.04) and severe (1.48, 1.09 – 2.01) abdominal obesity (women only). Baseline general or abdominal obesity was not associated with the risk of future common mental disorder. The findings of the present study suggest that the direction of association between common mental disorders and adiposity runs from common mental disorder to increased future risk of adiposity rather than the converse. PMID:25993130
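The odds ratios above are obtained from logistic regression coefficients; a minimal sketch of that conversion, using a hypothetical coefficient and standard error rather than the study's fitted values:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient (a log odds ratio) and its
    standard error into an odds ratio with a 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for baseline common mental disorder predicting
# later obesity (illustrative numbers, not the study's actual model output).
or_, lo, hi = odds_ratio_ci(beta=0.742, se=0.270)
# or_ is about 2.10, with the CI obtained on the log-odds scale then exponentiated
```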
Understanding the Osteosarcoma Pathobiology: A Comparative Oncology Approach
Varshney, Jyotika; Scott, Milcah C.; Largaespada, David A.; Subramanian, Subbaya
2016-01-01
Osteosarcoma is an aggressive primary bone tumor in humans and is among the most common cancer afflicting dogs. Despite surgical advancements and intensification of chemo- and targeted therapies, the survival outcome for osteosarcoma patients is, as of yet, suboptimal. The presence of metastatic disease at diagnosis or its recurrence after initial therapy is a major factor for the poor outcomes. It is thought that most human and canine patients have at least microscopic metastatic lesions at diagnosis. Osteosarcoma in dogs occurs naturally with greater frequency and shares many biological and clinical similarities with osteosarcoma in humans. From a genetic perspective, osteosarcoma in both humans and dogs is characterized by complex karyotypes with highly variable structural and numerical chromosomal aberrations. Similar molecular abnormalities have been observed in human and canine osteosarcoma. For instance, loss of TP53 and RB regulated pathways are common. While there are several oncogenes that are commonly amplified in both humans and dogs, such as MYC and RAS, no commonly activated proto-oncogene has been identified that could form the basis for targeted therapies. It remains possible that recurrent aberrant gene expression changes due to gene amplification or epigenetic alterations could be uncovered and these could be used for developing new, targeted therapies. However, the remarkably high genomic complexity of osteosarcoma has precluded their definitive identification. Several advantageous murine models of osteosarcoma have been generated. These include spontaneous and genetically engineered mouse models, including a model based on forward genetics and transposon mutagenesis allowing new genes and genetic pathways to be implicated in osteosarcoma development. 
The proposition of this review is that careful comparative genomic studies between human, canine and mouse models of osteosarcoma may help identify commonly affected and targetable pathways for alternative therapies for osteosarcoma patients. A translational path may begin in mouse models, move through canine patients, and culminate in human patients. PMID:29056713
The Common Risk Model for Dams: A Portfolio Approach to Security Risk Assessments
2013-06-01
The Common Risk Model (CRM) was developed for evaluating and comparing risks associated with the nation's critical infrastructure, and incorporates commonly used risk components. The CRM for Dams (CRM-D) combines consequence, vulnerability, and threat estimates in a way that properly accounts for the relationships among these variables, and can effectively quantify the benefits of security risk mitigation.
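As an illustration of the portfolio idea, a minimal sketch using the common conditional-risk product of consequence, vulnerability, and threat; the asset names and numbers are hypothetical, and the actual CRM-D additionally accounts for relationships among these variables rather than treating them as independent:

```python
# Hypothetical portfolio of assets with normalized risk components.
assets = {
    "dam_A": {"consequence": 9.0, "vulnerability": 0.30, "threat": 0.10},
    "dam_B": {"consequence": 5.0, "vulnerability": 0.60, "threat": 0.20},
}

def asset_risk(a):
    # Simplified roll-up: risk = consequence x vulnerability x threat.
    return a["consequence"] * a["vulnerability"] * a["threat"]

# Portfolio risk as the sum over assets: 0.27 + 0.60 = 0.87
portfolio_risk = sum(asset_risk(a) for a in assets.values())
```

Comparing the portfolio total before and after a proposed mitigation (e.g., reduced vulnerability at one dam) is one way such a model quantifies benefits.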
NASA Astrophysics Data System (ADS)
Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow
2017-04-01
This study aims to investigate crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with the inclusion of an abrupt-jump feature. The model is able to capture abrupt jumps, news impact, volatility clustering, long-persistence volatility and heavy-tailed distributed errors, which are commonly observed in crude oil time series. For the empirical study, we selected the WTI crude oil index from 2000 to 2016. The results show that by including multiple abrupt jumps in the ARCH model, estimation performance improves significantly compared with the standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
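A minimal sketch of the kind of process described, an ARCH(1) return series with an added abrupt-jump component; all parameter values are illustrative, not estimates from the WTI data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
omega, alpha = 0.05, 0.40        # ARCH(1): sigma_t^2 = omega + alpha * r_{t-1}^2
jump_prob, jump_scale = 0.01, 3.0  # rare, large abrupt jumps

r = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha))   # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2      # conditional variance recursion
    jump = jump_scale * rng.standard_normal() * (rng.random() < jump_prob)
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal() + jump
```

Large jumps feed back into next-period conditional variance, producing the volatility clustering and heavy tails the abstract mentions.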
NASA Technical Reports Server (NTRS)
Canfield, Richard C.; De La Beaujardiere, J.-F.; Fan, Yuhong; Leka, K. D.; Mcclymont, A. N.; Metcalf, Thomas R.; Mickey, Donald L.; Wuelser, Jean-Pierre; Lites, Bruce W.
1993-01-01
Electric current systems in solar active regions and their spatial relationship to sites of electron precipitation and high pressure in flares were studied with the purpose of providing observational evidence for or against the flare models commonly discussed in the literature. The paper describes the instrumentation, the data used, and the data analysis methods, as well as improvements made upon earlier studies. Several flare models are overviewed, and the predictions yielded by each model for the relationships of flares to the vertical current systems are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high-performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
Families of Graph Algorithms: SSSP Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-08-28
Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra’s algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another in the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then derive the existing and new algorithms by methodically exploring semantic and spatial orderings of work. Our experimental results show that the newly derived algorithms outperform the existing distributed-memory parallel algorithms, especially at higher scales.
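As a baseline for the family discussed, classic Dijkstra (which, in AGM terms, imposes a strict distance ordering on work) can be sketched as:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap.
    graph: dict mapping node -> list of (neighbor, weight), weights >= 0."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
# dijkstra(g, "a") -> {"a": 0, "b": 1, "c": 3}
```

Δ-stepping and Chaotic variants keep the same relaxation logic but weaken or drop the priority ordering, trading extra work for parallelism.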
NASA Astrophysics Data System (ADS)
Mount, Gregory J.; Comas, Xavier; Cunningham, Kevin J.
2014-07-01
The karst Biscayne aquifer is characterized by a heterogeneous spatial arrangement of porosity and hydraulic conductivity, making conceptualization difficult. The Biscayne aquifer is the primary source of drinking water for millions of people in south Florida; thus, information concerning the distribution of karst features that concentrate the groundwater flow and affect contaminant transport is critical. The principal purpose of the study was to investigate the ability of two-dimensional ground penetrating radar (GPR) to rapidly characterize porosity variability in the karst Biscayne aquifer in south Florida. An 800-m-long GPR transect of a previously investigated area at the Long Pine Key Nature Trail in Everglades National Park, collected in fast acquisition common offset mode, shows hundreds of diffraction hyperbolae. The distribution of diffraction hyperbolae was used to estimate electromagnetic (EM) wave velocity at each diffraction location and to assess both horizontal and vertical changes in velocity within the transect. A petrophysical model (complex refractive index model or CRIM) was used to estimate total bulk porosity. A set of common midpoint surveys at selected locations distributed along the common-offset transect also were collected for comparison with the common offsets and were used to constrain one-dimensional (1-D) distributions of porosity with depth. Porosity values for the saturated Miami Limestone ranged between 25% and 41% for common offset GPR surveys, and between 23% and 39% for common midpoint GPR surveys. Laboratory measurements of porosity in five whole-core samples from the saturated part of the aquifer in the study area ranged between 7.1% and 41.8%. GPR estimates of porosity were found to be valid only under saturated conditions; other limitations are related to the vertical resolution of the GPR signal and the volume of the material considered by the measurement methodology. 
Overall, good correspondence between GPR estimates and the direct porosity values from the whole-core samples confirms the ability of GPR common offset surveys to provide rapid characterization of porosity variability in the Biscayne aquifer. The common offset survey method has several advantages: (1) improved time efficiency in comparison to other GPR acquisition modes such as common midpoints; and (2) enhanced lateral continuity of porosity estimates, particularly when compared to porosity measurements on 1-D samples such as rock cores. The results also support the presence of areas of low EM wave velocity or high porosity under saturated conditions, causing velocity pull-down areas and apparent sag features in the reflection record. This study shows that GPR can be a useful tool for improving understanding of the petrophysical properties of highly heterogeneous systems such as karst aquifers, and thus may assist with the development of more accurate groundwater flow models, such as those used for restoration efforts in the Everglades.
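A minimal sketch of the CRIM-based porosity estimate for the fully saturated case; the permittivity values are generic assumptions (limestone matrix and fresh water), not the study's calibration:

```python
import math

C = 0.3  # speed of light in m/ns

def crim_porosity(v, kappa_solid=7.0, kappa_water=80.0):
    """Estimate total porosity of a fully saturated rock from EM wave velocity
    using the complex refractive index model (CRIM):
        sqrt(k_bulk) = (1 - phi) * sqrt(k_solid) + phi * sqrt(k_water)
    v is in m/ns. The permittivities here are illustrative assumptions
    (k_solid ~ 7 for limestone, k_water ~ 80 for fresh water)."""
    kappa_bulk = (C / v) ** 2      # bulk permittivity from velocity v = C / sqrt(k)
    return (math.sqrt(kappa_bulk) - math.sqrt(kappa_solid)) / (
        math.sqrt(kappa_water) - math.sqrt(kappa_solid))

# A diffraction-derived velocity of 0.06 m/ns gives a porosity of roughly 0.37,
# within the 25-41% range reported for the saturated Miami Limestone.
phi = crim_porosity(0.06)
```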
Mountain, Gregory S.; Cunningham, Kevin J.; Comas, Xavier
2014-01-01
Kim, Sujin; Kim, Sunmi; Won, Sungho; Choi, Kyungho
2017-10-01
Epidemiological studies have shown that thyroid hormone balance can be disrupted by chemical exposure. However, many association studies have failed to consider multiple chemicals with possible common sources of exposure, rendering their conclusions less reliable. In the 2007-2008 National Health and Nutrition Examination Survey (NHANES) from the U.S.A., urinary levels of environmental phenols, parabens, and phthalate metabolites, as well as serum thyroid hormones, were measured in a general U.S. population (≥12 years old, n=1829). Employing these data, first, the chemicals or their metabolites associated with thyroid hormone measures were identified. Then, the chemicals/metabolites with possible common exposure sources were included in the analytical model to test the sensitivity of their associations with thyroid hormone levels. Benzophenone-3 (BP-3), bisphenol A (BPA), and a metabolite of di(2-ethylhexyl) phthalate (DEHP) were identified as significant determinants of decreased serum thyroid hormones. However, significant positive correlations were detected (p-value<0.05, r=0.23 to 0.45) between these chemicals/metabolites, which suggests that they might share similar exposure sources. In the subsequent sensitivity analysis, which included the chemicals/metabolites with potentially similar exposure sources in the model, we found that urinary BP-3 and DEHP exposure were associated with decreased thyroid hormones among the general population but BPA exposure was not. In association studies, the presence of possible common exposure sources should be considered to circumvent possible false-positive conclusions.
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loosely coupled modeling and data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time-consuming and resource-intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to subsets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). 
TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
Developing probabilistic models to predict amphibian site occupancy in a patchy landscape
R. A. Knapp; K.R. Matthews; H. K. Preisler; R. Jellison
2003-01-01
Human-caused fragmentation of habitats is threatening an increasing number of animal and plant species, making an understanding of the factors influencing patch occupancy ever more important. The overall goal of the current study was to develop probabilistic models of patch occupancy for the mountain yellow-legged frog (Rana muscosa). This once-common species...
Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking
ERIC Educational Resources Information Center
Oksama, Lauri; Hyona, Jukka
2008-01-01
Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…
ERIC Educational Resources Information Center
McArdle, John J.; Johnson, Ronald C.; Hishinuma, Earl S.; Miyamoto, Robin H.; Andrade, Naleen N.
2001-01-01
Analyzes differences in self-reported Center for Epidemiologic Studies Depression inventory results among ethnic Hawaiian and non-Hawaiian high school students, using different forms of latent variable structural equation models. Finds a high degree of invariance between students on depression. Discusses issues about common features and…
ERIC Educational Resources Information Center
Leung, Kim Chau
2015-01-01
Previous meta-analyses of the effects of peer tutoring on academic achievement have been plagued with theoretical and methodological flaws. Specifically, these studies have not adopted both fixed and mixed effects models for analyzing the effect size; they have not evaluated the moderating effect of some commonly used parameters, such as comparing…
Piloting a Co-Teaching Model for Mathematics Teacher Preparation: Learning to Teach Together
ERIC Educational Resources Information Center
Yopp, Ruth Helen; Ellis, Mark W.; Bonsangue, Martin V.; Duarte, Thomas; Meza, Susanna
2014-01-01
This study offers insights from an initial pilot of a co-teaching model for mathematics teacher preparation developed both to support experienced teachers in shifting their practice toward the vision set forth by NCTM and the Common Core State Standards for Mathematics (National Governors Association, 2010; NCTM, 2000, 2009) and to provide…
Effects of a Hands-on Multicultural Education Program: A Model for Student Learning.
ERIC Educational Resources Information Center
Kim, Simon; Clarke-Ekong, Sheilah; Ashmore, Pamela
1999-01-01
Describes the Center for Human Origin and Cultural Diversity program that is a model for multicultural education in which students learn about the human fossil record, the value of biological variation, and the characteristics common to all humans. Presents results from a study that support the use of this program. (CMK)
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
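For small instances, the p-median objective can be evaluated exhaustively; a minimal one-dimensional sketch (not tied to the article's algorithms):

```python
from itertools import combinations

def p_median(points, p):
    """Exhaustive p-median: choose p of the points to serve as medians,
    minimizing the sum of distances from every point to its nearest median."""
    best = None
    for medians in combinations(points, p):
        cost = sum(min(abs(x - m) for m in medians) for x in points)
        if best is None or cost < best[1]:
            best = (set(medians), cost)
    return best

points = [0, 1, 2, 10, 11, 12]
medians, cost = p_median(points, 2)
# medians == {1, 11}, cost == 4
```

Unlike K-means centroids, p-median centers must be actual data objects, which is why the problem is often solved by combinatorial rather than iterative-relocation methods.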
Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random
ERIC Educational Resources Information Center
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David
2013-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…
ERIC Educational Resources Information Center
Rhatigan, Deborah L.; Street, Amy E.
2005-01-01
This study explored the impact of violence exposure on investment-model constructs within a sample of college women involved in heterosexual dating relationships. Results generally supported the "common sense" hypothesis, suggesting that violence negatively impacts satisfaction for and commitment to one's relationship and is positively associated…
Higher Order Testlet Response Models for Hierarchical Latent Traits and Testlet-Based Items
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2013-01-01
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
DNA→RNA: What Do Students Think the Arrow Means?
ERIC Educational Resources Information Center
Wright, L. Kate; Fisk, J. Nick; Newman, Dina L.
2014-01-01
The central dogma of molecular biology, a model that has remained intact for decades, describes the transfer of genetic information from DNA to protein though an RNA intermediate. While recent work has illustrated many exceptions to the central dogma, it is still a common model used to describe and study the relationship between genes and protein…
ERIC Educational Resources Information Center
Newton, Jill A.; Kasten, Sarah E.
2013-01-01
The release of the Common Core State Standards for Mathematics and their adoption across the United States calls for careful attention to the alignment between mathematics standards and assessments. This study investigates 2 models that measure alignment between standards and assessments, the Surveys of Enacted Curriculum (SEC) and the Webb…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
Flash boiling is known to be a common phenomenon in gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters were varied over a range and the differences in the predicted extent of flashing were studied. Apart from flashing in the near-nozzle region, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales, through the model parameters, for these two different thermodynamic phenomena (cavitation and flash boiling) has been elaborated in this study. Turbulence model effects were also investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
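The core relation of the Homogeneous Relaxation Model can be sketched as follows; this is the standard textbook form, not necessarily the exact formulation or relaxation-time correlation used in this study:

```latex
% Instantaneous vapor quality x relaxes toward the local equilibrium
% quality \bar{x} over a finite timescale \Theta (the empirically
% correlated relaxation time whose parameters the sensitivity analysis varies):
\frac{Dx}{Dt} = \frac{\bar{x} - x}{\Theta}
```

Different choices of the timescale correlation shift whether the model behaves more like cavitation (pressure-driven) or flash boiling (thermally driven) in a given region.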
Gomez, Rapson; Vance, Alasdair; Watson, Shaun D
2016-01-01
This study used confirmatory factor analysis to examine the factor structure for the 10 core WISC-IV subtests in a group of children (N = 812) with ADHD. The study examined oblique four- and five-factor models, higher order models with one general secondary factor and four and five primary factors, and a bifactor model with a general factor and four specific factors. The findings supported all models tested, with the bifactor model being the optimum model. For this model, only the general factor had high explained common variance and omega hierarchical value, and it predicted reading and arithmetic abilities. The findings favor the use of the FSIQ scores of the WISC-IV, but not the subscale index scores.
NASA Astrophysics Data System (ADS)
Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia
2018-06-01
Hydrological models are important and effective tools for detecting complex hydrological processes. Different models have different strengths when capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of BP ensemble simulation is better than that of the single models.
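A minimal sketch of combining three model outputs with a one-hidden-layer backpropagation (BP) network; the data are synthetic, not the Yalongjiang simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 6, 200))                 # "observed" runoff signal
# Three noisy "model" simulations of the same signal, combined as inputs.
X = np.column_stack([obs + 0.3 * rng.standard_normal(200) for _ in range(3)])
y = obs.reshape(-1, 1)

# One hidden layer with 4 tanh units; small random initial weights.
W1 = 0.1 * rng.standard_normal((3, 4)); b1 = np.zeros(4)
W2 = 0.1 * rng.standard_normal((4, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: gradients of the mean squared error.
    g2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # tanh derivative
    g1 = X.T @ dh / len(X);   gb1 = dh.mean(axis=0)
    W2 -= lr * g2; b2 -= lr * gb2
    W1 -= lr * g1; b1 -= lr * gb1
# losses decreases as the network learns a weighting of the three inputs
```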
Toward a unified approach to dose-response modeling in ecotoxicology.
Ritz, Christian
2010-01-01
This study reviews dose-response models that are used in ecotoxicology. The focus lies on clarification of differences and similarities between models, and as a side effect, their different guises in ecotoxicology are unravelled. A look at frequently used dose-response models reveals major discrepancies, among other things in naming conventions. Therefore, there is a need for a unified view on dose-response modeling in order to improve the understanding of it and to facilitate communication and comparison of findings across studies, thus realizing its full potential. This study attempts to establish a general framework that encompasses most dose-response models that are of interest to ecotoxicologists in practice. The framework includes commonly used models such as the log-logistic and Weibull models, but also features entire suites of models as found in various guidance documents. An outline on how the proposed framework can be implemented in statistical software systems is also provided.
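One commonly used member of such a framework is the four-parameter log-logistic model; a minimal sketch with a generic parameterization, not tied to any particular guidance document:

```python
import math

def log_logistic_4(x, b, c, d, e):
    """Four-parameter log-logistic dose-response curve (LL.4):
    c = lower limit, d = upper limit, e = dose giving a half-maximal
    response (ED50), b = slope parameter."""
    return c + (d - c) / (1 + math.exp(b * (math.log(x) - math.log(e))))

# At the ED50 (x == e) the response is exactly midway between the limits:
resp = log_logistic_4(x=10.0, b=2.0, c=0.1, d=1.0, e=10.0)
# resp == 0.55, i.e. (c + d) / 2
```

The Weibull and other sigmoidal models in the framework differ only in the link function applied to log dose, which is what makes a unified treatment possible.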
Upadhyay, S K; Mukherjee, Bhaswati; Gupta, Ashutosh
2009-09-01
Several models for studies related to the tensile strength of materials have been proposed in the literature, where the size or length component is taken to be an important factor in studying the specimens' failure behaviour. An important model, developed on the basis of the cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some of the recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the analysis is Bayesian, based on Markov chain Monte Carlo simulation.
An Overview of Intellectual Property and Intangible Asset Valuation Models
ERIC Educational Resources Information Center
Matsuura, Jeffrey H.
2004-01-01
This paper reviews the economic models most commonly applied to estimate the value of intellectual property and other forms of intangible assets. It highlights the key strengths and weaknesses of these models. One of the apparent weaknesses of the most commonly used valuation models is the failure to incorporate legal rights into their…
Colinot, Darrelle L; Garbuz, Tamila; Bosland, Maarten C; Wang, Liang; Rice, Susan E; Sullivan, William J; Arrizabalaga, Gustavo; Jerde, Travis J
2017-07-01
Inflammation is the most prevalent and widespread histological finding in the human prostate, and associates with the development and progression of benign prostatic hyperplasia and prostate cancer. Several factors have been hypothesized to cause inflammation, yet the role each may play in the etiology of prostatic inflammation remains unclear. This study examined the possibility that the common protozoan parasite Toxoplasma gondii induces prostatic inflammation and reactive hyperplasia in a mouse model. Male mice were infected systemically with T. gondii parasites and prostatic inflammation was scored based on severity and focality of infiltrating leukocytes and epithelial hyperplasia. We characterized inflammatory cells with flow cytometry and the resulting epithelial proliferation with bromodeoxyuridine (BrdU) incorporation. We found that T. gondii infects the mouse prostate within the first 14 days of infection and can establish parasite cysts that persist for at least 60 days. T. gondii infection induces a substantial and chronic inflammatory reaction in the mouse prostate characterized by monocytic and lymphocytic inflammatory infiltrate. T. gondii-induced inflammation results in reactive hyperplasia, involving basal and luminal epithelial proliferation, and the exhibition of proliferative inflammatory microglandular hyperplasia in inflamed mouse prostates. This study identifies the common parasite T. gondii as a new trigger of prostatic inflammation, which we used to develop a novel mouse model of prostatic inflammation. This is the first report that T. gondii chronically encysts and induces chronic inflammation within the prostate of any species. Furthermore, T. gondii-induced prostatic inflammation persists and progresses without genetic manipulation in mice, offering a powerful new mouse model for the study of chronic prostatic inflammation and microglandular hyperplasia.
Covariance Structure Models for Gene Expression Microarray Data
ERIC Educational Resources Information Center
Xie, Jun; Bentler, Peter M.
2003-01-01
Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…
Genetics of common forms of heart failure: challenges and potential solutions.
Rau, Christoph D; Lusis, Aldons J; Wang, Yibin
2015-05-01
In contrast to many other human diseases, the use of genome-wide association studies (GWAS) to identify genes for heart failure (HF) has had limited success. We will discuss the underlying challenges as well as potential new approaches to understanding the genetics of common forms of HF. Recent research using intermediate phenotypes, more detailed and quantitative stratification of HF symptoms, founder populations and novel animal models has begun to allow researchers to make headway toward explaining the genetics underlying HF using GWAS techniques. By expanding analyses of HF to improved clinical traits, additional HF classifications and innovative model systems, the intractability of human HF GWAS should be ameliorated significantly.
[Cost-effectiveness of a TLC-NOSF polyurethane foam dressing].
Arroyo Ana, Abejón; Alvarez Vázquez, Juan Carlos; Blasco García, Carmen; Bermejo Martínez, Mariano; López Casanova, Pablo; Cuesta Cuesta, Juan José; De Haro Fernández, Francisco; Mateo Marín, Emilia; Segovia Gómez, Teresa; Villar Rojas, Antonio Erasto
2012-11-01
Chronic wounds represent a drain on the Spanish health system. Optimizing the resources used is now necessary, and the use of one product over others must therefore be justified through cost-effectiveness studies that demonstrate the economic benefit to professionals and the gain in patients' quality of life. This article compares a new polyurethane foam dressing technology, TLC-NOSF, with the products most commonly used for treating wounds. The comparison is made using a cost-effectiveness model (Markov model). The results demonstrate that treatment with the TLC-NOSF polyurethane foam dressing is cost-effective versus treatment with the polyurethane foams most commonly used in Spain.
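The Markov-model comparison described in the abstract above can be sketched as a simple two-state cohort model; the transition probabilities, cycle costs, and time horizon below are hypothetical placeholders, not the study's actual parameters:

```python
# Minimal two-state Markov cohort model (states: unhealed, healed).
# Probabilities, costs and the weekly cycle length are illustrative
# assumptions, not the study's actual parameters.

def markov_cost_per_patient(p_heal, cost_per_cycle, cycles=52):
    """Weekly-cycle cohort model: returns (fraction healed, expected
    dressing cost per patient) after `cycles` weeks."""
    unhealed, healed, total_cost = 1.0, 0.0, 0.0
    for _ in range(cycles):
        total_cost += unhealed * cost_per_cycle   # only open wounds incur dressing cost
        moved = unhealed * p_heal                 # share of wounds healing this cycle
        unhealed -= moved
        healed += moved
    return healed, total_cost

# Hypothetical comparison: a dearer dressing with a higher weekly healing probability
healed_new, cost_new = markov_cost_per_patient(p_heal=0.05, cost_per_cycle=40.0)
healed_std, cost_std = markov_cost_per_patient(p_heal=0.03, cost_per_cycle=25.0)

# Incremental cost-effectiveness ratio: extra cost per additional wound healed
icer = (cost_new - cost_std) / (healed_new - healed_std)
```

A dearer but more effective dressing can still be cost-effective if the incremental cost per healed wound compares well with the payer's willingness to pay.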
Animal models to study the pathogenesis of human and animal Clostridium perfringens infections.
Uzal, Francisco A; McClane, Bruce A; Cheung, Jackie K; Theoret, James; Garcia, Jorge P; Moore, Robert J; Rood, Julian I
2015-08-31
The most common animal models used to study Clostridium perfringens infections in humans and animals are reviewed here. The classical C. perfringens-mediated histotoxic disease of humans is clostridial myonecrosis or gas gangrene and the use of a mouse myonecrosis model coupled with genetic studies has contributed greatly to our understanding of disease pathogenesis. Similarly, the use of a chicken model has enhanced our understanding of type A-mediated necrotic enteritis in poultry and has led to the identification of NetB as the primary toxin involved in disease. C. perfringens type A food poisoning is a highly prevalent bacterial illness in the USA and elsewhere. Rabbits and mice are the species most commonly used to study the action of enterotoxin, the causative toxin. Other animal models used to study the effect of this toxin are rats, non-human primates, sheep and cattle. In rabbits and mice, CPE produces severe necrosis of the small intestinal epithelium along with fluid accumulation. C. perfringens type D infection has been studied by inoculating epsilon toxin (ETX) intravenously into mice, rats, sheep, goats and cattle, and by intraduodenal inoculation of whole cultures of this microorganism in mice, sheep, goats and cattle. Molecular Koch's postulates have been fulfilled for enterotoxigenic C. perfringens type A in rabbits and mice, for C. perfringens type A necrotic enteritis and gas gangrene in chickens and mice, respectively, for C. perfringens type C in mice, rabbits and goats, and for C. perfringens type D in mice, sheep and goats. Copyright © 2015 Elsevier B.V. All rights reserved.
Modelling the transmission of healthcare associated infections: a systematic review
2013-01-01
Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin resistant enterococci (16%). Other common HCAIs, e.g. Clostridium difficile, were rarely investigated (3%). Very few models have been published on HCAI from low or middle-income countries. The first HCAI models looked at antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies.
Methods have generally improved, with an increased use of stochastic models, and more advanced methods for formal model fitting and sensitivity analyses. Insights gained from these models could be broadened to a wider range of pathogens and settings. Improvements in the availability of data and statistical methods could enhance the predictive ability of models. PMID:23809195
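The compartmental deterministic approach that the review identifies as the earliest style of HCAI model can be illustrated with a minimal single-ward colonization model; all rates and sizes below are illustrative assumptions, not values from any reviewed study:

```python
# Deterministic single-ward model of HCAI colonization, integrated with a
# simple Euler scheme. Patients are uncolonized (U) or colonized (C); the
# ward stays at constant occupancy through admission/discharge turnover.
# All parameter values are illustrative, not fitted to data.

def simulate_ward(beta=0.1, mu=1/7, admission_prev=0.05,
                  n_patients=20, c0=1.0, days=60, dt=0.1):
    """beta: transmission rate per colonized patient per day; mu: discharge
    rate per day; admission_prev: fraction of admissions already colonized.
    Returns the colonized count over time."""
    c = float(c0)
    trajectory = [c]
    for _ in range(int(days / dt)):
        u = n_patients - c
        dc = (beta * c * u / n_patients            # cross-transmission on the ward
              + mu * n_patients * admission_prev   # colonized admissions (importation)
              - mu * c)                            # discharge of colonized patients
        c += dc * dt
        trajectory.append(c)
    return trajectory

traj = simulate_ward()
```

Even with within-ward transmission too weak to sustain itself (beta < mu here), the steady importation of colonized admissions keeps colonization endemic, a qualitative point such models were built to make.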
Xia, Yinglin; Morrison-Beedy, Dianne; Ma, Jingming; Feng, Changyong; Cross, Wendi; Tu, Xin
2012-01-01
Modeling count data from sexual behavioral outcomes involves many challenges, especially when the data exhibit a preponderance of zeros and overdispersion. In particular, the popular Poisson log-linear model is not appropriate for modeling such outcomes. Although alternatives exist for addressing both issues, they are not widely and effectively used in sexual health research, especially in HIV prevention intervention and related studies. In this paper, we discuss how to analyze count outcomes with an excess of zeros and overdispersion and introduce appropriate model-fit indices for comparing the performance of competing models, using data from a real study on HIV prevention intervention. This in-depth look at common issues arising from studies involving behavioral outcomes will promote sound statistical analyses and facilitate research in this and other related areas. PMID:22536496
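A zero-inflated Poisson, one common alternative to the plain Poisson log-linear model for such outcomes, makes the two issues concrete; the mixing weight and rate below are arbitrary illustrations:

```python
# Simulate zero-inflated Poisson (ZIP) counts and show the two features
# the abstract describes: excess zeros and overdispersion (variance
# exceeding the mean, which a plain Poisson cannot represent).
import math
import random

def simulate_zip(n, pi_zero, lam, seed=42):
    """Draw n counts: with probability pi_zero a structural zero,
    otherwise Poisson(lam). Parameters are illustrative."""
    rng = random.Random(seed)
    def poisson(l):
        # Knuth's multiplication algorithm, adequate for small l
        limit, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    return [0 if rng.random() < pi_zero else poisson(lam) for _ in range(n)]

counts = simulate_zip(n=5000, pi_zero=0.4, lam=3.0)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
zero_frac = counts.count(0) / len(counts)

# A plain Poisson with the same mean would predict exp(-mean) zeros
expected_poisson_zeros = math.exp(-mean)
```

The simulated sample has far more zeros than a mean-matched Poisson predicts, and its variance well exceeds its mean, which is exactly why ZIP or negative binomial models are preferred for such data.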
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-03-01
Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step towards producing a good disease map. Libya was selected for this work to examine the geographical variation in the incidence of lung cancer. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models for estimating the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011, were used in this work: the Standardized Morbidity Ratio (SMR), the most popular statistic in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollié (BYM) model; and the Mixture model. As an initial step, this study provides a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, the latter common in statistical modelling for comparing fitted models, were used to present and compare the preliminary results. The main results show that the Poisson-gamma, BYM, and Mixture models can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across the range of models. Creative Commons Attribution License
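The SMR's weakness in zero-count districts, and the shrinkage a Poisson-gamma model provides, can be sketched as follows; the district counts and gamma-prior parameters below are made up for illustration:

```python
# Standardized Morbidity Ratio (SMR) vs a Poisson-gamma (empirical Bayes)
# relative-risk estimate. District counts and prior parameters are
# hypothetical illustrations, not the Libyan data.

def smr(observed, expected):
    """Raw SMR: collapses to 0 (with no usable uncertainty) when a
    district records no cases."""
    return observed / expected

def poisson_gamma_rr(observed, expected, alpha=2.0, beta=2.0):
    """Posterior mean relative risk under RR ~ Gamma(alpha, beta) and
    observed ~ Poisson(expected * RR): shrinks sparse districts toward
    the prior mean alpha/beta."""
    return (observed + alpha) / (expected + beta)

districts = [(0, 1.4), (3, 2.8), (12, 9.5)]  # (observed, expected) cases

for obs, exp_cases in districts:
    raw = smr(obs, exp_cases)                  # 0.0 for the first district
    smoothed = poisson_gamma_rr(obs, exp_cases)  # strictly positive estimate
```

The zero-count district gets a raw SMR of exactly zero, while the Poisson-gamma estimate stays positive and borrows strength from the prior, which is the problem and remedy the abstract describes.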
[Categories and characteristics of BPH drug evaluation models: a comparative study].
Huang, Dong-Yan; Wu, Jian-Hui; Sun, Zu-Yue
2014-02-01
Benign prostatic hyperplasia (BPH) is a common disease worldwide in men over 50 years old, and its exact cause remains largely unknown. In order to elucidate its pathogenesis and screen effective drugs for the treatment of BPH, many BPH models have been developed in China and abroad. This article presents a comprehensive analysis of the categories and characteristics of BPH drug evaluation models, highlighting the application value of each model, to provide a theoretical basis for the development of BPH drugs.
Evaluating Topographic Effects on Ground Deformation: Insights from Finite Element Modeling
NASA Astrophysics Data System (ADS)
Ronchin, Erika; Geyer, Adelina; Martí, Joan
2015-07-01
Ground deformation has been demonstrated to be one of the most common signals of volcanic unrest. Although volcanoes are commonly associated with significant topographic relief, most analytical models assume the Earth's surface to be flat. However, it has been confirmed that this approximation can lead to important misinterpretations of the recorded surface deformation data. Here we perform a systematic and quantitative analysis of how topography may influence ground deformation signals generated by a spherical pressure source embedded in an elastic homogeneous medium and how these variations correlate with the different topographic parameters characterizing the terrain form (e.g., slope, aspect, curvature). For this, we bring together the results presented in previously published papers and complement them with new axisymmetric and 3D finite element (FE) model results. First, we study, in a parametric way, the influence of a volcanic edifice centered above the pressure source axis. Second, we carry out new 3D FE models simulating the real topography of three different volcanic areas representative of topographic scenarios common in volcanic regions: Rabaul caldera (Papua New Guinea) and the volcanic islands of Tenerife and El Hierro (Canary Islands). The calculated differences are then correlated with a series of topographic parameters. The final aim is to investigate the artifacts that might arise from the use of half-space models at volcanic areas due to diverse topographic features (e.g., collapse caldera structures, prominent central edifices, large landslide scars).
SPASE, Metadata, and the Heliophysics Virtual Observatories
NASA Technical Reports Server (NTRS)
Thieman, James; King, Todd; Roberts, Aaron
2010-01-01
To provide data search and access capability in the field of Heliophysics (the study of the Sun and its effects on the Solar System, especially the Earth) a number of Virtual Observatories (VO) have been established both via direct funding from the U.S. National Aeronautics and Space Administration (NASA) and through other funding agencies in the U.S. and worldwide. At least 15 systems can be labeled as Virtual Observatories in the Heliophysics community, 9 of them funded by NASA. The problem is that different metadata and data search approaches are used by these VO's and a search for data relevant to a particular research question can involve consulting with multiple VO's - needing to learn a different approach for finding and acquiring data for each. The Space Physics Archive Search and Extract (SPASE) project is intended to provide a common data model for Heliophysics data and therefore a common set of metadata for searches of the VO's. The SPASE Data Model has been developed through the common efforts of the Heliophysics Data and Model Consortium (HDMC) representatives over a number of years. We currently have released Version 2.1 of the Data Model. The advantages and disadvantages of the Data Model will be discussed along with the plans for the future. Recent changes requested by new members of the SPASE community indicate some of the directions for further development.
Kwon, Tae-Sung; Li, Fengqing; Kim, Sung-Soo; Chun, Jung Hwa; Park, Young-Seuk
2016-01-01
Global warming is likely leading to species' distributional shifts, resulting in changes in local community compositions and diversity patterns. In this study, we applied species distribution models to evaluate the potential impacts of temperature increase on ant communities in Korean temperate forests, by testing hypotheses that 1) the risk of extinction of forest ant species would increase over time, and 2) the changes in species distribution ranges could drive upward movements of ant communities and further alter patterns of species richness. We sampled ant communities at 335 evenly distributed sites across South Korea and modelled the future distribution range for each species using generalized additive models. To account for spatial autocorrelation, autocovariate regressions were conducted prior to generalized additive models. Among 29 common ant species, 12 species were estimated to shrink their suitable geographic areas, whereas five species would benefit from future global warming. Species richness was highest at low altitudes in the current period, and it was projected to be highest at the mid-altitudes in the 2080s, resulting in an upward movement of 4.9 m yr-1. This altered the altitudinal pattern of species richness from a monotonic-decrease curve (common in temperate regions) to a bell-shaped curve (common in tropical regions). Overall, ant communities in temperate forests are vulnerable to the on-going global warming and their altitudinal movements are similar to other faunal communities.
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven
2015-01-01
Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.
Scientists' internal models of the greenhouse effect
NASA Astrophysics Data System (ADS)
Libarkin, J. C.; Miller, H.; Thomas, S. R.
2013-12-01
A prior study utilized exploratory factor analysis to identify models underlying drawings of the greenhouse effect made by entering university freshmen. This analysis identified four archetype models of the greenhouse effect that appear within the college enrolling population. The current study collected drawings made by 144 geoscientists, from undergraduate geoscience majors through professionals. These participants scored highly on a standardized assessment of climate change understanding and expressed confidence in their understanding; many also indicated that they teach climate change in their courses. Although geoscientists held slightly more sophisticated greenhouse effect models than entering freshmen, very few held complete, explanatory models. As with freshmen, many scientists (44%) depict greenhouse gases in a layer in the atmosphere; 52% of participants depicted this or another layer as a physical barrier to escaping energy. In addition, 32% of participants indicated that incoming light from the Sun remains unchanged at Earth's surface, in alignment with a common model held by students. Finally, 3-20% of scientists depicted physical greenhouses, ozone, or holes in the atmosphere, all of which correspond to non-explanatory models commonly seen within students and represented in popular literature. For many scientists, incomplete models of the greenhouse effect are clearly enough to allow for reasoning about climate change. These data suggest that: 1) better representations about interdisciplinary concepts, such as the greenhouse effect, are needed for both scientist and public understanding; and 2) the scientific community needs to carefully consider how much understanding of a model is needed before necessary reasoning can occur.
Language Individuation and Marker Words: Shakespeare and His Maxwell's Demon.
Marsden, John; Budden, David; Craig, Hugh; Moscato, Pablo
2013-01-01
Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era. A text corpus containing 55,055 unique words was generated from 168 plays from the Shakespearean era (16th and 17th centuries) of undisputed authorship. A new score, CM1, is introduced to measure variation patterns based on the frequency of occurrence of each word for the authors John Fletcher, Ben Jonson, Thomas Middleton and William Shakespeare, compared to the rest of the authors in the study (which provides a reference of relative word usage at that time). A total of 50 WEKA methods were applied for Fletcher, Jonson and Middleton, to identify those which were able to produce models yielding over 90% classification accuracy. This ensemble of WEKA methods was then applied to model Shakespearean authorship across all 168 plays, yielding a Matthews' correlation coefficient (MCC) performance of over 90%. Furthermore, the best model yielded an MCC of 99%. Our results suggest that different authors, while adhering to the structural and grammatical bounds of a common language, develop measurably distinct styles by the tendency to over-utilise or avoid particular common words and phrasings. Considering language and the potential of words as an abstract chaotic system with a high entropy, similarities can be drawn to the Maxwell's Demon thought experiment; authors subconsciously favour or filter certain words, modifying the probability profile in ways that could reflect their individuality and style.
A Proposal for Studying the Values/Reasoning Distinction in Moral Development and Training.
ERIC Educational Resources Information Center
Kaplan, Martin F.
Application of a common framework in studies of the development of social cognition can reduce conceptual and methodological ambiguities and enable clearer study of core issues. This paper describes the core issues and their attendant problems, outlines a model of information integration that addresses the issues, and describes some illustrative…
ERIC Educational Resources Information Center
Kaufman, Peter A.; Melton, Horace L.; Varner, Iris I.; Hoelscher, Mark; Schmidt, Klaus; Spaulding, Aslihan D.
2011-01-01
Using an experiential learning model as a conceptual background, this article discusses characteristics and learning objectives for well-known foreign study programs such as study tours, study abroad, and internships and compares them with a less common overseas program called the "Global Marketing Program" (GMP). GMP involves…
NASA Astrophysics Data System (ADS)
Malard, J. J.; Adamowski, J. F.; Wang, L. Y.; Rojas, M.; Carrera, J.; Gálvez, J.; Tuy, H. A.; Melgar-Quiñonez, H.
2015-12-01
The modelling of the impacts of climate change on agriculture requires the inclusion of socio-economic factors. However, while cropping models and economic models of agricultural systems are common, dynamically coupled socio-economic-biophysical models have not been as successful. A promising methodology for modelling the socioeconomic aspects of coupled natural-human systems is participatory system dynamics modelling, in which stakeholders develop mental maps of the socio-economic system that are then turned into quantified simulation models. This methodology has been successful in the water resources management field. However, while the stocks and flows of water resources have also been represented within the system dynamics modelling framework and thus coupled to the socioeconomic portion of the model, cropping models are ill-suited to such reformulation. In addition, most of these system dynamics models were developed without stakeholder input, limiting the scope for the adoption and implementation of their results. We therefore propose a new methodology for the analysis of climate change variability on agroecosystems which uses dynamically coupled system dynamics (socio-economic) and biophysical (cropping) models to represent both physical and socioeconomic aspects of the agricultural system, using two case studies (intensive market-based agricultural development versus subsistence crop-based development) from rural Guatemala. The system dynamics model component is developed with relevant governmental and NGO stakeholders from rural and agricultural development in the case study regions and includes such processes as education, poverty and food security. Common variables with the cropping models (yield and agricultural management choices) are then used to dynamically couple the two models together, allowing for the analysis of the agroeconomic system's response to and resilience against various climatic and socioeconomic shocks.
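The dynamic coupling via shared variables (yield and a management choice) might be sketched as follows; both component models are toy stand-ins, not the actual system dynamics or cropping models used in the study:

```python
# Sketch of dynamically coupling a socio-economic system-dynamics model
# with a biophysical cropping model through shared variables. Both
# component models below are illustrative stand-ins.
import math

def cropping_model(fertilizer, rain=1.0):
    """Toy biophysical stand-in: yield (t/ha) responds to a management
    input (fertilizer) with diminishing returns."""
    return rain * (1.0 - math.exp(-0.5 * fertilizer)) * 4.0

def run_coupled(years=10, income0=1.0):
    """Each year the socio-economic model sets a management choice from
    current income; the cropping model returns yield; yield feeds back
    into income. Returns (fertilizer, yield, income) per year."""
    income = income0
    history = []
    for _ in range(years):
        fertilizer = min(income * 0.5, 3.0)    # SD side: spending decision
        yield_t = cropping_model(fertilizer)   # biophysical side: crop response
        income = 0.7 * income + 0.3 * yield_t  # SD side: income update
        history.append((fertilizer, yield_t, income))
    return history
```

The point of the sketch is the exchange pattern: each model advances one step using the other's latest output, which is what distinguishes dynamic coupling from running the models in sequence.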
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
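The kind of concentration-time profile such simulations start from can be sketched with a one-compartment oral-absorption model; the dose and rate constants below are illustrative, and the trapezoidal AUC reflects common pharmacokinetic practice rather than the authors' exact procedure:

```python
# One-compartment oral-absorption profile and the two BE metrics derived
# from it: Cmax (peak of the sampled profile) and AUC (trapezoidal rule).
# Dose, ka, ke and V are illustrative values, not from the paper.
import math

def concentration(t, dose=100.0, ka=1.2, ke=0.2, v=30.0):
    """C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Sample the profile on a 15-minute grid over 24 h
times = [0.25 * i for i in range(97)]
conc = [concentration(t) for t in times]

cmax = max(conc)                       # maximum observed concentration
auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
          for i in range(len(times) - 1))   # trapezoidal AUC(0-24h)

# The log-transformed responses whose normality the paper investigates
log_auc, log_cmax = math.log(auc), math.log(cmax)
```

In the paper's two-stage setting, the parameters (ka, ke, V) would themselves be drawn from subject-level distributions with measurement error added, which is what induces the heavy tails in log(AUC) and the skew in log(Cmax).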
Hokkanen, Laura; Lettner, Sandra; Barbosa, Fernando; Constantinou, Marios; Harper, Lauren; Kasten, Erich; Mondini, Sara; Persson, Bengt; Varako, Nataliya; Hessen, Erik
2018-06-20
The aims of the study were to analyze the current European situation of specialist education and training within clinical neuropsychology, and the legal and professional status of clinical neuropsychologists in different European countries. An online survey was prepared in 2016 by a Task Force established by the European Federation of Psychological Associations, and representatives of 30 countries gave their responses. Response rate was 76%. Only three countries were reported to regulate the title of clinical neuropsychologist as well as the education and practice of clinical neuropsychologists by law. The most common university degree required to practice clinical neuropsychology was the master's degree; a doctoral degree was required in two countries. The length of the specialist education after the master's degree varied between 12 and 60 months. In one third of the countries, no commonly agreed upon model for specialist education existed. A more systematic training model and a longer duration of training were associated with independence in the work of clinical neuropsychologists. As legal regulation is mostly absent and training models differ, those actively practicing clinical neuropsychology in Europe have a very heterogeneous educational background and skill level. There is a need for a European standardization of specialist training in clinical neuropsychology. Guiding principles for establishing the common core requirements are presented.
Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng
2013-01-01
Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected by current single-point association analysis. Recently, several model-free methods (e.g. the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we proposed two conditional entropy-based metrics to address this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, could maintain a consistently correct false positive rate. In the scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error, compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984
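A generic conditional-entropy construction, the conditional mutual information I(A;B|D) between two loci given disease status, illustrates the idea; the paper's actual metrics may be defined differently, and the toy genotypes below are fabricated purely for illustration:

```python
# Conditional mutual information between two loci given disease status,
# a generic conditional-entropy metric for locus-locus dependence.
# This is NOT necessarily the paper's definition; the data are toy values.
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of hashable symbols."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def conditional_mi(a, b, d):
    """I(A;B|D) = H(A,D) + H(B,D) - H(A,B,D) - H(D): dependence between
    loci A and B after conditioning on status D."""
    return (entropy(list(zip(a, d))) + entropy(list(zip(b, d)))
            - entropy(list(zip(a, b, d))) - entropy(d))

# Toy genotypes: locus B copies locus A in cases only (an interaction-like signal)
a = [0, 1, 2, 0, 1, 2, 0, 1]
d = [1, 1, 1, 1, 0, 0, 0, 0]
b = [x if s == 1 else 0 for x, s in zip(a, d)]

cmi = conditional_mi(a, b, d)   # positive: dependence given status
```

Conditioning on the phenotype is one way such metrics avoid flagging a pair of loci merely because each has a strong marginal (main) effect.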
Common neighbour structure and similarity intensity in complex networks
NASA Astrophysics Data System (ADS)
Hou, Lei; Liu, Kecheng
2017-10-01
Complex systems as networks always exhibit strong regularities, implying underlying mechanisms governing their evolution. In addition to the degree preference, similarity has been argued to be another driver for networks. Assuming a network is randomly organised without similarity preference, the present paper studies the expected number of common neighbours between vertices. A symmetrical similarity index is accordingly developed by removing this expected number from the observed common neighbours. The developed index can describe not only the similarities between vertices, but also the dissimilarities. We further apply the proposed index to measure the influence of similarity on the wiring patterns of networks. Fifteen empirical networks as well as artificial networks are examined in terms of similarity intensity and degree heterogeneity. Results on real networks indicate that social networks are strongly governed by the similarity as well as the degree preference, while the biological networks and infrastructure networks show no apparent similarity governance. Particularly, classical network models, such as the Barabási-Albert model, the Erdös-Rényi model and the Ring Lattice, cannot well describe the social networks in terms of degree heterogeneity and similarity intensity. The findings may shed some light on the modelling and link prediction of different classes of networks.
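The index described, observed common neighbours minus the number expected under random organisation, can be sketched as follows; the Erdös-Rényi-style expectation used here is one simple choice of null and may differ from the paper's exact formulation:

```python
# Similarity index in the spirit of the abstract above: observed common
# neighbours minus the count expected if the network were randomly
# organised. The Erdos-Renyi null (density-matched) is an assumed,
# simplified choice of expectation.

def common_neighbour_similarity(adj, i, j):
    """adj maps each vertex to its set of neighbours."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2   # number of edges
    p = m / (n * (n - 1) / 2)                          # edge density
    observed = len(adj[i] & adj[j])
    expected = (n - 2) * p * p   # each third vertex neighbours both w.p. p^2
    return observed - expected   # > 0: more similar than chance; < 0: less

# Toy graph: a triangle (0-1-2) with a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
score_01 = common_neighbour_similarity(adj, 0, 1)
```

Because the expectation is subtracted rather than divided out, the index is naturally signed: pairs with fewer common neighbours than chance predicts come out negative, which is how the index expresses dissimilarity as well as similarity.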
Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior
NASA Astrophysics Data System (ADS)
Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco
2015-02-01
We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.
Residual Structures in Latent Growth Curve Modeling
ERIC Educational Resources Information Center
Grimm, Kevin J.; Widaman, Keith F.
2010-01-01
Several alternatives are available for specifying the residual structure in latent growth curve modeling. Two specifications involve uncorrelated residuals and represent the most commonly used residual structures. The first, building on repeated measures analysis of variance and common specifications in multilevel models, forces residual variances…
NASA Astrophysics Data System (ADS)
Tofel-Grehl, Colby
This dissertation comprises three independently conducted analyses of a larger investigation into the practices and features of specialized STEM high schools. While educators and policy makers advocate the development of many new specialized STEM high schools, little is known about the unique features and practices of these schools. The results of these manuscripts add to the literature exploring the promise of specialized STEM schools. Manuscript 1¹ is a qualitative investigation of the common features of STEM schools across multiple school model types. Schools were found to possess common cultural and academic features regardless of model type. Manuscript 2² builds on the findings of manuscript 1. With no meaningful differences found attributable to model type, the researchers used grounded theory to explore the relationships between observed differences among programs as related to the intensity of the STEM experience offered at schools. Schools were found to fall into two categories, high STEM intensity (HSI) and low STEM intensity (LSI), based on five major traits. Manuscript 3³ examines the commonalities and differences in classroom discourse and teachers' questioning techniques in STEM schools. It explicates these discursive practices in order to explore instructional practices across schools. It also examines factors that may influence classroom discourse such as discipline, level of teacher education, and course status as required or elective. Collectively, this research furthers the agenda of better understanding the potential advantages of specialized STEM high schools for preparing a future scientific workforce. ¹Tofel-Grehl, C., Callahan, C., & Gubbins, E. (2012). STEM high school communities: Common and differing features. Manuscript in preparation. ²Tofel-Grehl, C., Callahan, C., & Gubbins, E. (2012). Variations in the intensity of specialized science, technology, engineering, and mathematics (STEM) high schools. Manuscript in preparation.
³Tofel-Grehl, C., Callahan, C., & Gubbins, E. (2012). Comparative analyses of discourse in specialized STEM school classes. Manuscript in preparation.
Regression Models For Multivariate Count Data
Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei
2016-01-01
Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data. PMID:28348500
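The over-dispersion problem the abstract describes can be demonstrated with a small simulation. The Dirichlet-multinomial below is one standard over-dispersed alternative to the multinomial; it is purely illustrative and not necessarily among the specific models studied in the article:

```python
import random

def dirichlet_multinomial(n_trials, alpha, rng):
    """Draw p ~ Dirichlet(alpha), then counts ~ Multinomial(n_trials, p)."""
    gams = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(gams)
    p = [g / total for g in gams]
    counts = [0] * len(p)
    for _ in range(n_trials):
        u, acc = rng.random(), 0.0
        for k, pk in enumerate(p):
            acc += pk
            if u <= acc:
                counts[k] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point rounding
    return counts

rng = random.Random(1)
n, reps = 30, 2000
first = [dirichlet_multinomial(n, [1.0, 1.0, 1.0], rng)[0] for _ in range(reps)]
mean = sum(first) / reps
var = sum((c - mean) ** 2 for c in first) / reps
multinomial_var = n * (1 / 3) * (2 / 3)  # what a multinomial model assumes
# The simulated variance is several times multinomial_var: over-dispersion
# that a multinomial-logit fit would mistake for signal in hypothesis tests.
```

Because p itself varies from draw to draw, the marginal count variance exceeds the fixed-p multinomial variance, which is exactly the mean-variance restriction the article argues against.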
Cure rate model with interval censored data.
Kim, Yang-Jin; Jhun, Myoungshic
2008-01-15
In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time of non-cured patients. A cure rate model combines a cure fraction with a survival model, and can be applied to many clinical studies across several types of cancer. In this article, the cure rate model is considered for interval-censored data, in which the event time of interest is known only to lie between two observation time points. Interval-censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, the approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval-censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, a positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is smoking relapse, and several covariates, including an intensive care treatment, are evaluated as effective for both the occurrence of relapse and the non-smoking duration. Copyright (c) 2007 John Wiley & Sons, Ltd.
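The mixture structure of a cure rate model can be written down directly: the population survival function is a weighted combination of a cured fraction (who never experience the event) and the survival of the uncured. The sketch below assumes an exponential survival for the uncured purely for illustration; the paper's actual formulation is interval-censored and includes a frailty term:

```python
import math

def population_survival(t, cure_prob, rate):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t).
    pi is the cured fraction; S_u is the survival of the uncured,
    taken here as exponential with the given hazard rate (illustrative)."""
    return cure_prob + (1.0 - cure_prob) * math.exp(-rate * t)

# The survival curve plateaus at the cure fraction instead of dropping to 0.
print(population_survival(0.0, 0.3, 0.5))   # → 1.0
print(population_survival(20.0, 0.3, 0.5))  # close to the plateau at 0.3
```

The plateau is the distinguishing feature: an ordinary survival model forces S(t) → 0, while a cure model lets it level off at π.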
Segal, N L; Feng, R; McGuire, S A; Allison, D B; Miller, S
2009-01-01
Earlier studies have established that a substantial percentage of variance in obesity-related phenotypes is explained by genetic components. However, only one study has used both virtual twins (VTs) and biological twins and was able to simultaneously estimate additive genetic, non-additive genetic, shared environmental and unshared environmental components in body mass index (BMI). Our current goal was to re-estimate four components of variance in BMI, applying a more rigorous model to biological and virtual multiples with additional data. Virtual multiples share the same family environment, offering unique opportunities to estimate common environmental influence on phenotypes that cannot be separated from the non-additive genetic component using only biological multiples. Data included 929 individuals from 164 monozygotic twin pairs, 156 dizygotic twin pairs, five triplet sets, one quadruplet set, 128 VT pairs, two virtual triplet sets and two virtual quadruplet sets. Virtual multiples consist of one biological child (or twins or triplets) plus one same-aged adoptee who are all raised together since infancy. We estimated the additive genetic, non-additive genetic, shared environmental and unshared random components in BMI using a linear mixed model. The analysis was adjusted for age, age², age³, height, height², height³, gender and race. Both non-additive genetic and common environmental contributions were significant in our model (P-values < 0.0001). No significant additive genetic contribution was found. In all, 63.6% (95% confidence interval (CI) 51.8-75.3%) of the total variance of BMI was explained by a non-additive genetic component, 25.7% (95% CI 13.8-37.5%) by a common environmental component and the remaining 10.7% by an unshared component. Our results suggest that genetic components play an essential role in BMI and that common environmental factors such as diet or exercise also affect BMI.
This conclusion is consistent with our earlier study using a smaller sample and shows the utility of virtual multiples for separating non-additive genetic variance from common environmental variance.
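Once the variance components have been estimated, the reported percentages follow from a simple normalisation against the total variance. The component values below are hypothetical inputs chosen only to mirror the shape of the reported decomposition:

```python
def variance_shares(components):
    """Convert estimated variance components to percentage shares of the total."""
    total = sum(components.values())
    return {name: round(100.0 * value / total, 1)
            for name, value in components.items()}

# Hypothetical component estimates (not the study's fitted values).
shares = variance_shares({"non-additive genetic": 6.36,
                          "common environment": 2.57,
                          "unshared": 1.07})
print(shares)
```

Confidence intervals for such shares require the covariance of the component estimates, which is why the study reports them from the fitted mixed model rather than from this normalisation alone.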
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and the autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA can model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. To overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models. PMID:23766729
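The linear-plus-nonlinear decomposition behind such hybrids can be sketched without SVR or ARIMA themselves (which would require scikit-learn and statsmodels). In this minimal stand-in, a least-squares trend is the linear component and a naive carry-forward of the last residual stands in for the nonlinear residual model:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def hybrid_forecast(ys):
    """Hybrid idea: linear component + a model of the nonlinear residuals.
    Here the 'residual model' is a naive last-residual carry-forward;
    the paper uses SVR/ARIMA tuned by particle swarm optimization."""
    xs = list(range(len(ys)))
    a, b = fit_linear(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    return a + b * len(ys) + resid[-1]

# On a purely linear series the residual term vanishes.
print(hybrid_forecast([1.0, 2.0, 3.0, 4.0]))  # → 5.0
```

The point of the decomposition is that each component is fitted by the model class best suited to it; the combination step itself is just addition, as above.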
Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R
2009-08-01
Falls are among the predominant causes for morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models -- e.g., based on geriatric assessment data -- are currently not used on a regular basis. Furthermore, technical aids to objectively assess mobility-associated parameters are currently not used. To assess group differences in clinical as well as common geriatric assessment data and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the data recorded. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test. Two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%, prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can be detected for several assessment subscores as well as parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis. 
Adding sensory data improves the specificity of our test markedly.
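The reported figures are ordinary sensitivity and specificity computations over predicted versus observed faller status. A quick sketch with hypothetical labels (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical sample: 4 fallers (1) and 6 non-fallers (0).
sens, spec = sensitivity_specificity(
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 1])
print(sens, spec)  # sens = 0.5, spec ≈ 0.83
```

The study's pattern (model #2 raising sensitivity from 38.5% to 57.7% at 100% specificity) corresponds to recovering more true fallers without adding any false alarms.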
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location-scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection and measurement errors in covariates, which are typically observed in longitudinal data collected in many studies. We employ a Bayesian approach for inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
Constructing service-oriented architecture adoption maturity matrix using Kano model
NASA Astrophysics Data System (ADS)
Hamzah, Mohd Hamdi Irwan; Baharom, Fauziah; Mohd, Haslina
2017-10-01
Organizations commonly adopt Service-Oriented Architecture (SOA) because it provides flexible reconfiguration and can reduce development time and cost. To guide SOA adoption, industry and academia have previously constructed SOA maturity models. However, there is limited work on how to construct the matrix in these SOA maturity models. This study therefore provides a method for constructing the matrix in the SOA maturity model. It adapts the Kano model to construct a cross-evaluation matrix focused on the IT and business benefits of SOA adoption. The study found that the Kano model provides a suitable method for constructing the cross-evaluation matrix in an SOA maturity model, and that it can also be used to plot, organize and better represent the evaluation dimensions for assessing SOA adoption.
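A Kano-style classification can be sketched as a lookup from paired questionnaire answers (feature present vs. feature absent) to a category. The simplified table below is illustrative only; the paper's cross-evaluation matrix for SOA adoption benefits is more elaborate:

```python
# Simplified Kano evaluation table: maps the answer pair
# (reaction if present, reaction if absent) to a Kano category.
KANO = {
    ("like", "dislike"):    "one-dimensional",  # more is better
    ("like", "neutral"):    "attractive",       # delighter
    ("neutral", "dislike"): "must-be",          # expected baseline
    ("neutral", "neutral"): "indifferent",
}

def classify(functional, dysfunctional):
    """Classify one benefit from its functional/dysfunctional answers."""
    return KANO.get((functional, dysfunctional), "questionable")

print(classify("like", "dislike"))  # → one-dimensional
```

In a maturity matrix, such categories let "must-be" benefits anchor the lower maturity levels while "attractive" ones mark the higher levels.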
Osier, Nicole D.; Dixon, C. Edward
2016-01-01
Controlled cortical impact (CCI) is a mechanical model of traumatic brain injury (TBI) that was developed nearly 30 years ago with the goal of creating a testing platform to determine the biomechanical properties of brain tissue exposed to direct mechanical deformation. Initially used to model TBIs produced by automotive crashes, the CCI model rapidly transformed into a standardized technique to study TBI mechanisms and evaluate therapies. CCI is most commonly produced using a device that rapidly accelerates a rod to impact the surgically exposed cortical dural surface. The tip of the rod can be varied in size and geometry to accommodate scalability to different species. Typically, the rod is actuated by a pneumatic piston or electromagnetic actuator. Within some limits, CCI devices can control the velocity, depth, duration, and site of impact. The CCI model produces morphologic and cerebrovascular injury responses that resemble certain aspects of human TBI. Commonly observed are graded histologic and axonal derangements, disruption of the blood–brain barrier, subdural and intra-parenchymal hematoma, edema, inflammation, and alterations in cerebral blood flow. The CCI model also produces neurobehavioral and cognitive impairments similar to those observed clinically. In contrast to other TBI models, the CCI device induces a significantly pronounced cortical contusion, but is limited in the extent to which it models the diffuse effects of TBI; a related limitation is that not all clinical TBI cases are characterized by a contusion. Another perceived limitation is that a non-clinically relevant craniotomy is performed. Biomechanically, this is irrelevant at the tissue level. However, craniotomies are not atraumatic and the effects of surgery should be controlled for by including surgical sham control groups. CCI devices have also been successfully used to impact closed skulls to study mild and repetitive TBI.
Future directions for CCI research surround continued refinements to the model through technical improvements in the devices (e.g., minimizing mechanical sources of variation). Like all TBI models, publications should report key injury parameters as outlined in the NIH common data elements (CDEs) for pre-clinical TBI. PMID:27582726
Model Data Interoperability for the United States Integrated Ocean Observing System (IOOS)
NASA Astrophysics Data System (ADS)
Signell, Richard P.
2010-05-01
Model data interoperability for the United States Integrated Ocean Observing System (IOOS) was initiated with a focused one year project. The problem was that there were many regional and national providers of oceanographic model data; each had unique file conventions, distribution techniques and analysis tools that made it difficult to compare model results and observational data. To solve this problem, a distributed system was built utilizing a customized middleware layer and a common data model. This allowed each model data provider to keep their existing model and data files unchanged, yet deliver model data via web services in a common form. With standards-based applications that used these web services, end users then had a common way to access data from any of the models. These applications included: (1) a 2D mapping and animation using a web browser application, (2) an advanced 3D visualization and animation using a desktop application, and (3) a toolkit for a common scientific analysis environment. Due to the flexibility and low impact of the approach on providers, rapid progress was made. The system was implemented in all eleven US IOOS regions and at the NOAA National Coastal Data Development Center, allowing common delivery of regional and national oceanographic model forecast and archived results that cover all US waters. The system, based heavily on software technology from the NSF-sponsored Unidata Program Center, is applicable to any structured gridded data, not just oceanographic model data. There is a clear pathway to expand the system to include unstructured grid (e.g. triangular grid) data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labby, Z.
Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and will answer questions such as: where would you typically apply the test/design, and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results. Determine where specific statistical tests are appropriate and identify common pitfalls. Understand how uncertainty and error are addressed in biological testing and associated biological modeling.
Ma, Zuliang; Wang, Guanghai; Chen, Xuejiao; Ou, Zejin; Zou, Fei
2014-01-01
Signal transducer and activator of transcription 3 (STAT3) plays an important role in energy metabolism. Here we explore whether common variations of STAT3 influence the risks of obesity and other metabolic disorders in a Chinese Han population. Two tagging single nucleotide polymorphisms (tagSNPs), rs1053005 and rs957970, were used to capture the common variations of STAT3. Relationships between genotypes and obesity, body mass index, plasma triglyceride and other metabolic disease-related parameters were analyzed for association in 1742 subjects. A generalized linear model and a logistic regression model were used for quantitative data analysis and the case-control study, respectively. rs1053005 was significantly associated with body mass index and waist circumference (p = 0.013 and p = 0.02, respectively). rs957970 was significantly associated with the plasma level of triglyceride (p = 0.007). The GG genotype at rs1053005 had lower risks of both general obesity and central obesity (OR = 0.40, p = 0.034; OR = 0.42, p = 0.007, respectively) compared with the AA genotype. The CT genotype at rs957970 had a higher risk of hypertriglyceridemia (OR = 1.43, p = 0.015) compared with the TT genotype. Neither of the two SNPs was associated with other metabolic disease-related parameters. Our observations indicate that common variations of STAT3 can significantly affect the risk of obesity and hypertriglyceridemia in the Chinese Han population. PMID:25014397
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
75 FR 78594 - Airworthiness Directives; The Boeing Company Model 777-200 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
... which a T/R is installed with a design change known as ``Commonality T/R,'' which is common to Model 777... Airworthiness Directives; The Boeing Company Model 777-200 Series Airplanes AGENCY: Federal Aviation... certain Model 777-200 series airplanes. This AD requires installing a new insulation blanket on the latch...
ERIC Educational Resources Information Center
Stohlmann, Micah; Maiorca, Cathrine; Olson, Travis A.
2015-01-01
Mathematical modeling is an essential integrated piece of the Common Core State Standards. However, researchers have shown that mathematical modeling activities can be difficult for teachers to implement. Teachers are more likely to implement mathematical modeling activities if they have their own successful experiences with such activities. This…
Study on the standard architecture for geoinformation common services
NASA Astrophysics Data System (ADS)
Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.
2014-04-01
The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities in China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. Standards on geoinformation common services serve as bridges among the users, systems and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards are integrated with each other to promote the development, sharing and servicing of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering data or information models, interoperability interfaces or services, and information management. Research was carried out on the status of international geoinformation common services standards in organizations such as ISO/TC 211 and OGC, and in countries and unions such as the USA, the EU and Japan. Principles such as availability, suitability and extensibility are set up to evaluate the standards. The development requirements and practical situation are then analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary and prospects of the geoinformation standards are given.
ERIC Educational Resources Information Center
Wisconsin Department of Public Instruction, 2011
2011-01-01
Wisconsin's adoption of the Common Core State Standards provides an excellent opportunity for Wisconsin school districts and communities to define expectations from birth through preparation for college and work. By aligning the existing Wisconsin Model Early Learning Standards with the Wisconsin Common Core State Standards, expectations can be…
Why Do Adolescents Use Drugs? A Common Sense Explanatory Model from the Social Actor's Perspective
ERIC Educational Resources Information Center
Nuno-Gutierrez, Bertha Lidia; Rodriguez-Cerda, Oscar; Alvarez-Nemegyei, Jose
2006-01-01
Analysis was made of the common sense explanations of 60 Mexican teenage illicit drug users in rehabilitation to determine their drug use debut. The explanatory model was separated into three blocks, two of which contained common sense aspects: interaction between subject's plane and the collectivity; and relationship between subject's interior…
Arif, Anmar; Wang, Zhaoyu; Wang, Jianhui; ...
2017-05-02
Load modeling has significant impact on power system studies. This paper presents a review on load modeling and identification techniques. Load models can be classified into two broad categories: static and dynamic models, while there are two types of approaches to identify model parameters: measurement-based and component-based. Load modeling has received more attention in recent years because of renewable integration, demand-side management, and smart metering devices. However, the commonly used load models are outdated, and cannot represent emerging loads. There is a need to systematically review existing load modeling techniques and suggest future research directions to meet the increasing interest from industry and academia. In this study, we provide a thorough survey on the academic research progress and industry practices, and highlight existing issues and new trends in load modeling.
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
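The selection mechanism can be illustrated with a simple probit form: the probability that a study is published increases with its precision. Φ(γ₀ + γ₁/se) is the standard Copas-style choice sketched below; the design-by-treatment selection model in the paper generalises this idea to whole network structures:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def publication_prob(se, gamma0, gamma1):
    """Copas-style selection: P(publish) = Phi(gamma0 + gamma1 / se).
    Smaller standard errors (more precise studies) raise the probability,
    so imprecise studies are under-represented in the published record."""
    return normal_cdf(gamma0 + gamma1 / se)

# A precise study is far more likely to be published than an imprecise one.
print(publication_prob(0.1, -1.0, 0.2))  # ≈ 0.84
print(publication_prob(1.0, -1.0, 0.2))  # ≈ 0.21
```

In a sensitivity analysis, γ₀ and γ₁ are not estimated from the data but varied over plausible ranges, and the pooled estimate is re-computed under each assumed selection strength.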
Chiu, Chi-yang; Jung, Jeesun; Wang, Yifan; Weeks, Daniel E.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Amos, Christopher I.; Mills, James L.; Boehnke, Michael; Xiong, Momiao; Fan, Ruzong
2016-01-01
In this paper, extensive simulations are performed to compare two statistical methods to analyze multiple correlated quantitative phenotypes: (1) approximate F-distributed tests of multivariate functional linear models (MFLM) and additive models of multivariate analysis of variance (MANOVA), and (2) Gene Association with Multiple Traits (GAMuT) for association testing of high-dimensional genotype data. It is shown that approximate F-distributed tests of MFLM and MANOVA have higher power and are more appropriate for major gene association analysis (i.e., scenarios in which some genetic variants have relatively large effects on the phenotypes); GAMuT has higher power and is more appropriate for analyzing polygenic effects (i.e., effects from a large number of genetic variants each of which contributes a small amount to the phenotypes). MFLM and MANOVA are very flexible and can be used to perform association analysis for: (i) rare variants, (ii) common variants, and (iii) a combination of rare and common variants. Although GAMuT was designed to analyze rare variants, it can be applied to analyze a combination of rare and common variants and it performs well when (1) the number of genetic variants is large and (2) each variant contributes a small amount to the phenotypes (i.e., polygenes). MFLM and MANOVA are fixed effect models which perform well for major gene association analysis. GAMuT can be viewed as an extension of sequence kernel association tests (SKAT). Both GAMuT and SKAT are more appropriate for analyzing polygenic effects and they perform well not only in the rare variant case, but also in the case of a combination of rare and common variants. Data analyses of European cohorts and the Trinity Students Study are presented to compare the performance of the two methods. PMID:27917525
Frost, Ram
2012-10-01
I have argued that orthographic processing cannot be understood and modeled without considering the manner in which orthographic structure represents phonological, semantic, and morphological information in a given writing system. A reading theory, therefore, must be a theory of the interaction of the reader with his/her linguistic environment. This perspective outlines a novel approach to studying and modeling visual word recognition, an approach that focuses on the common cognitive principles involved in processing printed words across different writing systems. These claims were challenged by several commentaries that contested the merits of my general theoretical agenda, the relevance of the evolution of writing systems, and the plausibility of finding commonalities in reading across orthographies. Other commentaries extended the scope of the debate by bringing additional perspectives into the discussion. My response addresses all these issues. By considering the constraints of neurobiology on modeling reading, developmental data, and a large scope of cross-linguistic evidence, I argue that front-end implementations of orthographic processing that do not stem from a comprehensive theory of the complex information conveyed by writing systems do not present a viable approach for understanding reading. The common principles by which writing systems have evolved to represent orthographic, phonological, and semantic information in a language reveal the critical distributional characteristics of orthographic structure that govern reading behavior. Models of reading should thus be learning models, primarily constrained by cross-linguistic developmental evidence that describes how the statistical properties of writing systems shape the characteristics of orthographic processing. When this approach is adopted, a universal model of reading is possible.
In vitro cell culture models to study the corneal drug absorption.
Reichl, Stephan; Kölln, Christian; Hahne, Matthias; Verstraelen, Jessica
2011-05-01
Many diseases of the anterior eye segment are treated using topically applied ophthalmic drugs. For these drugs, the cornea is the main barrier to reaching the interior of the eye. In vitro studies regarding transcorneal drug absorption are commonly performed using excised corneas from experimental animals. Due to several disadvantages and limitations of these animal experiments, establishing corneal cell culture models has been attempted as an alternative. This review summarizes the development of in vitro models based on corneal cell cultures for permeation studies during the last 20 years, starting with simple epithelial models and moving toward complex organotypical 3D corneal equivalents. Current human 3D corneal cell culture models have the potential to replace excised animal corneas in drug absorption studies. However, for widespread use, the existing systems must first be validated.
Ouchi, Eriko; Niu, Kaijun; Kobayashi, Yoritoshi; Guan, Lei; Momma, Haruki; Guo, Hui; Chujo, Masahiko; Otomo, Atsushi; Cui, Yufei; Nagatomi, Ryoichi
2012-11-16
Alcohol intake has been associated with reduced incidence of common cold symptoms in 2 European studies. However, no study has addressed the association between the frequency of alcohol intake and the incidence of common cold. This study aimed to investigate the association between the amount and frequency of alcohol drinking and the retrospective prevalence of common cold in Japanese men. This retrospective study included men who participated in an annual health examination conducted in Sendai, Japan. The frequency of common cold episodes in the previous year was self-reported. The weekly frequency and amount of alcohol consumed, as well as the type of alcoholic drink, were reported by a brief-type self-administered diet history questionnaire. Logistic regression models were used to analyze the association between the amount and frequency of alcohol intake and the retrospective prevalence of common cold. Among 899 men, 83.4% of the subjects reported drinking alcohol, and 55.4% of the subjects reported having experienced at least one episode of common cold in the previous year. Compared with non-drinkers, the adjusted odds ratios (ORs) with 95% confidence intervals (CIs) for having had 1 or more episodes of common cold during the past year across categories of alcohol intake frequency of 3 or less, 4-6, and 7 days/week were 0.827 (0.541-1.266), 0.703 (0.439-1.124), and 0.621 (0.400-0.965), respectively (P for trend = 0.025); the adjusted ORs with 95% CIs for having had 2 or more episodes of common cold across the same categories were 0.642 (0.395-1.045), 0.557 (0.319-0.973), and 0.461 (0.270-0.787), respectively (P for trend = 0.006). Compared with subjects who consumed 11.5-35.8 g of alcohol per day, the non-drinkers were significantly more likely to experience 2 or more episodes of common cold (OR, 1.843; 95% CI, 1.115-3.047).
The frequency, not the amount, of alcohol intake was significantly related to a lower prevalence of self-reported common cold episodes in Japanese men.
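The adjusted odds ratios above come from logistic regression; the unadjusted version of the same quantity is just a ratio of odds from a 2x2 exposure/outcome table. A minimal sketch (the counts below are made up for illustration and are not the study's data):

```python
# Hedged sketch of the unadjusted odds ratio that the study's logistic
# regression models generalise (with covariate adjustment). The counts
# below are made up for illustration and are NOT the study's data.

def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """OR from a 2x2 exposure/outcome table:
    (exposed odds of the outcome) / (unexposed odds of the outcome)."""
    return (exp_cases / exp_noncases) / (unexp_cases / unexp_noncases)

# e.g. 50 of 150 daily drinkers vs 80 of 160 non-drinkers report a cold:
print(odds_ratio(50, 100, 80, 80))  # 0.5, i.e. lower odds among drinkers
```

An OR below 1 with a confidence interval excluding 1 (as for the 7 days/week group above) indicates significantly lower odds in the exposed group.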
Neumeyer, Courtney H; Gerlach, Jamie L; Ruggiero, Kristin M; Covi, Joseph A
2015-03-01
The brine shrimp, Artemia (Crustacea, Anostraca), is a zooplankton that is commonly used in both basic and applied research. Unfortunately, Artemia embryos are often cultured under conditions that alter early development, and reports based on these cultures oversimplify or fail to describe morphological phenotypes. This is due in part to the lack of a comprehensive developmental model that is applicable to observations of live specimens. The objective of this study was to build and test a descriptive model of post-diapause development in Artemia franciscana using observations made with a standard dissecting microscope. The working model presented is the first to comprehensively place all known "abnormal" embryonic and naupliar phenotypes within the context of a classic hatching profile. Contrary to previous reports, embryos and nauplii with aberrant phenotypes often recover and develop normally. Oval prenauplii may emerge as normal prenauplii (E2 stage). A delay of this transition leads to incomplete hatching or direct hatching of first instar larvae with a curved thoracoabdomen. When hatching is incomplete, retained cuticular remnants are shed during the next molt, and a "normal" second instar larva is produced. By differentiating between molting events and gross embryonic patterning in live embryos, this new model facilitates fine time-scale analyses of chemical and environmental impacts on early development. A small increase in salinity within what is commonly believed to be a permissive range (20‰-35‰) produced aberrant morphology by delaying emergence without slowing development. A similar effect was observed by decreasing culture density within a range commonly applied in toxicological studies. These findings clearly demonstrate that morphological data from end-point studies are highly dependent on the time points chosen. 
An alternate assessment method is proposed, and the potential impact of heavy metals, hexachlorobenzene, Mirex, and cis-nonachlor detected in commercial embryos is discussed. © 2014 Wiley Periodicals, Inc.
Gerald Rehfeldt
1991-01-01
Models were developed to describe genetic variation among 201 seedling populations of Pinus ponderosa var. ponderosa in the Inland Northwest of the United States. Common-garden studies provided three variables that reflected growth and development in field environments and three principal components of six variables that reflected patterns of shoot elongation....
ERIC Educational Resources Information Center
Ma, Min; Zi, Fei
2015-01-01
This manuscript aims to explore and delineate the common characteristics of college students with perfectionism, to promote an in-depth understanding of the dynamic personality development of perfectionists from the perspective of the life story model proposed by McAdams (1985). The researchers adopted a narrative qualitative research method. The life stories…
Modeling the spatially dynamic distribution of humans in the Oregon (USA) coast range.
Jeffrey D. Kline; David L. Azuma; Alissa Moses
2003-01-01
A common approach to land use change analyses in multidisciplinary landscape-level studies is to delineate discrete forest and non-forest or urban and non-urban land use categories to serve as inputs into sets of integrated sub-models describing socioeconomic and ecological processes. Such discrete land use categories, however, may be inappropriate when the...
ERIC Educational Resources Information Center
Karl, Andrew T.; Yang, Yan; Lohr, Sharon L.
2013-01-01
Value-added models have been widely used to assess the contributions of individual teachers and schools to students' academic growth based on longitudinal student achievement outcomes. There is concern, however, that ignoring the presence of missing values, which are common in longitudinal studies, can bias teachers' value-added scores.…
ERIC Educational Resources Information Center
Radford, Julie
2010-01-01
Although word searching in children is very common, very little is known about how adults support children in the turns following the child's search behaviours, an important topic because of the social, educational, and clinical implications. This study characterizes, in detail, teachers' use of prompting, hinting, and supplying a model. From a…
ERIC Educational Resources Information Center
Lombardi, Sara A.; Hicks, Reimi E.; Thompson, Katerina V.; Marbach-Ad, Gili
2014-01-01
This study investigated the impact of three commonly used cardiovascular model-assisted activities on student learning and student attitudes and perspectives about science. College students enrolled in a Human Anatomy and Physiology course were randomly assigned to one of three experimental groups (organ dissections, virtual dissections, or…
ERIC Educational Resources Information Center
Fragouli, Stiliani; Rokka, Aggeliki
2017-01-01
In this study we introduce an infusion model to "inject" ammonites and ammonite fossils in current subjects of Greek primary curriculum. Paleontology and mainly fossils attract more and more elementary students and teachers, yet in Greece this trend is solely about dinosaurs, despite the fact that the most common Greek fossils are not…
The Utilization of Social Services by the Mexican-American Elderly.
ERIC Educational Resources Information Center
Starrett, Richard A.; Decker, James T.
The study tested the Andersen-Newman causal model of social service use as a means of determining patterns for social service use by Mexican American elderly. The model was shown to have applicability for identifying common and unique determinants of service use. Thirty-seven variables and data from a 1979-80 15-state survey were selected to form…
ERIC Educational Resources Information Center
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth), assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of second-order growth mixture modeling (SOGMM), which incorporates measurement models at the first-order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures, and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased, but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
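The class-enumeration step described above amounts to fitting models with different numbers of latent classes and keeping the one that minimises an information criterion. A minimal sketch (the criterion formulas are the standard ones; the log-likelihoods and parameter counts below are invented for illustration):

```python
import math

# Hedged sketch of class enumeration by information criteria. The
# formulas are standard (AIC, BIC, sample-size adjusted BIC); the
# log-likelihoods and parameter counts below are invented.

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * math.log(n)

def sabic(loglik, k, n):
    # Sample-size adjusted BIC: replaces n with (n + 2) / 24.
    return -2.0 * loglik + k * math.log((n + 2) / 24.0)

# Invented fits: class count -> (log-likelihood, number of parameters)
fits = {2: (-1050.0, 14), 3: (-1020.0, 20), 4: (-1018.0, 26)}
n = 500
best = min(fits, key=lambda c: bic(*fits[c], n))
print(best)  # class solution with the smallest BIC
```

As the abstract notes, which criterion performs best for enumeration depends on class separation and sample size, so such comparisons are typically run across several criteria.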